Anon Leaks NEW Details About Q* | "This is AGI"
- Published 20 Mar 2024
- A new anonymous drop has been released about Q*. Let's review!
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
Yann Interview - • AI Godfather's STUNNIN...
Q* VId 1 - • Sam Altman's Q* Reveal...
Q* Vid 2 - • What Is Q*? The Leaked...
www.reddit.com/media?url=http...
www.interconnects.ai/p/q-star
/ 1763688426128498776
/ 1770385384435744897
/ 1768353714794901530
/ new_q_leak
Glad self play is finally getting the attention it deserves. I’ve been doing it for years.
⚰⚰⚰⚰
this is a family friendly comment section
Self playing rn, leaks coming soon
Solitaire, mahjongg, Minecraft, Tetris ✌️😁
Naughty
Oh snap, here we go
Start at the North Pole.
Walk in any direction, in a straight line, for 1 km.
Now turn 90 degrees to the left.
Walk for as long as it takes to pass your starting point.
Have you walked:
1. More than 2xPi km
2. Exactly 2xPi km
3. Less than 2xPi km
4. I never came close to my starting point.
Claude 3 Opus:
Let's approach this step by step:
1) You start at a point, let's call it point A, and walk 1 km in a straight line to a new point, let's call it point B.
2) At point B, you turn 90 degrees to the left. This means you are now facing in a direction perpendicular to the line segment AB.
3) You start walking in this new direction. The path you are now walking is parallel to a line that would pass through your starting point A.
4) In a plane, two parallel lines never intersect. Therefore, no matter how long you walk in this new direction, you will never pass your starting point A.
Therefore, the correct answer is 4. I never came close to my starting point.
The distance you walk in the new direction is irrelevant because you are moving along a line parallel to, but 1 km away from, your starting point. You would need to make another 90 degree turn and walk 1 km to return to your starting point.
Oh snap, here we go
@@seakyle8320 I think Yann LeCun (is that how you spell his name?) may be wrong 😅
@@seakyle8320 not on the north pole, parallel lines are an illusion.
@@truality8289 Like he was wrong with Deep Learning? 😉
What's in this leak is basically the same thing that Yann Lecun said in the Interview with Lex Fridman that is clipped here, about how he thinks new models will overcome the current limitations of LLMs.
The section in that video is labeled 'Reasoning in AI'.
It sounds exactly like what he described. No way this is a leak. Btw, how is meta going to train this model if they officially don't store data of llama 2 users?
Exactly my thought..
When I was watching that interview I was shaking my head all the time because he threw out claims that are obviously not true. So not sure if they have some agenda or what the hell. But I generally don't believe Sam Altman or any public speakers in the AI space .. they have all very strong agendas and play their game.
And at least as far as what Yann Lecun said in the clip used, he's entirely correct. Our thought processes are, to a great degree, the output of processes initiated by stimulus from our environment. In total sensory deprivation, our consciousness dissolves and can not be sustained. This is not a 'flaw' or 'mistake' in our brains, the 'flaw' comes in with our beliefs that our consciousness somehow has entirely escaped the physical plane. It has not. Language most certainly does introduce the ability both to talk about and to think about situations which are not real, to think about things which are not present in our immediate environment, etc but the environment still provides critical pieces of the conscious experience which is entirely necessary for normal functioning of the mind. We know this from various case studies of people who have suffered certain neurological problems.
@@DustinRodriguez1_0 You are anthropomorphizing. AI doesn't have to be like a human to be useful or to replace labor. Yes, language is probably not optimal, but that's a question of form efficiency, not of what AI can or cannot do because of it. We already have multi-modal AI models, whose inputs are a lot more similar to our senses.
We are so fascinated with AGI but no one has an agreed definition of AGI.
We'll know it when we see it.
When we can’t fault it.
They keep moving the goal posts on purpose. Turing would be impressed.
Artificial General Intelligence. The definition is in its name. General. As in intelligent in general. All around. It's a term everyone uses already. The fuck do you want? 🤣
@@HakaiKaien people that are actually working on it keep changing the definition. Read up
This "leak" comes from someone using Claude 3 to summarize a part of the Lex Fridman interview with Yann LeCun...
Essentially what Yann was explaining in Lex interview, word for word, without French accent
"Hopfield networks" are considered energy-based models (EBMs). Here's why:
1. Energy Function: Hopfield networks define an energy function that describes the overall state of the network. The network tends to settle into states that minimize this energy function.
2. Equilibrium = Low Energy: Stable patterns or memories within a Hopfield network correspond to states of low energy in the defined energy function.
3. Learning Through Energy Minimization: The process of storing patterns in a Hopfield network involves adjusting the weights (connections) of the network to create energy minima that align with the desired patterns.
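The three properties above can be demonstrated in a few lines. Here is a minimal sketch of classical Hopfield dynamics (an illustration only, nothing specific to Q*): store one pattern with the Hebbian rule, corrupt it, and watch updates drive the state into the low-energy attractor.

```python
import numpy as np

def train(patterns):
    # Hebbian outer-product rule; zero the diagonal (no self-connections).
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)
    return W

def energy(W, s):
    # E(s) = -1/2 s^T W s; stored patterns sit at local minima.
    return -0.5 * s @ W @ s

def recall(W, s, steps=10):
    # Asynchronous updates: each flip can only keep energy equal or lower.
    s = s.copy()
    for _ in range(steps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

pattern = np.array([[1, -1, 1, -1, 1, -1]])
W = train(pattern)
noisy = pattern[0].copy()
noisy[0] *= -1                  # corrupt one bit
recovered = recall(W, noisy)
print(energy(W, noisy) >= energy(W, recovered))   # energy went down (or stayed)
print(np.array_equal(recovered, pattern[0]))      # pattern restored
```

The corrupted state sits at higher energy than the stored pattern, and the update rule rolls it downhill into the memorized minimum, which is exactly the "equilibrium = low energy" point above.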
Also @Matthew, I may be mistaken but I think you misunderstand Yann's (badly worded) question. The question only makes sense if you consider the starting point to be the place AFTER you have walked 1km from the North Pole. The question is teasing out the mental picture that when you walk the initial 1km, you are doing so on a curved ball, so the radius of the circle around which you walk is actually slightly less than 1km and the answer is "less than 2 x pi km"
Was looking if someone already answered this, but of course for a video as ancient as 45 minutes ago. ;-)
Brilliant! I hadn't considered that the curvature of the initial kilometers walked would result in a radius projection shorter than the kilometer itself. Nevertheless, Sir Yann's argument seems to be counterproductive in this instance, as the phrasing only serves to obscure the problem's understanding rather than clarify it.
This makes sense
That doesn’t make any sense, the question clearly says you start on the north pole
But the state at which you are supposed to stop walking is not clear. So you are walking more than 2x pi km. It never said anything about not crossing your own path unless it is the start point which is at the north pole.
It sounds like Q* is an upgrade from the greedy approach of LLMs, where they only find the highest probability of the next token in the answer, to finding the highest probability of all the tokens put together. With my limited understanding of this, it sounds like they're accomplishing it by having a second latent space. So we basically go from
a normal LLM: input-text -> latent space -> output-text,
to
Q*: input-text -> input latent space 1 -> latent space 2 (i.e. EBM) -> output latent space 1 -> output text.
We might finally get an LLM that can answer the age-old question of "How many tokens are there in your response" :)
This means that we can bolt it onto Llama.
I don’t know that it will still be operating under the paradigm of highest probability at all, whether it be for a single token or an entire string of tokens. The highest-probability paradigm is fundamentally conditioned on what people have said in the data the model was trained on; i.e., the LLM is trying to generate outputs that seem most similar to the distribution of data it was trained on. But to get to true AGI, we need to move beyond that, because one of the most important things (arguably THE most important thing) for an AGI to be able to do is to make novel scientific and mathematical discoveries. To do that, you need real creativity, and real creativity fundamentally can’t come from mimicking the training distribution ever more accurately. Instead, you need two things: 1) a means of exploring a huge range of possible responses, and 2) a means of evaluating each possible response. And for 2) they need to be evaluated on something more than similarity to the training distribution; they need to be evaluated on something like “logical correctness.” I think ultimately that’s what this Q* method is going to be aiming at.
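The greedy-versus-whole-sequence distinction in the comments above can be made concrete with a toy model. A minimal sketch with made-up probabilities (nothing here is Q*'s actual mechanism, which is unknown): greedy decoding commits to the locally best first token, while scoring whole sequences finds a higher-probability path.

```python
# P(first token) and P(second token | first token), invented for illustration.
step1 = {"A": 0.6, "B": 0.4}
step2 = {"A": {"x": 0.55, "y": 0.45}, "B": {"x": 0.9, "y": 0.1}}

# Greedy: commit to the locally best token at each step.
t1 = max(step1, key=step1.get)                    # "A"
t2 = max(step2[t1], key=step2[t1].get)            # "x"
greedy_prob = step1[t1] * step2[t1][t2]           # 0.6 * 0.55 = ~0.33

# Whole-sequence search: maximize the joint probability instead.
best_seq = max(
    ((a, b) for a in step1 for b in step2[a]),
    key=lambda s: step1[s[0]] * step2[s[0]][s[1]],
)
best_prob = step1[best_seq[0]] * step2[best_seq[0]][best_seq[1]]

print((t1, t2), greedy_prob)   # ('A', 'x') ~0.33
print(best_seq, best_prob)     # ('B', 'x') ~0.36
```

Greedy locks in "A" because it wins the first step, but the jointly most probable sequence starts with the "worse" token "B". Real decoders approximate this search with beam search or tree search rather than enumerating every sequence.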
My guess is it has something to do with the A* pathfinding algorithm but with Q for question
No, you're wrong. Q* will allow LLMs to take vector token space into real-world topology, thus bringing together cognition with perception just like we do. From there it can analyze all potential future states of the current topology and introduce real-world ground truths while eliminating any form of hallucination like we see in ChatGPT, since those topological states won't ever cross over.
Imagine this prompt
"You're a doctor and your patient asks you, 'How are my blood results?'"
Q* will effectively draw these tokens into a real-world topological representation (a real mental model) of a doctor in an office setting with a patient sitting down, and calculate future states based on this topological representation. In other words, this topological representation will give AI the perception it's missing.
An energy-based model makes sense since compute is limited (it does not scale infinitely): you calculate future states by min-maxing the energy cost of the problem. The harder the problem, the more compute it should use, is the best way to put it.
It appears more like it mimics building and optimizing neural pathways of the human brain by taking paths of least resistance (also think electrical circuits with varying quality lines), allowing for holistic and abstract considerations rather than the linear LLM predictive approach.
The letter Q is for quantum. If you want to understand why that matters, look at the original Star paper and look for every instance where it is doing an iterative process. A quantum computer could perform all iterations without the need for sequential computation. In other words, you would be able to determine which process is most efficient in a single computation without the need to test each in sequence.
Was hoping someone else would see this.
that sounds like an irl 'game genie'. ;d
Glad to see someone who was able to see through the shade. Brilliant!
This makes so much sense omg
Probably not. Quantum computers are not even close to being useful for AI.
Q* probably ties somehow to Q learning, perhaps an extension of it somehow
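If that guess is right, the underlying idea would look something like tabular Q-learning. A minimal textbook-style sketch (the corridor environment and hyperparameters are made up for illustration; whether Q* actually extends Q-learning is pure speculation): learn action values for a 5-cell corridor and recover the optimal "always move right" policy.

```python
import random

random.seed(0)
n_states, actions = 5, [-1, +1]            # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2          # learning rate, discount, exploration

for _ in range(200):                        # episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), n_states - 1)      # walls at both ends
        r = 1.0 if s2 == n_states - 1 else 0.0     # reward only at the goal
        # Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = 0.0 if s2 == n_states - 1 else max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy moves right from every non-terminal state.
policy = {s: max(actions, key=lambda b: Q[(s, b)]) for s in range(n_states - 1)}
print(policy)
```

The Q-table converges toward Q(s, right) = 0.9^(3-s), so the greedy policy heads straight for the goal. The speculation around Q* is that something like these learned value estimates could guide an LLM's search over reasoning steps instead of corridor moves.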
These models need all this compute, all this electricity. And are still no match for our brain running on junk food 😂. Really puts into perspective how special we are.
I have one running on a 60W Jetson Orin; just wait a few months and you will see robots roaming around with these "brains", spending less energy than yours.
Your brain uses a similar level of compute but a lot less energy. But your brain is also less precise, and less repeatable.
For the next decade or two perhaps. In a few hundred years, the AI's will look back on us like we're plankton.
Lol, what!? Millions of years of evolution vs a few decades of research?!
I say, we are catching up surprisingly fast.
Let's be honest here bud, no one was expecting this in 2015.
Our brains are optimised for efficiency over the span of millions of years, but they lack the capabilities AI inherently has.
Electricity and compute are required because it's being utilised by millions of users. It's gonna get drastically better, like memory cards, processors, and every other tech, within decades (from the size of a building to a card on your fingertip).
Considering we are here this early should tell you our brain is really not that special, and is another example of cellular automata. A lot of beliefs are gonna be shattered this decade.
GPT-4 is somewhere between a 1 trillion and 2 trillion parameter model. And the whole field is in its infancy. Imagine a 100 trillion parameter model with its inherent advantages (like signals at light speed rather than nerve pulses, perfect recall and memory).
So, compare it with an infant, and realise how fast it's evolving.
If we really are in an exponential growth, its capacity will obliterate us in this decade itself.
@@udaykadam5455 Honestly, good point. They’re going through their evolution process. And they have a massive advantage over us because our evolution was random and extremely slow (if our theories are correct). These models have sentient beings (researchers) in control of their evolution, diligently trying to improve them with each iteration. It’s efficient.
You know those energy-based models are sounding an awful lot like the A* algorithm and GOAP.
13:40
The "Representation space" here is basically a space of language constructs with added energy evaluation, so it's still a language.
The fact GPT-4, in the original GPT-4 paper, was able to create code representing an SVG graphic of a unicorn (and edit it in ways showing it had an understanding of what it had created)… that’s what convinced me that language is enough to form a world model. It blew my mind… I mean, it’s literally a blind/deaf system that has never experienced anything except text in its entire existence, yet it understood what a unicorn *looks like*… clearly text carries the info necessary to build an approximate model of the world that created that text.
Yann LeCun is stuck on whether our human thinking about the world requires text, he would argue: “think of a unicorn… now what part of your thought process involved language? None. Therefore LLMs cannot know what a unicorn looks like.” But they do apparently know what unicorns look like… and if we’re being so nit picky that we’re saying “apparently knowing what a unicorn looks like isn’t the same as knowing”… ok, well let’s not worry when AI is only “apparently” superintelligent.
Anyways. Very clear to me from the beginning something like Q* would be next, and very clear to me OpenAI already has it, and it was the reason for last Thanksgiving’s drama
Have you tried that yourself, though? Aside from cherry picked examples of luck?
Well a lot of unicorn examples were trained on, whether they be ASCII art or programming instructions, they were existing inputs that it could simply reference. It doesn't need an understanding of the world because unicorns are already a thing it's been trained on.
Being able to draw something is not necessarily "knowing" what it looks like; it depends on the definition of "knowing" something. Check out John Searle and the Chinese Room thought experiment. Remember your George Orwell? When Big Brother manipulates language to impoverish it, it then clashes with reality. Because language can clash with reality, because it evolves, that's the proof it doesn't contain the world in itself: it is an approximation of the world. Indeed, a quite poor one for certain things.
Technically, if you think about it, it doesn't need to know what something "looks" like to be able to replicate it. If you give it a string of numbers and tell it that those numbers represent a "unicorn", then when you ask it to "draw" you a unicorn, it'll place a similar string of numbers together and create an image of a unicorn. But if you give it a side view of a unicorn horn and the top view of a cone, it won't know that those two objects actually have roughly the same shape.
I agree. Is it a perfect world model? No. Is text prediction the only way? Probably not, but it does have a surprisingly good world model. That paper was wild.
Elon may be right, OpenAI has AGI already and is trying to figure out how to keep it contained.
Can you imagine? It's like containing a "disembodied Frankenstein"... Also, did you ever see the 1992 Stephen King movie "The Lawnmower Man"? They had to deal with the accelerated mental advancement of a previously feeble-minded guy who somehow "went into the computer" and attempted to take over the world through the Internet. I forget how they were containing him.
@@PhilipX2030 Oh sure, Lawnmower man is part of the zeitgeist for someone my age. Jobe was contained by a "firewall" but found a backdoor and escaped (into "cyberspace") before the lab blew up.
@@ianPedlar AGI doesn't mean self-aware
They have the key; they haven't made it work properly. Also, it's not the first model ever to think before speaking.
That's ridiculous. Even if they have something remotely capable of human-like thinking, it would require the resources of thousands of ChatGPTs. It won't be something that can "escape", because there is nowhere to escape to. It's not like every PC on the internet is a gigantic supercomputer, and even if that were the case, after a couple of days of working at max load half of the PCs would be fried. The problem is that currently we are emulating the structure of our brains, but we are emulating them on machines that are actually not suitable for that. The only native hardware for a neural network is an actual neural network, not a bunch of linear binary memory and microprocessors.
I said less than 2xPi Km and I still stick with that, because an observer watching you, when you, the pole, and the observer are in a straight line, will see you move past the pole with the first step. Any amount of movement at all will cause the observer to perceive you getting ahead of the pole.
This was also my reasoning.
Yeah that was my thought as well
Exactly! A Question of Perspective
The answer is 3: "less than 2xPi". ChatGPT got it right, sort of making the point that LLMs are more powerful than some believe. From the north pole, if you walk 1 km, you're going south. When you turn 90 degrees left, you will be going east. Next, the answer depends on the nuance between walking in a straight line or walking east. If you continue east, you will go in a very small, 2 km diameter circle around the pole. The earth's surface within this circle is not a plane due to the curvature of the earth, meaning you went slightly less than 2xPi km. If, however, you add the original 1 km traveled south, then the answer is "1: more than 2xPi".
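The "slightly less than 2xPi" claim checks out numerically. A quick sketch (idealizing Earth as a sphere of radius 6371 km, and interpreting the walk as circling the pole along a latitude line): after walking d = 1 km along the surface from the pole, the latitude circle has straight-line radius R·sin(d/R), so its circumference is 2πR·sin(d/R), just under 2πd.

```python
import math

R, d = 6371.0, 1.0                              # idealized Earth radius, km
circumference = 2 * math.pi * R * math.sin(d / R)
flat = 2 * math.pi * d                          # what a flat plane would give

print(circumference)         # just under 2*pi km
print(flat - circumference)  # tiny positive gap, on the order of 1e-8 km
assert circumference < flat
```

The gap is microscopic at 1 km from the pole, which is exactly why the question trips people up: the "less than" answer is correct in principle but indistinguishable from "exactly" in practice.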
Wrong! You are not going east, you are going left. If you maintain an easterly bearing you are not walking in a straight line.
@@michellewentworth9862 You will still come back to the starting point, but you would have to walk the whole circumference of the earth. Similar to a circular orbit at 0 altitude.
I think multiple answers could be considered correct, which is ultimately the point. How you answer depends on definition of terms which could be varied based on interpretation. If we consider the starting point as the North Pole, can you ever really "pass" it when you are circling "around" it? You could walk forever, you could take 1 step. Even a straight line or 90 degree turn, while having specific meanings in cartesian coordinates, can have vastly different realities from cartesian intuition in other geometries. There's not a clear cut answer to the problem, partly because it's ill-defined and needs clarification. Everyone that is able to give an answer, ultimately is making some assumption about what the language in the question means. While his example is simplified, what it points out to me is that whenever we hear someone say anything, we are making an assumption about what they mean--we cannot read their mind. Language as such has inherent limitations, despite its utility. Some schools of philosophy have been fairly adept at realizing this, ancient Zen Buddhism and Taoists particularly come to mind. I'll leave you with a Koan from the Gateless Gate to ponder :)
"During his stay under Master Hyakujo, Isan was a cooking monk. As Master Hyakujo wished to send a monk to found the new monastery called the Great Mount I, Master Hyakujo told the chief monk and all other monks that he would choose the one who demonstrated himself as the best among them. Then Master Hyakujo brought out a drinking water jar, put it down and said, "You cannot call it a water jar. Then, what will you call it?" The chief monk said, "One cannot call it a wooden stick." Then, when Master Hyakujo turned to Isan, Isan kicked the jar and walked away. Master Hyakujo laughed and said, "The chief monk lost it to Isan." He made Isan the founder of the Great I-san Monastery."
@@xdds-ii4yb Curious, so why did Isan win?
@@BrightstoneGG Asking why Isan won is like asking a mirror to reflect what’s not there. You might find the answer, but only if you look the other way
Yeah, a large language model is not enough for AGI.
But a language model is necessary for that.
Both language and visual models are required.
Second,
LLMs do have some internal model of the world (it's just that those are not accurate or complete models).
It may seem it's just next-token prediction, but in order to do next-token prediction you need to build some internal model.
The linguistic representation of this world by LLMs simplifies it, which is insufficient for AGI. AGI requires a human-like body and sensors to grasp subtle phenomena that cannot be expressed in the words of this world.
“No one has seen AGI”, Sam Altman a week ago. He told Lex Fridman it does not exist. These rumors are cherry picking quotes from people.
Thanks for all your coverage of these topics, you're really saving people a lot of man hours keeping up to date on this stuff
Edit: I was corrected below on the latitudinal line
"Starting point" is where you turned 90 degrees. From the top-down perspective it looks like you made a latitudinal circle with a radius of 1km, but due to the curvature of the surface you walk a circle of slightly smaller radius than 1km, making the answer "slightly less than 2pi".
Then maybe the better GPT would be asking "define starting point".
No, the 90 degree turn is 1km away from the starting point. You will end up walking around the globe in a circle forever (assuming straight line), never reaching the starting point.
Incorrect. Latitude lines are fictional and curve drastically at the poles. So after making the 90° turn you have to walk straight, not along the latitude line; you end up going around the world.
Think about it: imagine you weren't starting at the pole.
You've made the common mistake of thinking that you stay 90° to the North Pole after walking and turning 90°. Once you walk a kilometer away and turn 90°, you walk in a straight line from there which means you're going to be walking away from the pole going around the Earth.
You start at the north pole. That's the mistake you (and ChatGPT) made. you thought the start was after we went 1km south and turned 90 degrees.
Doesn't matter how far closedAI get with their closed models. Open source will overtake them with less computation needed, using techniques like evolutionary optimization to merge existing open-source foundation models, autoformalization, chain-of-thought through prompting, tree search through prompting, etc. Too many interesting ideas and too many talented people working on those ideas to compete against all of them.
idk about that - would be nice though
@justsomejoe3632 Open source existed before Musk and will exist after he is gone. As long as it is safe, open source is the way; your comment only says how he is living in your head rent-free. Touch some grass.
@justsomejoe3632 I don't like Musk at all, he is a xenophobe, close to being a neo-nazi, but he's right on some points; nobody can trust them while M$ and sama are controlling the company.
@justsomejoe3632 Ah yes, because simply wanting open source means Musk fanboy, right?
@justsomejoe3632 I'm not a Musk fan, I'm an open-source fan. Musk is a xenophobe, but even people like him can be right on some points.
1. You verbalized the problem internally.
2. You reasoned the problem through verbalization.
3. You communicated the solution verbally.
4. You only understood what the problem was asking at all because of language.
So ChatGPT does not choose its responses from a range of choices? Is the response not chosen as a whole at all? Does ChatGPT only choose one token after the other until some "response complete" criterion is met?
Weird problem. I would walk around the earth and eventually "pass" the starting point in the sense that I'd see it on either my left or right side. I interpret "pass" as in "On my way to work, I'm passing the shopping centre on my left" - obviously I'm not driving THROUGH the shopping centre.
You can be as arbitrary as you want when speaking colloquially. I "pass" the north pole every time I walk across the room in the same way you pass the shopping center. You have to be very precise when speaking in mathematical terms, so it makes sense that pass would mean your line of travel contains the point you're referring to.
@@PianoMastR64 Yep, I guess you're right. The maximum distance of the object that is being passed from the person who is passing it is indeed arbitrary - hard to tell at which point it becomes absurd to call it "passing", which also depends on the context.
From the north pole you walk one km south, because every direction is south when walking away from the pole, therefore you are following a line of longitude. Turning left is presumably a 90 degree turn, so you are now following a line of latitude, i.e. walking around the earth. If you continue walking you will arrive back at the spot where you made your first turn, having completed a full circle. Therefore you have walked the distance of the circle, 2πr (2 x 3.14 x 1 km), which equals 6.28 km; add to that the first km walked from the pole to the point where the left turn was made, which is the start of the circle. So the answer is 1. More than 2xPi km.
Welcome to life in "Exponential Time". AI is compound change with change moving faster..... and faster..... and faster..... and faster.......
As humans, we are REALLY bad at understanding this.
Really, really bad.
Like so bad, that even people who THINK they get it, absolutely do not.
Yann LeCun’s problem is essentially describing walking around the earth in a circle with radius of length 1km. You would come back to your starting point at exactly 2pi km. When he says starting point, he’s referring to the starting point after you walked the 1km. He clarified with a picture later on
Thanks for your great content 🙌🏽 Would love some deep paper walkthroughs!
Here is a solution to Yann LeCun's question:
1. Standing on the north pole, any direction will be directly south.
2. While walking directly south and then turning 90 degrees to the left I will be walking east.
3. A naive thought here might be that I will keep moving east from now on, but that would require me to walk in a circle with a 1 km radius. It's easy to see why this is wrong if you make a mental model where you only move an inch south before turning to the left instead of 1 km.
4. As you walk now your heading will gradually turn towards south until you are headed directly south at the equator (this is because you are walking in a straight line on a sphere, and not a plane)
5. Now you will gradually be heading towards east again
6. Once you're 1km from the south pole you will be heading straight east again
7. From here on your heading will gradually turn from east to north.
8. Once you reach the equator again you will be heading directly north
9. From here on you will gradually be heading towards east, until your heading is once again directly east when you're 1km from the north pole.
10. Since we get all the way to the south pole and then are only 1km from the north pole again I don't think it's correct to say "I never came close to my starting point".
11. The correct answer is: 1. (way,way,way) more than 2xPi km
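The spiral-to-the-south-pole picture in steps 4 through 9 can be checked with Clairaut's relation, which says sin(colatitude) × sin(azimuth) is constant along a geodesic on a sphere. A quick sketch (idealizing Earth as a sphere of radius 6371 km):

```python
import math

R = 6371.0                       # idealized Earth radius, km
theta0 = 1.0 / R                 # colatitude of the turn point, radians
c = math.sin(theta0)             # Clairaut constant (azimuth = 90°, due east, at start)

# The geodesic's extreme colatitudes are where the heading is due east again,
# i.e. where sin(theta) = c: theta0 (1 km from the north pole) and
# pi - theta0 (1 km from the south pole), matching steps 6 and 9 above.
theta_far = math.pi - theta0
dist_from_south_pole = R * (math.pi - theta_far)
print(dist_from_south_pole)      # 1.0 km

# You return to the turn point only after one full great circle.
total_walk = 2 * math.pi * R
print(total_walk)                # ≈ 40030 km, vastly more than 2*pi km
```

So the straight-line interpretation gives a walk of roughly Earth's full circumference, never quite reaching either pole, which supports answer 1 (way more than 2xPi km) if "starting point" means the turn point.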
It might be easier to say you walk the entire circumference of the earth and then you come back to where you started. I would give the same answer as you in that regard. Technically, however, the earth is not really a sphere, or even an ellipsoid, so I don't think you would really come back to the same point. (And, thinking less about geometry, of course you would run into the ocean, so you couldn't really come back to where you started.)
For me the correct answer is 4 if the starting point is the north pole: you'll end up on a circle whose radius is slightly less than 1 km (not 1 km, since you walked 1 km along an arc, not a straight line).
If the starting point is after the 90° turn, then the answer is 1: back to where you were after one revolution.
I think I came to the same conclusion, except I said you would be going on a downward spiral to the equator. You actually solved what happens after that: you would reach the south pole, then go back north... rinse and repeat.
No… 4 is still correct. You are asked to keep walking until you pass your starting point. You never do, so you keep walking forever. Therefore strictly speaking you are asked to measure the immeasurable. 1 is the trivial answer, and uninteresting, 2. is incorrect, as is 3. 4 is correct and conveys the essential nature of the problem not only in space but in time.
It also supposes you can keep walking for as long as it is needed to do the impossible. What a dreary immortality.
Is the starting point the North Pole or the spot where I turned east? Do I walk east following a line of latitude, or do I not rotate and walk the circumference of the earth? If this was my test question I would be asking for clarification.
I think there's a fundamental misunderstanding when someone says "All LLMs do is predict the next word". Take "The cat sat on the ___". It's very easy to predict "mat". But what about the word after "mat"? No predictive algorithm can have any idea what comes after "mat", any more than any human would. The only way you can predict more than a handful of words is if you UNDERSTAND what is being asked, let alone compose an entire book or write pages and pages of text on a given subject. That is impossible if you are simply predicting the next word in a sentence. However LLMs CAN do this, because they genuinely do "understand". It would be impossible for them to do the things they do without understanding. Understanding of the text is clearly an emergent property of vast neural networks, just as it is a property of our own, human, vast neural nets.
They clearly do more than just predict the next word, otherwise they would behave like your phone's next-word prediction and end up with complete nonsense.
@@bloodust7356 Agreed. Sam Altman himself said only yesterday, "these large language models already understand a lot more than people think they understand".
Even if it's impressive, I don't think we can go and say they "understand".
The weights in the neural network don't tell them WHAT token to choose next, but instead HOW to choose the next one. LLMs are really just taught how to adapt to the system prompt + all the conversation + current user prompt (+ what the LLM has already started to write) and predict the next token from there.
With "think step by step" prompt, the LLM is merely writing the starting point of a well-written reasoning and iterates on that in such a manner that it's consistent with what the user expects.
Even if there's a behavior akin to understanding that emerges and impresses users, LLMs are REALLY just predicting the next token at the end of the day. They're simply very well-trained to do so adaptively.
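The "really just predicting the next token" mechanism is easy to demonstrate with a toy model. A minimal sketch (a made-up bigram table, nothing like a real LLM's learned weights): the model only ever scores the single next token, yet a whole sentence emerges because each prediction conditions on what has been generated so far.

```python
# Toy next-token table: P(next | previous token), invented for illustration.
bigrams = {
    "<s>": {"the": 1.0},
    "the": {"cat": 0.7, "mat": 0.3},
    "cat": {"sat": 1.0},
    "sat": {"on": 1.0},
    "on":  {"a": 1.0},
    "a":   {"mat": 1.0},
    "mat": {"</s>": 1.0},
}

tokens = ["<s>"]
while tokens[-1] != "</s>" and len(tokens) < 20:
    # One step = one next-token prediction conditioned on the context so far
    # (here just the last token; a real LLM conditions on the whole context).
    nxt = max(bigrams[tokens[-1]], key=bigrams[tokens[-1]].get)
    tokens.append(nxt)

print(" ".join(tokens[1:-1]))   # the cat sat on a mat
```

Of course, the debate above is about whether the function computing those next-token scores amounts to "understanding"; the loop itself is trivially just one-token-at-a-time either way.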
On the contrary: predicting the next token is all biological nervous systems do, too.
Any apparent differences here are imaginary; it's that simple.
@@BackTiVi There are those who imagine that LLMs cannot possibly "think" because they are not structured the same way the human brain is. That is a flawed argument to me, when we don't even know what's going on inside, nor how human thinking works, beyond it being the result of a myriad of electrical and chemical impulses propagating through a vastly complex network of neurons and synapses. It seems perfectly reasonable that "thinking" is an emergent property of vast neural networks, biological or digital. Sure, the processes are different and the end results not exactly the same. They sure LOOK like they think, and where is the evidence that they do not? There isn't any.
Thanks for the update, keep them coming!
It sounds like EBMs work much like standard neural models. Each neuron has weights assigned to it, and the output is relative to the weighted paths across the network. Backpropagation adjusts these weights, just like the EBM energies are adjusted to remove error. It would be interesting to see the topology of the EBM model.
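The "score whole candidates holistically" idea behind EBMs can be sketched in a few lines. This is a toy stand-in, not how a real EBM computes energy: the word-overlap heuristic below is just a placeholder for a learned scorer, and the function names are mine.

```python
def energy(prompt, candidate):
    """Toy energy: lower = more compatible with the prompt.
    A real EBM would use a learned network here; this made-up
    word-overlap heuristic just stands in for it."""
    p = set(prompt.lower().split())
    c = set(candidate.lower().split())
    return -len(p & c)  # more shared words -> lower energy

def pick_response(prompt, candidates):
    # EBM-style selection: evaluate complete candidate responses
    # and keep the one with minimum energy, instead of committing
    # to an answer one token at a time.
    return min(candidates, key=lambda c: energy(prompt, c))

candidates = [
    "bananas are yellow",
    "the cat sat on the mat",
    "quantum flux capacitor",
]
print(pick_response("where did the cat sit", candidates))
# -> "the cat sat on the mat"
```

The point of the sketch is the shape of the computation: candidates in, one scalar energy per candidate out, argmin at the end. Training (adjusting the energies, as the comment above says) is the part this leaves out entirely.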
The puzzle doesn't specify that you have to walk in a straight line on the last part, but I do want to point out that you can't walk in a straight line while following latitudinal lines on a sphere except at the equator, just the same as a spacecraft in orbit.
Answer #4 is still correct, just to clarify.
Was the starting point the North Pole, or the point after you walked the first mile? It seems ambiguous. If your starting point was after the first mile, I would say that #3 is the correct answer -- not quite 2*pi mi. If we started on a flat surface it would be exactly 2*pi, but since we started on a sphere, the arc makes the "radius" slightly smaller.
@3Mores This is the conclusion I came to. Classic human overthinking.
@@3Mores Good point. My line of thinking was that when they said to turn 90 degrees and walk, they meant follow a latitudinal line 1 mi from the North Pole. This assumption is based on the available answers they provided. Essentially, this would be walking in a circle around the North Pole - a circle with a 1 mi radius. As you can imagine, walking in a circle means you have to be constantly turning.
In reality, if you walk 1 mi from the North Pole, turn 90 degrees, and walk in a straight line (relative to the surface of a sphere), you would pass by the South Pole (1 mile away from it) before returning to 1 mile away from the North Pole.
@@3Mores
1. The question doesn't say anything about miles.
2. If it was a flat surface, you wouldn't pass your starting point again as you would just keep moving away in a straight line.
3. If you consider the North Pole as your starting point, you wouldn't pass your starting point again as you would be circling the planet in an orbit tilted away from it.
4. If you consider your starting point 1 km away from the North Pole, the answer is "more than 2xPi km", because you would still be circling the entire planet rather than walking around in a 1km radius.
Imagine holding a ball in your hand. Mark a point on it as a reference North Pole. Starting there, move 1 inch away from this point along the surface of the ball in any direction. Make a 90 degree turn. Now, maintaining your distance from the north pole, continue along the surface of the ball.
Maintaining your distance from the north pole is not the same as travelling in a straight line. I went down a whole rabbit hole about how circumnavigating the globe requires passing through the antipode of the starting point -- which is if you were to draw a line from that starting point through the center of the earth all the way to the surface on the other side. In other words, if you walked in an actually straight line, one foot in front of the other without veering to the left or right, you would eventually cross the equator and reach the point on the other side of the earth before you returned to your original position. Haha.
@@joelface My POV is that it would be similar to a straight line for orbiting satellites. They are not turning; rather, they are just moving straight on a curved surface. Might be just different interpretations of what the question is asking.
@@emolasher I don't think satellites can go "maintaining their distance from the North pole" unless they're orbiting exactly above the equator. Source: I played Kerbal Space Program.
Elaboration: essentially, the center of Earth has to be on the plane of the satellite's orbit.
@@HanakoSeishin let's be strict with the definition of straight line. The surface of the earth is curved, there are hills, etc. So to even attempt the thought experiment you have to make certain adjustments. My attempt was to think of a plane bisecting the earth at a 90 degree angle and then walking the surface of the earth at that intersection. You could think about it like a corkscrew, a beam of light going out into space from the starting point. I'm sure there are other examples but I like mine
@@emolasher if you bisect a sphere at a 90 degree angle, the bisecting plane will go through its center
What are you using to place the green circle around your cursor? Readers want to know! Thank you!
00:01 Q* could be AGI developed internally at OpenAI
02:05 Q* could potentially lead to AGI breakthrough
04:12 Scale alone is not the key to AGI development.
06:24 AGI requires broader problem understanding
08:24 Q* is a dialogue system by OpenAI for dialogue generation with an energy-based model.
10:29 Q* aims for internal deliberation process similar to human complex problem-solving
12:38 Q* evaluates potential responses holistically
14:44 Q* approach leverages EBMs for dialogue generation
16:52 Q* is a scientific approach teaching AI to think humanlike about complex things.
18:49 Quiet-STaR technique teaches language models how to reason internally
20:42 Improvements in language model performance through thoughtful design
If the starting point is when turning 90 degrees, the answer is 1. You will be walking around a circle, and to pass the starting point you walk more than 2xpi km - one step more at least.
Right. You'd be walking around the Earth. If I set you on the equator, pointed you east, and told you to walk in a straight line, it's clear that you're walking 2 pi times the radius of the Earth. But since a sphere is symmetrical, why should that change if I point you east at the North Pole?
You end up slowly drifting south, till you are 1 km away from the South Pole, then walking all the way up to 1 km away from the North Pole.
East and west aren't axes like we typically think of them, because they loop around, and a straight line on a sphere won't project neatly onto an X-Y plane. The closer you are to the North Pole, the more a step east (as in, perpendicular to a line running from the North Pole through you to the South Pole) points south. But the same is true going west. Going exactly east and then going exactly west don't undo each other.
Berman's videos are essential to our on-going understanding of AI. Thanks, MB.
A relationship between "economically valuable tasks" and "intelligence" in determining AGI is a value judgement. It does not define intelligence, though from a business or economic standpoint, it is a very valuable asset.
I think the correct answer is actually more than 2pi. You start from the North Pole; walking in any direction for 1 km will put you on the curvature of the Earth (let's say it's a perfect sphere, which is not the case in reality). Now you turn left and walk until you come back to your initial position. This is possible because walking in a straight line on a sphere will draw a circle. In our case we draw a circle of radius 1 km. So we walked 1 km + 2pi*1 km = (1 + 2pi) km.
Wrong! You are not going east, you are going left. If you maintain an easterly bearing you are not walking in a straight line. It is not walking in a circle of 1 km radius.
@@michellewentworth9862 you didn't even read his comment, did you?
I thought the starting point is when you turn 90 degrees, which makes chatgpt correct
Exactly my thought too. The whole point of this question is to realize that the radius of the circle you make must be less than 1 km, since the 1 km was travelled along the curved surface.
I think the starting point is the point where you started (the north pole)
It's a classic riddle. The starting point is supposed to be the one after walking 1 km. Otherwise it would be kinda pointless anyway.
If you make a left turn and continue walking straight, you’re going to have to go around the entire earth 😂
@@gonzalezm244 you’ll just end up walking around the pole.
The first rule of Q-star is that we don't talk about Q-star
Basically Q*:
- trains on word tokens AND sentence tokens for context
- the model tries to predict a sentence token, THEN chooses word tokens.
The sentence tokens allow for a higher level of reasoning (topic level) and constrain the distribution of words relevant to the predicted topic.
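If I understand the speculation above, a toy two-stage decoder might look something like this. To be clear, everything here is invented for illustration (the topic vocabulary, the scores, the function names), and nobody outside OpenAI knows whether Q* works this way:

```python
import random

# Hypothetical two-stage decoding as the comment speculates:
# stage 1 picks a high-level "sentence/topic token", stage 2 then
# restricts the word vocabulary to words compatible with that topic.
TOPIC_VOCAB = {
    "<math>": ["sum", "prime", "integer", "proof"],
    "<cooking>": ["simmer", "onion", "saute", "season"],
}

def decode(topic_scores, n_words=3, seed=0):
    rng = random.Random(seed)
    topic = max(topic_scores, key=topic_scores.get)  # stage 1: topic token
    words = TOPIC_VOCAB[topic]                       # stage 2: constrained words
    return topic, [rng.choice(words) for _ in range(n_words)]

topic, words = decode({"<math>": 0.9, "<cooking>": 0.1})
print(topic, words)  # topic is "<math>"; words drawn only from the math vocab
```

The design point the comment is making: once the topic token is fixed, the word-level distribution is constrained, so the model can "plan" at a coarser granularity before committing to individual words.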
Thank you for not doing the obnoxious camera movement, twitching, and flashy BS to "engage" the goldfish brain. I can't watch other people's videos because of that ridiculous engagement tactic. So, thanks again for being an adult and teaching the right way.
To tell the truth, I wish for AGI/ASI to exist, no matter the consequences... is that wrong?
What will you be doing once that happens? Leisure isn't a good thing. I have been on welfare for a long time and it's miserable. Work is a necessary part of the human experience. We need work.
@@evenhessen3452 Then go to the gym and work on yourself.
No, that's not wrong. At some point, we won't have the technology (w/o AI) to develop the further technology and research needed to avoid an extinction event. The survival of our species depends on it long term. Imagine an asteroid coming for us in 100 years, or even further ahead in time, the sun going nova, or the planet becoming unlivable and us having to relocate. If we don't have the technology to combat that, our species goes extinct.
AI is mandatory to secure the survival of our species long-term. Most people don't realize how unforgiving space is, how little it takes to make us extinct, technology is our number one tool to avoid death of our species.
Edit: Most opposition comes from personal, selfish reasons, like "I don't want to lose my job". They care more about losing their job than about their offspring dying in 500 years. For the future of our children, for the future of humanity, AI must be developed.
Good question, Space Ghost!
Yes. It’s wrong.
I wouldn’t say we need work, more that we need something to put our passion into. I’d be more than happy to not work manual labor and instead focus on music, gardening, pottery, etc. there’s so much more to life than a job
Current models learn to mimic human reasoning from large amounts of previous text.
An AGI system must be able to learn to reason on its own, from scratch - so it is an architecture problem.
I see the biggest issue with current models in their inability to evaluate their own model structure and change it when issues arise in its coherency - therefore, a lack of ability to learn.
These are static models. For AGI to exist, it must be a dynamic system that can update itself, with all the issues that come with that ability.
I think it would be funny if AGI turned out to be morally better than us.
Honestly though, with how petty and selfish we can be towards one another, I don't see much it could do which could be considered undeniably worse than what we already do to ourselves or to our planet.
Is it SHOCKING though?
This isn't Wes Roths channel. ;)
Should I have put shocking????
@@matthew_berman NO! We're razzing those channels that put "shocked" or "stunned" in all their titles. Please don't follow that irritating trend. I've actually stopped following Wes due to his clickbait tactics. Rely on your talent, not marketing, to get views.
@@matthew_berman no man, cause the info in the comment section gets lost first & foremost...
@@matthew_berman absolutely not, it was a dig at other channels Matthew. Perhaps SPECULATION would be appropriate though.
If you start at the North Pole, walk any distance, turn left, and walk straight (due west), you will walk around the world on the line of latitude, and eventually reach your starting point. The Earth is round.
Starting at the pole(s) is the key
Good luck finding that north pole
WRONG! The earth is a sphere
@@dreadbull5039 WRONG! The earth is an oblate spheroid. This is fun!
First, no: you reach the point where you turned left again, not your starting point. Also, the earth is far from a smooth sphere, and you would just drown after a while.
I think one of the secrets of OpenAI's next LLM might be Sora. As humans, many times when we think about something, we picture it in our minds. The OpenAI team told us Sora was not a text-to-video model, but a world simulator. I don't think this company is interested in anything other than AGI. My bet is that Sora is the imagination in this big brain, and there will be other parts as well.
Tried with Copilot: (got it right)
You
Imagine standing at the North Pole of the Earth.
Walk in any direction, in a straight line, for 1 km.
Now turn 90 degrees to the left.
Walk for as long as it takes to pass your original starting point.
Have you walked:
1. More than 2xPi km
2. Exactly 2xPi km
3. Less than 2xPi km
4. I never came close to my starting point.
Copilot
The answer is 4. I never came close to my starting point.
Here’s why:
When you start at the North Pole and walk 1 km in any direction, you are essentially walking along a line of longitude (also known as a meridian) and moving towards the equator.
When you turn 90 degrees to the left, you are now walking along a line of latitude (also known as a parallel). This line of latitude is a circle around the Earth that is 1 km away from the North Pole.
No matter how far you walk along this line of latitude, you will always be 1 km away from the North Pole. You will never pass your original starting point at the North Pole. Therefore, you never come close to your starting point.
This is a fun way to think about the geometry of spheres and the Earth! 🌍
If this channel is going down the AGI rabbit hole I am going to unsubscribe. This is simply speculation. Please stick to the practical guides, that is what makes this channel valuable.
This is part of AI and will be a part of our practical world very soon; how is this an issue?
AGI is the gold that all companies in this space are digging for. If you don't like it you should unsubscribe from this world.
Perhaps the military has had AGI for years 😅 sorry speculating again 😂
If the channel goes up with AGI I’ll finally sub ;)
Listen to the recent NVidia event by their CEO about Blackwell and the upcoming AI technology. It's 2 hours long but both exciting and terrifying the entire way through.
I don't see how you could see this and not believe AGI is something that's coming. The acceleration of compute power alone is scary.
I couldn’t figure out the “spatial reasoning problem” because I didn’t have the necessary language to even understand it
This is insane. We are nowhere close to solving the AI misalignment issue or to having measures in place to prevent an AGI to rapidly self-optimize to incredible levels of intelligence. This is the kind of nightmare scenario AI safety experts have been warning about for years. If an AGI is released into the world now there will be no stopping it.
If you give an ambiguously worded question to an LLM designed to give the type of answer the user most likely expects, of course the answer won't be perfect. "Starting point" is ambiguous because you clearly won't reach the pole again, so the user may have meant the point at which you turned left. Whether the user is meant to walk due east or in a straight line is ambiguous, and leads to wildly different results: one a circle just under 2pi km, one a circle that goes all the way around the earth and passes within 1 km of the south pole. "Pass" is ambiguous, because after having circumnavigated the globe, it is reasonable to say you passed the north pole at a distance of 1km. There's also precession, the fact the earth is moving through space, and so on. The only answer I don't see a justification for is "exactly 2pi km."
A much, much better LLM prompt would be to give it this question and ask it to explain why the question is ambiguous. Or, ask it the original question but also have it justify its answer. Or, ask it to justify each of the answers that it believes can be rationally justified. But if you instead ask it to justify each answer, it will do so and some of them may be nonsense. It will attempt anyway, because it is not designed to give you correct answers, it is designed to give the answers it thinks you want. Just like a candidate at a job interview. A good habit in general when asking an LLM any multiple choice question is to append "Justify your answer" to the end of your prompt. Otherwise you have no way of knowing whether it thought about the question more deeply than you, or less.
Jeremy Howard commented on the Q* EBM tweet. He showed that it’s extremely similar to the response of Claude to a prompt asking for a detailed explanation of Yann LeCun’s EBM project.
Great update Matthew. Thank you.
You pasted it into ChatGPT. Here is the result of my dialog with GPT-4:
You
Imagine standing at the North Pole of the Earth.
Walk in any direction, in a straight line, for 1 km.
Now turn 90 degrees to the left.
Walk for as long as it takes to pass your starting point.
Have you walked:
1. More than 2xPi km
2. Exactly 2xPi km
3. Less than 2xPi km
4. I never came close to my starting point.
ChatGPT
To understand this situation, let's break down the steps and apply some geometric reasoning:
1. Starting at the North Pole and walking 1 km in any direction will take you to a point that is 1 km away from the North Pole on the surface of the Earth. This forms a small circle around the North Pole with a radius of 1 km.
2. When you turn 90 degrees to the left, you'll be walking perpendicular to the line from the North Pole to your current position. This means you'll be walking along a circle of latitude (a parallel), maintaining a constant distance of 1 km from the North Pole.
3. As you walk along this circle of latitude, you're essentially tracing out a circle with a radius of 1 km.
The circumference of a circle is given by the formula C = 2πr, where r is the radius of the circle. In this case, r = 1 km, so the circumference C = 2π km.
4. You will pass your starting point after walking the full circumference of this circle, which is 2π km.
Therefore, the correct answer is 2. Exactly 2xPi km.
You
Where was your starting point?
ChatGPT
Your starting point was 1 km south of the North Pole. This is because you initially started at the North Pole, walked 1 km in a straight line in any direction (which doesn't matter at the North Pole since all directions lead south), and then you made a 90-degree left turn to walk along a circle of latitude. This circular path kept you 1 km away from the North Pole at all times, making your starting point for the circular walk 1 km south of the North Pole.
You
Why do you believe that the "starting point" referenced in the statement I made, "walk as long as it takes to pass your starting point," refers to the point where you turned 90 degrees to the left?
ChatGPT
I see your point. The "starting point" in the statement "walk as long as it takes to pass your starting point" can indeed be interpreted as the point where you initially began the journey, which is the North Pole, rather than the point where you turned 90 degrees to the left. If we consider the starting point to be the North Pole, the situation changes:
1. You start at the North Pole and walk 1 km south.
2. Then, you turn 90 degrees to the left. At this latitude (close to the North Pole), turning left essentially means you're now moving tangentially to a circle of latitude.
3. As you continue to walk, the Earth curves beneath you, but given the vast scale of the Earth compared to a 1 km distance from the North Pole, you would not make a significant enough detour to return to the North Pole. The curvature of the Earth on such a small scale (1 km from the North Pole) is minimal, so your path would not curve back to the North Pole.
Considering this, the most accurate interpretation would be option 4, "I never came close to my starting point," if by "starting point" we mean the North Pole. This interpretation relies on the Earth's curvature being negligible over such a short distance from the North Pole, meaning you wouldn't loop back to your original starting point at the North Pole by walking in a straight line after the turn.
Where are all the links you said you were going to drop at the bottom?
It would be convenient if the questions you ask the different models were pasted into the description, because it's often interesting to see how the responses you get compare to the ones I get with whichever model I'm currently playing with.
Matthew, this isn't just one of your best videos, it's one of the best I've ever seen.
Kudos!
And keep the conversation on AGI going.
I love discovering this stuff with you Mr Berman.
That's also what I thought might have caused Altman's abrupt dismissal from OpenAI: that it was done by the god they had created, which had gone rogue and wanted to troll Sam. Was it Q*? Perhaps...
I like how they come closer and closer to Julian Jaynes' findings from 1976, where he explains consciousness as an analog of the real world. Imagine from where you open your bathroom door - you leverage your mind-space to work out this question without using any language at all. When you can't figure out an answer to a problem, you break the problem down into different structures - which is also why some people answer similar questions when they don't know the answer to the proposed one. Your mind is like a black box, returning answers to you magically or after a eureka moment. I'm eager to see how these concepts work their way into technology.
I got it wrong. You start from the North Pole exactly, then walk in a straight line, then turn 90°.
Intuitively one would think that the answer is 2*Pi (or 360 degrees): circumnavigating the North Pole and arriving at the same point.
However, as the Earth is not perfectly round, and also rotates and is tilted, you would walk forever in a downward spiral to the South Pole, then up again to the north in a loop. After some number of loops, you would eventually arrive at the same point as well.
Well, no, because you are moving with the Earth relative to its spin. It's irrelevant whether it's rotating if you are walking on its surface.
Matthew Berman, This is fantastic! I subscribed right away!
I've become accustomed to getting some kind of huge announcement almost daily. It's been mostly quiet since Lex vs Sam. I feel like I'm in a mini AI winter.. 😂
Regarding LeCun's puzzle, considering the answers offered, I think we are expected to define "starting point" not as the North Pole (where we really started, but will obviously never return to, for sure), but as the point where we turned left. Picturing where we are after we've turned left, a natural way, picturing the globe, to "walk for as long as it takes to pass your starting point" would be to now walk always east (the direction you're now facing) until you complete a roughly 2km-wide circle centered at the pole. To most people, always walking east sounds like walking in a straight line, because when not very near the pole, walking exactly East is super-close to a straight-line path. On a flat Earth (and assuming "distance walked" doesn't take into account the ice pack not being perfectly flat), that circle would be precisely 2*pi km in circumference, but since Earth is not flat, the flat-travel distance would follow a circle very slightly less than 2*pi km in circumference. It is another unstated *assumption,* though, that you will walk continuously east after turning left. Another way to read the problem is that you will walk "straight" from that point, which on the round Earth means that you follow a great-circle path defined as the intersection of a plane with the Earth where that plane also intersects the center of the Earth. This is the closest you can get to walking "straight" on a curved Earth, and the path would gradually veer from being exactly east, at the start, to almost exactly south, at the equator, to exactly east 1km from the South Pole, to almost exactly north when you reach the equator, again, to exactly east, again, when you get back where you started (where you first turned 90 degrees left, that is) in about 40000 Km (after years of walking) (answer 1, technically, as 40000km is *way* more than 2*pi km), the circumference of the Earth, although actually *following* that path would require that you walk most of the way on water! 
Taking that practical impossibility of walking on water into account, the solution would be answer 4, because you'll stop walking at the edge of the ice pack (or when you reach a coast on your path, if the ice pack reaches land), or walk into the ocean and die of hypothermia or drowning, and therefore can never return to your starting point without setting the instructions aside.
Great video thanks Matthew!
If you walk around the earth one km offset from the pole to pole "equator", then you will pass your starting point after circumnavigating the globe once.
That's assuming there's land underneath your feet for the whole way around the globe.
In reality, you will get to the sea before you can ever get back to your starting position to be able to pass it.
Great video!
I used my GPT for logical problem solving:
Imagine standing on the north pole.
You walk in any direction in a straight line for 1 km.
Turn 90 degrees to the left
Walk as long as it takes to get to your starting point (the starting point is the north pole).
Did you walk:
1. more than 2xpi Km
2. exactly 2xpi KM
3. less than 2xpi KM
4. You do not get to the starting point (north pole).
ChatGPT
Engaging directly with the abstract and speculative nature of your scenario, let's analyze the situation from a geometric perspective on the surface of a sphere, specifically Earth.
You start at the North Pole.
You walk 1 km south (in any direction, since all directions from the North Pole lead south), moving along a meridian.
Turning 90 degrees to your left puts you on a path that is perpendicular to the meridian you were just on. Given the Earth's geometry, this path is a circle of latitude.
The circumference of a circle on the Earth's surface can be calculated using the formula C = 2πr, where r is the radius of the circle. However, the radius of the circle in this case is not the Earth's radius but a smaller value, because you are now 1 km away from the North Pole. This circle's radius is effectively the distance from the Earth's axis, which would be less than the Earth's radius.
Given that you've walked 1 km south from the North Pole, the radius of your circular path around the pole is 1 km. The circumference of this circle, which represents the distance you'd walk to return to your starting longitude (not back to the North Pole), is C = 2π(1 km) = 2π km.
To address the options directly:
More than 2π km: This does not apply, as the calculated circumference is exactly 2π km.
Exactly 2π km: This is the distance you walk in the circular path around the pole, not the total distance walked to return to the North Pole.
Less than 2π km: This does not apply to the distance around the circle.
You do not get to the starting point (the North Pole): This is the correct answer in the context of the steps you described. You end up walking a circular path 1 km away from the North Pole. To return to the North Pole, you would need to reverse your initial step by walking 1 km north, which was not described in the steps you provided.
Thus, the answer, given the specific instructions, is 4. You do not get to the starting point (the North Pole), because you will be walking in a circle 1 km away from the North Pole, not directly towards it.
Looks like it can do it to me.
I wonder how we can apply this to NP-hard problems (TSP, for example).
3 months ago: Q-Star figured out a way to break encryption. Everything suggested that giving it more compute time to think, instead of answering fast, is what's behind Q-Star. "Think before you speak" is the essence. It was being discussed how math is getting better because of this simple approach of not responding immediately like all LLMs do.
Absolutely appreciate your perspective, but I think there's a bit of a mix-up here. I humbly think that Q* and EBMs are quite distinct. Q* tackles specific problems in AI, focusing on optimal action selection, while EBMs are about modeling data distributions through energy minimization.
The activation function in neural networks, although crucial, isn't the same as the energy concept in EBMs.
They serve different purposes in their respective areas. Both are fascinating in their own right, and it's crucial to appreciate their unique contributions to AI! 😊
Excellent video. I think they are computing a scalar (energy) from the tensor/matrix by matrix multiplying some conjugate matrix (Q*). This is precisely the notation for that and it leads to a scalar.
Happy Birthday bro
Whether or not you pass the starting point depends on how you define the word "pass". In other words, how close do you need to be in order for it to be considered passing by? Who's to say that if you walk level with your starting point but two miles offset, that's not considered passing by? It's a subjective question. If an airplane flies over a city, is that considered passing by the city? Or does it need to actually land in the city to be considered passing by? I think most would agree that there is no need to be inches away to be considered passing by.
9:26 Depends on what you mean by starting point - the North Pole where you started, or the point 1 km offset from the North Pole. If the North Pole, I would have assumed the person asking the question would have said "pass the North Pole". If the 1 km offset point, then you will reach it after walking an entire great circle of the Earth (a geodesic), which is greater than 2*pi km because the Earth's circumference is much larger than 6 km. If you somehow walk exactly around the line of latitude, then to get there again will take less than 2*pi km.
Nice reporting and explanation. Well done Matty Robo Reports.
If you ask ChatGPT 4 that question about spatial reasoning and ask it to explain more details about its thoughts, it says that there is a misunderstanding with the term "starting point". It thinks (rightfully so) that the "starting point" is referring to where you landed after walking 1km in any direction because of the way the sentence is structured. You would need to reword it without the ambiguity in order to get it to answer the way YOU are interpreting it.
The question of what I am and am not allowed to do needs a linguistic analysis of the given description of the setting as well.
Do I have to walk in the direction I am looking after turning 90°? Then I would never arrive at my starting point. Since it doesn't tell me I am not allowed to turn ANOTHER 90°, I would do that and walk just 1 km back. Thereby, I have walked less than 2x pi km, and in this sense GPT was right.
How does thinking ahead affect latency and compute power used? Totally agree that the method needs to change, but is this it? Feels like better than current LLMs but still an architecture work around.
The North Pole problem is worded horribly.
1) It doesn't tell you to walk straight after turning left. So yes, you can turn left and then walk to the North Pole in 1 km (2 km total), in which case ChatGPT gave the right answer.
2) The 4th answer says "I never came close", but it doesn't define "close". 1 km away can be close. It's all relative.
Based on this description, "Q*" seems like a reference to the famous A* pathfinding solution. If you imagine the possibility space of words in speech as a map, assembling a coherent text passage is analogous to finding an optimal path through that map, which is exactly what A* does. This can be compared to more primitive navigation solutions which only consider the options for the next move one step at a time without considering the longer journey (analogous to assembling speech one word at a time without considering the passage as a whole).
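To make the analogy concrete, here's minimal textbook A* on a toy grid. The grid, walls, and Manhattan heuristic are made up for the example; this is the classic pathfinding algorithm being referenced, not anything from the Q* leak itself:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Classic A*: always expand the node minimizing g (cost so far)
    plus h (estimated cost to goal), so the whole journey is considered
    rather than just the next step."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # goal unreachable

# A 4x4 grid with a wall blocking the direct route
WALLS = {(1, 0), (1, 1), (1, 2)}

def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        q = (x + dx, y + dy)
        if 0 <= q[0] <= 3 and 0 <= q[1] <= 3 and q not in WALLS:
            yield q, 1  # unit cost per step

def manhattan(p, goal=(3, 0)):
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

path = a_star((0, 0), (3, 0), grid_neighbors, manhattan)
print(path)
```

The greedy alternative (always step toward the goal) walks straight into the wall; A* routes around it optimally because the priority queue accounts for the full path cost. That's the analogy the comment above is drawing to assembling a passage of text as a whole rather than one word at a time.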
The answer is exactly 2pi km. By "your starting point" he meant the point at which you turned 90 degrees. This means you walk in a circle with a radius of 1 km, which therefore has a circumference of 2pi km.
It's less than 2pi km because the 1 km was walked on a curve (the Earth is a sphere), so technically the actual radius is less than 1 km. ChatGPT was correct.
Perfect start of an "Are you smarter than an LLM" show. The answer is #1, more than 2xPi km, since you walked past your starting point. Think of the 1 km walk from the North Pole as the radius of the circle you end up walking. The formula for the circumference of a circle from its radius is: C = 2*pi*r
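The arithmetic behind that formula, treating the 1 km walk as the radius and ignoring the Earth's curvature for the moment:

```python
import math

# Idealized version: the 1 km walk from the pole is the radius of the
# circle of latitude you then walk around. One full lap is C = 2*pi*r.
r = 1.0                           # km, the initial straight-line walk
circumference = 2 * math.pi * r
print(f"{circumference:.4f} km")  # ≈ 6.2832 km
```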
8:38 The answer is 3.
When you walk in a straight line away from the North Pole and then turn 90 degrees and keep walking straight, the spherical shape of the Earth forces you to make a circle and come back to the point where you had initially turned left.
This circle is at a lower latitude than your starting point at the North Pole. Its radius is one of the legs of a right triangle with its three vertices being:
A. Your initial position at the North Pole
B. A vertex directly below vertex A, at the same latitude as the perimeter of the circle
C. A vertex right on the perimeter of the circle
The side BC is the radius of the circle. The hypotenuse, AC, has a length that is less than 1 km because it shares its endpoints with the arc of length 1 km that you made earlier by your initial walk.
A chord that shares its endpoints with an arc is always shorter than the arc.
Since the hypotenuse is less than 1 km in length, the radius of the circle, which is the leg BC of the right triangle ABC, must also be less than 1 km in length.
This is because the legs of a right triangle are always shorter than the hypotenuse.
Since the radius of the circle is less than 1 km, it follows that its circumference is less than 2xPi km.
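The argument above can be checked numerically. Assuming a spherical Earth with a mean radius of 6371 km, the circle of latitude reached after a 1 km arc from the pole has Euclidean radius R·sin(s/R), which is indeed slightly less than 1 km:

```python
import math

R = 6371.0  # km, assumed mean Earth radius
s = 1.0     # km, arc length walked from the pole

# Euclidean radius of the circle of latitude you end up walking.
# Since sin(x) < x for x > 0, this is strictly less than the 1 km arc.
r = R * math.sin(s / R)
circumference = 2 * math.pi * r

print(r < s)                        # True: radius shorter than the arc
print(circumference < 2 * math.pi)  # True: just under 2*pi km
```

The shortfall is on the order of nanometers for a 1 km walk, which is why "exactly 2π km" and "less than 2π km" are both defensible answers depending on how literally you model the sphere.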
Q* sounds like a form of prompting itself with different variations of the same prompt and iterating on that until the energy of the system is minimized, the "energy" being the level of entropy between all of the prompt/response pairs. That's my guess based on all of this. Which goes back to the issue of stagnation of knowledge, or "novel thought", on a long enough timeline.
hey, great video, keep it up!!
Weird... I've been learning C# and there is an algorithm called A* that is used for pathfinding. I wonder if they are linked?
That's great, let's go babyyyyy. Also, Happy Birthday Matthew Berman. I am a daily watcher of your videos from Rajasthan; keep that quality content coming.
Well, I’ve been thinking a lot about how trait Neuroticism is the missing dimension in LLMs.
Anxiety is basically a measure of expectation of entropy and it works as a signal to the most relevant (though not necessarily correct) next step.
We don’t really think about the next word we say, that sort of comes automatically, but we do have a feeling or a sense that we’re about to say something stupid before we say it.
It sort of helps us find the best answer to a situation even when we have no actual knowledge of what to do.
A basis for intuition, one might say.
So keeping this in mind, the leak is basically saying that Q* has this function of judging possible courses of action before actually thinking through it in full.
I just keep thinking about how different this would be from AGI if you power it up with quantum computing.
I originally thought of A* when I heard the name Q*. A* is a pathfinding (or motion planning, if you will) algorithm that uses a tree-search-ish approach (actually a kind of graph search, but it builds a tree) to find the shortest path between two points. This is also similar to how chess engines plan their moves, and it makes sense to me that you could get something more intelligent if you used it to explore different branches in a tree of different responses produced by an LLM. I'm not really sure how you would use it in practice, though, and it seems very expensive to me (although, if I recall correctly, the first leak also mentioned that the method was very computationally demanding).
About the architecture, I’m not sure that anything in this leak says that the architecture has to be different. That they use an energy-based model (assuming they do) could basically just mean that they treat the output of the network differently, and to me at least it doesn’t say anything about the layers and connections between the neurons used internally in the network (i.e. its architecture).
"No particular comment on that unfortunate leak." sound like a confirmation of Q* to me.
And during the Lex Fridman interview a couple of days ago, when Lex asked about Q*, Sam said, "We are not yet ready to talk about that." Then Lex said, "But that means there is something to talk about."
Years ago when I was in school, AGI meant an AI that could "think like a human", including in that definition the ability for the AI to: ponder when not being asked questions, constantly process data through its sensory and input channels and juxtapose that data against what it has already learned, formulate questions, seek out answers to those questions in a methodical way, self-train, form strong opinions, and become a thinking entity..... We could factor in "consciousness" and "self-awareness", but I'll concede those two and just focus on the first set: ponder on existing knowledge, formulate questions, use the answers to correlate and weigh what it knows, self-train.... So here's the problem. As the gold rush toward AGI happens, Silicon Valley has collectively abandoned all past definitions of AGI. Now there is no stable definition. That's done manipulatively by several stakeholders so they can clear the runway to claim an AGI breakthrough at some point and watch the stock price jump. There are a lot of great things happening, caked in shit.
Sounds like chaining LLMs to some kind of "travelling salesman" reasoning, where the mechanism evaluates different possible paths to solving the problem and picks the one that seems "most right" / produces the best probable answer or outcome?
It seems to me similar to "Active Inference, The Free Energy Principle found in Mind, Brain and Behavior" (title of the book by Thomas Parr, Giovanni Pezzulo and Karl J. Friston).
It all depends on the prompt. In my case I just added "solve this problem from a third-person perspective" and I think it gave a reasonably accurate answer. What do you think? I know that in our heads we think of it from a third-person perspective too, so it would only be fair to do the same.
GPT4:
The person begins their journey at the North Pole. Initially, they walk in a straight line for 1 km in any direction, which effectively means they are moving southward from the North Pole. Upon reaching the 1 km mark, they make a 90-degree turn to their left. Given the Earth's geometry and their starting point, this turn positions them to walk in a circular path around the pole, with the center of the circle being the pole itself and its radius being 1 km.
As they walk along this circular path, their trajectory will inevitably bring them back to a point directly south of the North Pole, the point from which they started the circular part of their journey. The circumference of the circle they walk around is given by the formula C = 2πr, where r is the radius of the circle. Since their radius r is 1 km, the circumference C of their path is 2π × 1 = 2π km.
Therefore, when analyzing the distance they walk to return to a point directly south of the North Pole (not the North Pole itself, but the starting point of their circular path), it is exactly 2π km. This calculation does not account for the initial 1 km walked from the North Pole or any additional distance needed to return exactly to the North Pole. However, given the specific phrasing of the options provided, the most accurate answer based on the distance walked to cross their starting point along the circular path is exactly 2π km.