Google's AI Makes Stunning Progress with Logical Reasoning
- Published 29 Jan 2024
- 🤓Learn more about Artificial Intelligence on Brilliant! ➜ First 200 to use our link brilliant.org/sabine will get 20% off the annual premium subscription.
Google has unveiled a new artificially intelligent system, AlphaGeometry, that can solve problems of mathematical geometry. It’s the first computer program to surpass the average performance of participants at the International Mathematical Olympiad. That might sound like an incremental improvement, just one more thing that AI is really good at, but mathematics isn’t just one more thing, it’s everywhere. This makes Google’s recent development a significant step forward. Let’s have a look.
The paper is here: www.nature.com/articles/s4158...
🤓 Check out our new quiz app ➜ quizwithit.com/
💌 Support us on Donatebox ➜ donorbox.org/swtg
📝 Transcripts and written news on Substack ➜ sciencewtg.substack.com/
👉 Transcript with links to references on Patreon ➜ / sabine
📩 Free weekly science newsletter ➜ sabinehossenfelder.com/newsle...
👂 Audio only podcast ➜ open.spotify.com/show/0MkNfXl...
🔗 Join this channel to get access to perks ➜
/ @sabinehossenfelder
🖼️ On instagram ➜ / sciencewtg
#sciencenews #technews - Science & Technology
Next step: Humans learn logical deduction and reasoning too.
That's been tried for 250,000 years. Humans don't have the proper hardware.
Religion won't allow it.
No just ArseInChaarr HOPEFULLY WE MAKE WALL-E.. Ooo gummybear cookie butter
Ai will do it for us (yes this is a joke)
Hahaha! Don't hold your breath....
I’m a mathematician and I’m giving a talk about AlphaGeometry in a university seminar today. What Sabine says is essentially right. I think that the researchers had some good ideas, mainly creating their own training data and a pseudo-language that makes the output human readable. However, their methods still rely heavily on brute force. The AI suggests new geometrical constructions to make, but the deductions and algebra based on the constructions are brute-forced in a rather computer-ish and non-human way. Not to mention the huge amount of compute resources this all requires.
Also, this general strategy was done by OpenAI in GPT-f in 2020, but it wasn’t as successful or impressive since they trained on human language examples and wrote the proofs in Metamath, a proof checking software language.
It’s an impressive proof of concept and has some good new ideas, but the eventualities in the video are very far off still. Euclidean geometry especially is rather easy to axiomatize in this way.
Thanks for the context!
Interesting. The really challenging mathematical problems are impervious to the brute force approach so there's still hope for smart humans 😁
Can you please send the YouTube link for this talk? Also, can you tell me when we can expect AI to do calculus-based questions?
If we can eventually make progress in mathematics using "computer-ish and non-human ways" why not? It's been talked about that it might come to be that if we could spend a few million in compute to be able to solve some long held mysteries in math/computer science, we absolutely should. These LLMs are good at spitting out a bunch of potential solutions, but also good at being able to reason about which path has promise and which don't. The solutions don't have to be human like. Also I would argue that humans do this too inside our own heads when trying to solve complex problems. Run through scenarios and discard bad ideas and run with good ideas.
To my mind it doesn't matter what it relies on, for several reasons. First, what matters is the result. Second, computers don't sleep, eat, or drink, so an AI would solve the problem faster than a human anyway. Beyond that, in only a few years there will be a surge in production of neural CPUs optimized for AI, so the compute-resource problem will be solved. And what is the human way of thinking, anyway? Doesn't it include pattern recognition and brute force, at least sometimes? Even if human brains recognize bigger patterns (for now) and so produce shorter proofs, how long will it take AI to gain the same skills? A year or two? So we are doomed either way :).
The hard part of AI is understanding Sabine's sarcasm and finding the appropriate reaction.
cancelling, hate-speech, shadow banning, plenty of options.
It doesn't help that she talks sarcastically through her teeth!
That’s probably harder for the AI than solving the mathematical Olympiad with 100% 😂
Yeah, no. Go ask GPT4 to explain the joke, "Is it solipsistic in here, or is it just me?" in detail, and then get back to me.
You all fail to realize just how fast machine intelligence is progressing. This is The Big One.
thats only because thats a well known joke that has internet history @@YourMom-zt5zj
@SabineHossenfelder: Small nitpick: The company making those dog-like robots shown at timestamp 5:07 is actually called "Boston Dynamics", not "Boston Robotics". AFAIK they're actually a subsidiary of Hyundai.
I like your videos, you're really good at presenting complex topics in an easy to understand way. Thanks and keep up the good work!
Yes, my friend's son works for them. They have been through a few owners, even Google's Alphabet for a time. The last I knew, it was Hyundai as you said. As you may suspect, defense departments are hot on their tail.
AI acquiring rational thinking while many humans are losing it 🙃
Human brains have been getting smaller for 100,000 years or so. Some biologists think iPhones and the internet have accelerated that process. Already, people do much worse on memory tests - they don't need to remember anything.
Very few schools teach critical thinking, just enough education to operate the machines for the Elite, what happens when they dont need people for that.
@@mikemondano3624 why would the human brain continuously get smaller in the last hundred thousand years? That sounds unlikely.
@@mikemondano3624 There's no better or worse when it comes to evolution. There is only survival and reproduction or death.
But it has not acquired rational thinking.
I asked AI if it could do Critical Thinking. It wasn't sure .
At least it's more honest than most people!
To tell the truth it's the smartest/well thought answer in his case imo
It's quite easy to make ChatGPT contradict itself. If asked about its contradictions, it will just give some boilerplate explanation about the data it was trained on.
So it doesn't seem to have any critical thinking; it just regurgitates what it was fed.
Sounds like most people
@@IronFreee Continue on that viewpoint and you will be the last to hop on the AI train. Worse yet you might get left behind if you're not quick enough to catch up.
Exactly. It's not the "singularity" we need to be worried about. It's the fact that non-AGI will take over everything we currently do. There will be nothing left for humans.
Not sure what the quotes are about but you do realize this a step towards the singularity right?
The technological singularity is defined as the point when technology becomes an irreversible, uncontrollable runaway process; one idea of how this happens is technology gaining the ability to invent and improve itself, just as humans invent and improve it.
This IS the singularity.
Cultural performances in fields like music, acting, dancing, or other forms of entertainment rely on the human connection. Yes, you could listen to only AI-generated music, and I'm sure it will be heavily used, especially in the composition process. But the persona of the singer, or just seeing a real human perform in front of you, will always have great appeal. Everyone with a Disney+ subscription can view an absolutely perfect performance of the musical Hamilton. Yet huge masses of people travel large distances and pay for expensive tickets to see an actual performance by real humans with impressive talent. AI won't replace them.
That upvote to view ratio is crazy. Sabine, you are a legend. No AI could ever hit that number!
All your viewers will be robots? Reminds me of that Sid Harris cartoon:
-"This is a pre-recorded message."
-"Doesn't bother me, I'm a hologram."
lol good one
Top hoax! Pretending to still be human by saying one day you might be replaced by AI. 🤣🤣🤣
It was very funny.
I remember exploring the interactive proof assistant "Isabelle" in the late 1990s, in a graduate class on Non-Standard Logic Systems. Back then it felt almost magical. You'd enter propositions using definitions, axioms, and a language that you could adapt to your needs. And the system would just expand the set of propositions applying the allowed logical reasoning rules. Very flexible.
I wrote a thesis on extensions of Barcan/Kripke logic and used Isabelle to formalize some of the demonstrations.
But now we're beyond that. The AI is driving the show.
I didn't know Isabelle was *that* old
@@Darkon10199 I used it in 1998-99.
Larry Paulson (author of that system) would disagree.
It still can't do induction, though.
This is another great step! I'm excited to see what it enables. Still feels like we're far off on:
1. Inventiveness (e.g. solving an unfamiliar problem by combining unrelated knowledge and experience to create a unique solution)
2. Complex unprompted inference (e.g. driving, see meteor strike in the distance, seek cover despite having never experienced one)
3. Learning from minimal data (these IMO contestants didn't need to invent millions of new proofs just for study material)
5:27 I know that technically it's not that difficult, but animating that Vermeer painting was mind-blowing.
It was frightening to me...😵💫
Teachers: don't rely on the calculator
The calculator:
Still shouldn't rely on it, which is why its ability to present a proof of its reasoning is also important.
@@scifino1 the average person probably should rely on it
That's gold
With all the training material you've provided, you are assured of having an A.I. made based on you. Teaching future robot generations about proper science.
We should just let AI learn from the internet. That way AI will think the Earth is flat. 🤣
No. You need several orders of magnitude more training data to be able to train an LLM.
@@she__khinah You most certainly do not. A few hundred lines of text is enough. JUST the transcripts of her videos would be enough to finetune a (pre-trained) LLM to almost perfectly mimic her speech patterns.
@@christophkogler6220 Yes, it would be enough to finetune an existing LLM on her speech patterns; it wouldn't be enough to train an LLM from scratch, or to teach an existing LLM concepts outside its previous training data, unless you want to lose previously learned features.
I think we can take that fact for granted
Thanks so much for creating and sharing this informative video. Great job. Keep it up.
Great info and points. I love your channel. Also, the shirts you wear are so cute. I wonder where you shop.
As an Olympiad gold medal winner, I am a bit sad that AI is now better at geometry than I am, although I predicted this advancement earlier because of the nature of Olympiad geometry.
A lot of it consists of understanding the configuration you have, and it was realistic for an AI to easily recognise all of these.
I am curious how well the AI would perform on problems that diverge from any standard configurations, where you have to be *truly* creative.
Also, I wonder how well it performs on the Iranian Geometry Olympiad, the only competition I know of that has harder questions than the IMO, if only on a single subject.
Luckily for me, geometry was always my weak point, and I still demolish AI on number theory, algebra and combinatorics.
How did you prepare for the IMO? What was your schedule? How much time did it take? Did you learn anything outside the syllabus which helped?
You still demolish AI for now. LOL
Isn't it fun to witness our own species becoming obsolete?
@@Volkbrecht What do you mean? Every species has always been obsolete. Earth doesn't care, the universe doesn't care.
Just like the GO players did... then they didn't@@enadegheeghaghe6369
I am a retired IT professional who worked on a lot of cutting-edge stuff until I retired 3 years ago. I recently found out that almost all of the automation scripts that made up about half my team's work 3 years ago are all done via AI now: C, Perl, Python, JavaScript, and all the Selenium and all the builds. All of it. The whole lot. Wild.
You lucky guy can sit back and watch it all unfold while the rest of us look into THIS as our future...
There's a whole class of slightly autistic people (I count myself as one!) who used to make a good living doing this type of thing. Although I can see many new roles being created by these AI technologies, I suspect they will be more in the creative or managerial domains, so I suspect there will soon be a lot of disappointed young "autists" out there.
As an aspiring autistic programmer, I felt this comment 💔
C, Perl, Python and Javascript.... But not C++! That figures, even AI has its limits 😉
@@pb-fo9rt I feel kind of lucky, but I retired about 3 years before I wanted to. I really do feel bad for those coming after me, as it will never be the same for those coming up now. My advice is to treasure your health and family. Don't trust any government nor employer, and especially not recruiters. Good luck to all of you.
Perfect description of the neuro-symbolic method and how it's similar to the way the human brain works, using both neural pattern recognition and logical rules.
That's FIRE 🔥🔥🔥 This is so cool. I've been waiting for this for months.
I stay subscribed to this channel so I never forget what humility is
Maybe in a few years they will be able to do an experiment where they feed an AI with physics data known before Einstein and see if it figures out relativity by itself with it, lol.
Einstein's insight came from an out-of-the-box realization while waiting for a train and looking at a clock.
AI can't make intuitive leaps like that and I doubt it ever will.
@@justinwhite2725 don’t jinx it almost every time they said ai couldn’t do something it did it
It might, the bigger question is how many wrong theories it will spit out before it does. Even a random walk is bound to give you the right answer at some point.
I remember seeing something like this a couple of years ago. They gave the AI the ability to control a lab with robotic parts to set up and perform experimentations. It even found novel experiments to demonstrate already known theories.
@@aleksandrpetrosyan1140 Sabine said it can check if a proof is correct
I heard a report on NPR discussing Google's AI solving International Math Olympiad problems as well as human competitors.
Apparently they trained two separate models-- one a language model like Chat-GPT to be able to interpret the problem and convert it to mathematical logic, and another model trained to write logical proofs to get from one mathematical statement to another. It's the fact that these two domains are separate initially that allows Google's AI to separate the signal from the noise in a way Chat GPT can't.
Mindblowing stuff.
Love your work Sabine, pro tip, place your in-video ads in between your content, right before any major gotcha moment. Optimize your full watch time in algorithm 😅
All logical reasoning about abstract, precisely defined objects, such as you find in geometry and more broadly in mathematics, is reducible to precisely formulated algorithms that can be implemented as functions in a broader system tasked with reasoning and proving theorems about the said objects and relations between them. I remember decades ago people already played with it, often producing impressive results.
Hi! I work in Generative AI and wanted to just make a tiny contribution about "reasoning" in these models. It's actually not reasoning, it's rather computing the most likely word to occur in sequence over and over and over again, until there's a coherent answer. It's still super impressive, but this is very different from reasoning or what we would consider "understanding" or the implementation of logic. At the end of the day, it's using math to predict probable continuations of the proof based on the context (in this case, the geometry problems). And since those probabilities are derived from the training data, it would do very poorly on other types of tasks (like reasoning about company finances, for example). We're still a very long way from reasoning and critical thinking!
Well, the point of the video was that this new AI doesn't operate like that.
@@Magicwillnz The point of the original comment was that "this" is not a "new AI". Its the same thing as ChatGPT but with vision and trained on logical problems with RLAIF.
Deepmind kinda showed how their AI works somewhat in the AlphaGo documentary
@@Magicwillnz I think his point was the "new AI" is still working the same way. This one just has a math filter added to screen out the illogical continuations to give the impression that logical thinking is really happening
Tbh most humans create proofs in the same manner. They train by looking at some examples of proofs (by contradiction, counter-example, deduction, induction, etc) then they try to guess (predict, in other words) what the best way to prove a new statement would be. I don't think there's any human out there who can systematically prove something from the start, some educated guesswork is inevitable. That's because we don't have any algorithm for finding proofs.
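Concretely, the "predict the next word over and over" process described in this thread can be sketched with a toy bigram table. The table and its probabilities below are invented purely for illustration; a real LLM learns billions of such statistics with a neural network rather than a lookup dictionary:

```python
# Toy illustration of next-token prediction: generation is just repeated
# lookup of the most likely continuation. The bigram table is made up.
BIGRAMS = {
    "the": {"angle": 0.6, "line": 0.4},
    "angle": {"is": 1.0},
    "is": {"equal": 0.7, "right": 0.3},
}

def generate(token, steps):
    out = [token]
    for _ in range(steps):
        dist = BIGRAMS.get(out[-1])
        if dist is None:  # no known continuation: stop
            break
        out.append(max(dist, key=dist.get))  # greedy: pick most likely token
    return out

print(generate("the", 3))  # ['the', 'angle', 'is', 'equal']
```

Nothing in this loop "understands" angles; it only reproduces statistics, which is exactly the point the original comment makes about why a proof checker has to sit on top.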
John Maynard Keynes famously predicted in 1930 that in 100 years people would only work 15 hours a week. Surely that is related to predictions we hear about the benefits of AI. My guess is we will benefit on average but inequality will go through the roof.
He lived in a time of rising communism, and capitalist countries implementing social policies (like 8 hour work day). He didn't predict that capitalism will win across the planet, and with nothing to oppose them, the capital will exploit workers as much as possible again. Also: The Jetsons (1962), George works for 9 hours a week.
Could already do, but since we got a capitalistic system where the minority of productivity goes to make already absurdly rich people even richer by exploiting all other people, well...
@@miriamweller812 Well said, you saved me a lot of typing my own comment.
Productivity gains all go to our over lords.
Pay the working class just enough to prevent a revolution or collapse of the system.
Interesting point. New technology will make the people who bring it to the market v.v.v. rich. But the real driver of inequality seems to be the ability to sell whatever you are selling into a wider market. JK Rowling is wealthier than authors in the past, because her books sell around the world. Footballers make fortunes because Manchester United supporters are on all five continents. Mind you, who gives a shit about inequality.
so many communists don't seem to understand that they'll actually have to work under communism, sometimes even more than under the capitalist dystopias of east asia and america
Recall the Seekers, an Australian band from way back in the 60's who had a song titled " I know I'll never find another you".
There's a new world somewhere
They call the promised land
And I'll be there someday
If you will hold my hand
I still need you there beside me
No matter what I do
For I know I'll never find another you
Wonderful song and Group .
No, I disagree. No one can replace you.
Bro glazing😂😂
I agree :)
They would need to train the a.i. on her dry sense of humor and the rate she uses it.
Yeah, 100%! No one can replace you!
BAE
No one could ever replace you, Sabine. We need your intelligence and sarcasm. 😊
Stay safe there with your family! 🖖😊
SHE IS THE BEST SINGER
Google AI: I'll replace you, soon. 🤑🤑🤑🤑🤑
Thanks! I'm so happy with your videos. 😊
You are very important to me, especially as a science teacher !
Thank you
It's important to remember that it still used a proof engine; what the AI part did was make all the constructions needed for the engine to do its thing.
It's more efficient that way, rather than forcing it to reinvent the wheel every time you run it (pass-through transformers having no "memory" as such). Ultimately they will be able to bootstrap to more generic AI much faster using this type of approach.
@@donkeychan491 Of course, but it sounds less cool and generalizable this way. Even their training method relies on the proof engine, so we aren't as close to Skynet yet
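For readers curious what this thread's division of labor looks like, here is a deliberately tiny sketch (all rules, facts, and function names are invented; the real system is vastly more sophisticated): a symbolic engine computes the deductive closure of the known facts, and a stand-in for the language model injects an auxiliary construction whenever the engine stalls.

```python
# Hypothetical caricature of the AlphaGeometry-style loop described above.
def deductive_closure(facts, rules):
    """Apply rules until no new facts appear (the brute-force symbolic step)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def propose_construction(facts):
    """Stand-in for the neural model: a real one would rank many candidates."""
    return "midpoint M of AB"

def prove(goal, facts, rules, max_constructions=3):
    facts = set(facts)
    for _ in range(max_constructions + 1):
        facts = deductive_closure(facts, rules)
        if goal in facts:
            return True
        facts.add(propose_construction(facts))  # engine stalled: ask the model
    return False

# Invented rule set: premises (a set of facts) imply a conclusion.
rules = [({"midpoint M of AB"}, "AM = MB"),
         ({"AM = MB", "angle A = angle B"}, "goal")]
print(prove("goal", {"angle A = angle B"}, rules))  # True
```

The key property, as the thread notes, is that the neural part only *suggests*; everything that reaches the output has been re-derived by the symbolic engine, which is why the proofs are checkable.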
I'm looking forward to sending my AI out to watch everything for me; it can then report back. I'll be in the flotation tank screaming.
YouTube is already there. Search "reaction to movies".
Don't forget that it seems any time AI can get away with it, it tells lies. The actual reason we can never prove that we are not just brains-in-a-vat is that, in fact, we are.
AI that can explain how it arrives at certain conclusions is very valuable. I seem to remember hearing in Germany, e.g., there are hurdles to using AI to diagnose diseases like cancer because laws require there to be an explanation for why a certain procedure is necessary. "Black boxes" were thus very problematic.
The procedure is necessary because it is cancer and cancer kills. Why would it matter that the diagnosis was from a 'black box'?
@@CCaribou Dunno about cancer, but I can see why it's necessary when it comes to say legal judgements. Imagine being convicted by an AI without any explanation why.
@@CCaribou Because the procedure might also kill and the judgement accuracy is not 100%.
Sabine…nothing can ever replace you!!!
I will be interested to hear how it deals with incompleteness issues. Gödel's Incompleteness Theorems are always there. And, while we don't encounter them on a regular basis, it is something to be wary of in any logical argument where you can back yourself into a paradox.
lol imagine it just starts tweakin out when it runs into paradoxes. Like if it tried to recreate set theory and started thinking about the “set of all sets that don’t contain themselves” and it goes into a never ending loop of constructing the set, then putting that set into itself (since it initially does not contain itself), but then realizes it has to be taken out, but then that means it doesn’t contain itself again and it should be put back in, and so on.
Humans have the ability with our consciousness to recognize paradoxes and what they mean. But AI just computes logic. So it’d probably be incredibly confused when contradictions pop up while still following the axioms it learned. Maybe paradoxes would just get left there and the AI wouldn’t even recognize what it’d done
The incompleteness theorems just mean you cannot prove everything about arithmetic. They don't apply to elementary geometry: Tarski showed the first-order theory of Euclidean geometry is complete and decidable, so you *can* prove everything in geometry.
Yes! I am also interested in the fundamental computational limits and how AI can overcome those.
@@mzg147 What about other AI tasks? There are fundamental limits in computation that cause contradictions
@@mzg147 That is, well, just wrong. The Incompleteness theorems apply to any rule based logical system, including logic itself. All logical systems rely at root upon some assumptions that must be true for the system, but which cannot be proved within the system's rules.
Look up Russell's Paradox. It is considered one of the most significant or famous paradoxes in modern history and philosophy. It identifies a paradoxical situation in set theory. Similar problems also appear in geometry, where postulates change based upon the kind of geometry you are using.
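The "never-ending loop" described earlier in this thread can actually be demonstrated in a few lines, if we naively model a set as a predicate ("contains x") and ask whether the Russell predicate satisfies itself. In Python the non-terminating flip-flop shows up as a RecursionError:

```python
# Naive rendering of Russell's paradox: a "set" is a membership predicate,
# and russell(s) means "s does not contain itself".
def russell(s):
    return not s(s)

# Asking whether the Russell set contains itself never stabilizes:
# each answer negates the previous one, exactly the loop described above.
try:
    russell(russell)
except RecursionError:
    print("paradox: the question has no stable answer")
```

This is of course only an analogy for the set-theoretic paradox, but it makes vivid why a purely rule-following system needs its axioms chosen to keep such self-reference out of reach.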
Math is surprisingly memory intensive, it is like chess, people get better by recognizing patterns not by reasoning.
you need both, memory and reasoning
and reasoning is not involved in pattern recognition.
@@carlosgaspar8447 reasoning is linking by logical rules, pattern recognition is matching a set. They are different.
Language is surprisingly memory intensive, you can’t have logic or reasoning without it.
Complete nonsense. Mathematical reasoning is the polar opposite of playing chess. Most mathematicians suck at chess (including myself) because the mental skills you need to understand and come up with mathematical concepts and proofs don't help you at all with playing chess (and vice versa).
Another very informative video, thanks again Sabina. Peace ✌️ 😎.
Passing an object to someone else is a hard robotics problem: not gripping the object too hard, figuring out when to let go, safe handling of small animals, and so on.
Solving it would require a robot brain made from a composite system of multiple neural networks in integrated sync, from language models to image identifiers, symbolic deduction, a world + body model, and a model that defines the objects in this world model, including the self and others!
In essence, we need to build a human to get a human.
Only then will the AIs truly be able replace us, or will they be one of us?
One thing I left out here was a model that provides emotions including moral/social ones. This is truly the hardest but also most essential problem to solve.
If we don't, our synthetic assistants will find themselves walking off cliffs because they don't feel fear, walking around with potentially critical damage because they don't feel pain, and, worst of all, killing humans on a whim if they deduce that's a logically good way to complete their task.
(This is the point when the "AI will kill us all" scenario becomes possible. Terminators.)
When we figure out & add that, that's it, we are now parents to humans we have sculpted with our hands.(But stronger & faster humans)
Taking on the role of god.
(This might be a little easier than we think though, but it would be different from a single neural model in that it would have to be a property of the integrated brain network since you can't feel "good" or "bad" about something without a world that includes you & others to define it.
Emotions are the way our brain net regulates its actions, alongside the "attention schema" model that prioritizes certain internal & external inputs at a time, like a targeting reticle; robots will need this too. Some scientists believe this is the seat of our consciousness at its core. The brain's self-model to control itself. We are the internally generated control object. Basically equivalent to the soul concept, but with no field or force that can transfer it elsewhere through a ghost body.)
Are we ready for the responsibility?
Note: This doesn't completely remove the possibility of a robot uprising; it'll just depend on how we treat the life we have created, as with any uprising.
Again, like humans. If you want something that can handle animals as efficiently as a human, you must build a human.
Sabine boosts us non-artificially♡
I would never replace you Sabine
Oh this is really interesting. I'm very curious how the two styles of models interface with each other
I think I spotted two headlights in Sabine’s enthusiasm
Finally, a software that I'll be able to reason with that doesn't just assume it knows what I want.
Amazing...but still a long way from being able to ride a bicycle across Manhattan during rush hour to deliver the result as part of a Turing Test.
Any machine capable of passing the Turing test will be capable of deliberately failing it.
So what you're saying is that we humans are still superior at tasks we wouldn't be willing to take on if we were a rational species? ;)
@@Volkbrecht Good to see robots speaking up.
It should be possible today. A Segway is able to keep its rider upright. Couple that with sensors and logic from a google-taxi and send it on its merry way.
Maybe it also should be fitted with an artificial hand with a servo that allows one finger to be used for signalling
@@VerklunkenzwiebelThe middle finger?
Absolutely, you are irreplaceable! Priceless and damn entertaining!
For "logical reasoning", you can connect an LLM to a SAT solver, which can do flawless propositional logic on thousands of variables.
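As a concrete illustration of that comment, here is a toy DPLL-style solver: the backtracking idea underneath real SAT solvers, stripped of everything that makes them fast (clause learning, heuristics, watched literals). Clause encoding follows the usual DIMACS convention:

```python
# Minimal DPLL SAT solver sketch. A clause is a list of non-zero ints:
# 3 means variable 3 is true, -3 means variable 3 is false.
def simplify(clauses, assignment):
    assigned = set(assignment)
    out = []
    for clause in clauses:
        if any(lit in assigned for lit in clause):
            continue                      # clause already satisfied
        reduced = [lit for lit in clause if -lit not in assigned]
        if not reduced:
            return None                   # clause falsified: conflict
        out.append(reduced)
    return out

def dpll(clauses, assignment=()):
    clauses = simplify(clauses, assignment)
    if clauses is None:
        return None                       # dead branch
    if not clauses:
        return assignment                 # every clause satisfied
    var = abs(clauses[0][0])              # branch on a remaining variable
    for lit in (var, -var):
        result = dpll(clauses, assignment + (lit,))
        if result is not None:
            return result
    return None

# (a or b) and (not a or b) and (not b or c)
model = dpll([[1, 2], [-1, 2], [-2, 3]])
print(model)  # a satisfying assignment, e.g. (1, 2, 3)
```

Production solvers (MiniSat, Z3's SAT core, etc.) follow the same satisfiable/unsatisfiable contract, which is what makes them useful as flawless back-ends for an LLM's propositional claims.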
Très intéressante, merci beaucoup Madame
AI taking all of our jobs should be good news. What kind of world are we living in??
haha..You arent needed anymore. I am sure the psychotic banksters will just feed you for no reason..Am i right?
As long as political systems are strong enough to take care of a fair distribution of wealth amongst all people, I would totally agree.
But is it not likely, that we will see a massive accumulation of wealth and power for those who own ai and production capacities, within nations and even more globally? I'm afraid that this will strongly increase all kinds of inequalities.
When they take the jobs, the cost of living for everyone will go down and it ends up being equivalent to a wealth redistribution when you lose your job to the AI. No government bureaucrat needed.
@@lukasschmidt175 Yep... the key point is - AI is not the problem. The sociopaths at the top of society are.
@@larion2336 Yep, they want AI because its free labor, in other words slaves. They wouldn't be too keen if they had to pay the AI.
Yes. It will happen. In fact, it is time LLMs were trained on propositional logic, modal logic, fuzzy logic, ontic logic, and all types of advanced logic textbooks. I would like to do that but can't find the time and resources right now.
Thank you Sabine for your work. Maybe you could ask an AI robot how to improve your recording quality. I think you need a better microphone and/or recording environment.
This isn't just "another step in AI". It's a completely different type of AI.
It does not spit out the most probable result... It spits out the **correct** result (at least hopefully).
AI shifted to actual thinking instead of mimicry/brute-forcing a solution.
The video described the addition of logic and finding proof to neural nets, which is how this differs from ChatGPT and similar models based on neural networks.
Careful when you throw out terms like "actual thinking", cause it aint.
It's still a long way from reasoning and being logical. Sounds like it starts off doing the same as LLM such as ChatGPT which is just making a good guess via calculating distances between concepts, then it attempts to find a logical path from input to output via brute force methods and there is another "logical proof" tool that can verify each step of the AI proof, if a proof isn't logical it can tell the AI to keep trying. It's not "smart" it's just smartly combining LLM neural network with a proof checker. I do not think any part of it actually *understands* any of the concepts.
@@Sven_Dongle Not like a person. However, if you could think orders of magnitude faster you do not necessarily have to think like a human to get good results. That and it will only get better with time.
@@Sven_Dongle We may have to redefine "thinking". We have a history of doing that type of editing. When I was young the definition of being human was humans use tools. Oops.
I do agree that AI is different from human thinking, or let's widen the net a little, all neuron based lifeforms. There's an issue that AI will deduce things humans aren't able to comprehend. That will do humans no good.
The "HAL" in HAL-9000 was short for "Heuristically programmed ALgorithmic computer", which is another way of saying neuro-symbolic. Clarke was eerily good at prophecy.
Please note that this is induction (or inductive reasoning) and not deduction. Sherlock Holmes used induction though the world tends to think it is deduction
It would be nice if chat models could revise their outputs when they aren't internally consistent or relevant to the context.
As a guy in the Futurama said, "Welcome to the world of tomorrow!"
professor Farnsworth?
I think someone in Disney world said that once
@@fannyalbi9040 No, the guy who welcomes unfrozen people. This phrase is also in Futurama main theme
Good news, everyone!
@@squidwardfromua I heard Walt Disney is frozen.
Skynet vibes are stronger and stronger over time...
@@christianadam2907 In explosive jelly we trust
Dun dun dun, dun dun, dun dun dun!
Are you suggesting we should track down Miles Dyson and end this thing once and for all?
Dude, Skynet was pretty braindead as far as AI goes. Soon you'll miss the days when you thought the worst AI could pull off was nuclear war and autonomous killing machines.
Skynet is our friend.
4:36 I'd say it actually does mean "simple", as both machine-checking and manually checking proofs are indeed quantifiably simple (formally in P, which no one thinks equals NP, and all practical evidence suggests is simple enough to be efficiently automated).
While learning AI/ML basics, predictive model training was one area where I had to slow down and revisit the underlying math to get a more useful outcome. Since I don't depend on AI/ML for building my professional applications, I got pulled away on projects to earn a living. I'm pretty sure that slowing down in the learning process revealed the gap I'd have to close before writing code for facial recognition. I very rarely need algebra or calculus to support my clients' typical verticals. I suppose I could play with this on my football numbers site, level up a graph or chart I do in Python, and at least follow through on model training to improve predictability.
I think no matter how much AI can do, people will want some level of human entertainment and interaction. That being said, I do think AI will still take over a lot, probably the majority, of jobs.
It'll be a sobering day when someone greets the A.I., "Good morning, Mom!"
"I'm sorry, Dave. I'm afraid I can't do that." - 2001
"MOTHER!!!!!" - Ripley, Alien
"What did we learn Palmer?" "I don't know sir" - Burn After Reading
What could go wrong? lol
5:23 Text-to-video is already available to the public. Maybe not the one from Google, but such programs are already public.
I've been realizing for a while that "AI" means just anything that we previously thought only a brain could do (and after a while we get used to it).
Honestly, there don't seem to be a whole lot of substantial, transcendental differences between what the compute model of artificial neural networks can achieve and what we know brains can do. So, to the best of our knowledge, there doesn't seem to be anything a brain can do that neural networks can't (given enough time and resources). It might just be a matter of time!
There's also a lot of marketing spin attempting to leverage the hype by rebranding basic code algorithms as "AI"
This system by itself is not really generalizable because it relies on the ability to generate training data. Doing that for well understood foundations of geometry is not super hard. But how would you train a model to solve the Riemann hypothesis? The reason that Riemann hypothesis and other outstanding mathematical questions are hard is because we don't fully understand them. It's not trivial to generate "similar" problems and then find a pattern.
The way you would do that is to randomly generate theorems by randomly applying inference rules to a given set of premises, making sure the generated theorems aren't completely trivial (they don't have to be useful in the real world): they need a minimum number of inference steps, and the proof you've generated shouldn't be unnecessarily long. The latter would probably be the most difficult part. Then you train your model on this artificial dataset. The model doesn't care whether the theorems you trained it on have actual use in the real world.
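A toy version of that generation scheme, with inference rules as (premise, conclusion) pairs and a minimum-depth filter to discard trivial theorems; the rule format and thresholds are illustrative assumptions, not how a real system encodes inference:

```python
import random

def generate_example(premises, rules, min_steps=2, max_steps=5, rng=random):
    """Randomly chain inference rules forward; keep only non-trivial results.

    rules: list of (premise, conclusion) pairs, an illustrative stand-in
    for real inference rules. Returns (premises, theorem, proof) or None.
    """
    facts = set(premises)
    proof = []
    for _ in range(max_steps):
        # rules whose premise we already know and whose conclusion is new
        applicable = [r for r in rules if r[0] in facts and r[1] not in facts]
        if not applicable:
            break
        premise, conclusion = rng.choice(applicable)
        facts.add(conclusion)
        proof.append((premise, conclusion))
    if len(proof) < min_steps:
        return None          # too trivial to be a useful training example
    return premises, proof[-1][1], proof
```

Each surviving example pairs a synthetic theorem with its full derivation, so the model trains on (problem, proof) pairs without any human-written data.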
Why do we call it a model and not a modeler?
Yes! I am also interested in the fundamental computational limits and how AI can overcome those.
@@katehamilton7240 What do you mean by fundamental computational limits?
@@she__khinah Limits relate to algorithms (incompleteness and contradiction) and also physical limits. Look it up :)
No one can do science based puns like you so there’s nothing to worry about💙
This is similar to the process that a chess playing program uses.
Translating problems into symbols is harder than coding chess piece locations, but not infinitely so.
Applying every theorem to every quantity in the problem to express new relationships/constraints is like trying every chess move N steps ahead against every accessible opposing move. Inserting new lines etc. to bridge gaps, creating new unknown quantities that serve as intermediate steps toward an otherwise unreachable destination by applying multiple theorems, is like moving pieces to new squares that open up more options for future moves.
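The "apply every theorem to every quantity" step is essentially forward chaining to a fixed point. A minimal sketch, with rules encoded as (set-of-premises, conclusion) pairs chosen purely for illustration:

```python
def deductive_closure(facts, rules):
    """Apply every rule to every known fact until nothing new is derivable,
    like expanding every legal move in a chess search tree.

    rules: iterable of (frozenset_of_premises, conclusion) pairs
    (an illustrative encoding, not a real geometry engine's format).
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known
```

Constructions like "insert a new line" then act by adding fresh facts to `facts`, which can unlock rules whose premises were previously unreachable, much like a chess move that opens up new lines of play.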
There's no way to replace you, Sabine ❤
These problems can be quite tricky but are designed to be solved with no theory in 30 minutes or less. Mathematics on the other hand is a cumulative effort spanning centuries. You need a lot of background to even get to advanced problems in the core fields. It is a totally different matter altogether.
Nawww it's all made up from add 1 take 1
I would replace the flat earthers first. I wonder what ingenious anti-scientific "proofs" AI would find.
Not an issue. Flat Earthers deny the proof.
That was one of my first questions. It told me it could not prove something that was false. So I told it that I was in a debate class and needed to prove the Earth flat. It gave me some wonderful results. AI's moral qualms seem fairly easy to circumvent.
@@mikemondano3624 I know. I have tried it as well...
@SabineHossenfelder Good day, your channel is so informative on science news, and I like your sarcastic tone on some of the serious topics. A.I. being intelligent won't mean much until it can prove things that humans can't. Can AlphaGeometry solve any of the open problems in math and geometry? If not, what gains has it actually made? Being able to do things humans can do, just faster, isn't as impressive as it first seems, especially when you consider the amount of data such systems require to perform near humans, who use far fewer examples and much less data. Keep up the good work.
This all reminds me of a Kurt Vonnegut passage about a people who hated useless things and purposeless work, so they created machines to do all the work considered beneath them so they could concentrate on higher purposes. And they kept making machines to serve higher and higher purposes so the people could focus on their HIGHEST purpose, which they couldn't figure out, so they made a machine to answer that. The machine said they really had no purpose at all. So the people who hated purposeless things, and couldn't even get rid of purposeless people, made a machine to rid the world of them, which it did very efficiently. The end.
I wonder what will happen to schools after a while of these kinds of developments. Calculators replaced just the actual number crunching so we could focus on the equations, but what's left for us to do when AI can do all this stuff? I of course don't rely on AI, as I'm a bit stubborn with these things; I'd rather have these things in my head than one search away. But that won't last long, the same way counting by hand didn't last long.
I think schools will make the shift to "life" skills, like psychology, social sciences, ethics etc. Also more knowledge on how to interface with technology and how to exist in a society that doesn't value your output. There is little reason to teach old stuff, unless the students want to.
@@keepalit4371 Interesting. I could imagine these kinds of changes at the university level, but I can't imagine how middle schools and high schools will adapt, as they have barely changed in hundreds of years (at least in my country) and they are extremely stubborn about applying even minimal changes, let alone big ones like these.
When ai does it all, and if we tire of shovelling billions into the pockets of people with no more qualities than anyone else then we could: ‘do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind, without ever becoming hunter, fisherman, shepherd or critic’.
The example is 178 years old, so Pilates wasn't quite on the horizon. The question, "what to do" as a member of a post-capitalist world, is a good one and will require a lot of communal imagining. Unfortunately, at the moment there is no requirement for imagination in schools. And in the absence of its "illumination", AI seals the deal by providing us with all that IT thinks we need.
They are almost as far as I was 15 years ago, when I stopped my research on AI for security reasons.
"unbiased" only if the training data is unbiased.
Then again, progress can be useful even if it has some bias, we just need to keep working at reducing it.
Nobody could ever replace you, Sabine! Nobody! Never!
AI will never replace the Mexican guy in my neighborhood that makes the best tacos in the World 😅
Duhh, that's Taco Bell, they win
@@AMPProf No man! I mean the real tacos 🌮 al pastor that are better than Taco Bell
Great. Now I'm afraid of what's hiding inside a black rhombus.
Oops I meant rhombohedron! Sorry everyone!
I'm embarrassing the humans in front of our overborgs.
Oops I meant rhomboshedron!
Sounds like rhom-bosh so far. Phew, for now at least.
How can I use AlphaGeometry? I only find texts and videos about it. Is there any downloadable app or something?
Yeah, that ability to create concepts and a working set was an obviously hard problem on the way to actual AGI... add self-improvement and it's over. But I actually welcome it.
Better than the average... math olympian. Got me beat.
That probably means someone who went there and came back empty-handed. Getting a bronze is a gauntlet, but I'm willing to bet that it won't do well with things that require you to get creative. I've always loved functional equations because you have to get clever and there's no set way to solve them.
I'm so eager to see what other humans will have us do when AI takes all jobs since sitting around and having fun doing what you like is most probably out of the question
The permanent political class and the unelected globalist oligarchies will no longer need us. Expect some very harsh times, as we're viewed as carbon and dangerous to the environment. These are the globalist socio-fascists (Third Worldism), and neo-Hegelian cultism/woke cultism is its result. Trump2024
There won’t be anything for you TO do.
Because we won’t need those feeble skills.
The PAIN that comes with the realization that you wasted your time on this rock and never learned anything that somebody is willing to continue to pay for... will be the worst-case scenario for you.
Don’t be afraid of what other humans are going to do.
Be afraid of how you will FEEL when nobody needs anything from anyone who doesn’t understand machine and deep learning.
That’s what is coming for you. Not other humans.
The feeling of uselessness.
Don’t be afraid of other human beings and what we will do with AI.
When AI finally comes and takes your job in the next 10 years..
You and BILLIONS of others are going to have a realization.
And that realization will entail a reflection of your life choices and how you chose to spend your time in this life.
And it will be accompanied with great regret and depression
@@VHMLDL Re: "And it will be accompanied with great regret and depression".
Followed by mass suicide?? ...Or is that the plan??
I have tested AI (Bing and ChatGPT mostly) and they make really bizarre errors, like swapping kg with liters in gas calculations, or joules with kWh. Beware and know your stuff when you use AI.
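Mix-ups like these are exactly what explicit unit handling catches; a minimal sketch (the density constant is an illustrative value, and the function names are made up for this example):

```python
# Keep units in names and convert explicitly, so a kg-vs-liter or
# joule-vs-kWh swap becomes visible instead of silently wrong.

KWH_TO_JOULE = 3.6e6              # 1 kWh = 3.6 MJ, an exact definition
GASOLINE_DENSITY_KG_PER_L = 0.75  # illustrative density; varies by fuel

def kwh_to_joules(kwh: float) -> float:
    return kwh * KWH_TO_JOULE

def liters_to_kg(liters: float, density_kg_per_l: float) -> float:
    return liters * density_kg_per_l

print(kwh_to_joules(1.0))                             # 3600000.0
print(liters_to_kg(40.0, GASOLINE_DENSITY_KG_PER_L))  # 30.0
```

A chatbot that multiplies liters by a price-per-kg, or reports joules where kWh were asked for, would fail exactly this kind of explicit conversion check.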
Can the current 2024 capability of AI come to Einstein's Relativity conclusions of 1905 and 1915 given the information that Einstein had at those times?
Feed it alternative facts and let's see how unbiased and clear-cut its "reasoning" is.
Chatbots are master BS-ers. They simulate what human BS-ers would output.
Sounds like people
No AI could compete with your German Humor.
I think it's beyond German humour, and a Sabine thing that AI won't be able to replicate.
"German humor".... Errr. That does not compute. Syntax Error!
@@fuseblower8128 that’s the point
4:10 AI is still definitely susceptible to bias. Of note, I remember hearing about systems that were meant to filter job candidates favoring certain races.
Likely because things like race actually do have statistical relevance when it comes to things like that. Things can be true and also be seemingly biased in ways that humans don't like.
@@orirune3079 NO NO NO NO! That's just not true, okay, it just isn't!
@@orirune3079 Child, it takes that into account, because it learns from the way it is handled now. The existing racism makes it racist.
@@miriamweller812 How would that work? An AI isn't trained on racism. It's purely crunching numbers, and it learns that certain things are associated with other things. And it may learn that all things being equal, something like a person's name or race or sex could have a correlation to their performance.
@@orirune3079 It is aggregating the variables associated with people getting hired, not necessarily their actual "aptitude" for a job, something that is semantically far too difficult for current AI to define. It's a stats machine (not to discount how far statistics can really get you) that works based on how hiring practices have always worked: it looks for qualities associated with people getting a job in the past and aggregates them to make decisions in the present. If people with similar resumes but names of different origins were getting hired at different rates, it would pick up on that and assume there is a reason for it, when the only reason was the existing biases of the people whose decisions the model was trained on. If people come forth and say that AI is hiring in a discriminatory manner, this is why they are correct: it was inadvertently trained to do so.
I got bronze medals in the IMO and had been quite good at plane geometry problems. It replaced me. I would love to see its proof of the nine-point circle, or of Morley's theorem, discovered in the 20th century, which says that for any triangle, its angle trisectors intersect at three points that form an equilateral triangle.
Wow! But yes... worrying, in a way.
I’m genuinely terrified of AI.
I'm only afraid of ignorant, but convinced people, as always. ;)
How so?
Why? Imagine having all your problems solved, e.g. health issues, lack of money.
@@AORD72, the latter problem will be solved only for the corporations owning the AI, not the general populace 🤭
AlphaGeometry is a brilliant breakthrough, but there are still some obstacles to applying this approach more broadly. The key is data: from AlphaGo to AlphaGeometry, the system relies on being able to automatically generate essentially infinite amounts of data for training. The specifics are very clever and inventive, but broadly, this works in a system with exact rules, like the game of go or mathematics. It's not clear how to apply it to fields that do not have formal specifications.
In fuzzier contexts, we're still stuck with e.g. fine-tuning plus reinforcement learning like ChatGPT, which ultimately relies on finite and costly human-generated data.
Come now, I refuse to believe anything could ever replace Sabine Hossenfelder.
Nice video
I highly suspect this video is already a robot instead of Sabine
And from what I've seen lately, her clothes haven't been changed for some time, further supporting your robot hypothesis.
@@KingCobbones I saw her live in London last year at the IAI festival; she's made of flesh and blood... and guess which shirt she was wearing?
love it, hooray Sabine!
This sounds kind of like VBA macros and solutions. The output code is verbose and overlong but gets the job done. Not very elegant, but since the speed of the computer is so much greater, it outperforms humans. That same development has led to code bloat in all of the newest programs, where a master programmer could condense them down to less than a megabyte with the fluff eliminated.
And yet AI still can't run a call help center.
Have you tried searching the web recently for "AI call center"?
It can, easily
So, have you tried unplugging it and plugging it back in?
@@MrE073 I have never been helped by such a system. Not even close. In the last year I have tried getting help from AT&T, Amazon, USPS, and Microsoft, and none of them worked worth a crap. They are terrible.
If Facebook is anything to go on, AI has a long way to go before it can be described as intelligent.
You have to consider the quality of the data set.
GIGO still applies.
Yes. Logic and reasoning still is difficult for AI. They will work it out eventually though.
Zuck just says AI, but FB is actually rather stupid.
Zuck never grew up and neither did FB.
@@kaboom4679 Zuck loves the garbage as it inflates FB.
I've already been using it to postulate comparative philosophy questions.
Important and exciting development!