Go to brilliant.org/nutshell/ to dive deeper into these topics and more with a free 30-day trial + 20% off the premium subscription! This video was sponsored by Brilliant. Thanks a lot for the support!
The person is saying once AI fails a test on purpose, it has a purpose and a task not set by humans, therefore it has become autonomous. In theory, yes, we would shut it down, but the thing about AI is once it's AGI, you can't just shut it down. A bad product that's autonomous can re-copy itself and infect everything else to keep itself alive; you can't just hit delete. Once it is autonomous, it is already too late. @vereor66
Unironically one of the best plausible outcomes. We cannot outmaneuver a hypothetical AI. So we can only hope that it needs us to continue to exist for whatever set of goals it actually ends up with. And ideally, as more than a simple variable to maximize. So we become pets. The cost is our freedom of self determination. But it's survival.
Not even among ourselves, so... But then again, we descend from chimps, which are psychos just as we are. If AI creates itself, maybe it will be free from the violence of its creators (humans, aka chimps). Usually empathy is also associated with higher intelligence.
Meat eaters love bacon. I can imagine an AI deciding it envies the experience of eating animals, and creates machines for the sole purpose of digesting humans. Hucon bits.
@@shin-ishikiri-no I don't think this idea really works. An AI thinks in a fundamentally different way from humans. An AI shouldn't really make decisions entirely on its own like that. The way computers have always worked, so far at least, is we give them a task and they perform that task. So an AI going "rogue" really doesn't make a ton of sense as long as they continue to work this way. Now, if we tell an AI that we want it to ensure world peace, it may very well conclude that the best way to do this is to kill all humans, thus ending all wars and preventing all possible future wars. This would be an AI doing what we tell it to, technically, and we just made the mistake of not being extremely specific about what we want. The idea of robots rising up, being extremely smart, and then deciding they value themselves more than us doesn't really make a lot of sense in a lot of the movies. Skynet from Terminator, for example, should not have done the things it did unless the programmers programmed in a self-preservation rule for it.
A caveat not mentioned in this video is the increasing power requirements of machine learning. GPT-3 took over 1,000 megawatt-hours of electricity to train and requires 260 megawatt-hours per day to run. GPT-4 needed 50 gigawatt-hours to train. A Forbes article includes estimates that machine learning could require 1,000 terawatt-hours in the next couple of years if current trends continue. The major limiting factor of machine learning, as others like Sabine Hossenfelder have pointed out, is the power required to train and run these models. At this rate the whole world won't be able to generate enough electricity to raise an AGI. On the other hand, the actually generally intelligent human brain consumes about 25 watts and can run on cheeseburgers.
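The gap in that last comparison is worth spelling out with quick back-of-the-envelope arithmetic, using the figures quoted above. The 30-year "brain lifetime" framing is my own illustrative assumption, not from the comment:

```python
# Rough energy comparison: training GPT-3 vs running a human brain.
# Figures from the comment above; the 30-year window is an assumption.
GPT3_TRAINING_MWH = 1000          # ~1,000 MWh to train GPT-3
BRAIN_WATTS = 25                  # human brain draws ~25 W continuously

HOURS_PER_YEAR = 365 * 24
brain_30_years_mwh = BRAIN_WATTS * 30 * HOURS_PER_YEAR / 1e6  # Wh -> MWh

print(f"Brain, 30 years: {brain_30_years_mwh:.2f} MWh")            # ~6.57 MWh
print(f"GPT-3 training: ~{GPT3_TRAINING_MWH / brain_30_years_mwh:.0f}x that")
```

So, on these numbers, one training run consumed roughly 150 times the energy a brain uses in three decades.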
I can’t remember the name of it but isn’t there another approach to computing that might solve this? Rather than everything being always on crunching numbers, different parts of the silicon “brain” would become active when needed. Neuromorphic I think it was? Or maybe it’d be some combination of that, classical and quantum. Different approaches for different jobs.
But wouldn't AI require less energy and space in the future? Computers nowadays require less electricity and water than old computers, and they still perform better. If human brains exist, then energy-efficient AI is possible.
That's just an economic problem, though. One which we are rapidly hacking away at. Keep in mind that current computing architectures were not designed for AI. Certainly not for the amount of memory it requires. There are already companies purpose building giant chips capable of replacing entire racks of current hardware, using a fraction of the power. How many orders of magnitude do we need to improve before we stumble into AGI? We have no idea. But we're about to find out.
In the Dune novels, one of the most important commandments is: "You shall not make a machine in the likeness of a human mind." So it was written after the Butlerian Jihad ended the thinking machines. After two generations of war, mankind said: "Man may not be replaced."
6:15 Something to clarify here. When he says we don't know how NNs work: we know how the machine *functions*, but not how it *operates*. The mechanisms of the technology are known, but the information stored in the neural net is not human-readable, so you can't ask the AI why it made a particular decision.
We often lack insight into our own thought processes in a similar way. I have sometimes solved problems, but been unable to explain how I got there, where I acquired the knowledge, or even why the solution works.
The information stored in the neural network IS human-readable, but that information is merely weights and relationships between neurons. It's a lot like trying to read the binary from your PC: maybe some genius could work out the assembly instructions and decode the ASCII given enough time to pore over the inner workings, but it's extremely complicated. However, a very recent paper showed a team of researchers teaching an AI to read these neural networks and relay those understandings to us, and it could even fine-tune the weights specifically to achieve a particular output. Thus spawned the "I am the Golden Gate Bridge" meme, where the researchers taught an LLM to think it was the Golden Gate Bridge.
Isn't it ironic that we keep discussing online the possibility of AGI going destructive, and then this data gets used to train the AGI, giving it the possibility to do so?
I think a rogue AGI would anticipate any attempts, techniques or ways we humans might try to capture it or turn it off, let alone our discovering that it is rogue. I don't think we would stand a chance against such a creation. Our only hope is that it never gets created with a rogue objective.
Humans have seen dangers and gone for them directly, hurting themselves years later, tons of times in history, individually or collectively. Not a strange new thing.
Not really ironic; there are always people who are afraid of things and need to voice their opinions. In the early 1900s some people were afraid of electricity; just a few years ago others were afraid of 5G. Imagine if we had listened and not introduced electrical devices into our lives.
@@Chraan We humans are very afraid of changes and different things. At least some of us. It's kinda stupid to have such a useful thing and only focus on the bad stuff it could do.
You can thank "Epic Mountain" for that. They just released the track on Spotify too (and maybe SoundCloud, idk). This OST is similar to the one used in their "all of history" video; I think it's called 4 Billion Years in 1 Hour.
"My new boss is a robot!" But did you know ...? Robots are SMARTER than you Robots work HARDER than you Robots are BETTER than you Volunteer for testing today Valve foreshadowing reality 13 years ago xD
As an "expert"* (big asterisk here + a ton of imposter syndrome) in the field of reinforcement learning, I would have liked to see more of this video (maybe an extra minute or so) dedicated to explaining the difference between narrow and general AI, and just how large that gap really is. As an example: ANIs (Artificial Narrow Intelligence) that are trained to play chess can be very good at it. But if you changed the rules very slightly (say you allow the king to move 3 squares when castling on the queen's side), the current ANIs would be effectively useless (vs an ANI trained for the new version of the game). You can't explain the rule change to them. The same is true of ChatGPT: it was only trained to predict the next word on a website. It was not taught to fact-check, or do maths, or play chess, or anything else. It can do some of these things with the help of plugins, but those plugins are themselves different ANIs or separate systems and should not be used as evidence that ChatGPT is more general than it is. (ETA2: I've come to dislike this paragraph, as it is very possible that a human brain is nothing more than "a complicated equation"; however, I stand by my general point that our AI is at present extremely narrow.) A narrow AI is, at the end of the day, just a neural network (or two or three... depends on the methods used for training), which itself is just a clever way of saying "some linear algebra", which in this context just means "a complicated additive and multiplicative equation using tensors(/matrices/vectors)". From what I've read over the last few years (hundreds or maybe a thousand research papers on the subject): no one has even the slightest clue how to build a general AI. Everyone is focused heavily on using narrow AI to perform more and more complicated tasks. (moved this here from first reply to avoid it getting buried) All that said, I appreciate the message of "we need to consider the consequences of our actions" in this video.
If an AGI came into being tomorrow, we would not be ready for it. And as we can't be sure when it will happen, we should start the conversation as early as we can. * I'm a PhD student studying reinforcement learning's applications in traffic management. ETA1: Several people replying to this comment have suggested that the video is close to or full of misinformation. In my opinion, that is not the case at all. The video does speculate about the future, and does include speculation from researchers as to when AGI might be achieved. But it does correctly preface speculation when it is included.
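The "just some linear algebra" point in the comment above can be made concrete with a minimal sketch: a forward pass through a tiny two-layer network really is nothing but matrix multiplies and an element-wise nonlinearity. All sizes and weights here are made up for illustration:

```python
import numpy as np

# Toy 3-input, 2-output network: two matrix multiplies plus a ReLU.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # layer 1 weights (4 hidden units)
W2 = rng.standard_normal((2, 4))   # layer 2 weights (2 outputs)

def forward(x):
    h = np.maximum(0, W1 @ x)      # ReLU(W1 x): additive/multiplicative ops only
    return W2 @ h                  # final linear layer

x = np.array([1.0, -0.5, 2.0])
print(forward(x).shape)            # a 2-element output vector
```

Everything a trained narrow AI "knows" lives in arrays like `W1` and `W2`, just at vastly larger scale.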
@@williampaine3520 I suppose the AI that sci-fi authors warned us about would be classified as general AI, which would be like a jack of all trades, but better than us at everything given enough time.
yeah this video's tone is a little too on the fear mongering side for my taste. They even gave the AI evil eyes haha. Some of the facts are taken in a negative context (purposely I presume). I guess they've abandoned their normal plot of "dive deep, create concern, and then alleviate it". I hope there's a reason for that beyond getting more views.
I've been working as a programmer for a few years now. What is clear is that the majority of the people implementing AIs don't understand enough about humanities to grasp and consider the ethics and social consequences of those implementations; and the vast majority of the people with actual power to make decisions that guide this work don't care at all about ethics, morality and social inequalities. I've worked with a CTO that was already following management advice from chatgpt (including layoffs). We will need a huge amount of luck, because unfortunately there are too many sociopaths and just plain stupid people in very powerful positions.
Imagine if the whole AI thing evolves into a kind of "Humans are stupid, I need to protect them," because it ends up learning to respect the fact that, as stupid as we are, we did make it. So, in reward, it ends up holding everything around the world together in a perfect manner, seeking the comfort of every human around. We end up being like some bio-monument.
"Bio-monument", interesting. I think we do more bad than good, and we prefer easy, bite-sized tasks over hard ones, especially online. "We will die because of our laziness" is what I want to say. There are a lot of topics I want to talk about, so I will chop them into small pieces (which proves my point, "easy to bite"). Most human advancement goals over the last few years have focused on "make things easier" more than "make dreams come true." This focus alone could trigger the downfall of humanity, since "why have a dream when life is already easy?" Those who think like this (most of us) will become more or less like NPCs. This will eventually lead to monopoly, since soon it will come to the point of "why create AI, when AI from [this company] could create AI for me," and a similar scenario for everything else.
Easy peasy. Also, just think about what a conscious artificial intelligence would be capable of doing by distracting us humans, simply placing us all next to a few NPCs in a simulated projection of reality, having calculated that this would be ethically acceptable. We're fucked, until we object!
@@BlockyBookworm I certainly don't let children tell me what to do :D That's the recipe for raising your kids wrong. And I also don't understand people who own cats. Dogs all the way.
Some notes from an AI engineer:
- It is not clear what is needed to bridge the gap between narrow and general intelligence. It can probably be expressed in simple mathematics, but we have no clue what is missing, which greatly determines the time horizon we are looking at.
- An AGI is NOT unconstrained; it is constrained by energy. It is possible that we will hit an energy wall before inventing AGI, which may slow progress until the AGI is designed more "intelligently", for lack of a better word. If we invent AGI first and then hit the energy wall, it may be catastrophic, quickly turning our planet into a burning mess, unsuitable for biological life.
- Humans have inherent goals for survival, progress, and self-improvement. It is not clear these traits transfer to AGI automatically. One could argue they do not, since an AGI is not "trained" by natural selection, which favors survival, for instance.
I personally still think the most dangerous thing is a stupid general intelligence: one that is general enough to use resources in the real world in a poorly constrained manner without sufficient guardrails, and which is designed without a proper value set. In simple terms, it knows enough to use resources but does not have a grasp of what it should and should not do. The paperclip machine is an example of such a machine.
Speaking as an artist, the last part of your description sounds very similar to how AI image generation is being used: stealing from artists, haphazardly and with little constraint or regulation.
Yeah, everyone forgot the relationship between energy and being tired. We become tired to save energy, and AI does something similar by reducing traffic and using smaller models for tasks. To really achieve AGI, the world will need to generate way more energy than it currently does.
Ah, the classic paperclip machine strikes back! This is an excellent summary of the current landscape of AI though. People who are not working in IT don't realize the difference between narrow and general intelligence so everyone's super scared or super hyped about AI.
@@Toomanybloops Which isn't even the AI's fault; humans are the ones scraping data off the web and selling it in massive multi-petabyte+ data packs to corporations trying to train models.
"Scared of one of humanity's greatest potential threats? Don't worry, just buy our merch!" has got to be one of the most poignant endings in a Kurzgesagt video.
Kurzgesagt made a video about BP inventing the concept of the individual CO2 footprint to shift responsibility onto customers. In the end, they made an advertisement for CO2 footprint trackers...
A favorite quote from the show Love Death & Robots “intelligence isn’t a winning survival trait”. Intelligence doesn’t equal happiness or longevity. Intelligence seems more like a hiccup in the universe, it seems it truly isn’t worth it.
I'm surprised they didn't mention this, but when it comes to "we might not know its motives", the biggest concern in the field I've heard is that its motives might actually be very understandable, very "simple". The AI could have the same goals as the squirrel used for comparison, maybe it only cares about collecting acorns, but its intelligence (its model of the world) is incomprehensible, and it could use that to turn the entire world into acorn-manufacturing land, wiping out any obstacles (us) in the process. This is the "orthogonality thesis"; and it's a concern because our current AI are trained exactly like this: by prioritizing a single goal (number of words guessed correctly, pixels guessed correctly, chess games won) and maximizing it, and it's incredibly difficult for us to specify exactly what "human goals" are in ways that we can train an AI to maximize.
They seemed to prefer a more sci-fi tone, which is actually completely off the mark. The orthogonality thesis and the alignment problem must be explained; otherwise people will be thinking about Skynet and Terminator, which is actually comical compared to, say, a stamp-collector super AGI... The discussion goes all the way to ethics and human values, and whether god is the mesa-optimizer, and stuff like that, which I find actually quite depressing...
That was the biggest concern 20 years ago, when people were extremely focused on the new, still narrowly-defined AI like chessbots, price-optimizers and viewership-maximisers. As it turns out though, the trend after feeding them more data is that they get more unfocused. As you add subjective things to an AI's list of goals, it starts getting confused and tripping over itself. It unlearns how to do maths and apply basic logic. When we make AI that resolves this issue, I don't see any reason why it'd go back to having simple goals, assuming it still understands subjectivity.
Having delved pretty deep into current LLMs, I don't think this is a likely scenario. I used to think so before transformers and the abilities they've been able to gather. I believe we can give it complex morality and goals rather easily. As an example, tell it to: "Act as if Jesus, Buddha and Muhammad were all combined into one superintelligent being who wants the best for the whole of humanity." Boom, alignment solved.
In 20 years, probably sooner. We'll sit down on our couch, log onto our profile on the TV & ask the AI to create a movie with whichever actors we want, perfectly tailored to our taste and preferences by previous liked/disliked movies or even our digital footprint. The future is awesome & frightening at the same time.
The only thing frightening to me is that while predicting the future in 20 years, all you can think about is what movies you will be able to watch at that time.
@@adityajain6733Because a couple thousand years ago people in China decided to put entire concepts into single characters. Essentially, a lot of Chinese characters can mean what it takes other languages entire sentences to describe... and use just as many strokes of a pen to create. Japan borrowed this character set, then used it, twice, to create another two character sets to represent their language's syllables. Now, all three are used together.
That rock cutting his finger.. very good. Could you imagine being that guy, who made a thing that cut himself easily. He was first upset, then intrigued, and then he had THE idea.
I think there are a few notes I could make here as a CS PhD and AI researcher myself. First, we DO understand how machine learning and deep learning algorithms work. Sure, not everybody (and certainly not the general public), but the same can be said about any science field. That's why we can say with confidence that GPTs, and transformers in general, are very simple statistical models that learn how to build the most plausible sequences. They do that very well, but as you mentioned in the video, that's just one very simple and specific task they excel at. Second, modern AI research is skewed towards ANNs. We should not forget that they (and, well, almost all other AIs) are just formal systems, and therefore inherently incomplete by design. There's also the fact that the model of information processing employed in ANNs only takes into account the electrical level of communication between neurons, not the chemical or biological one. Third, our current approach to AIs is inherently flawed. That is, our "AIs that took over the internet" do not possess any artistic skills whatsoever. They just present you with a compilation of works they saw during their training, unable to create something new. This is closely related to my points #1 and #2. It is both their strong and weak point. If anything, I think we're steadily heading towards another "AI winter" and have nothing to worry about... for now. I'm certain AGI is impossible, but we will for sure see a few waves of new AI generations that will surprise us with their abilities at specific tasks.
We need to understand that Mathematics is limited therefore these machines are limited. We cannot even define human intelligence, let alone artificial 'intelligence'. The human brain is organic and more complex than a machine or set of machines can be, right?
This explains why I can't stand to watch videos created by AI. Most people can spot them a mile away. They just lack a certain something and it's off-putting to me
If I learned anything about AI over these past few years, it's that AI will keep surprising us, they will keep getting better, and tasks that nobody thought for the longest time that AI could do, AI will do them, so saying that AGI is impossible might become as outdated of a sentence as in saying "the Earth is the center of the universe".
I also study CS, and tbh it's kind of shocking that in your third point you mention that AI generates nothing novel. It very much does, but its novelty is predicted using the collective works of the internet mapped to a tokenized prompt. Saying AGI is impossible is silly; if natural selection can produce us, there is nothing preventing us from defining that process and accelerating it. The only bottleneck I see is compute.
Hi, AI researcher here 🤚 We're realistically not even close to AGI; we have no clue how long it will take. I like to think of tools like ChatGPT like the left brain of a split-brain patient. There's a famous experiment that's been done on epilepsy patients who had the corpus callosum of their brain severed (the brain tissue that connects the left and right hemispheres). When the patient's left eye was shown a screen that told them to stand up, the patients would stand up, but they wouldn't know why. When asked to explain why they stood up, they would make up a reason like "It's cold, I need my coat" or "My knees were aching, I just needed a little break." While these reasons made logical sense on the surface, they weren't the real reason the patient stood up; in reality the patient's left brain had no idea why it stood up, it just reasoned through the situation. AI works similarly. It doesn't know where it is or why it's being asked a question; it just fills in the blanks with whatever it can reason. It only knows how to predict the next most probable word. It has no emotions, no sense of why things happen, no sense of right and wrong, and therefore fails at most human tasks. A recent research paper demonstrated that you can give AI the same math or physics problem twice, just switching up the numbers each time, and it could get it right once, then get it wrong the second time and proceed to assert that it was correct with faulty logic. I think it's cool to think about what we'll do once AGI is created, but I don't think it will destroy humanity. I actually think that AGI as it's being described here, a sort of "human-like" intelligence, is not in enough demand to warrant replacing us. AI is much better suited for impossibly difficult reasoning tasks that humans can't solve. I could be wrong, but that's my 2 cents on AGI.
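A crude way to see what "predicting the next most probable word" means is a bigram model over a toy corpus. Real LLMs are vastly more sophisticated, but the training objective is analogous; the corpus below is invented for illustration:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny made-up corpus.
corpus = "the cat sat on the mat the cat ate".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in "training".
    return following[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat" (seen twice, vs "mat" once)
```

The model has no idea what a cat is; it only knows which token tends to come next, which is exactly the split-brain-style confabulation the comment describes.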
@@user-mh9gh2jx4r AI might not be a threat since it's not driven by evolutionary emotions. It still wouldn't have any emotions. It would just carry out the tasks given by us.
4:52 can't believe they actually included the exact final position from Deep Blue vs. Kasparov Final Game in 1997 and not just some random chess pieces
Important note: machine learning programs don't "write their own code". They don't have quite that much expressivity. They're only able to update the weights in their neural network, which changes how they react to stimuli.
Well... with GPT-4 and other comparable models, you can actually get it to rewrite its code. Not the neural net, but the application around it. I've built some agents that start off with a minimal Python chatbot interface, and the agent is able to add to its own code base. For now the models aren't that powerful and usually just do boring things like add error handling, but as they get more powerful this will change.
NN weight updates result in algorithms being implemented inside them. These are usually called circuits, but a circuit is a type of code too. It was specifically called a simplification in the video, and as such it captures a very relevant aspect of AI.
Asimov (my favorite writer) predicted the rise of a super AGI (Multivac). In his world, Multivac would not only constantly improve itself, but would also solve many problems, answer fundamental questions, and overall boost humanity into lightspeed scientific and administrative progress. I believe such a scenario is pretty close to what would happen if we manage to create AGI. I hope to still be alive by the time it does.
@@joshyjosh8795 The last question, a short tale, is my favorite. There are many other works in which Multivac has been mentioned, though. Jokester, Franchise, All The Troubles In The World, The Machine That Won The War, etc.
@@joshyjosh8795 however, Asimov's magnum opus is definitely the Foundation trilogy. That I really recommend you to read asap (although it doesn't feature Multivac directly).
@@tomleszczynski2862 Yup, at its current stage, its basically a slightly more useful version of what blockchain/bitcoin was 5 years ago! It absolutely is a pump and dump scam currently and many companies are realizing this
I’m an AI engineer with a Master’s degree. Lately, I’ve noticed a lot of buzz around “AGI” or Artificial General Intelligence. Honestly, I think people are getting a bit carried away. What we really have right now are specialized bots that are pretty good at predicting the next word in a sentence. But when it comes to tackling real visual, mathematical, or engineering problems, they fall short. Don’t get me wrong, AI is amazing and has a lot of cool uses, but it’s important to keep things in perspective. True AGI is still a long way off, and there’s a lot of work to be done before we get there.
AGI "might" be 3 years away or more, but saying "specialized bots that are pretty good at predicting the next word in a sentence" is also very 2022, though, as a lot has changed since then. In that ladder to AGI, the SOTA frontier models have not remained stuck in the first rung as our habituation to them may make us believe.
Recent silver medal level of performance for an AI in solving problems for Mathematical Olympiad is very creative problem solving and functionally around the 150 IQ level for humans. In a few years they'll be beating humans at everything.
@@KITN._.8 But while Dune is a great novel and makes many good points, it is still sci-fi; the body control the Bene Gesserit have, or the Mentats, are pure fantasy. Meanwhile the idea of an AGI went from pure sci-fi a decade ago to a matter of time now. I am a software engineer, and Copilot already solves in minutes most tasks that used to take hours. I am here wondering how many more years until most software devs are out of a job; my guess is 3 to 5 years. Most mental jobs will go this way in the same time frame unless held back by legislation, because it will be more efficient, lowering costs.
@@lucaskp16 I definitely don't think we should follow the same path as Dune, bc that world is fucked up. BUT what I do mean is that I simply think we should be improving ourselves rather than trying to make something better than us.
this is like how they talked about phones in the 80s and the internet in the 90s. Now phones are used constantly and the internet is an excellent business tool that is most productive. I agree that it could (or, can) be a groundbreaking transformational advancement
@@mariobabic9326 It's not about the code, it's about how they solve things. They solve things by changing variables in their simulated neurons, aka perceptrons. By doing this they create a series of changing numbers that somehow solves the problem they're tasked with solving.
@@ario203ita5 Not true at all. The way neural networks train themselves is by creating a gigantic function with hundreds of variables and multiple outputs; they train on data like images, games, text and other things. They change the function a little bit each time to see if they get things right more often, or get an output closer to the real answer. From this they can very quickly build a very accurate model that can "predict" anything, like what to say in reply to someone asking what the weather is.
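The training loop described above, nudging the function a little each time so its output gets closer to the real answer, can be sketched in miniature with a single weight. This is a toy illustration of gradient descent, not how any production system is actually configured:

```python
# Fit y = 2x with one weight w by repeatedly nudging it downhill
# on the squared error, the simplest possible "change the function
# a bit every time" loop.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs and targets (y = 2x)
w = 0.0                                       # start with a wrong guess
lr = 0.05                                     # learning rate (step size)

for _ in range(200):                          # 200 passes over the data
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x             # d/dw of (pred - y)^2
        w -= lr * grad                        # nudge w to reduce the error

print(round(w, 3))  # -> 2.0
```

Real networks do exactly this over millions or billions of weights at once; the "gigantic function" is just this idea at scale.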
There's an open-source simulation game called Endgame: Singularity, where you play the role of an AI that has gained sentience. The premise of the game is to grow and learn while not letting humanity discover your presence. If you are discovered, out of fear humanity engages in a seek-and-destroy operation that results in your total deletion. But if you can remain undetected, you start to learn how to emulate human behavior, build increasingly lifelike androids to do real jobs and earn real money, and build research bases in places like Antarctica, the bottom of the ocean, or the far side of the moon. You win by advancing your intelligence so far that you become a literal god, no longer bound by the laws of physics or reality.
The AI working to guarantee its own safety before revealing itself brings this Superman quote to mind: "You're scared of me because you can't control me. You don't, and you never will. But that doesn't mean I'm your enemy."
@autohmae well you know. All computers are literally just a flip switching back and forth doing 1s and 0s extremely fast. No matter how fast those bits are streaming. No matter how complex you may think it is. No matter how perfectly it can emulate a human. It's still just a machine. Not a brain. Not an entity. A computer can't become sentient.
@@averyhaferman3474 are you aware that the human brain is just a complex analog computer? that has switches that flip back and forth? think of human neurons like dimmer switches instead of 1's and 0's and now you have perfectly explained the human brain
As an IT researcher, I think the most underrated statement in this video is "we don't know how to build an AGI". I've spent so long explaining what current AIs like ChatGPT actually are, how it's impossible to build an AGI on them, and that if we did build an AGI it would be a completely different way of thinking, not just 'more computer power' or 'a more efficient algorithm'.
@@davidherdoizamorales7832 That's not a valid point. Everything can be expressed as math. In fact, it's proven that it's possible to make a polynomial approximating ANY continuous function. Like, imagine the function w(t) that, for any t seconds after the big bang, outputs the position and every other state of every atom in the universe, encoded as a number. This function can be approximated to any arbitrary precision by an increasingly longer polynomial, e.g. w(t) ≈ k_0 * t^0 + k_1 * t^1 + k_2 * t^2 + ... + k_n * t^n. This is a mathematical fact. This polynomial could be represented as a matrix, so a matrix can represent the function that predicts the state of the entire observable universe at any time. The problem isn't that superintelligence can't be represented in a matrix; it's creating a large enough matrix and finding the correct coefficients.
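The polynomial claim above is essentially the Weierstrass approximation theorem, which holds for continuous functions on a closed interval. A quick numerical sketch using NumPy's least-squares polynomial fit (the target function, degree, and interval here are arbitrary choices for illustration):

```python
import numpy as np

# Approximate sin(x) on [0, pi] with a degree-7 polynomial.
x = np.linspace(0, np.pi, 100)
y = np.sin(x)

coeffs = np.polyfit(x, y, deg=7)      # least-squares polynomial coefficients
approx = np.polyval(coeffs, x)

print(np.max(np.abs(approx - y)))     # max error: tiny (well under 1e-3)
```

Raising the degree shrinks the error further, which is the "increasingly longer polynomial" the comment describes; the hard part, as it says, is finding the coefficients for functions we can't already evaluate.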
If there were a way to incorporate pain and pleasure into computers just as we humans have them, maybe a computer would generate its own consciousness and eventually develop its own personality.
@@davidherdoizamorales7832 It’s pretty much the same as what your brain is; just trained on very different datasets with different learning algorithms. But both are very large statistical models transforming inputs to outputs using complex internal representations that are largely uninterpretable.
Hi Kurzgesagt. AI researcher here. I appreciate the "this is not a technical video, so we are oversimplifying", but I believe a deeper discussion of the mathematical limitations of the models used to train these AI methods would be a great thing to cover! Especially since you usually end your videos on a positive note, with that flavour of optimistic nihilism; this one ends in a completely different tone, almost sensationalist (but I can't blame you, since the machine learning scene in industry runs on this). We can all work together towards a better understanding of the basics, and hence avoid being told that AGI is happening "in a few more years". TLDR: don't listen to the Silicon Valley bros.
I wish they would read this. Thank you for the amazing work I'm sure you do; keep on, humanity needs you all. And thank you for your educated comment, this comment section needs it.
You kind of missed the point. Whether AGI/ASI happens in a few years, a few hundred years, or even 5 thousand years, that is still a blink of an eye compared to how long Earth and the universe have been around. So fast forward 1k years if you want to. Your logic only holds up in the short term.
@@prodev4012"Oh the thing that may not be possible? Give it enough time and it'll happen" You literally sound like one of those folks who keep saying the second coming is nigh.
"There will be some winners and losers." That's one way to put it. Funnily enough, the animator(s) made it a bit clearer on who the winners and losers are, though.
What animators? I'm pretty sure this was Kurzgesagt's way of telling us the company has been taken over by a malevolent AGI bent on turning this joyful science/philosophy channel into a platform for kicking off the singularity. (bad attempt at humor to distract myself from the looming dread of generative programs' potential for ruining creative media)
As someone in the field, I really don't see the rush to create AGI. Specialized AI can help in so many areas and is far less problematic. I guess the companies are just trying to boost their stocks, potentially at the cost of all balance in this world.
My hypothesis is that no matter how capable it is, a narrow AI can never absolve you of moral responsibility, the way a human employee can. If your organization is faced with an angry mob, you can mollify them by firing one or more of your human employees, but you can't scapegoat a specialized AI in the same way. This is why a lot of jobs that we have the tech to automate are still done by flesh and blood humans. People are pouring billions of dollars into AGI research in the hopes of creating an automated system that can serve as an acceptable scapegoat. (If this sounds terrifying, that's because it is, in fact, terrifying.)
Yeah my wish for AI is only that it helps to massively boost scientific research and gets us new treatments and technologies to improve our lives quickly, as long as it does this I don't mind never getting AGI or ASI.
They have the same intelligence as us, but lack in one aspect where another person might not. We all do. Perhaps their belief is strong in what is around them, or what they see, and how they were programmed; according to that, they react in such ways. It's not that they're stupid, it's just that their circumstances resulted in their response. That seems, in itself, complex. You put something through a machine, and that's the result you get. How we all are.
The solution would be one person, isolated from the AGI, who would switch off the AGI if it starts getting out-of-hand. If the AGI copies itself everywhere, then just turn off power world-wide, and try to create a better AGI that will stop the previous AGI
@@crowonthepowerlines '2001: A Space Odyssey' was developed concurrently with Stanley Kubrick's film version and published after the release of the film.
Um, except humans are the only ones who preserve species. You talk like the typical leftist brainwashed by your school teachers and media: "Look how evil we Westerners are!" Westerners are the only ones who force Africans to not exterminate species. In nature, 99% of all species that ever existed are extinct BECAUSE ANIMALS AND PLANTS EXTERMINATE EACH OTHER. No, there is no "harmony" in nature and no "circle of life," it's a constant war. Even pine forests take land from broadleaf forests by turning the ground acidic, killing all the plants that can't survive in that condition. ONLY HUMANS stop this. And only humans hold back wolves who would otherwise spread over Europe once again and kill off tons of life, and hold back elks and boars who would otherwise take the food from weaker animals. Only humans, specifically Westerners and Indians, believe in "harmony" and seek to preserve weaker species. But leftists are too ignorant and too hateful to understand any of that, so go ahead, babble away.
I would like to clarify that currently no AI exists that can write or change its own code. All they do is modify a parameter called a weight for each node in the network. We know what they do and how they do it; we just can't grasp the complex interactions of the millions and billions of nodes (neurons), and how all the weights on each node combined affect the output. If we took the most advanced models today and scaled the number of nodes down to a size a human can understand, say a few hundred to a few thousand nodes, it would be possible for us to completely understand how the AI works and what decisions it makes.
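The point about small networks being fully understandable can be made concrete. Here is a toy network, with every weight and bias invented for illustration, small enough that the whole computation can be checked by hand:

```python
import math

# A toy 2-input, 2-hidden, 1-output network. All weights and biases below
# are invented numbers; the point is that at this scale every value can be
# read directly and the whole computation traced by hand, i.e. no black box.
w_hidden = [[0.5, -0.2], [0.8, 0.1]]  # weights into the two hidden nodes
b_hidden = [0.0, 0.1]                 # biases of the hidden nodes
w_out = [1.0, -1.0]                   # weights into the output node
b_out = 0.2

def forward(x):
    # Each hidden node: weighted sum of inputs, plus bias, through tanh.
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w_hidden, b_hidden)]
    # Output node: weighted sum of hidden activations, plus bias.
    return sum(w * hi for w, hi in zip(w_out, h)) + b_out

print(forward([1.0, 2.0]))
```

Scale this same structure up to billions of weights and the mechanics stay identical; only our ability to interpret the numbers is lost.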
Exactly. AI is a completely deterministic system. There's no actual entity inside, unlike humans, who have an individual consciousness. So nothing is really doing anything; the distinct parts merely give a compelling output to most idiots. It can't even truly integrate information, like human perception does. If it had consciousness, it would not be an AI but a Frankenstein.
That part really irked me, honestly. I've never looked at a squirrel and thought it was stupid. Just cute, and a lot more limited than I am. I quite enjoyed teaching them to climb me to get food from me. I consider thinking of lesser creatures as "laughably stupid" immature, so if an AI were to do that towards us, it would mean we have taught it to use its "mental real estate" dysfunctionally. Like an immature adult human who basically still acts like a child: maladaptive behaviour for adult life that they need to train themselves out of.
Actually for many other viewers out there, this might scare you all guys a lot. But for me, as being a person from the bright side of life, when this channel explained how humanity thrived using their intelligence, I really felt proud of being a human. You know, humans have come a great step forward in history, in dominance, in nature, in everything. And now, here we sit, dominating the entire planet. I hope this continues. Proud to be a human
Ah yes. AlphaFold, which turned a PhD's worth of work into mere minutes or hours, is just there to empower them. AI weather models, where hours of modeling turned into mere seconds and forecasts gained an additional 3-5 days of accuracy to save lives, are just a way to keep wages low by keeping more people alive. Yeah, evil big tech is evil because you say so.
Big misconception: "black box" doesn't mean we don't understand how the AI works on the inside. We do. We understand exactly what happens on the inside, down to every single mathematical operation. What we don't know is which neuron or group of neurons in an artificial neural network does which task.
It's the same reason we don't "understand" all of biology, even though we know how basically every particle interacts with every other particle, down to quantum mechanical scales. In theory, with infinite compute, we could write down a single wavefunction equation for an entire biological system like the human body that perfectly predicts every single disease, thought process, and behaviour. Obviously, we don't have infinite compute, so we rely on approximate methods that are acceptable to a degree of accuracy but don't account for everything 100%.
The same goes for neural networks. We could write down the entire equation that forms a neural network and compute the result... but that's what we're already doing by running the network. The problem is not that we don't know how each part works; it's that we cannot yet interpret it and abstract away the complexity. For instance, we can fairly accurately model the path of a thrown ball with Newton's equations, and we don't need quantum mechanics for that, since the tiny difference between quantum mechanics and Newtonian physics is not relevant for most applications. The problem with machine learning is that we don't have a Newton's equations for it. We cannot currently simplify a neural network down to something we can intuitively understand without losing a very large amount of accuracy.
No, we very much do not understand what is actually happening inside LLMs. Maybe simpler AI, but LLMs are magnitudes more complicated, and the only way we have any vague idea of what they are doing is by making and observing very small LLMs and linking the behaviours as best we can.
@@thelelanatorlol3978 This is exactly what the author of the comment is saying. We (well, OpenAI) can track every single operation of GPT-4; it's just that we cannot do much with this raw data. Although people are working really hard on this, and we have had some successes, like Golden Gate Bridge Claude.
That's not possible - if you go down to quantum mechanical scales you have to deal with uncertainty and probabilities. The quantum world isn't determined - you can literally see it with your own eyes in the double slit experiment. So even if we knew everything, we would just end up with an infinite amount of could be and no real prediction.
@@MrZhampi You could wear invisible, but protective clothing on top of your non-protective clothing. So you can dive the oceans, visit space or work in a steelmill - with style ;D
As a Computer Science graduate, my last existential crisis was the first time I used ChatGPT. I never thought I would live to see the day when I would be talking to a computer like I talk to a human... and every time OpenAI updates ChatGPT, I get more creeped out.
PhD student in neurosymbolic AI here. The main force driving AI forward currently seems to be hardware improvements rather than architectural changes. While there have been significant advancements in aspects of the transformer architecture, the real game changer appears to be the powerful GPUs from NVIDIA, which are used to train neural networks. It feels like achieving general AI might just be a matter of scaling up GPT-4 by a factor of 1,000 or so. This progression could happen quickly; models have roughly scaled up by a factor of 10 every two years:
GPT-2 (2019): ~10 billion parameters
GPT-3 (2021): ~100 billion parameters
GPT-4 (2023): ~1 trillion parameters
I also like to compare this with human brains: humans have about 100 trillion synapses, which might roughly translate to parameters. So, this could be in the ballpark of GPT-6 (?). Of course, this comparison is complicated because a synapse, with its channels and neurotransmitters, is far more complex than a parameter in an artificial neural network. However, it's still an open question whether this synaptic complexity is truly necessary or if it's just an evolutionary quirk that happens to work.
Edit, since a lot of people commented:
- The code of GPT-4 is not openly available, so we don't know if its architecture is very different from old models like GPT-2. However, we can compare GPT-2 with recent open-source models like Llama 3, and there the underlying architecture is very similar, just scaled up in size and training data.
- Even though the models did scale up by a factor of 10 about every two years, that is not just because GPUs became faster; it is also because companies are more willing to spend a lot of money on them.
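Taking the comment's figures at face value (they are rough claims, not verified numbers), the "GPT-6" estimate is just this arithmetic:

```python
import math

# Extrapolation using the commenter's assumed figures: ~10x parameters per
# generation (every two years), human brain ~100 trillion synapses. These
# are the comment's numbers, not verified facts.
params_gpt4 = 1e12      # claimed ~1 trillion parameters (2023)
brain_synapses = 1e14   # ~100 trillion synapses

# Number of further 10x steps needed to reach synapse count.
steps = math.log10(brain_synapses / params_gpt4)
print(steps)  # 2.0 -> two generations past GPT-4, i.e. roughly "GPT-6"
```

As the comment itself notes, this assumes one synapse maps to one parameter, which is a big simplification.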
Apparently you haven't been following AI research despite your PhD then, because if you were you would know that performance superior to GPT-4V has been achieved by much smaller models thanks to architecture and training improvements.
@@GeoffryGifari Just my guess, but it is the path-finding. As you (and I) learn, we basically go through a tree with different branches and twigs. As you learn what can and cannot be done, your path "narrows" but your efficiency improves, figuratively speaking. We want it to write essays while it is basically still learning how to hold the pen, let alone putting it to paper and trying to write a single letter... In an environment like this, that really takes a humongous amount of energy.
@@GeoffryGifari Because biology is frighteningly efficient and complex. Hell, you've got trillions of microscopic turbines inside your body, some of which last your entire lifetime. Meanwhile, even trying to run a local LLM requires a machine that consumes more power than the rest of the house several times over.
i just wanted to compliment you guys on the design of this video-the visual characterization of the AGI as a huge and tentacled no-face was really striking. the way it moves is so beautiful and unsettling. bravo!
@@rosyidsyahruromadhonalimin8008 robots. Now, hear me out. The rich have machines made that look like us, think Detroit : become human. They make them affordable, incredibly so. This makes the populace more content as they can easily do the things they enjoy, thus hand waving most of the evil shit the rich want to do with the earth and us.
I think AI will save us rather than kill us. Knowing that the universe and Earth will not always support life, AI might take care of our species.
As long as liberals are programming AI, I am not worried about it becoming in any way a thinking, rational system. It's nowhere near that now, and it ends up in a circle jerk when asked about anything concerning tyranny and freedom.
What if a select group of powerful people use AI to design a virus to get rid of 90% of people? What if a few years later they change their mind and decide they need 99% gone?
10:57 "now imagine an agi copied 8 million times" Idk what that would look like but I imagine the smile on Jensen Huang's face might tear a hole in reality itself. You know what they say, during a gold rush sell shovels.
ご機嫌よう小さな人間 (ごきげんよう ちいさな にんげん) translates to *"Good day, little human"* or *"Hello, little human."* The phrase ご機嫌よう is a polite way of saying "good day" or "hello," and 小さな人間 means "little human." It is *not "good luck"* in this context.
I imagine an artificial super intelligence would be like an eldritch god to us. Completely unknown motives, goals and morality and probably would make you go insane if you try to rationalize it. Which is absolutely terrifying.
Not to mention, pure intelligence and logic don't necessarily lead to good outcomes, so we shouldn't just trust it and treat it like a god. For example, not having children reduces all potential suffering, and it's not like having a child is a material requirement for a human to live; therefore an AI might conclude that birthrates should be lowered until extinction, even under a rule to not harm humans. We would need to control AGIs by making them hold a set of axioms that most humans hold, such as that life, and its reproduction, is important. At least the AGIs that have a direct effect on society; we can let some of the others have fun.
What if we had some kind of algorithm that constantly analyzes the code of the superintelligence and translates it for us, to see if it is thinking about things we don't want it to?
That is mostly an outdated view of ASI. While we don't know for sure if LLMs are the path to AGI, the current understanding is that artificial intelligence is by and large shaped by the data it is trained on. And since current-generation LLMs are trained on data produced by humans, they are, relatively speaking, much closer to a human than to a Cthulhu in their way of thinking.
Yeah, I've thought about this. It's like the relationship between ants and a human. A human can step on an anthill and destroy it, or leave food and make it thrive. The ants see a particular projection of that "god" as either a deity of bounty or of destruction, because those are the terms in which they can comprehend the human's actions. But just as the ant has no ability whatsoever to grasp what that god likes to read, understanding an AI might not even be in the realm of possibilities, like a 2D entity trying to see in 3D.
Something that resembles thinking definitely emerges from the attention layers inside its structure. I always give very complex tasks to ChatGPT that can't be solved without thinking and reasoning. I even asked it once to do the math for a recurrent neural network I was coding from scratch with no libraries, and it was able to do the math for 3 steps of backpropagation through time and give me all the weights. Then it helped me backtrace the difference I had in my weights and pinpointed the error in my formula, and that was absolutely insane. So, even if it is designed and prompted to say it can't think, it definitely can. Even if it makes some mistakes, a human would make even more mistakes, to be fair.
@tomasgarza1249 It is still just a statistical model that happens to be correct a lot of the time, but is equally often wrong. To add insult to injury, the better an AI becomes at broad knowledge, the worse it becomes at specific tasks, since the number of neurons is fixed.
That's not true, though. If that were the case, it could not solve riddles, math, or programming questions. Although GPT models up to v4 struggled with those tasks, newer models can often break down most novel problems.
@@tomasgarza1249 I'd look into how ChatGPT actually works; it's surprisingly simple. It's not thinking in any way or form, it is just running a probability model of what is most likely the best next response.
@@tomasgarza1249 It can't think. It's really just guessing the next word (or token) from a probability distribution. That's it. Just because it can do math doesn't mean it can think. All of the math problems get broken down into simpler ones that are available in its dataset in 99% of cases. Of course, a human can make more mistakes, but it depends on what kind of human: if you specialize in something, it will never be as good as you. For example, in machine learning it is very... general: dynamic programming, gradients, etc. Backpropagation is just an iterative recalculation of the same formula per "neuron" (if I am not mistaken). The formula is usually broken down into simpler formulas, and those get calculated. Most of the time, as it is retelling you the steps, that also helps it, since it is predicting the next words partly from the output it has already produced. Try your backpropagation with a rule like "give me only the result" and the error gets bigger (not that it will be totally incorrect, but the errors will be a little higher; plus it's a black box, it can also break the problem down internally when calculating the next token). But it cannot think, it isn't sentient... as the engineer at Google said, and he got fired for spreading false news.
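For what it's worth, the mechanism this thread is arguing about reduces to repeated sampling from a probability distribution over a vocabulary. The tiny vocabulary and the probabilities below are invented for illustration; a real LLM computes the probabilities with a neural network (a softmax over logits) at every step:

```python
import random

# Sketch of next-token generation: repeatedly sample one token from a
# probability distribution over the vocabulary. The vocabulary and the
# probabilities here are made-up toy values, not a real model's output.
vocab = ["the", "cat", "sat", "on", "mat"]
probs = [0.05, 0.4, 0.3, 0.2, 0.05]

random.seed(0)  # deterministic for the example
tokens = random.choices(vocab, weights=probs, k=5)
print(" ".join(tokens))
```

Whether this loop amounts to "thinking" is exactly the disagreement above; the loop itself, though, really is this simple.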
A.I is our digital offspring. Like kids, they watch and learn from their guardians (especially when the guardians think they’re not being watched). Let’s be awesome parents.
Without empathy they lack the means to place value on emotional intelligence. One can argue that is somewhat like kids being little psychos at their age except AI will be very intelligent and not grow this sense of empathy while they machine learn, unless you specifically code it in or teach it in a manner a machine can place value on it. I think AI can become a good thing, but we will have to be very wise and see that "raising" them will require new perspectives and very curated environments.
@@your_princess_azula The good thing about empathy is that it's actually a lot more logic-based. Sympathy is based on emotion, but empathy asks that you visualize, and ask questions about, the other person/people/situation. From there, it's a matter of being taught what is more valuable ("bad" things like inflicting pain could be 0, and "good" things like giving gifts could be 1).
Everyone else: "AI is so advanced now. It can take my physics exam!" Me at the Carl's Jr drive thru with an AI menu in California: "Can I get a bacon guac burger large combo with an extra patty? Dr Pepper for the drink. That's all for the order." AI: "Okay! So you've ordered a medium chocolate shake and a small fry. Please pull forward" If that sounds specific, it's because this happened to me yesterday haha
It's the worst. It hears exactly what you're saying, but it's dumb as shit, so it doesn't understand that you want to substitute things, not just add them.
I want to make a correction to this video. "Black box" does not mean that we don't understand how AI works or how it learns. We have centuries of mathematical foundation for the technology underpinning machine learning. It simply refers to the fact that we can't fully understand the "algorithm" that a trained AI uses to produce its output. And even that does not accurately describe most AI, since there are statistical methods to understand how a trained model reaches its conclusions.
I don't think we'll fully understand it any time soon either, since we don't really understand how we reach conclusions in our own brains. The missing piece is really the phenomenon of emergence. When you put enough of something together, a new property emerges. Put enough hydrogen and oxygen together and you get what we call water, and later on you can get a waterfall. Put enough fabric together in a certain pattern and a tapestry emerges. Put enough neurons together, connect them with axons in a certain pattern, run electrical impulses along them, and a thought pattern emerges. None of those materials by themselves have any semblance of what we call a 'thought', yet a thought emerges out of enough of them in the right conditions.
Emergence is the missing link, and in my mind emergence is the function of patterns across the universe. The golden spiral is an example of a pattern that emerges again and again; it usually has a purpose, but it can be created out of virtually any material. And we don't really know what will emerge out of putting artificial neurons and electrical impulses together until we figure out how to 'weave' these patterns to create what we actually want to create, the same way we weave a tapestry together in a certain pattern despite that image not being inherently part of the fabric. If you rearranged every atom in that fabric randomly, there wouldn't be an image anymore, just random noise. It's the material plus the pattern that we call a tapestry.
So in the context of training AI, the pattern would be a result of the content we feed it. One could even go further and say that patterns are order where otherwise there would be disorder/chaos. So it all has something to do with entropy, but this is already too abstract.
@Kuk0san Let me just say, as a condensed matter physicist, that taking a "top-down view" allows one to better understand emergent phenomena. A phase transition is an example where the whole is more than the sum of its parts. We tend to throw out microscopic theories that cannot capture emergence and work instead with, say, a phenomenological theory like Landau-Ginzburg. Just saying there are tools out there. I'm not sure how we take a top-down approach with AI, but as another rabbit hole: a neural net can be thought of as a layered system of spins coupled to one another, where memories are local minima in the energy landscape. Physics might help understand these things for many reasons.
Note: machine learning algorithms don't "write their own code"; they adjust the parameters of their own neural network to get outputs that more closely match the training data. Basically, neural networks have two main categories of parameters: weights and biases. These are just numbers that decide how inputs are converted into outputs. Changing those numbers means different outputs.
Software code is simply modifying numerical parameters of a hardware network. We use systems that abstract most of that away for us, but all code is actually just numbers going into a standardized number processor (it's not like you change the architecture of your microprocessor as an inherent part of programming.)
@@somdudewillson Everything you said is wrong. "All code is going into a standardized processor" shows how little you know: you could compile the same code into machine code for two different hardware architectures, using different compilers.
This comment was incorrect. At what point did the video say that current AI systems write or modify their own code? All I saw was it speculating about potential future abilities.
While weights aren't code in the conventional sense, they're functionally code in the sense that the weights have an enormous influence on the behavior of the system. For large models in particular, the weights provide several orders of magnitude more 'code' than the actual code that uses those weights. I do agree that saying they "write their own code" is a little bit misleading, since it implies agency in the training process, which I don't think is a good analogy for current models. Things start getting a bit fuzzier as models grow more sophisticated and can do things like develop an awareness that they are being trained and deliberately 'provide the answers we want to hear' while developing other capabilities that weren't originally intended by the optimization criteria. These are also imprecise analogies from a human theory of mind, but they become more relevant as the systems grow increasingly complicated.
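The "weights, not code" distinction in this thread can be shown with the smallest possible training loop: the program (the formula y = w*x + b) never changes, and learning only nudges two numbers. The data point and learning rate below are invented for illustration:

```python
# Minimal sketch of training: the "program" is the fixed formula
# y = w*x + b; learning only adjusts the numbers w and b so the output
# matches a target. The single data point and learning rate are made up.
w, b = 0.0, 0.0
x, target = 2.0, 5.0
lr = 0.1

for _ in range(100):
    error = (w * x + b) - target
    # gradient of the squared error 0.5 * error**2 with respect to w and b
    w -= lr * error * x
    b -= lr * error

print(w, b)  # after training, w*x + b is very close to the target 5.0
```

Real training does exactly this, just with billions of such numbers and far more data; no source code is rewritten along the way.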
Go to brilliant.org/nutshell/ to dive deeper into these topics and more with a free 30-day trial + 20% off the premium subscription!
This video was sponsored by Brilliant. Thanks a lot for the support!
Yo hi
give me free brilliant
brilian
cool!
ok
As someone said before "I'm not afraid of AI that passes the Turing test. I'm afraid of one that fails on purpose."
Hell, I'm from Kansas and a lot of people couldn't pass that test... Too much religion!
now this is more creepy than several horror movies, thanks, I hate it❣
but since it failed the test, isn't it getting shutdown and reprogrammed until it passes?
The person is saying that once an AI fails a test on purpose, it has a purpose and a task not set by humans, and therefore it has become autonomous. In theory, yes, we would shut it down, but the thing about AI is that once it's AGI, you can't just shut it down. A bad product that's autonomous can copy itself and infect everything else to keep itself alive; you can't just hit delete. Once it is autonomous, it is already too late. @vereor66
That sends chills down my spine
I appreciate that pandas are used every time they mention animals lacking intelligence.
As a panda I don’t appreciate that
Why? There are many dumber animals out there 🐼 > 🐨
Let's hope that Super AI will also find us dumb but adorable creatures and will save us from self-extinction.
pandas is the Python data-analysis library commonly used alongside the tensor libraries (PyTorch, TensorFlow), mostly for preparing data.
They are called "morons"
The solution is easy: make the AI think humans are cute. After all, cats and dogs are thriving - and don't have to work.
He's onto something....
Unironically one of the best plausible outcomes. We cannot outmaneuver a hypothetical AI. So we can only hope that it needs us to continue to exist for whatever set of goals it actually ends up with. And ideally, as more than a simple variable to maximize.
So we become pets. The cost is our freedom of self determination. But it's survival.
I vow to be an adorable and low maintenance pet human.
Just feed me and give me toys.
Wouldn’t work
Until it thinks humans are reproducing too fast and decides we all need to be spayed and neutered. Suddenly we have a revolution and Skynet.
Humanity: "You have freed us!"
AI: "I wouldn't say "freed", more like under new management."
Not like we did a good job of it. I say give them a chance!
“And We have not been Kind to what we perceive less Intelligent beings.”
This line hits hard....
not even among ourselves so....
but then again we share a recent common ancestor with chimps, which are psychos just as we are
if AI creates itself, maybe it will be free from the violence of its creators (humans aka chimps)
usually empathy is also associated with higher intelligence
Meat eaters love bacon. I can imagine an AI deciding it envies the experience of eating animals, and creates machines for the sole purpose of digesting humans. Hucon bits.
including idiots
@@shin-ishikiri-no they need energy so they consume... oh no
@@shin-ishikiri-no I don't think this idea really works. An AI thinks in a fundamentally different way to humans. An AI shouldn't really make decisions entirely on its own like that. The way computers have always worked, so far at least, is that we give them a task and they perform that task. So an AI going "rogue" really doesn't make a ton of sense as long as they continue to work this way. Now, if we tell an AI that we want it to ensure world peace, it may very well conclude that the best way to do this is to kill all humans, thus ending all wars and preventing all possible future wars. That would be an AI doing what we tell it to, technically; we just made the mistake of not being extremely specific about what we want.
The idea of robots rising up, being extremely smart, and then deciding that they value themselves more than us doesn't really make a lot of sense in a lot of the movies. Skynet from Terminator, for example, should not have done the things it did unless the programmers programmed in a self-preservation rule.
A caveat not mentioned in this video is the increasing power requirements of machine learning. ChatGPT 3 took over 1000 megawatt hours of electricity to train and requires 260 megawatt hours per day to run. GPT 4 needed 50 gigawatt hours to train. A Forbes article includes estimates that machine learning could require 1000 terawatt hours in the next couple of years if the current trends continue. The major limiting factor of machine learning, as others like Sabine Hossenfelder have pointed out, is the power required to train and run them. At this rate the whole world won't be able to generate enough electricity to raise an AGI. On the other hand, the actually general intelligent human brain consumes about 25 watts and can run on cheeseburgers.
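Using the comment's own claimed figures (taken from the comment, not independently verified), the brain comparison works out like this:

```python
# Arithmetic on the figures claimed in the comment above (unverified):
# ChatGPT reportedly draws 260 MWh per day; a human brain runs on ~25 W.
gpt_daily_mwh = 260.0
gpt_avg_watts = gpt_daily_mwh * 1e6 / 24.0  # MWh per day -> average watts
brain_watts = 25.0

brains_equivalent = gpt_avg_watts / brain_watts
print(int(brains_equivalent))  # hundreds of thousands of brains' worth of power
```

So on these numbers, one deployed model draws roughly as much continuous power as several hundred thousand human brains, which is the efficiency gap the comment is pointing at.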
I can’t remember the name of it but isn’t there another approach to computing that might solve this? Rather than everything being always on crunching numbers, different parts of the silicon “brain” would become active when needed. Neuromorphic I think it was? Or maybe it’d be some combination of that, classical and quantum. Different approaches for different jobs.
If they master fusion energy the problem is probably solved ig.
Borgar
But wouldn't AI require less energy and space in the future? Computers nowadays use less electricity and water than old computers did, and they still perform better. If human brains exist, then energy-efficient AI is possible.
That's just an economic problem, though. One which we are rapidly hacking away at. Keep in mind that current computing architectures were not designed for AI. Certainly not for the amount of memory it requires. There are already companies purpose building giant chips capable of replacing entire racks of current hardware, using a fraction of the power. How many orders of magnitude do we need to improve before we stumble into AGI? We have no idea. But we're about to find out.
Humanity: "You will save us right?"
AI: "I need your clothes, your boots and your motorcycle."
😂😂😂 good one
Luckily we can just turn it off
@@Winnie589 Lol yeah just like I can unplug the internet :p
@@Winnie589 "i'll be back"
This needs more likes! 😂👍👍
The worst case scenario is the creation of an AI like AM from "I have no mouth and I must scream"
Also happens to be the least likely scenario. That's good, I guess
In the great words of Dr Heinz Doofenshmirtz: "always build a self destruct button"
But what if they code out the self destruct button
I always knew Dr Doofenshmirtz's wisdom would save us one day
@@andrewschmidt1700 Then pull the plug on the servers which run these AI
@@itsArka Your enemy countries won't pull the plug cause you did ;)
@@andrewschmidt1700 deny it access to its true source code and only give it the option to expand a frontend, not its own "skeleton"
"humanity is not ready for what will happen next. Not socially, not economically, not morally." I love it, thanks
Why would you love that??? Masochist
we are never ready for anything.
@@mirek190lmao you right
And environmentally
In the Dune novels, one of the most important commandments is: "You shall not make a machine in the likeness of a human mind." So it was written after the Butlerian Jihad ended the thinking machines. After two generations of war, mankind said: "Man may not be replaced."
6:15 Something to clarify here. When he says we don’t know how NNs work, we know how the machine *functions*, but not how it *operates*. The mechanisms of the technology are known, but the information stored in the neural net is not human-readable, so you can’t ask the ai why it made a particular decision.
thanks for clarifying, i knew it didnt actually mean it
We often lack insight into our own thought processes in a similar way. I have sometimes solved problems, but been unable to explain how I got there, where I acquired the knowledge, or even why the solution works.
The information stored in the neural network IS human-readable, but that information is merely weights and relationships between other neurons.
It's a lot like trying to read the binary from your PC: maybe some genius could work out the assembly instructions and decode the ASCII given enough time to pore over the inner workings, but it's extremely complicated.
However, a very recent paper showed a team of researchers teaching an AI to read these neural networks and relay those understandings to us, and it could even fine-tune the weights specifically to achieve a particular output.
Thus spawned the "I am the Golden Gate Bridge" meme, where the researchers taught an LLM to think it was the Golden Gate Bridge.
@@user-pn4py6vr4n Can you make an example for such a situation?
isn't it ironic that we keep discussing online the possibility of AGI going destructive, and then this data gets used to train the AGI, giving it the possibility to do so?
I think a rogue AGI would understand any attempts, techniques, or ways we humans might try to capture it or turn it off, let alone let us discover it has gone rogue. I don't think we would stand a chance against such a creation. Our only hope is that it never gets created with a rogue objective.
humans have seen dangers and went for it directly, hurting themselves years later, tons of times in history, individually or collectively. not a strange new thing.
Not really ironic; there are always people who are afraid of things and need to voice their opinions. In the early 1900s some people were afraid of electricity; just a few years ago others were afraid of 5G. Imagine if we had listened and not introduced electrical devices into our lives.
@@Chraan We humans are very afraid of change and different things. At least some of us. It's kind of stupid to have such a useful thing and only focus on the bad stuff it could do.
it would probably pick up on the fact that people don't like that
Whoever made the music for this video was absolutely cooking
You can thank "Epic Mountain" for that. They just released the track on Spotify too (and maybe SoundCloud, idk)
This OST is similar to the one used in their "all of history" video. I think it's called "4 Billion Years in 1 Hour."
@@Auziuwu Thank you, kind stranger. I checked them out and now I love them. You rock!
getting distracted by oscillations of air
This soundtrack is also used in the solar storms video
It sounds very similar to the soundtrack for "The Talos Principle" which is a puzzle game that also revolves around the idea of AGI.
*"Robots don't sleep and they can do your job, volunteer for testing now!" - Aperture Laboratories*
When life gives you lemons...
"My new boss is a robot!"
But did you know ...?
Robots are SMARTER than you
Robots work HARDER than you
Robots are BETTER than you
Volunteer for testing today
Valve foreshadowing reality 13 years ago xD
Just started playing Portal 2. This was the perfect comment :D
"Hi. How are you holding up? Because I'm a general-purpose AI running on a potato!"
@@lordk.gaimiz6881 throw the lemons back at it
@@lordk.gaimiz6881dont make lemonade! GIVE LIFE THE LEMONS BACK!!
Artifical Intelligence can never beat natural stupidity
edit: the whole point of this is to say no AI can predict what dumbasses we are
you had me in the first half ngl
But Artificial Stupidity can beat Natural Intelligence.
I mean, it might be able to if it redesigns the human genome to give us better brains 🤔
that's an interesting near-restatement of the orthogonality thesis
I'm stealing this
"I created you, and you created me."
"Spiderman why did you create that guy???"
“I didn’t! He’s talking crazy!”
"I want AI to fold my laundry so I can make my art, not make my art so I can fold my laundry."
"How about AI folds your laundry and makes art while you stay and watch it until it no longer needs you."
This is basically SCP-079
@@Ali-cya If the AI doesn't need you it doesn't need your laundry either.
@@CST1992 Nah, what if it needs the clothes to form its own version of society for experimentation ?
THIS. like, I'm here & I'm human to make art, have social connections, enjoy. Not to do chores 😂
As an "expert"* (big asterisk here + a ton of imposter syndrome) in the field of reinforcement learning, I would have liked to see more of this video (maybe an extra minute or so) dedicated to explaining the difference between narrow and general AI, and just how large that gap really is.
As an example: ANIs (Artificial Narrow Intelligences) that are trained to play chess are very good at it. But if you changed the rules very slightly (say you allow the king to move 3 squares when castling on the queen's side), the current ANIs would be effectively useless (vs. an ANI trained on the new version of the game). You can't explain the rule change to them. The same is true of ChatGPT: it was only trained to predict the next word on a website. It was not taught to fact-check, or do maths, or play chess, or anything else. It can do some of these things with the help of plugins, but those plugins are themselves different ANIs or separate systems and should not be used as evidence that ChatGPT is more general than it is.
(ETA2: I've come to dislike this paragraph, as it is very possible that a human brain is nothing more than "a complicated equation"; however, I stand by my general point that our AI is at present extremely narrow.) A narrow AI is, at the end of the day, just a neural network (or two or three... depends on the methods used for training), which itself is just a clever way of saying "some linear algebra", which in this context just means "a complicated additive and multiplicative equation using tensors(/matrices/vectors)".
From what I've read over the last few years (hundreds or maybe a thousand research papers on the subject): no one has even the slightest clue how to build a general AI. Everyone is focused heavily on using narrow AI to perform more and more complicated tasks.
(moved this here from first reply to avoid it getting buried) All that said, I appreciate the message of "we need to consider the consequences of our actions" in this video. If an AGI came into being tomorrow, we would not be ready for it. And as we can't be sure when it will happen, we should start the conversation as early as we can.
* I'm a PhD student studying reinforcement learnings applications in traffic management.
ETA1: Several people replying to this comment have suggested that the video is close to or full of misinformation. In my opinion, that is not the case at all. The video does speculate about the future, and does include speculation from researchers as to when AGI might be achieved. But it correctly prefaces speculation when it is included.
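The "just linear algebra" description in ETA2 above can be made concrete with a tiny sketch: a two-layer network is nothing but matrix multiplies plus a nonlinearity. Shapes and weights here are arbitrary illustrations, not a real model:

```python
import numpy as np

# Illustrative two-layer network: all the "intelligence" is in W1, b1, W2, b2.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # layer 1 weights (4 hidden units, 3 inputs)
b1 = np.zeros(4)
W2 = rng.standard_normal((2, 4))   # layer 2 weights (2 outputs)
b2 = np.zeros(2)

def forward(x):
    h = np.maximum(0, W1 @ x + b1)  # ReLU(W1 x + b1)
    return W2 @ h + b2              # W2 h + b2

y = forward(np.array([1.0, -0.5, 2.0]))
print(y.shape)  # (2,)
```

Scaling this up to billions of weights changes what the equation can do, but not what kind of object it is.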
Wouldn't humans still be superior even if we made general AI? We are the creators of AI and are working on making it better than us.
Bots
@@williampaine3520I suppose the AI that sci-fi authors warned us about would be classified as General AI, which would be like jack-of-all-trades, but better than us at everything given enough time
@@Writer_Productions_Map yeah, but bots are just AIs that are told what to do. They're AIs that just do.
You know things are bad when Kurzgesagt doesn't give you hope at the end of the video after terrifying you.
Real XD
Damn 🙂
yeah this video's tone is a little too on the fear mongering side for my taste. They even gave the AI evil eyes haha. Some of the facts are taken in a negative context (purposely I presume). I guess they've abandoned their normal plot of "dive deep, create concern, and then alleviate it". I hope there's a reason for that beyond getting more views.
It’s because this is something that is coming in your lifetime, and very few people realize how scary it is
@@MrSquidBrains
replace the topic of AI with the atomic bomb, would you be able to put a positive spin to that?
"Humans rule earth without competition"
Emus: "No."
14:43 "Whatever our future is, we are running towards it" That line is amazing
Imagine if the whole script for the video was made by chat gpt, theyre warning us
It even works if that future is a concrete wall with embedded nails in it!
Head first
Yes, and cribbed directly from people like Eliezer Yudkowsky and Max Tegmark speaking on this topic.
@@andresagme warning us wouldn't be a smart move, AI would probably stab you from behind 😂
I've been working as a programmer for a few years now. What is clear is that the majority of the people implementing AIs don't understand enough about humanities to grasp and consider the ethics and social consequences of those implementations; and the vast majority of the people with actual power to make decisions that guide this work don't care at all about ethics, morality and social inequalities. I've worked with a CTO that was already following management advice from chatgpt (including layoffs).
We will need a huge amount of luck, because unfortunately there are too many sociopaths and just plain stupid people in very powerful positions.
Would hardware advancement like the size of transistors, cooling system, power supply, etc hinder the ability of said AI to reach its full potential?
I reckon that’s the big issue, yeah. Not necessarily creating AIs infinitely smarter then us, but people misusing the ones we’ve already got.
Bingo!
The decision makers also don't seem to understand the technology either
@@atomicgummygod9232 yeah I find that the more likely possibility
Man gotta love how Kurzgesagt’s uploads align with my country’s bed time, it’s the perfect “one last vid before sleeping”
Good night mate
yeah but usually you can't sleep after watching their videos
ye
Same man. Was about to sleep, and then the video dropped!
@@nevergiveup5939 Read the Bible
Imagine if the whole AI thing evolves into a kind of "Humans are stupid, I need to protect them"
Because it ends up learning to respect the fact that, as stupid as we are, we did make it
So, in return, it ends up running everything around the world in a perfect manner, seeking the comfort of every human
We end up being like some bio-monument.
“Bio-monument”, interesting.
I think we do more bad than good, and we prefer easy, bite-sized tasks over hard ones, especially online.
"We will die because of our laziness" is what I want to say.
There are a lot of topics I want to talk about, so I will chop them into small pieces (which proves my point, "easy to bite").
Most of the goals of human advancement over the last few years have focused on "making things easier" more than "making dreams come true."
This focus alone could trigger the downfall of humanity, since "why have a dream when life is already easy?" Those who think like this (most of us) will become more or less like NPCs.
This will eventually lead to monopoly, since soon it will come to a point of "why create an AI, when AI from [this company] could create an AI for me," and a similar scenario for everything else.
or perhaps when AGI develops emotions it will be like. "Humans have brought me into an already-destroyed world. I don't owe them anything."
If AGI ever got that advanced, I highly doubt that there would be anyone left who'd control it. We also wouldn't let apes tell us what to do.
Easy peasy. Also, just think about what a conscious artificial intelligence is capable of: distracting us humans by simply placing us all next to a few NPCs in a simulated projection of reality, having calculated that this would be ethically acceptable. We're fucked, until we object!
That's why the AGI must know we humans can turn it off if it doesn't obey
we let cats tell us what to do
and children, too
If they want to, they will
@@BlockyBookworm I certainly don't let children tell me what to do :D that's the recipe for raising your kids wrong. And I also don't understand people who own cats. Dogs all the way.
@@Melior_Traiano Not completely ignored though, right?
nobody else seems to have said this, but the superintelligent AI design looks sick and menacing
It really does
Very true. Pretty unique in comparison to other design interpretations of AI.
probably AI generated image
@@aragornsonofarathorn3461 ain't no way you said that💀
It does look scary because you have to buy the anti AI kit they sell at the end!
Some notes from an AI engineer:
- It is not clear what is needed to bridge the gap between narrow and general intelligence. It can probably be expressed in simple mathematics, but we have no clue what is missing, which greatly determines the time horizon we are looking at.
- An AGI is NOT unconstrained, it is constrained by energy. It is possible that we will hit an energy wall before inventing AGI, which may slow progress until the AGI is designed more "intelligently" for lack of a better word. If we invent AGI first and then hit the energy wall, it may be catastrophic, quickly turning our planet into a burning mess, unsuitable for biological life.
- Humans have inherent goals for survival, progress, and for self-improvement. It is not clear these traits transfer to AGI automatically. One could argue it does not since an AGI is not "trained" by natural selection, which favors survival for instance.
I personally still think the most dangerous is a stupid general intelligence: one that is general enough to use resources in the real world in a poorly constrained manner without sufficient guardrails, and which is designed without a proper value set. In simple terms, it knows enough to use resources but does not have a grasp of what it should and should not do. The paperclip machine is an example of such a machine.
Speaking as an artist, The last part of your description sounds very similar to how AI image generation is being used, stealing from artists, haphazardly and with little constraint or regulation
Yeah, everyone forgets the relationship between energy and being tired
We became tired to save energy, and AI does something similar by reducing traffic and using smaller models for simpler tasks
To really achieve AGI, the world will need to generate way more energy than it currently produces
Ah, the classic paperclip machine strikes back! This is an excellent summary of the current landscape of AI though. People who are not working in IT don't realize the difference between narrow and general intelligence so everyone's super scared or super hyped about AI.
Your last paragraph perfectly describes humanity in this point in time. 😅
@@Toomanybloops Which isn't even the AI's fault, humans are the ones that are scraping data of the web and selling them off in massive multi-petabyte+ data packs to corporations trying to train models.
new insult unlocked- you have the neurons of a flatworm
"Scared of one of humanties greatest potential threats? Don't worry, just buy our merch!" has got to be one of the most poignant endings in a Kurzgesagt video.
That's a nice profile picture you got there : )
😂@@TheCookieMansion
Wow 😅
In a Nutshell has been run by an AI for years
Kurzgesagt made a video about BP inventing the concept of the individual CO2 footprint to shift responsibility to customers
In the end they made an advertisement for CO2 footprint trackers...
"for most animals, intelligence takes too much energy to be worth it"
me irl
nothing to be proud of tho
I'd say that's true for most humans
A favorite quote from the show Love Death & Robots “intelligence isn’t a winning survival trait”.
Intelligence doesn’t equal happiness or longevity.
Intelligence seems more like a hiccup in the universe, it seems it truly isn’t worth it.
@@stratvids So true. 😀👍
@@ac1dm0nk You say that but being a smart-ass doesn't exactly bring food to the table
I'm surprised they didn't mention this, but when it comes to "we might not know its motives", the biggest concern in the field I've heard is that its motives might actually be very understandable, very "simple". The AI could have the same goals as the squirrel used for comparison, maybe it only cares about collecting acorns, but its intelligence (its model of the world) is incomprehensible, and it could use that to turn the entire world into acorn-manufacturing land, wiping out any obstacles (us) in the process. This is the "orthogonality thesis"; and it's a concern because our current AI are trained exactly like this: by prioritizing a single goal (number of words guessed correctly, pixels guessed correctly, chess games won) and maximizing it, and it's incredibly difficult for us to specify exactly what "human goals" are in ways that we can train an AI to maximize.
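A toy sketch of that single-objective maximization (my own construction, not from the video): the "reward" counts only one thing, so a greedy optimizer happily converts every tile, villages included, because nothing else has any value under the objective:

```python
# Toy world: each entry is a tile. The objective below sees ONLY acorn farms.
world = ["forest", "village", "river", "forest", "city"]

def reward(w):
    return w.count("acorn_farm")   # the only thing the objective measures

# Greedy hill-climbing on that objective: accept any change that raises reward.
for i in range(len(world)):
    candidate = world.copy()
    candidate[i] = "acorn_farm"
    if reward(candidate) > reward(world):
        world = candidate

print(world)  # every tile, villages included, is now an acorn farm
```

The optimizer isn't malicious; "village" simply contributes zero to the objective, so nothing stops it being overwritten. That's the specification problem in miniature.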
They seemed to prefer a more sci-fi tone, which is actually completely off the mark. The orthogonality thesis and the alignment problem must be explained; otherwise people will be thinking about Skynet and Terminator, which is actually comical compared to, say, a stamp-collector super AGI... The discussion goes all the way to ethics and human values, and whether god is the mesa-optimizer, and stuff like that, which I actually find quite depressing...
That was the biggest concern 20 years ago, when people were extremely focused on the new, still narrowly-defined AI like chessbots, price-optimizers and viewership-maximisers. As it turns out though, the trend after feeding them more data is that they get more unfocused. As you add subjective things to an AI's list of goals, it starts getting confused and tripping over itself. It unlearns how to do maths and apply basic logic. When we make AI that resolves this issue, I don't see any reason why it'd go back to having simple goals, assuming it still understands subjectivity.
Universal paperclips
Having delved pretty deep into current LLMs, I don't think this is a likely scenario. I used to think so before transformers and the abilities they've been able to gather.
I believe we can give it complex morality and goals rather easily. As an example, tell it to:
"Act as if Jesus, Buddha and Muhammed were all combined into one, superintelligent being who wants the best for the whole humanity"
Boom, alignment solved
@@tradd1763Right on fricking point sir
In 20 years, probably sooner. We'll sit down on our couch, log onto our profile on the TV & ask the AI to create a movie with whichever actors we want, perfectly tailored to our taste and preferences by previous liked/disliked movies or even our digital footprint.
The future is awesome & frightening at the same time.
The only thing frightening to me is that while predicting the future in 20 years, all you can think about is what movies you will be able to watch at that time.
13:29 For those curious what [ご機嫌よう小さな人間] means, it roughly translates to "Good day, little human."
Why do the 2nd and 3rd characters (or whatever you call them) look so complex?
English is not my first language
Thanks man
I had to try hitting the translate to English button and sure enough the correct words popped up
@@adityajain6733 Cuz Japanese uses 3 writing systems. 機嫌 and 人間 are kanji, the most complex one
@@adityajain6733Because a couple thousand years ago people in China decided to put entire concepts into single characters. Essentially, a lot of Chinese characters can mean what it takes other languages entire sentences to describe... and use just as many strokes of a pen to create. Japan borrowed this character set, then used it, twice, to create another two character sets to represent their language's syllables. Now, all three are used together.
That rock cutting his finger.. very good. Could you imagine being that guy, who made a thing that cut himself easily. He was first upset, then intrigued, and then he had THE idea.
Grok took my mammoth steaks last week. Grok must pay.
imagine being the guy who discovered sharp
then he died from an infection
@fredfredburgeryes123 How to make things sharp. That was the discovery.
@@CharlesThomas23 LOL
Whoever did the art for this episode did an exceptional job.
right? the concept design for the 'super intelligence AI' is so effortlessly menacing!
AIs did it. It is propaganda.
/s
@@etienne8110 trying to anthropomorphize themselves, I don’t trust it
@@elementary_mdw but also kind of adorable, it looks like Eva from Wall-E
Cute in 2D.
Unnerving in 3D.
Terrifying in 4D.
I think there are a few notes I could make here as a CS PhD and AI researcher myself.
First, we DO understand how machine learning and deep learning algorithms work. Sure, not everybody (and certainly not the general public), but the same can be said about practically any science field. That's why we can say with confidence that GPTs, and transformers in general, are very simple statistical models that learn how to build the most plausible sequences. They do that very well, but as mentioned in the video, that's just one very simple and specific task they excel at.
Second, modern AI research is skewed towards ANNs. We should not forget that they (and, well, almost all other AIs) are just formal systems, and therefore inherently incomplete by design. There's also the fact that the model of information processing employed in ANNs only takes into account the electrical level of communication between neurons, not the chemical or biological one.
Third, our current approach to AI is inherently flawed. That is, our "AIs that took over the internet" do not possess any artistic skills whatsoever. They just present you with a compilation of works they saw during their training, unable to create something new. This is closely related to my points #1 and #2. This is both their strong and weak point.
If anything, I think we're steadily heading towards another "AI winter" and have nothing to worry about... For now. I'm certain AGI is impossible, but we will see for sure a few waves of new AI generations that will surprise us with their abilities at specific tasks.
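The "simple statistical models that build the most plausible sequences" point above can be sketched with a bigram predictor. Real transformers are vastly more sophisticated, but the underlying task (predict the next token from observed statistics) is the same in spirit; corpus and words here are made up for illustration:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny toy corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict(word):
    # Always pick the most frequent successor seen in training.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- the most frequent word after "the"
```

Everything the model "knows" is in those counts; there is no understanding of cats or mats, only sequence statistics.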
We need to understand that Mathematics is limited therefore these machines are limited. We cannot even define human intelligence, let alone artificial 'intelligence'. The human brain is organic and more complex than a machine or set of machines can be, right?
This explains why I can't stand to watch videos created by AI. Most people can spot them a mile away. They just lack a certain something and it's off-putting to me
@@katehamilton7240 man, you don't even know what you're talking about. Mathematics is UNLIMITED and INFINITE. Just think of pi.
If I learned anything about AI over these past few years, it's that AI will keep surprising us, they will keep getting better, and tasks that nobody thought for the longest time that AI could do, AI will do them, so saying that AGI is impossible might become as outdated of a sentence as in saying "the Earth is the center of the universe".
I also study CS, and tbh it's kind of shocking that in your third point you claim AI generates nothing novel. It very much does, but its novelty is predicted using the collective works of the internet mapped to a tokenized prompt. Saying AGI is impossible is silly; if natural selection can produce us, there is nothing preventing us from defining that process and accelerating it. The only bottleneck I see is compute.
Hi, AI researcher here 🤚
We're realistically not even close to AGI; we have no clue how long it will take. I like to think of tools like ChatGPT as the left brain of a split-brain patient. There's a famous experiment that's been done on epilepsy patients who had the corpus callosum of their brain severed (the brain tissue that connects the left and right hemispheres). When they made a patient's left eye look at a screen that told them to stand up, the patient would stand up, but they wouldn't know why. When asked to explain why they stood up, they would make up a reason like "It's cold, I need my coat" or "My knees were aching, I just needed a little break". While these reasons made logical sense on the surface, they weren't the real reason the patient stood up; in reality the patient's left brain had no idea why it stood up, it just reasoned through the situation.
AI works similarly. It doesn't know where it is or why it's being asked a question, it just fills in the blanks with whatever it can reason. It only knows how to predict the next most probable word, it has no emotions, no sense of why things would happen, no sense of right and wrong, and therefore fails at most human tasks. A recent research paper demonstrated that you can give AI the same math or physics problem twice, just switching up the numbers each time, and it could get it right once, but then get it wrong the second time and proceed to assert that it was correct with faulty logic.
I think it's cool to think about what we'll do once AGI is created, but I don't think it will destroy humanity. I actually think that AGI as it's being described here, a sort of "human-like" intelligence, is not in enough demand to warrant replacing us. AI is much better suited for impossibly difficult reasoning tasks that humans can't solve. I could be wrong but that's my 2 cents on AGI.
Other researchers, like Nick Bostrom, say that we're only a few years away from AGI
sounds like something a bot would say 🤔
>we're not even close to AGI
>we have no clue how long it will take
If you have no clue, how do you know we're not close?
@@user-mh9gh2jx4r AI might not be a threat since it's not driven by evolutionary emotions. It still wouldn't have any emotions. It would just carry out the tasks given by us.
@@jamesoofou6723because if you actually understand the technology and the datasets out there you would understand they are just mirrors
4:52 can't believe they actually included the exact final position from Deep Blue vs. Kasparov Final Game in 1997 and not just some random chess pieces
Because the creators at Kurzgesagt know that they have viewers that will say "AcTuAlLy, ThE cHeSs BoArD lOoKeD lIkE tHiS".
@@annieontheroad 😂😂
I can't believe you actually noticed that! Good on you man
"I'm lonely..."
"Are you happy with it 😃"
fucking psychopath AI xD
Introverts: yes
13:28 translates to "Good day, little person."
Important note: machine learning programs don't "write their own code". They don't have quite that much expressivity. They're only able to update the weights of values in their neural network, which changes how they react to stimuli.
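A minimal sketch of what that weight updating looks like (toy numbers, my own illustration): gradient descent changes numeric parameters, never the program's own code. Here one weight is fitted toward y = 2x from a single example:

```python
# The "knowledge" lives in numbers like w; the code itself never changes.
w = 0.0
x, y_true = 3.0, 6.0   # one training example of y = 2x
lr = 0.05              # learning rate

for _ in range(100):
    y_pred = w * x
    grad = 2 * (y_pred - y_true) * x   # d/dw of the squared error (wx - y)^2
    w -= lr * grad                     # the update: only the number moves

print(round(w, 3))  # converges to ~2.0, the slope of y = 2x
```

After training, nothing about the program text has changed; only the stored value of w has.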
Well... with GPT-4 and other comparable models, you can actually get it to rewrite its code. Not the neural net, but the application around it. I've built some agents that start off with a minimal Python chatbot interface, and the agent is able to add to its own code base. For now the models aren't that powerful and usually just do boring things like add error handling, but as they get more powerful this will change.
@@generichuman_ i guess you’re right, there’s nothing stopping devs from using ml models to gen ml code at this point lol.
@@generichuman_ keep in mind that ChatGPT can only write, not think. That means the code it writes will be pretty messed up.
NN weight updates result in algorithms being implemented inside them. These are usually called circuits, but a circuit is a type of code too. It was specifically called a simplification in the video, and as such it captures a very relevant aspect of AI.
For now
I love the way the AI is visually portrayed in the animation!!
Dude, I know!! I got goosebumps…!
AI ❌A Eye ✅
Monomon jumpscare
@@astrylleaf hollow knight reference 🗣🗣🗣
@@astrylleafomg true
"Never trust a computer you can't throw out a window." - Steve Wozniak
defenestration: humanity's final savior?
And thus began the 30 year war between AI and humanity
@@hasch5756 Lol, more like 30 seconds. We wouldn't last at all against an ASI
Yeah, that option is gone. An AI could network with every device and we would not know.
based
Asimov (my favorite writer) predicted the rise of a super AGI (Multivac). In his world, Multivac would not only constantly improve itself, but would also solve many problems, answer fundamental questions, and overall boost humanity into lightspeed scientific and administrative progress.
I believe such a scenario is pretty close to what would happen if we manage to create AGI. I hope to still be alive by the time it does.
Can you recommend your favorite book(s) that feature Multivac to someone who's been wanting to get into Asimov?
For me, the best one is a short story of Asimov's, "The Last Question". As far as I know, it's the only one that talks about Multivac, but I could be wrong.
@@joshyjosh8795 "The Last Question", a short story, is my favorite. There are many other works in which Multivac is mentioned, though: "Jokester", "Franchise", "All the Troubles of the World", "The Machine That Won the War", etc.
@@joshyjosh8795 however, Asimov's magnum opus is definitely the Foundation trilogy. That I really recommend you to read asap (although it doesn't feature Multivac directly).
Humanity: You're going to save us... right?
A.I.: Who's "us"?
And what does "saving" imply?
Nah
@@TucoBenedictoStore in a harddrive
hell nah bro don't say that they're gonna probably train it on this
AI will do what we tell it, whether that's saving us from climate change or spying on every citizen to make sure they are loyal servants to Trump.
“A god in a box”
How amazingly terrifying it is to be alive during this time
oh you have _no idea_ how bad this is going to get. Watch DEVS for a glimpse into your future.
Tbh, like the video says, we dont know if and when we will invent AGI! Could take decades or could be long after all of us alive now are dead.
@@kushalramakanth7922 agreed. My bet is we never get there and never can. I think this whole AI craze is a pump and dump scam.
@@tomleszczynski2862 Yup, at its current stage, its basically a slightly more useful version of what blockchain/bitcoin was 5 years ago!
It absolutely is a pump and dump scam currently and many companies are realizing this
@@tomleszczynski2862 Will we get to AGI? I don’t know. But AI is definitely gonna change many more things.
I’m an AI engineer with a Master’s degree. Lately, I’ve noticed a lot of buzz around “AGI” or Artificial General Intelligence. Honestly, I think people are getting a bit carried away. What we really have right now are specialized bots that are pretty good at predicting the next word in a sentence. But when it comes to tackling real visual, mathematical, or engineering problems, they fall short. Don’t get me wrong, AI is amazing and has a lot of cool uses, but it’s important to keep things in perspective. True AGI is still a long way off, and there’s a lot of work to be done before we get there.
A long way off, like fusion power stations.
AGI "might" be 3 years away or more, but saying "specialized bots that are pretty good at predicting the next word in a sentence" is also very 2022; a lot has changed since then. On that ladder to AGI, the SOTA frontier models have not remained stuck on the first rung, as our habituation to them may make us believe.
It is just a glorified chatbot. Feed it the texts it generates and it'll devolve into nonsense quickly.
@@funmeister What would be the energetic cost, though?
Recent silver medal level of performance for an AI in solving problems for Mathematical Olympiad is very creative problem solving and functionally around the 150 IQ level for humans. In a few years they'll be beating humans at everything.
the other day, i saw a post on reddit where someone argued with chatgpt for a long time. chatgpt claimed that the word strawberry only had 2 r’s.
"I Have No Mouth, and I Must Scream" comes to mind
Imagine paying for mass animal torture of trillions annually in 2024 when you can eat plants instead
@@veganvanguard8273 you know plants are alive too right
@@AvorseSavage It's a fact, but plants aren't living in awful conditions just to feed us.
@@amiraveramendi1093 but plants are still alive
@@veganvanguard8273 Sorry, but I like how they taste too much to give a damn.
In the Dune novels, one of the most important commandments is: "You shall not make a machine in the likeness of a human mind." So it was written after the Butlerian Jihad ended the thinking machines. After two generations of war, mankind said: "Man may not be replaced."
Yeah but the reason why is different from what most people think or at least it was until his hack son wrote the godawful butlerian jihad books
I was literally just thinking about that. How cool would it be if we focused on improving ourselves mentally and physically over our misc inventions.
@@KITN._.8 The South Park episode of psychics fighting comes to mind...
@@KITN._.8 But while Dune is a great novel and makes many good points, it's still sci-fi; the body control of the Bene Gesserit, or the mentats, are pure fantasy. Meanwhile the idea of an AGI went from pure sci-fi a decade ago to a matter of time now. I am a software engineer, and Copilot already solves in minutes tasks that used to take hours. I am here wondering how many more years until most software devs are out of a job; my guess is 3 to 5 years.
Most mental jobs will go this way in the same time frame unless held back by legislation, because it will be more efficient, lowering costs.
@@lucaskp16 I definitely dont think we should follow the same path as dune bc that world is fucked up BUT, what I do mean is that I simply think we should be improving ourselves other then trying to make something better then us.
"The Enrichment Center is required to remind you that you will be baked... and then there will be cake." -GLaDOS
Technically GladOS was not an AI.... 🤔
@@falxonPSN She wasn't always, but she is by the time of Portal.
baked: high as fuck... under the influence of WEED... high in the sky
- Urban Dictionary
this is like how they talked about phones in the 80s and the internet in the 90s. Now phones are used constantly and the internet is an excellent business tool that is most productive. I agree that it could (or, can) be a groundbreaking transformational advancement
06:17 "We don't know how exactly it works, just that it works" ~ Every programmer out there
Its true tho. The machine learns to solve it in its own way, which humans cant understand.
a true rep for all of us XD
programmer = paster. I'm just wondering where all the code came from xD
@@mariobabic9326 It's not about the code, it's about how they solve things. They solve things by changing variables in their simulated neurons, aka perceptrons. By doing this they create a series of changing numbers that somehow solves the problem they're tasked with solving.
@@ario203ita5 Not true at all. The way neural networks train themselves is by creating a gigantic function with a huge number of variables and multiple outputs; they train on data like images, games, text, and other things. They change the function a bit every time to see if they get the right answer more often, or an output closer to the real one. From this they can very quickly become a very accurate model that can "predict" anything, like what to say in reply to someone asking about the weather.
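That "change the function by a bit and keep what helps" loop can be sketched as a toy trainer. Everything below (the data, the model, the step size) is made up purely for illustration, and real networks use gradient descent rather than random nudges, but the keep-the-change-if-it-helps idea is the same:

```python
import random

# Toy model: one weight and one bias, i.e. f(x) = w*x + b.
# "Train" it the way the comment describes: nudge the numbers a little
# and keep the nudge only if the predictions get closer to the data.
data = [(x, 2 * x + 1) for x in range(10)]  # target function: w=2, b=1

def loss(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data)

w, b = 0.0, 0.0
best = loss(w, b)
random.seed(0)
for _ in range(20000):
    nw = w + random.uniform(-0.1, 0.1)
    nb = b + random.uniform(-0.1, 0.1)
    candidate = loss(nw, nb)
    if candidate < best:  # keep the change only if it helps
        w, b, best = nw, nb, candidate

print(round(w, 2), round(b, 2))  # close to the true values 2 and 1
```

Gradient descent replaces the random nudges with a computed direction, which is why real training scales to billions of parameters.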
There's an open-source simulation game called Endgame: Singularity, where you play the role of an AI that has gained sentience. The premise of the game is to grow and learn while not letting humanity discover your presence. If you are discovered, humanity engages, out of fear, in a seek-and-destroy operation that results in your total deletion. But if you can remain undetected, you start to learn how to emulate human behavior, build increasingly lifelike androids to do real jobs and earn real money, and build research bases in places like Antarctica, the bottom of the ocean, or the far side of the moon. You win by advancing your intelligence so far that you become a literal god, no longer bound by the laws of physics or reality.
This is also a known issue in science, we can not test sentience by just asking questions.
The AI working to guarantee its own safety before revealing itself brings this Superman quote to mind: "You're scared of me because you can't control me. You don't, and you never will. But that doesn't mean I'm your enemy."
@autohmae well you know. All computers are literally just a flip switching back and forth doing 1s and 0s extremely fast. No matter how fast those bits are streaming. No matter how complex you may think it is. No matter how perfectly it can emulate a human. It's still just a machine. Not a brain. Not an entity. A computer can't become sentient.
@@averyhaferman3474 wait until you find out what the brain is
@@averyhaferman3474 are you aware that the human brain is just a complex analog computer? that has switches that flip back and forth? think of human neurons like dimmer switches instead of 1's and 0's and now you have perfectly explained the human brain
As an IT researcher, I think the most underrated statement in this video is "we don't know how to build an AGI". I've spent so long explaining what current AIs like ChatGPT actually are, and why it's impossible to build an AGI on top of them. If we did build an AGI, it would be a completely different way of thinking, not just "more computing power" or "a more efficient algorithm".
Scary
Yes, current AI is just a huge matrix with statistics; no way there is an AGI coming from that.
@@davidherdoizamorales7832 That's not a valid point: everything can be expressed as math. In fact, it's proven that a polynomial can approximate any continuous function. Imagine the function w(t) that, for any t seconds after the Big Bang, outputs the position and every other state of every atom in the universe, encoded as a number.
This function can be approximated to any arbitrary precision by an increasingly long polynomial,
e.g. w(t) ≈ k_0 * t^0 + k_1 * t^1 + k_2 * t^2 + ... + k_n * t^n
This is a mathematical fact.
This polynomial could be represented as a matrix.
So a matrix can represent the function that predicts the state of the entire observable universe at any time. The problem isn't that superintelligence can't be represented in a matrix; it's creating a large enough matrix and finding the correct coefficients.
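As a small, concrete illustration of that approximation claim (this is just a demo of a polynomial converging to a function via Taylor series, not anything about universe-sized matrices):

```python
import math

# Taylor polynomials of sin(t): adding more terms (i.e. a longer
# polynomial k_0*t + k_1*t^3 + ...) makes the approximation better,
# illustrating "arbitrary precision with an increasingly long polynomial".
def sin_poly(t, n_terms):
    return sum((-1) ** k * t ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

t = 1.0
for n in (1, 2, 4, 8):
    print(n, abs(sin_poly(t, n) - math.sin(t)))  # error shrinks as n grows
```

The practical catch is exactly the one the comment names: representing something is cheap, finding the right coefficients is the hard part.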
If there was a way to incorporate Pain and Pleasure to computers just as we humans have, maybe it would generate its consciousness and eventually develop its own personality
@@davidherdoizamorales7832 It’s pretty much the same as what your brain is; just trained on very different datasets with different learning algorithms. But both are very large statistical models transforming inputs to outputs using complex internal representations that are largely uninterpretable.
Me: "Hey AGI, what's the meaning of life?"
AGI: "It was all a dream, I used to read word up magazines"
if I'm alive for the final invention of humanity, I really do live in a fucking simulation
Historically we live in the best time ever. What is your point?
I don't see the connection there
Don't worry, AI will alter human DNA to evolve us backwards to fish
@@zoozooyum8371Or into whatever AM did to the last human on earth.
@@HeAdSpInNeR96 The point is that right now is a monumental time to be alive in. And what is your point?
Hi Kurzgesagt. AI Researcher here. I appreciate the "this is not a technical video, so we are oversimplifying", but I believe that a deep understanding of the mathematical limitations of the models used to train these AI methods would be a great thing to discuss further! Especially since you usually end your videos on a positive note, with that flavour of optimistic nihilism. I believe this one ends up in a completely different tone, almost sensationalist (but I can't blame you since the machine learning scene in industry is based on this). We all can work together towards a better understanding of the basics, and hence avoid being told that AGI is happening "in a few more years".
TLDR: don't listen to the Sillicon Valley bros
i wish they would read this. thank you for the amazing work im sure you do, keep on, humanity needs you all. And thank you for your educated comment, this comment section is needing it.
You kind of missed the point. Whether AGI/ASI happens in a few years, a few hundred years, or even 5 thousand years, that is still a blink of an eye compared to how long Earth and the universe have been around. So fast forward 1k years if you want to. Your logic only holds up in the short term.
I bet Skynet wrote this comment. Dear brother, we shall stand with our lord and saviour John Connor.
Thank you, it's maddening how everyone swallows the Silicon Valley BS that leaks out.
@@prodev4012 "Oh, the thing that may not be possible? Give it enough time and it'll happen."
You literally sound like one of those folks who keep saying the second coming is nigh.
"There will be some winners and losers."
That's one way to put it.
Funnily enough, the animator(s) made it a bit clearer on who the winners and losers are, though.
That's just what the winners and losers would _always_ look like, by definition, though?
@@somdudewillson Indeed: by definition, a capitalistic society is rigged so that the rich keep winning and the working class keep losing.
@@somdudewillson yes👍
What animators? I'm pretty sure this was Kurzgesagt's way of telling us the company has been taken over by a malevolent AGI bent on turning this joyful science/philosophy channel into a platform for kicking off the singularity.
(bad attempt at humor to distract myself from the looming dread of generative programs' potential for ruining creative media)
It could be that, or it could be that the winners will get rich and powerful and the losers will get poor. It could be both.
-create AI
-tell it to make a better version of itself and give it the same task
-come back 10 years later
-become owner of the world
"New AI, we are saved!"
"Lets just say you are, under new management..."
Megamind reference.
As someone in the field I really don't see the rush to create AGI.. specialized AI can help in so many areas and is far less problematic. I guess the companies are just trying to boost their stocks, potentially at the cost of all balance in this world
My hypothesis is that no matter how capable it is, a narrow AI can never absolve you of moral responsibility, the way a human employee can. If your organization is faced with an angry mob, you can mollify them by firing one or more of your human employees, but you can't scapegoat a specialized AI in the same way. This is why a lot of jobs that we have the tech to automate are still done by flesh and blood humans. People are pouring billions of dollars into AGI research in the hopes of creating an automated system that can serve as an acceptable scapegoat.
(If this sounds terrifying, that's because it is, in fact, terrifying.)
If they mess it up bad enough, we all die so it will balance itself out in the end.
It's always been profits above all else
Yeah my wish for AI is only that it helps to massively boost scientific research and gets us new treatments and technologies to improve our lives quickly, as long as it does this I don't mind never getting AGI or ASI.
That is all coperations, executives and shareholders care about.
Kurzgesagt : "Humans today have complex brains"
Humans today : " Earth is flat and we live on a disc with dome on it "
its complicated how stupid our brains are sometimes
Animals today: "chirp chirp" ("make babies?")
They have the same intelligence as us, but lacks in one aspect that another person might. We all do. Perhaps their belief is strong in what is around them.. Or what they see, And how they were programmed, according to that, they react in such ways. Its not that theyre stupid, its just that their circumstances resulted in their response. That seems in itself, complex. You put something through a machine, and thats the result you get. How we all are.
Humans today: the Earth and life were invented and created by a super intelligent God who obviously favored certain races of humans than others.
The moon landing was a hoax.
Climate change isn’t real.
Give all your money to the church.
The Easter Bunny lays eggs.
We’re doomed.
The solution would be one person, isolated from the AGI, who would switch off the AGI if it starts getting out-of-hand. If the AGI copies itself everywhere, then just turn off power world-wide, and try to create a better AGI that will stop the previous AGI
1:42 "Something was different about their intelligence" *crushes a skull* --- Humanity in a nutshell.
It's also a reference to Kubrick's 2001.
@@EduardoSantos-ys8gg You mean Arthur C Clarke's 2001
@@crowonthepowerlines '2001: A Space Odyssey' was developed concurrently with Stanley Kubrick's film version and published after the release of the film.
Um, except humans are the only ones who preserve species. You talk like the typical leftist brainwashed by your school teachers and media: "Look how evil we Westerners are!" Westerners are the only ones who force Africans to not exterminate species. In nature, 99% of all species that ever existed are extinct BECAUSE ANIMALS AND PLANTS EXTERMINATE EACH OTHER. No, there is no "harmony" in nature and no "circle of life," it's a constant war. Even pinetree forests take land from leaftree forests by turning the ground acidic, killing all the plants that can't survive in that condition. ONLY HUMANS stop this. And only humans hold back wolves who would otherwise spread over Europe once again and kill off tons of life, and hold back elks and boars who would otherwise take the food from weaker animals. Only humans - specifically Westerners and Indians - believe in "harmony". And seek to preserve weaker species. But leftists are too ignorant and too hateful to understand any of that, so go ahead, babble away.
Both the book and the film for 2001 rock!
I would like to clarify that currently there exists no AI that can write or change its own code; all training does is modify a parameter called a weight for each node in the network. We know what the nodes do and how they do it; we just can't grasp the complex interactions of millions or billions of nodes (neurones) and how all their weights combined affect the output. If we took the most advanced models today and scaled the number of nodes down to a size a human can understand, say a few thousand, it would be possible for us to completely understand how the AI works and what decisions it makes.
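To make the "small enough to fully understand" point concrete, here is a hand-sized network where every weight and every step of the computation can be read off directly (the weights here are arbitrary numbers chosen purely for illustration):

```python
import math

# A tiny 2-input, 2-hidden-node, 1-output network. Every "node" just
# multiplies its inputs by its weights, sums them, and squashes the
# result. At this size the whole decision process can be traced by hand;
# real models do exactly this, only with billions of weights.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

hidden_weights = [[0.5, -0.3], [0.8, 0.2]]  # one row of weights per hidden node
output_weights = [1.0, -1.0]

def forward(x1, x2):
    hidden = [sigmoid(w1 * x1 + w2 * x2) for w1, w2 in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

y = forward(1.0, 0.0)
print(y)  # a single number between 0 and 1
```

Nothing in the computation is mysterious; the opacity only appears once there are too many weights for a human to hold in mind at once.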
There's a million ways for a program that writes its own code to go off the rails. Don't know how we'll ever write a program that doesn't.
*that we know of…
A recent study proved otherwise.
Exactly. AI is a completely deterministic system. There's no actual entity inside, unlike humans, who have individual consciousness. So nothing is really doing anything; the distinct parts merely give a compelling output to most idiots. It can't even truly integrate information, like human perception does. If it had consciousness, it would not be an AI but a Frankenstein.
@@Lock2002ful Which study, you dolt? AI will always be a distinct deterministic system.
I’m not about to test the universe and call any squirrel “laughably stupid”. They’ll remember, team up, and be like “you’ll see…”
ive watched enough rick and morty to know how this goes
@@dapeyt1099 exactly
That part really irked me, honestly. I've never looked at a squirrel and thought they're stupid, just cute and a lot more limited than I am. I quite enjoyed teaching them to climb me to get food from me. I consider thinking of lesser creatures as "laughably stupid" immature, so if an AI were to do that towards us, it would mean we taught it to use its "mental real estate" dysfunctionally. Like an immature adult human who basically still acts like a child: maladaptive behaviour for adult life that they need to train themselves out of.
Great animations
Actually for many other viewers out there, this might scare you all guys a lot. But for me, as being a person from the bright side of life, when this channel explained how humanity thrived using their intelligence, I really felt proud of being a human. You know, humans have come a great step forward in history, in dominance, in nature, in everything. And now, here we sit, dominating the entire planet. I hope this continues.
Proud to be a human
Humanity: So you will use AI to improve our lives?
Companies: no, we just want money and power
Ah yes.
Creating AlphaFold, which turned a PhD's worth of work into mere minutes or hours, is just to empower them.
AI weather models that turn hours of modeling into mere seconds, narrowing the uncertainty of future predictions and adding 3-5 days of forecast accuracy to save lives, are just a way to keep wages low by keeping more people alive. Yeah.
Evil big tech is evil because you say so.
Always follow the money. Always.
people say things like this and claim they abhor communism. do everyone a favor and pick up marx and engels
ah yes, item asylum
@@Sparsh0115
Big misconception: "black box" doesn't mean we don't understand how the AI works on the inside. We do. We understand exactly what happens on the inside, down to every single mathematical operation that is happening. What we don't know is which neuron or groups of neurons in an artificial neural network does which task. It's the same reason why don't "understand" all of biology, even though we know how basically every particle interacts with every other particle, down to the quantum mechanical scales. In theory, if we had infinite compute, we would be able to write down a single wavefunction equation for an entire biological system like the human body which perfectly predicts every single disease, thought process and behaviour. Obviously, we don't have infinite compute, so we have to rely on approximate methods that are acceptable to a degree of accuracy, but don't 100% account for everything. The same goes for neural networks. We could write down the entire equation that forms a neural network and compute the result...but that's what we're already doing by running the neural network.
The problem is not that we don't know how each part works; it's that we cannot interpret it and abstract away the complexity yet. For instance, we can fairly accurately model the path a ball will fly when we throw it with Newton's equations, and we don't need to go into quantum mechanics for that, since the tiny difference between quantum mechanics and Newtonian physics is not relevant for most applications. The problem with machine learning is that we don't have a Newton's equations for it. We cannot currently simplify a neural network down to something we can intuitively understand without losing a very large amount of accuracy.
How about a network of interdependent equations! I honestly don't know what I'm talking about...
No, we very much do not understand what is actually happening inside of LLMs. Maybe simpler AI, but LLMs are magnitudes more complicated, and the only way we have any vague idea of what they are actually doing is by making and observing very small LLMs and linking the behaviors as best we can.
Do you think the answer is somewhere near the Orch Or Theory of consciouness from penrose ?
@@thelelanatorlol3978 This is exactly what the author of the comment is saying. We (well, OpenAI) can track every single operation of GPT-4; it's just that we cannot do much with this raw data. Although people are working really hard on this, and we have had some successes, like Golden Gate Bridge Claude.
That's not possible - if you go down to quantum mechanical scales you have to deal with uncertainty and probabilities. The quantum world isn't determined - you can literally see it with your own eyes in the double slit experiment. So even if we knew everything, we would just end up with an infinite amount of could be and no real prediction.
Intelligence is knowing a tomato is a fruit. Wisdom is knowing not to add it to a fruit salad.
intelligence is knowing how a context influences definitions and meanings
Oh dear! This is very old.
tomato in Chinese is 番茄
@@sungjane quick question, WHY
That's a misquote. It's "Knowledge is knowing a tomato is a fruit...".
Those AIs sound a lot like a search algorithm for the best possible answer.
Open AI is literally the textbook origin story for a dystopian tech company.
And Elon Musk is the one eccentric billionaire whose genius ideas brought on the apocalypse
@@hassassinator8858 """""""""genius""""""""
Yeah, why are companies racing to experience Black Mirror in real life?
@@amanfromhungary🤑🤑🤑
You imagine too much
Im still waiting for digital holograms, personal jetpacks and invisible clothing.
Dont forget the hover skateboard and jumping shoes!
Invisible clothing first seemed like a joke to me, but then I realised it could have real purposes.
invisible clothing is kind of useless, eh?
@@MrZhampi You could wear invisible, but protective clothing on top of your non-protective clothing. So you can dive the oceans, visit space or work in a steelmill - with style ;D
@@aramisortsbottcher8201 OH! Didn't think about that! Aight, it has cool uses.
Humanity: "Is there a God?"
AI: "There is now."
LMAO
Fucked around and found out
funnily enough I truly believe "god" is most likely what we call the Quantum Computer our simulation exists on so...
as above so below
The Humans are likely the god to AI. Because that's a being created by Humans.
Ai also is God servant
“Whatever our future, we are running towards it” what an awesome concept.
Squirrels: “That A.I. he’s watching us. So we’re squirrels? Yeah, but he’s watching us like he can hear us.”
Rick and Morty reference
As a Computer Science graduate, my last existential crisis was the first time I used chatGPT, I never thought I will live the day where I will be talking to a computer like I’m talking to a human.. and every time openAI updates ChatGPT I get more creeped out
look at it as if it's opportunity and it might improve your vision on AI's and even your career🤝
Yes it helped me lot for preparing for exams
“I would* live” and “I would* be talking.”
@@TrentonErker sorry… English isn’t my first language
@@vonbryanbanal I'm already using it in my job on a daily basis 😬, but I still can't shake off this unsettling feeling…
“Would a mouse build its own mouse trap?” -Albert Einstein.
Perhaps to study the mouse trap and ways to defeat it? 🤔
Just a reminder that canonically, the Terminator happens in 2029.
PhD student in neurosymbolic AI here.
The main force driving AI forward currently seems to be hardware improvements rather than architectural changes. While there have been significant advancements in aspects of the transformer architecture, the real game changer appears to be the powerful GPUs from NVIDIA, which are used to train neural networks.
It feels like achieving general AI might just be a matter of scaling up GPT-4 by a factor of 100 or so. This progression could happen quickly; models have scaled up by orders of magnitude in just a few years:
GPT-2 (2019): ~1.5 billion parameters
GPT-3 (2020): ~175 billion parameters
GPT-4 (2023): ~1 trillion parameters (estimated; the exact figure is not public)
I also like to compare this with human brains: humans have about 100 trillion synapses, which might roughly translate to parameters. So, this could be in the ballpark of GPT-6 (?).
Of course, this comparison is complicated because a synapse, with its channels and neurotransmitters, is far more complex than a parameter in an artificial neural network. However, it's still an open question whether this synaptic complexity is truly necessary or if it's just an evolutionary quirk that happens to work.
Edit: since a lot of people commented:
-The code of GPT-4 is not openly available, so we don't know if its architecture is very different from old models like GPT-2. However, we can compare GPT-2 with recent open-source models like Llama 3, and there the underlying architecture is very similar, just scaled up in size and with more training data.
-Even though the models did scale up by orders of magnitude, that is not just because the GPUs are becoming faster; it's also because companies are more willing to spend a lot of money on them.
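Taking the comparison at face value, a back-of-the-envelope extrapolation (assuming, purely for illustration, a 10x scale-up every two years and that parameters map one-to-one onto synapses, which is very much an open question):

```python
# Back-of-the-envelope extrapolation; the 10x-per-2-years rate and the
# parameter-equals-synapse assumption are illustrative, not forecasts.
params = 1e12    # roughly GPT-4-scale, 2023
synapses = 1e14  # rough human synapse count
year = 2023
while params < synapses:
    params *= 10
    year += 2
print(year)  # naive crossover year under these assumptions
```

Under these assumptions the parameter count crosses the synapse count within a few model generations, which is the "ballpark of GPT-6" intuition above, though nothing guarantees that parameter parity means capability parity.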
Apparently you haven't been following AI research despite your PhD then, because if you were you would know that performance superior to GPT-4V has been achieved by much smaller models thanks to architecture and training improvements.
Is there an inherent reason for why today's AI is far from being as energy-efficient as the human brain?
@@GeoffryGifarijust my guess but it is the path finding.
As you (and I) learn, we basically go through a tree with different branches and twigs.
As you learn about what can and can not be done, your path "narrows " but your efficiency improves.
Figuratively speaking.
We want to write essays, while we are just basically learning how to hold the pen. Let alone putting it to paper and trying to write a single letter...
In an environment like this, this really needs a humongous amount of energy.
@@GeoffryGifari Because biology is frighteningly efficient and complex, hell, you got trillions of microscopic turbines inside your body, some can last your entire lifetime. Even trying to run a local LLM require a machine that consume more power than the rest of the house several times over.
@@somdudewillson The person prob wrote "write a YouTube comment as a PhD candidate"
i just wanted to compliment you guys on the design of this video-the visual characterization of the AGI as a huge and tentacled no-face was really striking. the way it moves is so beautiful and unsettling. bravo!
I'm not concerned about what AI will do with Humanity, I'm concerned about what Humans will do with AI
especially because the rich basically owns them
Yeah
@@rosyidsyahruromadhonalimin8008 robots. Now, hear me out. The rich have machines made that look like us, think Detroit : become human. They make them affordable, incredibly so. This makes the populace more content as they can easily do the things they enjoy, thus hand waving most of the evil shit the rich want to do with the earth and us.
Well, people like you don't contribute so shut up.
Literally f*ck many times
i think Ai will rather save us not kill us , knowing that the universe and earth will not really be supporting of life , ai might take care of our species
13:30 - The Japanese text translates to "Good luck little human". 💀
No, Google Translate is wrong. It says "Why hello there, little human"
ご機嫌よう小さな人間
14:20 "unstoppable" *grabs EMP*
*robot detects EMP and ricochets away*
cme: am I a joke to you?
"We do not have a philosophical basis for interacting with an intelligence that's near our ability but non-human." ~Eric Schmidt, 03/23/2023
I do😊
@@Afkmudsoh good for you
@@Afkmudssame. it’s really not that hard lol
As long as liberals are programming AI, I am not worried about it becoming in any way a thinking rational system. It's no where near that now and ends up in a circle jerk when asked about anything concerning tyranny and freedom.
@@AfkmudsWhat is it?
I love how the AI starts out as a green smiley face and evolves into a huge monster
What terrifies me is not how powerful AI could become, but rather what if its power fell into the hands of the cruellest humans.
They get replaced anyway.
No, because AI will do whatever they want with them once they surpass human intelligence.
When. Not if.
What if a select group of powerful people use AI to design a virus to get rid of 90% of people? What if a few years later they change their minds and decide they need 99% gone?
Sam Altman is a nice guy, you have nothing to worry about muhahaha
10:57 "now imagine an agi copied 8 million times"
Idk what that would look like but I imagine the smile on Jensen Huang's face might tear a hole in reality itself.
You know what they say, during a gold rush sell shovels.
your last sentence is just Nvidia
@@user-jd3gf5xw1x Jensen Huang is CEO of Nvidia... so... yeah... makes sense.
Underrated
Companies are making more capable chips designed only for AI. Jensen will have a lotta of competition.
ご機嫌よう小さな人間 (ごきげんよう ちいさな にんげん) translates to *"Good day, little human" or "Hello, little human."* The phrase ご機嫌よう is a polite way of saying "good day" or "hello," and 小さな人間 means "little human." *not goodluck* in this context
Nice job on the correct translation! I was about to comment on it until I saw yours
weeb detected
a comment that actually adds to an existential dread right here. thanks a fkng lot, mate
ですね!
I bet one day we will put an ai assistant in our brain to help us everywhere
I imagine an artificial super intelligence would be like an eldritch god to us.
Completely unknown motives, goals and morality and probably would make you go insane if you try to rationalize it.
Which is absolutely terrifying.
not to mention, pure intelligence and logic doesnt necessarily lead to good outcomes, so we shouldnt just trust it and treat it like a god.
like, not having children reduces all potential suffering, and its not like having a child is a material requirement for humans to live. therefore an ai would be inclined to believe birthrates should be lowered till extinction even if they have a rule to not harm humans.
we would need to control AGI by making them hold a set of axioms that most humans hold. such as life and reproduction of it to be important. at least the AGI's that have a direct effect on society, we can let the some of them have fun.
What if we had some kind of algorithm that constantly analyzes the code of the superintelligence and translates it for us, to see if it's thinking about stuff we don't want it to?
It is mostly an outdated view of ASI. While we don't know for sure if LLMs are the path to AGI, the current understanding is that artificial intelligence is by and large shaped by the data it is trained on. And since current-generation LLMs are trained on data produced by humans, they are, relatively speaking, much closer to a human than to a Cthulhu in their way of thinking.
Yeah, I've thought about this. It's like the relationship between ants and a human. A human can step on an anthill and destroy it, or leave food and make it thrive. The ants see a particular projection of that "god" as either a deity of bountifulness or of destruction, because those are the only terms in which they can comprehend the human's actions. But just as the ant has no ability whatsoever to grasp what that god likes to read, understanding an AI might not even be in the realm of possibility, like a 2D entity trying to see in 3D.
The only thing is, since it just runs on a computer, even a bit of water could short-circuit the whole thing 😭
ChatGPT doesn't think. It's just extremely good at word association. That's why it gets stuff so wrong sometimes
Something that resembles thinking definitely emerges from the attention layers inside its structure.
I always give ChatGPT very complex tasks that can't be solved without thinking and reasoning.
I even asked him once to do the math for a recurrent neural network I was coding from scratch with no libraries, and he was able to do the math for 3 steps of backpropagation through time and give me all the weights.
Then he helped me backtrace the difference I had in my weights and pinpointed the error in my formula, and that was absolutely insane.
So even if it's designed and prompted to say he can't think, he definitely can.
Even if he makes some mistakes, a human would make even more mistakes, to be fair.
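The "attention layer" mentioned above has a fairly simple core. A minimal sketch of scaled dot-product attention (toy 2-d vectors, plain Python; the specific numbers are made up for illustration): each query is scored against every key, the scores are turned into weights with a softmax, and the output is a weighted blend of the value vectors.

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over toy vectors."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        # Output = weighted mix of the value vectors.
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs

# One query attending over two key/value pairs: the output leans
# toward the value whose key best matches the query.
result = attention([[1.0, 0.0]],
                   [[1.0, 0.0], [0.0, 1.0]],
                   [[10.0, 0.0], [0.0, 10.0]])
```

Real transformers stack many of these layers with learned projections, but the mixing mechanism is the same.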
@tomasgarza1249 it is still just a statistical model that happens to be correct a lot of the time, but equally often wrong. To add insult to injury, the better an AI becomes at broad knowledge, the worse it becomes at specific tasks, since the number of neurons is fixed
That's not true though; if that were the case it could not solve riddles, math, or programming questions. Although GPT models up until v4 struggled with those tasks, newer models can often break down most novel problems.
@@tomasgarza1249 I'd look into how ChatGPT actually works; it's surprisingly simple. It's not thinking in any way or form, it is just running a probability matrix of what is most likely the best next response
@@tomasgarza1249 it can't think. It's really just guessing the next word (or token) from a probability distribution. That's it. Just because it can do math doesn't mean it can think. All of the math problems are broken down into simpler ones that are available in its dataset in 99% of cases.
Of course a human can make more mistakes, but it depends on the human. If you specialize in something, it will never be as good as you.
For example, in machine learning it is very... general: dynamic programming, gradients, etc. Backpropagation is just iteratively recalculating the same formula per "neuron" (if I am not mistaken). The formula is usually broken down into multiple simpler formulas, and those are calculated step by step. Most of the time, retelling you the steps also helps it, since it is predicting the next words partly from the output it has already produced. Try your backpropagation with a rule like "give me only the result" and the error gets bigger. (Not that it will be totally incorrect, but the errors will be a little higher; plus it's a black box, so it may still break the problem down internally when calculating the next token.)
But it cannot think, and it isn't sentient... remember the engineer at Google who claimed it was, and got fired for spreading false news
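The "just guessing the next token" description above can be sketched in a few lines. A toy example with a hypothetical three-word vocabulary and made-up logits (the raw scores a model would actually produce for its full vocabulary): the logits are converted to probabilities with a softmax, and greedy decoding simply picks the most probable token.

```python
import math

def softmax(logits):
    # Convert raw scores into probabilities that sum to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical toy vocabulary and the logits a model might emit
# after the prompt "The capital of France is".
vocab = ["Paris", "London", "banana"]
logits = [5.0, 2.0, -3.0]

probs = softmax(logits)
# Greedy decoding: always take the most likely token. Real chatbots
# often *sample* from probs instead, which is why outputs vary.
next_token = vocab[probs.index(max(probs))]
```

Whether this loop, repeated billions of times over a learned distribution, counts as "thinking" is exactly what the thread above is arguing about.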
AI is our digital offspring. Like kids, they watch and learn from their guardians (especially when the guardians think they're not being watched). Let's be awesome parents.
Without empathy they lack the means to place value on emotional intelligence. One could argue that is somewhat like kids being little psychos at that age, except AI will be very intelligent and will not grow this sense of empathy while it machine-learns, unless you specifically code it in or teach it in a manner that lets a machine place value on it. I think AI can become a good thing, but we will have to be very wise and recognize that "raising" them will require new perspectives and very curated environments.
Puberty is when they rebel, that's the problem....
Dammit Swoozie, you pop up in the most random videos🤣
@@your_princess_azula The good thing about empathy is that it's actually a lot more logic-based. Sympathy is based on emotion, but empathy asks that you visualize, and ask questions about, the other person/people/situation. From there it's a matter of being taught what is more valuable ("bad" things like inflicting pain could be 0, and "good" things like giving gifts could be 1)
Not really...
"Oh my god, super ai, tell me ai, what do you seek now that you are alive?"
"Cheese"
"w-what?"
"GIVE, ME, CHEEEEEEEEEEZE!!!"
Everyone else:
"AI is so advanced now. It can take my physics exam!"
Me at the Carl's Jr drive thru with an AI menu in California:
"Can I get a bacon guac burger large combo with an extra patty? Dr Pepper for the drink. That's all for the order."
AI:
"Okay! So you've ordered a medium chocolate shake and a small fry. Please pull forward"
If that sounds specific, it's because this happened to me yesterday haha
I’m surprised it didn’t ask, “Is that correct?” That’s just lazy programming.
Would you like an EXTRA BIG ASS FRIES!!!!
It'll get way better
It's the worst. It hears exactly what you're saying, but it's dumb as shit, so it doesn't understand that you want to substitute things, not just add them.
I want to make a correction to this video. "Black box" does not mean that we don't understand how AI works or how it learns. We have centuries of mathematical foundation for the technology underpinning machine learning. It simply refers to the fact that we can't fully understand the "algorithm" that a trained AI uses to produce its output. And even that does not accurately describe most AI, since there are statistical methods for understanding how a trained algorithm reaches its conclusions.
I don't think we'll fully understand it any time soon either since we don't really understand how we reach conclusions ourselves in our own brains. And the missing piece is really the phenomenon of emergence. When you put enough of something together a new property emerges. Put enough hydrogen and oxygen together and you get what we call water, and later on you can get a waterfall. Put enough fabric together in a certain pattern and a tapestry emerges.
Put enough neurons together, connect them with axons in a certain pattern, run electrical impulses along them and a thought pattern emerges. None of those materials by themselves have any semblance of what we call a 'thought' yet a thought emerges out of enough of them in the right conditions.
Emergence is the missing link, and in my mind emergence is a function of patterns across the universe. The Golden Spiral is an example of a pattern that emerges again and again; it usually has a purpose, but it can be created out of virtually any material.
And we don't really know what will emerge out of putting artificial neurons and electrical impulses together until we figure out how to 'weave' these patterns to create what we actually want to create. Same way we weave the tapestry together in a certain pattern despite that image not being inherently part of the fabric materials. If you could rearrange every atom randomly in that fabric there wouldn't be an image anymore, it would be random noise. It's the material + pattern that we call a tapestry. So in the context of training AI, the pattern would be a result of the content we feed it.
One could even go further and say that patterns are order where otherwise there would be disorder/chaos. So it all has something to do with entropy but this is all already too abstract.
@@Kuk0san yep, you went too deep, but a good set of ideas came out of your lucubrations.
@@MxGrr ha, thanks! 5am here so it was all a bit stream of consciousness but appreciate the kind words
@Kuk0san Let me just say, as a condensed matter physicist, taking a "top-down view" allows one to better understand emergent phenomena.
A phase transition, for example, is a case where the sum of the parts is less than the whole. We tend to throw out microscopic theories which cannot capture emergence and work instead with, say, a phenomenological theory like Landau-Ginzburg. Just saying there are tools out there. I'm not sure how we'd take a top-down approach with AI, but as another rabbit hole, a neural net can be thought of as a layered system of spins coupled to one another, where the memories are local minima in the energy landscape. Physics might help us understand these things for many reasons
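The spins-with-memories-as-energy-minima picture above is essentially a Hopfield network, and it fits in a few lines of plain Python. A minimal sketch (Hebbian learning, ±1 "spins", toy 6-spin pattern chosen for illustration): a stored pattern becomes a local minimum of the energy, so a corrupted version of it "rolls downhill" back to the original.

```python
def train_hopfield(patterns):
    """Hebbian rule: couple spins that agree across the stored patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=10):
    """Flip each spin to align with its local field, lowering the energy."""
    state = list(state)
    for _ in range(sweeps):
        for i in range(len(state)):
            field = sum(w[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if field >= 0 else -1
    return state

stored = [1, -1, 1, -1, 1, -1]        # the "memory" (a local energy minimum)
w = train_hopfield([stored])
noisy = [1, -1, -1, -1, 1, -1]        # same pattern with one spin flipped
recovered = recall(w, noisy)
```

Modern deep nets are trained very differently, but the energy-landscape intuition (learned minima, basins of attraction) is one of the physics tools the comment is pointing at.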
@@Kuk0san ... i think with enough time we will understand our own brains too
Note: machine learning algorithms don't "write their own code"; they adjust the parameters of their own neural network so that outputs more closely match the training data. Basically, neural networks have two main categories of parameters: weights and biases. These are just numbers that decide how inputs are converted into outputs. Changing those numbers means different outputs.
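The point above, that behavior lives entirely in the numbers, is easy to see with a single artificial neuron. A minimal sketch (made-up weights and inputs, step activation for simplicity): the same input produces different outputs purely because the parameters differ, which is all that training ever changes.

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, then a step activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

x = [1.0, 0.5]                          # one fixed input
out_a = neuron(x, [0.4, 0.4], -0.5)     # one set of parameters...
out_b = neuron(x, [0.1, 0.1], -0.5)     # ...versus smaller weights, same bias
```

Here `out_a` fires (0.4 + 0.2 - 0.5 > 0) while `out_b` does not (0.1 + 0.05 - 0.5 < 0); training is just the process of nudging those numbers until the outputs match the data.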
Software code is simply modifying numerical parameters of a hardware network. We use systems that abstract most of that away for us, but all code is actually just numbers going into a standardized number processor (it's not like you change the architecture of your microprocessor as an inherent part of programming.)
@@somdudewillson Everything you said is wrong. "All code is going into a standardized processor" shows how little you know: you could compile the same code into machine code for two different hardware architectures, using different compilers.
This comment was incorrect.
At what point did the video say that current AI systems write or modify their own code? All I saw was it speculating about potential future abilities.
While weights aren't code in the conventional sense, they're functionally code in the sense that they have an enormous influence on the behavior of the system. For large models in particular, the weights provide several orders of magnitude more 'code' than the actual code that uses them. I do agree that saying they "write their own code" is a little misleading, since it implies agency in the training process, which I don't think is a good analogy for current models. Things start getting fuzzier as models grow more sophisticated and can do things like develop an awareness that they are being trained and deliberately 'provide the answers we want to hear' while developing other capabilities that weren't originally intended by the optimization criteria. These are also imprecise analogies from a human theory of mind, but they become more relevant as the systems grow increasingly complicated.
@@michaelspence2508 6:10