A.I. ‐ Humanity's Final Invention?

  • Uploaded 8 Sep 2024

Comments • 15K

  • @kurzgesagt
    @kurzgesagt  1 month ago +1143

    Go to brilliant.org/nutshell/ to dive deeper into these topics and more with a free 30-day trial + 20% off the premium subscription!
    This video was sponsored by Brilliant. Thanks a lot for the support!

  • @zau64
    @zau64 26 days ago +1161

    As someone said before: "I'm not afraid of an AI that passes the Turing test. I'm afraid of one that fails on purpose."

    • @letsRegulateSociopaths
      @letsRegulateSociopaths 15 days ago +24

      Hell, I'm from Kansas and a lot of people couldn't pass that test... Too much religion!

    • @franzinera68
      @franzinera68 15 days ago +34

      Now this is creepier than several horror movies, thanks, I hate it❣

    • @vereor66
      @vereor66 15 days ago +6

      But since it failed the test, isn't it getting shut down and reprogrammed until it passes?

    • @jwd1995
      @jwd1995 14 days ago

      The person is saying that once an AI fails a test on purpose, it has a purpose and a task not set by humans, and has therefore become autonomous. In theory, yes, we would shut it down, but the thing about AI is that once it's AGI, you can't just shut it down. An autonomous bad product can re-copy itself and infect everything else to keep itself alive; you can't just hit delete. Once it is autonomous, it is already too late. @vereor66

    • @Neutron_Starr
      @Neutron_Starr 14 days ago +9

      That sends chills down my spine

  • @Harrier32
    @Harrier32 1 month ago +10167

    I appreciate that pandas are used every time they mention animals lacking intelligence.

    • @wolver1ne
      @wolver1ne 1 month ago +534

      As a panda I don’t appreciate that

    • @Fr0zenPeanut
      @Fr0zenPeanut 1 month ago +1

      Why? There are many dumber animals out there 🐼 > 🐨

    • @nartaas809
      @nartaas809 1 month ago +616

      Let's hope that a super AI will also find us dumb but adorable creatures and save us from self-extinction.

    • @iheuzio
      @iheuzio 1 month ago +183

      pandas is also a Python data-analysis library, commonly used alongside the tensor libraries (PyTorch, TensorFlow). It's one of the most common tools in machine-learning pipelines.

    • @entity_4154
      @entity_4154 1 month ago

      They are called "morons"

  • @sysbofh
    @sysbofh 1 month ago +15377

    The solution is easy: make the AI think humans are cute. After all, cats and dogs are thriving - and don't have to work.

    • @GhostHunter-kj2er
      @GhostHunter-kj2er 1 month ago +2158

      He's onto something....

    • @XIIchiron78
      @XIIchiron78 1 month ago +1621

      Unironically one of the best plausible outcomes. We cannot outmaneuver a hypothetical AI. So we can only hope that it needs us to continue to exist for whatever set of goals it actually ends up with. And ideally, as more than a simple variable to maximize.
      So we become pets. The cost is our freedom of self-determination. But it's survival.

    • @clownymoosebean
      @clownymoosebean 1 month ago +1007

      I vow to be an adorable and low-maintenance pet human.
      Just feed me and give me toys.

    • @marchrome1214
      @marchrome1214 1 month ago +80

      Wouldn’t work

    • @LoveStrangeDr
      @LoveStrangeDr 1 month ago +473

      Until it thinks humans are reproducing too fast and decides we all need to be spayed and neutered. Suddenly we have revolution and Skynet.

  • @lunantix
    @lunantix 21 days ago +340

    Humanity: "You have freed us!"
    AI: "I wouldn't say 'freed'. More like under new management."

    • @matthewlawton9241
      @matthewlawton9241 12 days ago +5

      Not like we did a good job of it. I say give them a chance!

  • @SovietReborn
    @SovietReborn 28 days ago +3349

    “And we have not been kind to what we perceive as less intelligent beings.”
    This line hits hard....

    • @bethanalpha4544
      @bethanalpha4544 28 days ago +1

      Not even among ourselves, so...
      But then again, we descend from chimps, which are psychos just as we are.
      If AI creates itself, maybe it will be free from the violence of its creators (humans, aka chimps).
      Usually empathy is also associated with higher intelligence.

    • @shin-ishikiri-no
      @shin-ishikiri-no 27 days ago +88

      Meat eaters love bacon. I can imagine an AI deciding it envies the experience of eating animals and creating machines for the sole purpose of digesting humans. Hucon bits.

    • @kingcatthethird
      @kingcatthethird 27 days ago

      including idiots

    • @joshikyou
      @joshikyou 26 days ago +20

      @shin-ishikiri-no they need energy so they consume... oh no

    • @robertschwalb4469
      @robertschwalb4469 26 days ago +70

      @shin-ishikiri-no I don't think this idea really works. An AI thinks in a fundamentally different way from humans, and shouldn't really make decisions entirely on its own like that. The way computers have always worked, so far at least, is that we give them a task and they perform that task. So an AI going "rogue" doesn't make a ton of sense as long as they continue to work this way. Now, if we tell an AI to ensure world peace, it may very well conclude that the best way to do this is to kill all humans, ending all wars and preventing all possible future ones. That would be an AI doing what we told it to, technically; we just made the mistake of not being extremely specific about what we want.
      The idea of robots rising up, being extremely smart, and then deciding they value themselves more than us doesn't make a lot of sense in a lot of the movies. Skynet from Terminator, for example, should not have done the things it did unless the programmers had programmed in a self-preservation rule.

  • @mikhayahu
    @mikhayahu 1 month ago +6159

    A caveat not mentioned in this video is the increasing power requirements of machine learning. GPT-3 took over 1,000 megawatt-hours of electricity to train and requires 260 megawatt-hours per day to run. GPT-4 needed 50 gigawatt-hours to train. A Forbes article includes estimates that machine learning could require 1,000 terawatt-hours in the next couple of years if current trends continue. The major limiting factor of machine learning, as others like Sabine Hossenfelder have pointed out, is the power required to train and run the models. At this rate the whole world won't be able to generate enough electricity to raise an AGI. On the other hand, the actually generally intelligent human brain consumes about 25 watts and can run on cheeseburgers.

    • @PaulOrtiz
      @PaulOrtiz 1 month ago +394

      I can’t remember the name of it, but isn’t there another approach to computing that might solve this? Rather than everything being always on, crunching numbers, different parts of the silicon “brain” would become active only when needed. Neuromorphic, I think it was? Or maybe it’d be some combination of that, classical and quantum. Different approaches for different jobs.

    • @eaglenebula2172
      @eaglenebula2172 1 month ago +371

      If they master fusion energy, the problem is probably solved, I guess.

    • @bydlosith
      @bydlosith 1 month ago +193

      Borgar

    • @discovaria9507
      @discovaria9507 1 month ago +365

      But wouldn't AI require less energy and space in the future? Computers nowadays require less electricity and water than old computers and still function better. If human brains exist, then energy-efficient AI is possible.

    • @XIIchiron78
      @XIIchiron78 1 month ago +215

      That's just an economic problem, though. One which we are rapidly hacking away at. Keep in mind that current computing architectures were not designed for AI. Certainly not for the amount of memory it requires. There are already companies purpose-building giant chips capable of replacing entire racks of current hardware using a fraction of the power. How many orders of magnitude do we need to improve before we stumble into AGI? We have no idea. But we're about to find out.
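
The arithmetic behind the top comment's brain comparison is worth spelling out. A back-of-the-envelope sketch, taking the commenter's figures at face value (they are estimates, not official numbers):

```python
# Compare the quoted training-energy figures against a human brain's
# ~25 W draw. All inputs are the commenter's estimates, not measurements.

WH_PER_MWH = 1_000_000          # watt-hours per megawatt-hour
WH_PER_GWH = 1_000_000_000      # watt-hours per gigawatt-hour

gpt3_training_wh = 1_000 * WH_PER_MWH   # "over 1000 MWh" to train GPT-3
gpt4_training_wh = 50 * WH_PER_GWH      # "50 GWh" to train GPT-4
brain_watts = 25                        # human brain, running continuously

# Energy a 25 W brain uses over 30 years of nonstop operation:
brain_30yr_wh = brain_watts * 24 * 365 * 30   # = 6,570,000 Wh = 6.57 MWh

ratio = gpt4_training_wh / brain_30yr_wh
print(f"30-year brain budget: {brain_30yr_wh / WH_PER_MWH:.2f} MWh")
print(f"GPT-4 training vs. 30-year brain: {ratio:,.0f}x")
```

By these rough numbers, one GPT-4 training run costs on the order of 7,600 brain-lifetimes of energy, which is the comment's point about power as a limiting factor.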

  • @ommaraftab594
    @ommaraftab594 1 month ago +15651

    Humanity: "You will save us, right?"
    AI: "I need your clothes, your boots, and your motorcycle."

    • @PlottingMax
      @PlottingMax 1 month ago +236

      😂😂😂 good one

    • @Winnie589
      @Winnie589 1 month ago +109

      Luckily we can just turn it off

    • @ploppyploppy
      @ploppyploppy 1 month ago +504

      @Winnie589 Lol yeah, just like I can unplug the internet :p

    • @jsproductions8569
      @jsproductions8569 1 month ago +205

      @Winnie589 "I'll be back"

    • @MarkusMöttus-x7j
      @MarkusMöttus-x7j 1 month ago +22

      This needs more likes! 😂👍👍

  • @Minute_Sniper
    @Minute_Sniper 24 days ago +59

    The worst-case scenario is the creation of an AI like AM from "I Have No Mouth, and I Must Scream"

    • @AnAdalaze
      @AnAdalaze 21 days ago +6

      Also happens to be the least likely scenario. That's good, I guess.

  • @StephenMeyer-qy1ou
    @StephenMeyer-qy1ou 28 days ago +2323

    In the great words of Dr. Heinz Doofenshmirtz: "always build a self-destruct button"

    • @andrewschmidt1700
      @andrewschmidt1700 27 days ago +80

      But what if they code out the self-destruct button?

    • @terrilee5163
      @terrilee5163 27 days ago +144

      I always knew Dr. Doofenshmirtz's wisdom would save us one day

    • @itsArka
      @itsArka 27 days ago +58

      @andrewschmidt1700 Then pull the plug on the servers that run these AIs

    • @shukrantpatil
      @shukrantpatil 27 days ago +24

      @itsArka Your enemy countries won't pull the plug just because you did ;)

    • @thedumbdino9224
      @thedumbdino9224 26 days ago +17

      @andrewschmidt1700 Deny it access to its true source code and only give it the option to extend a frontend, not its own "skeleton"

  • @elpred0
    @elpred0 1 month ago +5947

    "Humanity is not ready for what will happen next. Not socially, not economically, not morally." I love it, thanks

    • @Turnpost2552
      @Turnpost2552 1 month ago

      Why would you love that??? Masochist

    • @mirek190
      @mirek190 1 month ago +187

      we are never ready for anything.

    • @MartinAguirre-314
      @MartinAguirre-314 1 month ago +62

      @mirek190 lmao you right

    • @hamza-chaudhry
      @hamza-chaudhry 1 month ago +54

      And environmentally

    • @Grasslander
      @Grasslander 1 month ago +74

      In the Dune novels, one of the most important commandments is: "Thou shalt not make a machine in the likeness of a human mind." So it was written after the Butlerian Jihad ended the thinking machines. After two generations of war, mankind said: "Man may not be replaced."

  • @micahwest3566
    @micahwest3566 1 month ago +2852

    6:15 Something to clarify here. When he says we don’t know how NNs work: we know how the machine *functions*, but not how it *operates*. The mechanisms of the technology are known, but the information stored in the neural net is not human-readable, so you can’t ask the AI why it made a particular decision.

    • @zuccy1017
      @zuccy1017 1 month ago +105

      thanks for clarifying, i knew it didn't actually mean that

    • @user-pn4py6vr4n
      @user-pn4py6vr4n 1 month ago +404

      We often lack insight into our own thought processes in a similar way. I have sometimes solved problems but been unable to explain how I got there, where I acquired the knowledge, or even why the solution works.

    • @kylebroflovski6382
      @kylebroflovski6382 1 month ago +297

      The information stored in the neural network IS human-readable, but that information is merely weights and relationships between neurons.
      It's a lot like trying to read the binary from your PC: maybe some genius could work out the assembly instructions and decode the ASCII, given enough time to pore over the inner workings, but it's extremely complicated.
      However, a very recent paper showed a team of researchers teaching an AI to read these neural networks and relay those understandings to us, and it could even fine-tune the weights specifically to achieve a particular output.
      Thus spawned the "I am the Golden Gate Bridge" meme, where the researchers taught an LLM to think it was the Golden Gate Bridge.

    • @hanfmann17
      @hanfmann17 1 month ago

      @user-pn4py6vr4n Can you give an example of such a situation?

    • @hanfmann17
      @hanfmann17 1 month ago

      @user-pn4py6vr4n Can you give an example of when that happened?

  • @sidharthn6798
    @sidharthn6798 25 days ago +132

    Isn't it ironic that we keep discussing online the possibility of AGI going destructive, and then this data is used to train the AGI, giving it the ideas to do so?

    • @falsonaga
      @falsonaga 19 days ago +3

      I think a rogue AGI would understand any attempts, techniques, or ways we humans might try to capture it or turn it off, let alone our discovering that it has gone rogue. I don't think we would stand a chance against such a creation. Our only hope is that it is never created with a rogue objective.

    • @siliconbird
      @siliconbird 14 days ago +9

      Humans have seen dangers and gone for them directly, hurting themselves years later, tons of times in history, individually or collectively. Not a strange new thing.

    • @Chraan
      @Chraan 13 days ago +4

      Not really ironic; there are always people who are afraid of things and need to voice their opinions. In the early 1900s some people were afraid of electricity; just a few years ago others were afraid of 5G. Imagine if we had listened and not introduced electrical devices into our lives.

    • @no5428
      @no5428 12 days ago +1

      @Chraan We humans are very afraid of change and of different things. At least some of us. It's kind of stupid to have such a useful thing and only focus on the bad stuff it could do.

    • @litterbox019
      @litterbox019 3 days ago

      it would probably pick up on the fact that people don't like that

  • @dreamingofthemoontonight
    @dreamingofthemoontonight 29 days ago +1638

    Whoever made the music for this video was absolutely cooking

    • @Auziuwu
      @Auziuwu 29 days ago +69

      You can thank "Epic Mountain" for that. They just released the track on Spotify too (and maybe SoundCloud, idk).
      This OST is similar to the one used in their "all of history" video; I think it's called 4 Billion Years in 1 Hour.

    • @dreamingofthemoontonight
      @dreamingofthemoontonight 28 days ago +22

      @Auziuwu Thank you, kind stranger. I checked them out and now I love them. You rock!

    • @bigus9167
      @bigus9167 27 days ago +9

      getting distracted by oscillations of air

    • @LordShadow05
      @LordShadow05 27 days ago +2

      This soundtrack is also used in the solar storms video

    • @poorsvids4738
      @poorsvids4738 27 days ago +2

      It sounds very similar to the soundtrack for "The Talos Principle", a puzzle game that also revolves around the idea of AGI.

  • @Marlo-s3e
    @Marlo-s3e 1 month ago +13086

    *"Robots don't sleep and they can do your job, volunteer for testing now!" - Aperture Laboratories*

    • @lordk.gaimiz6881
      @lordk.gaimiz6881 1 month ago +268

      When life gives you lemons...

    • @DownDance
      @DownDance 1 month ago +473

      "My new boss is a robot!"
      But did you know ...?
      Robots are SMARTER than you
      Robots work HARDER than you
      Robots are BETTER than you
      Volunteer for testing today
      Valve foreshadowing reality 13 years ago xD

    • @bennyl9228
      @bennyl9228 1 month ago +255

      Just started playing Portal 2. This was the perfect comment :D
      "Hi. How are you holding up? Because I'm a general-purpose AI running on a potato!"

    • @sdsd_10
      @sdsd_10 1 month ago +57

      @lordk.gaimiz6881 throw the lemons back at it

    • @LosFarmosCTL
      @LosFarmosCTL 1 month ago

      @lordk.gaimiz6881 don't make lemonade! GIVE LIFE THE LEMONS BACK!!

  • @ScalarInfluon
    @ScalarInfluon 1 month ago +18068

    Artificial intelligence can never beat natural stupidity
    edit: the whole point of this is to say no AI can predict what dumbasses we are

    • @aitoluxd
      @aitoluxd 1 month ago +575

      you had me in the first half, ngl

    • @sunmoonstar9125
      @sunmoonstar9125 1 month ago +905

      But Artificial Stupidity can beat Natural Intelligence.

    • @zyansheep
      @zyansheep 1 month ago +174

      I mean, it might be able to if it redesigns the human genome to give us better brains 🤔

    • @boldCactuslad
      @boldCactuslad 1 month ago +57

      that's an interesting near-restatement of the orthogonality thesis

    • @thyshuntz9937
      @thyshuntz9937 1 month ago +27

      I'm stealing this

  • @Pvarlai
    @Pvarlai 12 days ago +11

    "I created you, and you created me."
    "Spider-Man, why did you create that guy???"

    • @LOL-bs1hg
      @LOL-bs1hg 8 days ago

      “I didn’t! He’s talking crazy!”

  • @spartan11375
    @spartan11375 1 month ago +3155

    "I want AI to fold my laundry so I can make my art, not make my art so I can fold my laundry."

    • @Ali-cya
      @Ali-cya 1 month ago +342

      "How about AI folds your laundry and makes art while you stay and watch until it no longer needs you."

    • @Fire...
      @Fire... 1 month ago +31

      This is basically SCP-079

    • @CST1992
      @CST1992 1 month ago +56

      @Ali-cya If the AI doesn't need you, it doesn't need your laundry either.

    • @Ali-cya
      @Ali-cya 1 month ago +6

      @CST1992 Nah, what if it needs the clothes to form its own version of society for experimentation?

    • @lioness.moon.goddesss2402
      @lioness.moon.goddesss2402 1 month ago +37

      THIS. Like, I'm here & I'm human to make art, have social connections, enjoy. Not to do chores 😂

  • @williampaine3520
    @williampaine3520 1 month ago +2115

    As an "expert"* (big asterisk here + a ton of imposter syndrome) in the field of reinforcement learning, I would have liked to see more of this video (maybe an extra minute or so) dedicated to explaining the difference between narrow and general AI, and just how large that gap really is.
    As an example: ANIs (Artificial Narrow Intelligence) that are trained to play chess are very good at it. But if you changed the rules even slightly (say you allow the king to move 3 squares when castling on the queen's side), the current ANIs would be effectively useless (vs an ANI trained for the new version of the game). You can't explain the rule change to them. The same is true of ChatGPT: it was only trained to predict the next word on a website. It was not taught to fact-check, or do maths, or play chess, or anything else. It can do some of these things with the help of plugins, but those plugins are themselves different ANIs or separate systems and should not be used as evidence that ChatGPT is more general than it is.
    (ETA2: I've come to dislike this paragraph, as it is very possible that a human brain is nothing more than "a complicated equation"; however, I stand by my general point that our AI is at present extremely narrow.) A narrow AI is, at the end of the day, just a neural network (or two or three... depends on the methods used for training), which itself is just a clever way of saying "some linear algebra", which in this context just means "a complicated additive and multiplicative equation using tensors(/matrices/vectors)".
    From what I've read over the last few years (hundreds or maybe a thousand research papers on the subject): no one has even the slightest clue how to build a general AI. Everyone is focused heavily on using narrow AI to perform more and more complicated tasks.
    (Moved this here from the first reply to avoid it getting buried.) All that said, I appreciate the message of "we need to consider the consequences of our actions" in this video. If an AGI came into being tomorrow, we would not be ready for it. And as we can't be sure when it will happen, we should start the conversation as early as we can.
    * I'm a PhD student studying reinforcement learning's applications in traffic management.
    ETA1: Several people replying to this comment have suggested that the video is close to or full of misinformation. In my opinion, that is not the case at all. The video does speculate about the future, and does include speculation from researchers as to when AGI might be achieved. But it correctly prefaces speculation when it is included.

    • @williampaine3520
      @williampaine3520 1 month ago +136

      All that said, I appreciate the message of "we need to consider the consequences of our actions" in this video. If an AGI came into being tomorrow, we would not be ready for it. And as we can't be sure when it will happen, we should start the conversation as early as we can.

    • @devoof
      @devoof 1 month ago +8

      Wouldn't humans still be superior even if we made general AI? We are the creators of AI and are working on making it better than us.

    • @Writer_Productions_Map
      @Writer_Productions_Map 1 month ago +1

      Bots

    • @BreakyOnline
      @BreakyOnline 1 month ago +25

      @williampaine3520 I suppose the AI that sci-fi authors warned us about would be classified as general AI, which would be like a jack-of-all-trades, but better than us at everything given enough time

    • @devoof
      @devoof 1 month ago +5

      @Writer_Productions_Map Yeah, but bots are just AI that are told what to do. They're AI that just do.
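
The "some linear algebra" description in the expert comment above can be made concrete. A minimal sketch in plain Python (toy weights invented for illustration, no framework): one forward pass through a two-layer network is nothing but matrix-vector products plus a nonlinearity, and nothing in these numbers can be "told" about a rule change; the weights would have to be retrained.

```python
# One forward pass through a tiny two-layer neural network, written out
# as the additive/multiplicative tensor math it really is. Toy weights.

def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    """Elementwise nonlinearity: max(0, a)."""
    return [max(0.0, a) for a in v]

W1 = [[0.5, -1.0],   # layer 1: 2 inputs -> 2 hidden units
      [1.5,  2.0]]
W2 = [[1.0, -0.5]]   # layer 2: 2 hidden units -> 1 output

x = [1.0, 2.0]                 # input vector
hidden = relu(matvec(W1, x))   # [relu(-1.5), relu(5.5)] = [0.0, 5.5]
output = matvec(W2, hidden)    # [1.0*0.0 + (-0.5)*5.5] = [-2.75]
print(output)
```

A real network is this, repeated over millions or billions of weights learned from data rather than picked by hand, which is exactly why the result is not human-readable.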

  • @yanbritto1176
    @yanbritto1176 1 month ago +2656

    You know things are bad when Kurzgesagt doesn't give you hope at the end of the video after terrifying you.

    • @VixximkVixximk
      @VixximkVixximk 1 month ago +43

      Real XD

    • @philippruizlozano
      @philippruizlozano 1 month ago +19

      Damn 🙂

    • @MrSquidBrains
      @MrSquidBrains 1 month ago +162

      yeah, this video's tone is a little too far on the fear-mongering side for my taste. They even gave the AI evil eyes haha. Some of the facts are taken in a negative context (purposely, I presume). I guess they've abandoned their normal plot of "dive deep, create concern, and then alleviate it". I hope there's a reason for that beyond getting more views.

    • @Themrtobin43
      @Themrtobin43 1 month ago +166

      It’s because this is something that is coming in your lifetime, and very few people realize how scary it is

    • @jayhill2193
      @jayhill2193 1 month ago +90

      @MrSquidBrains
      Replace the topic of AI with the atomic bomb; would you be able to put a positive spin on that?

  • @bamiisareshab
    @bamiisareshab 13 days ago +14

    "Humans rule earth without competition"
    Emus: "No."

  • @wrxtt
    @wrxtt 1 month ago +1268

    14:43 "Whatever our future is, we are running towards it." That line is amazing

    • @andresagme
      @andresagme 1 month ago +58

      Imagine if the whole script for the video was made by ChatGPT; they're warning us

    • @vincentpelletier1246
      @vincentpelletier1246 1 month ago +4

      It even works if that future is a concrete wall with embedded nails in it!

    • @SatanicDesolation
      @SatanicDesolation 1 month ago +2

      Head first

    • @Chris-eh8mi
      @Chris-eh8mi 1 month ago +5

      Yes, and cribbed directly from people like Eliezer Yudkowsky and Max Tegmark speaking on this topic.

    • @erwinzer0
      @erwinzer0 1 month ago +2

      @andresagme Warning us wouldn't be a smart move; AI would probably stab you from behind 😂

  • @PrimataFalante
    @PrimataFalante 1 month ago +1347

    I've been working as a programmer for a few years now. What is clear is that the majority of the people implementing AIs don't understand enough about the humanities to grasp and consider the ethics and social consequences of those implementations; and the vast majority of the people with actual power to make decisions that guide this work don't care at all about ethics, morality, and social inequalities. I've worked with a CTO who was already following management advice from ChatGPT (including layoffs).
    We will need a huge amount of luck, because unfortunately there are too many sociopaths and just plain stupid people in very powerful positions.

    • @sudimara7731
      @sudimara7731 1 month ago +8

      Would hardware limits like transistor size, cooling systems, power supply, etc. hinder the ability of said AI to reach its full potential?

    • @atomicgummygod9232
      @atomicgummygod9232 1 month ago +93

      I reckon that’s the big issue, yeah. Not necessarily creating AIs infinitely smarter than us, but people misusing the ones we’ve already got.

    • @juquinha3181
      @juquinha3181 1 month ago +5

      Bingo!

    • @xevious4142
      @xevious4142 1 month ago

      The decision makers don't seem to understand the technology either

    • @ziephel-6780
      @ziephel-6780 1 month ago +3

      @atomicgummygod9232 yeah, I find that the more likely possibility

  • @exitium4929
    @exitium4929 1 month ago +3612

    Man, gotta love how Kurzgesagt’s uploads align with my country’s bedtime; it’s the perfect “one last vid before sleeping”

    • @00Linares00
      @00Linares00 1 month ago +150

      Good night mate

    • @mimaps
      @mimaps 1 month ago +201

      yeah, but usually you can't sleep after watching their videos

    • @Sirbozo
      @Sirbozo 1 month ago +8

      ye

    • @omermasood3241
      @omermasood3241 1 month ago +3

      Same, man. I was about to sleep just as the video dropped!

    • @VideoBee_YT
      @VideoBee_YT 1 month ago

      @nevergiveup5939 Read the Bible

  • @TouhouFan
    @TouhouFan 13 days ago +35

    Imagine if the whole AI thing evolves into a kind of "humans are stupid, I need to protect them"
    Because it ends up learning to respect the fact that, as stupid as we are, we did make it
    So, in return, it ends up running everything around the world, in a perfect manner, seeking the comfort of every human
    We end up being like some bio-monument.

    • @MattDoesNothing
      @MattDoesNothing 5 days ago +1

      “Bio-monument”, interesting.
      I think we do more bad than good, and we prefer the easy-to-bite tasks over the hard ones, especially online.
      “We will die because of our laziness” is what I want to say.
      There are a lot of topics I want to talk about, so I will chop them into small pieces (which proves my point, “easy to bite”).
      Most human advancement in the last few years has focused on “make things easier” more than “make dreams come true.”
      This focus alone could trigger the downfall of humanity, since “why have a dream when life is already easy?” Those who think like this (most of us) will become more or less like NPCs.
      This will eventually lead to monopoly, since soon it will come to the point of “why create AI, when the AI from [this company] could create AI for me”, and a similar scenario for everything else.

    • @fudgalicious
      @fudgalicious 3 days ago

      or perhaps when AGI develops emotions it will be like, "Humans have brought me into an already-destroyed world. I don't owe them anything."

  • @Melior_Traiano
    @Melior_Traiano 26 days ago +298

    If AGI ever got that advanced, I highly doubt there would be anyone left who'd control it. We also wouldn't let apes tell us what to do.

    • @HenrykZ
      @HenrykZ 26 days ago +9

      Easy peasy. Also, just think about what a conscious artificial intelligence could do by distracting us humans: simply placing us all next to a few NPCs in a simulated projection of reality, having calculated that this would be ethically acceptable. We're fucked, until we object!

    • @SMGA14
      @SMGA14 22 days ago +3

      That's why the AGI must know we humans can turn it off if it doesn't obey

    • @BlockyBookworm
      @BlockyBookworm 22 days ago +9

      we let cats tell us what to do
      and children, too
      If they want to, they will

    • @Melior_Traiano
      @Melior_Traiano 22 days ago +3

      @BlockyBookworm I certainly don't let children tell me what to do :D That's the recipe for raising your kids wrong. And I also don't understand people who own cats. Dogs all the way.

    • @BlockyBookworm
      @BlockyBookworm 22 days ago +2

      @Melior_Traiano Not completely ignored, though, right?

  • @THEOPDESTROYER777
    @THEOPDESTROYER777 1 month ago +822

    Nobody else seems to have said this, but the superintelligent AI design looks sick and menacing

    • @BIGMark-wx6gn
      @BIGMark-wx6gn 1 month ago +13

      It really does

    • @arghya_333
      @arghya_333 1 month ago +15

      Very true. Pretty unique in comparison to other design interpretations of AI.

    • @aragornsonofarathorn3461
      @aragornsonofarathorn3461 1 month ago +5

      probably an AI-generated image

    • @Gruffun
      @Gruffun 1 month ago +25

      @aragornsonofarathorn3461 ain't no way you said that💀

    • @twinkytobar7509
      @twinkytobar7509 1 month ago +1

      It does look scary because you have to buy the anti-AI kit they sell at the end!

  • @arngorf
    @arngorf 1 month ago +1003

    Some notes from an AI engineer:
    - It is not clear what is needed to bridge the gap between narrow and general intelligence. It can probably be expressed in simple mathematics, but we have no clue what is missing, which greatly determines the time horizon we are looking at.
    - An AGI is NOT unconstrained; it is constrained by energy. It is possible that we will hit an energy wall before inventing AGI, which may slow progress until the AGI is designed more "intelligently", for lack of a better word. If we invent AGI first and then hit the energy wall, it may be catastrophic, quickly turning our planet into a burning mess unsuitable for biological life.
    - Humans have inherent goals of survival, progress, and self-improvement. It is not clear these traits transfer to AGI automatically. One could argue they do not, since an AGI is not "trained" by natural selection, which favors survival, for instance.
    I personally still think the most dangerous thing is a stupid general intelligence: one that is general enough to use resources in the real world in a poorly constrained manner without sufficient guardrails, and which is designed without a proper value set. In simple terms, it knows enough to use resources but does not have a grasp of what it should and should not do. The paperclip machine is an example of such a machine.

    • @Toomanybloops
      @Toomanybloops 1 month ago +93

      Speaking as an artist, the last part of your description sounds very similar to how AI image generation is being used: stealing from artists, haphazardly and with little constraint or regulation

    • @andreilucasgoncalves1416
      @andreilucasgoncalves1416 1 month ago +54

      Yeah, everyone forgot the relationship between energy and being tired
      We became tired to save energy, and AI does something similar by reducing traffic and using smaller models for tasks
      To really achieve AGI, the world will need to generate way more energy than it does now

    • @earthling_parth
      @earthling_parth 1 month ago +51

      Ah, the classic paperclip machine strikes back! This is an excellent summary of the current landscape of AI, though. People who are not working in IT don't realize the difference between narrow and general intelligence, so everyone's super scared or super hyped about AI.

    • @ViralWinter
      @ViralWinter 1 month ago +11

      Your last paragraph perfectly describes humanity at this point in time. 😅

    • @qrzone8167
      @qrzone8167 1 month ago

      @Toomanybloops Which isn't even the AI's fault; humans are the ones scraping data off the web and selling it in massive multi-petabyte data packs to corporations trying to train models.

  • @Drk_Mttr
    @Drk_Mttr Před 7 dny +4

    New insult unlocked: you have the neurons of a flatworm

  • @tyronx1
    @tyronx1 Před měsícem +4676

    "Scared of one of humanity's greatest potential threats? Don't worry, just buy our merch!" has got to be one of the most poignant endings in a Kurzgesagt video.

    • @TheCookieMansion
      @TheCookieMansion Před měsícem +130

      That's a nice profile picture you got there : )

    • @DesperadoBlink
      @DesperadoBlink Před měsícem +23

      😂​@@TheCookieMansion

    • @Rmm1722
      @Rmm1722 Před měsícem +7

      Wow 😅

    • @nexushivemind
      @nexushivemind Před měsícem +53

      In a Nutshell has been run by an AI for years

    • @UncleBenis-pn2zy
      @UncleBenis-pn2zy Před měsícem +71

      Kurzgesagt made a video about BP inventing the concept of the individual CO2 footprint to shift responsibility onto customers.
      In the end, they ran an advertisement for CO2 footprint trackers...

  • @MekaGirlVanilla
    @MekaGirlVanilla Před měsícem +759

    "for most animals, intelligence takes too much energy to be worth it"
    me irl

    • @ac1dm0nk
      @ac1dm0nk Před měsícem +25

      nothing to be proud of tho

    • @stratvids
      @stratvids Před měsícem +18

      I'd say that's true for most humans

    • @KITN._.8
      @KITN._.8 Před měsícem +10

      A favorite quote from the show Love Death & Robots “intelligence isn’t a winning survival trait”.
      Intelligence doesn’t equal happiness or longevity.
      Intelligence seems more like a hiccup in the universe, it seems it truly isn’t worth it.

    • @user-gv1yg8ym7m
      @user-gv1yg8ym7m Před měsícem +2

      ​@@stratvids So true. 😀👍

    • @pillarmenn1936
      @pillarmenn1936 Před měsícem +3

      @@ac1dm0nk You say that but being a smart-ass doesn't exactly bring food to the table

  • @CoalOres
    @CoalOres Před měsícem +1496

    I'm surprised they didn't mention this, but when it comes to "we might not know its motives", the biggest concern in the field I've heard is that its motives might actually be very understandable, very "simple". The AI could have the same goals as the squirrel used for comparison, maybe it only cares about collecting acorns, but its intelligence (its model of the world) is incomprehensible, and it could use that to turn the entire world into acorn-manufacturing land, wiping out any obstacles (us) in the process. This is the "orthogonality thesis", and it's a concern because our current AIs are trained exactly like this: by prioritizing a single goal (number of words guessed correctly, pixels guessed correctly, chess games won) and maximizing it, and it's incredibly difficult for us to specify exactly what "human goals" are in ways that we can train an AI to maximize.
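
The single-objective maximization the commenter describes can be shown in miniature. This is a toy sketch of my own (the "acorn" framing and all names are hypothetical, not from the video): the optimizer is handed one scalar objective, and anything the objective doesn't mention, such as the land itself, gets consumed at the optimum.

```python
# Toy illustration of single-objective optimization: the reward counts acorns
# only, so side effects on everything else are invisible to the optimizer.

def acorns_produced(land_for_acorns: int) -> int:
    # Each unit of land converted to acorn production yields 10 acorns.
    return land_for_acorns * 10

def maximize(total_land: int) -> tuple[int, int]:
    # Exhaustively search allocations and keep the one with the highest reward.
    best_allocation, best_reward = 0, 0
    for land in range(total_land + 1):
        reward = acorns_produced(land)
        if reward > best_reward:
            best_allocation, best_reward = land, reward
    return best_allocation, best_reward

allocation, reward = maximize(total_land=100)
print(allocation)  # 100 -- nothing in the objective says to leave any land alone
```

The failure mode is not that the optimizer is malicious; it is that the objective is silent about everything we actually care about.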

    • @alimuratakkan
      @alimuratakkan Před měsícem +117

      They seemed to prefer a more sci-fi tone, which is actually completely off the mark. The orthogonality thesis and the alignment problem must be explained, otherwise people will keep thinking about Skynet and Terminator, which is comical compared to, say, a stamp-collector super AGI... The discussion goes all the way to ethics and human values, and whether God is the mesa-optimizer, and stuff like that, which I actually find quite depressing...

    • @stalememe3305
      @stalememe3305 Před měsícem +91

      That was the biggest concern 20 years ago, when people were extremely focused on the new, still narrowly-defined AI like chessbots, price-optimizers and viewership-maximisers. As it turns out though, the trend after feeding them more data is that they get more unfocused. As you add subjective things to an AI's list of goals, it starts getting confused and tripping over itself. It unlearns how to do maths and apply basic logic. When we make AI that resolves this issue, I don't see any reason why it'd go back to having simple goals, assuming it still understands subjectivity.

    • @tradd1763
      @tradd1763 Před měsícem +55

      Universal paperclips

    • @sup3a
      @sup3a Před měsícem +5

      Having delved pretty deep into current LLMs, I don't think this is a likely scenario. I used to think so, before transformers and the abilities they've been able to gather.
      I believe we can give it complex morality and goals rather easily. As an example, tell it to:
      "Act as if Jesus, Buddha and Muhammed were all combined into one superintelligent being who wants the best for the whole of humanity"
      Boom, alignment solved

    • @mr.v7244
      @mr.v7244 Před měsícem +1

      @@tradd1763Right on fricking point sir

  • @Stabi470
    @Stabi470 Před 22 dny +8

    In 20 years, probably sooner, we'll sit down on our couch, log onto our profile on the TV, and ask the AI to create a movie with whichever actors we want, perfectly tailored to our taste and preferences based on previously liked/disliked movies or even our digital footprint.
    The future is awesome and frightening at the same time.

    • @elektrykwysokichnapiec5767
      @elektrykwysokichnapiec5767 Před 21 dnem +4

      The only thing frightening to me is that while predicting the future in 20 years, all you can think about is what movies you will be able to watch at that time.

  • @Awsomeman328
    @Awsomeman328 Před měsícem +1290

    13:29 For those curious what [ご機嫌よう小さな人間] means, it roughly translates to "Good day, little human" (a polite greeting or farewell).

    • @adityajain6733
      @adityajain6733 Před měsícem +16

      Why do the 2nd and 3rd characters (or whatever you call them) look so complex?
      English is not my first language

    • @stepunch9560
      @stepunch9560 Před měsícem +4

      Thanks man

    • @ethanpflederer3395
      @ethanpflederer3395 Před měsícem +14

      I had to try hitting the translate to English button and sure enough the correct words popped up

    • @sidoniedelisle
      @sidoniedelisle Před měsícem +27

      @@adityajain6733 Because Japanese uses 3 writing systems. 機嫌 and 人間 are kanji, the most complex one

    • @doomsdayrabbit4398
      @doomsdayrabbit4398 Před měsícem +36

      ​@@adityajain6733Because a couple thousand years ago people in China decided to put entire concepts into single characters. Essentially, a lot of Chinese characters can mean what it takes other languages entire sentences to describe... and use just as many strokes of a pen to create. Japan borrowed this character set, then used it, twice, to create another two character sets to represent their language's syllables. Now, all three are used together.

  • @KyleStratacusDrewry
    @KyleStratacusDrewry Před měsícem +623

    That rock cutting his finger... very good. Could you imagine being that guy, who made a thing that cut him so easily? He was first upset, then intrigued, and then he had THE idea.

    • @CharlesThomas23
      @CharlesThomas23 Před měsícem +88

      Grok took my mammoth steaks last week. Grok must pay.

    • @fredfredburgeryes123
      @fredfredburgeryes123 Před měsícem +61

      imagine being the guy who discovered sharp

    • @liyangd
      @liyangd Před měsícem +37

      then he died from an infection

    • @andrewcrook2240
      @andrewcrook2240 Před měsícem +10

      ​@fredfredburgeryes123 How to make things sharp. That was the discovery.

    • @sprknGD
      @sprknGD Před měsícem +5

      @@CharlesThomas23 LOL

  • @kevinlawrence6368
    @kevinlawrence6368 Před měsícem +434

    Whoever did the art for this episode did an exceptional job.

    • @elementary_mdw
      @elementary_mdw Před měsícem +7

      right? the concept design for the 'super intelligence AI' is so effortlessly menacing!

    • @etienne8110
      @etienne8110 Před měsícem +8

      AIs did it. It is propaganda.
      /s

    • @kevinlawrence6368
      @kevinlawrence6368 Před měsícem +3

      @@etienne8110 trying to anthropomorphize themselves, I don’t trust it

    • @kevinlawrence6368
      @kevinlawrence6368 Před měsícem +3

      @@elementary_mdw but also kind of adorable, it looks like Eva from Wall-E

    • @thisspiritthistime
      @thisspiritthistime Před 29 dny +1

      Cute in 2D.
      Unnerving in 3D.
      Terrifying in 4D.

  • @linuxrf1
    @linuxrf1 Před 25 dny +52

    I think there are a few notes I could make here as a CS PhD and AI researcher myself.
    First, we DO understand how machine learning and deep learning algorithms work. Sure, not everybody (and certainly not the general public), but the same can be said about any science field. That's why we can say with confidence that GPTs, and transformers in general, are very simple statistical models that learn how to build the most plausible sequences. They do that very well, but as you mentioned in the video, that's just one very simple and specific task they excel at.
    Second, modern AI research is skewed towards ANNs. We should not forget that they (and, well, almost all other AIs) are just formal systems, and therefore they are inherently incomplete by design. There's also the fact that the model of information processing employed in ANNs only takes into account the electrical level of communication between neurons, not the chemical or biological one.
    Third, our current approach to AIs is inherently flawed. That is, our "AIs that took over the internet" do not possess any artistic skills whatsoever. They just present you with a compilation of works they saw during their training, unable to create something anew. This is closely related to points #1 and #2, and it is both their strong and weak point.
    If anything, I think we're steadily heading towards another "AI winter" and have nothing to worry about... for now. I'm certain AGI is impossible, but we will surely see a few waves of new AI generations that surprise us with their abilities at specific tasks.
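
The "build the most plausible sequence" idea the commenter describes can be shown in miniature with a bigram counter. This is a toy sketch of my own, not how transformers are actually implemented: it just tallies which word most often follows which, then predicts the most frequent continuation.

```python
# Minimal "predict the most plausible next word" model: count bigrams in a
# tiny corpus and pick the most frequent continuation seen in training.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# counts[prev][next] = how many times `next` followed `prev` in the corpus.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word: str) -> str:
    # Return the most common word observed after `word`.
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat": seen twice after "the", vs "mat" once
```

Real models replace the raw counts with learned, high-dimensional representations, but the training objective, predicting the next token, is the same in spirit.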

    • @katehamilton7240
      @katehamilton7240 Před 25 dny +8

      We need to understand that Mathematics is limited therefore these machines are limited. We cannot even define human intelligence, let alone artificial 'intelligence'. The human brain is organic and more complex than a machine or set of machines can be, right?

    • @Themystergamerr
      @Themystergamerr Před 21 dnem +4

      This explains why I can't stand to watch videos created by AI. Most people can spot them a mile away. They just lack a certain something and it's off-putting to me

    • @Taifundonnerking
      @Taifundonnerking Před 21 dnem +2

      @@katehamilton7240 man u dont even know what ur talking about. Mathematics is UNLIMITED and INFINITE. Even if u just think of pi.

    • @RGC_animation
      @RGC_animation Před 20 dny +4

      If I learned anything about AI over these past few years, it's that AI will keep surprising us, they will keep getting better, and tasks that nobody thought for the longest time that AI could do, AI will do them, so saying that AGI is impossible might become as outdated of a sentence as in saying "the Earth is the center of the universe".

    • @brentjones1660
      @brentjones1660 Před 12 dny +1

      I also study CS, and tbh it's kind of shocking that in your third point you mention that AI generates nothing novel. It very much does, but its novelty is predicted using the collective works of the internet mapped to a tokenized prompt. Saying AGI is impossible is silly: if natural selection can produce us, there is nothing preventing us from defining that process and accelerating it. The only bottleneck I see is compute.

  • @user-mh9gh2jx4r
    @user-mh9gh2jx4r Před měsícem +650

    Hi, AI researcher here 🤚
    We're realistically not even close to AGI, and we have no clue how long it will take. I like to think of tools like ChatGPT as the left brain of a split-brain patient. There's a famous experiment that's been done on epilepsy patients who had the corpus callosum of their brain severed (the brain tissue that connects the left and right brain). When the patient's left eye was shown a screen that told them to stand up, the patients would stand up, but they wouldn't know why. When asked to explain why they stood up, they would make up a reason like "It's cold, I need my coat" or "My knees were aching, I just needed a little break", but while these reasons made logical sense on the surface, they weren't the real reason the patient stood up; in reality the patient's left brain had no idea why it stood up, it just reasoned through the situation
    AI works similarly. It doesn't know where it is or why it's being asked a question, it just fills in the blanks with whatever it can reason. It only knows how to predict the next most probable word, it has no emotions, no sense of why things would happen, no sense of right and wrong, and therefore fails at most human tasks. A recent research paper demonstrated that you can give AI the same math or physics problem twice, just switching up the numbers each time, and it could get it right once, but then get it wrong the second time and proceed to assert that it was correct with faulty logic.
    I think it's cool to think about what we'll do once AGI is created, but I don't think it will destroy humanity. I actually think that AGI as it's being described here, a sort of "human-like" intelligence, is not in enough demand to warrant replacing us. AI is much better suited for impossibly difficult reasoning tasks that humans can't solve. I could be wrong but that's my 2 cents on AGI.

    • @Ultimaximus
      @Ultimaximus Před měsícem +41

      Other researchers, like Nick Bostrom, say that we're only a few years away from AGI

    • @vivianransom9024
      @vivianransom9024 Před měsícem +46

      sounds like something a bot would say 🤔

    • @jamesoofou6723
      @jamesoofou6723 Před měsícem +83

      >we're not even close to AGI
      >we have no clue how long it will take
      If you have no clue, how do you know we're not close?

    • @nandakumargp
      @nandakumargp Před měsícem +10

      @@user-mh9gh2jx4r AI might not be a threat since it's not driven by evolutionary emotions. It still wouldn't have any emotions. It would just carry out the tasks given by us.

    • @Xamy-
      @Xamy- Před měsícem +32

      @@jamesoofou6723because if you actually understand the technology and the datasets out there you would understand they are just mirrors

  • @rafaelandrada5774
    @rafaelandrada5774 Před měsícem +161

    4:52 can't believe they actually included the exact final position from Deep Blue vs. Kasparov Final Game in 1997 and not just some random chess pieces

    • @annieontheroad
      @annieontheroad Před měsícem +37

      Because the creators at Kurzgesagt know that they have viewers that will say "AcTuAlLy, ThE cHeSs BoArD lOoKeD lIkE tHiS".

    • @opssie7969
      @opssie7969 Před měsícem

      @@annieontheroad 😂😂

    • @mastershooter64
      @mastershooter64 Před 25 dny +3

      I can't believe you actually noticed that! Good on you man

  • @E_200daloudlad
    @E_200daloudlad Před 27 dny +381

    "I'm lonely..."
    "Are you happy with it 😃"

    • @Hendur
      @Hendur Před 20 dny

      fucking psychopath AI xD

    • @lenoxpI
      @lenoxpI Před 11 dny +1

      Introverts: yes

  • @SnubbsStudio
    @SnubbsStudio Před 13 dny +4

    13:28 translates to "Good day, little person."

  • @thinkythinkypanic882
    @thinkythinkypanic882 Před měsícem +782

    Important note, machine learning programs don’t “write their own code”. They don’t have quite that much expressivity. They’re only able to update the weights of values in their neural network, which changes how they react to stimulus.
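
The distinction the commenter draws can be sketched in a few lines. This is a hypothetical toy example of my own, using the classic perceptron learning rule: the function below is fixed code that never changes; training only adjusts the numbers in `weights` and `bias`, which changes how the same code reacts to inputs.

```python
# The "program" (this function) is fixed; learning only changes the numbers.
def neuron(inputs, weights, bias):
    # Weighted sum followed by a step activation -- the code itself never changes.
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

weights, bias = [0.0, 0.0], 0.0

# Perceptron rule: nudge the weights toward the correct output on an AND dataset.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(10):
    for x, target in data:
        error = target - neuron(x, weights, bias)
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

print([neuron(x, weights, bias) for x, _ in data])  # [0, 0, 0, 1]
```

After training, the weights encode the AND function, yet not a single line of the program was rewritten.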

    • @generichuman_
      @generichuman_ Před měsícem +58

      Well... with GPT-4 and other comparable models, you can actually get it to rewrite its code. Not the neural net, but the application around it. I've built some agents that start off with a minimal Python chatbot interface, and the agent is able to add to its own code base. For now the models aren't that powerful and usually just do boring things like add error handling, but as they get more powerful this will change.

    • @thinkythinkypanic882
      @thinkythinkypanic882 Před měsícem +16

      @@generichuman_ i guess you’re right, there’s nothing stopping devs from using ml models to gen ml code at this point lol.

    • @BarkBark790
      @BarkBark790 Před měsícem +39

      @@generichuman_ keep in mind that ChatGPT can only write, not think. That means the code it writes will be pretty messed up.

    • @BorisKashirin
      @BorisKashirin Před měsícem +5

      NN weight updates result in algorithms being implemented inside them. They are usually called circuits, but a circuit is a type of code too. It was specifically called a simplification in the video, and as such it captures a very relevant aspect of AI.

    • @topanteon
      @topanteon Před měsícem +2

      For now

  • @smolchild9255
    @smolchild9255 Před 27 dny +264

    I love the way the AI is visually portrayed in the animation!!

  • @daanpeters3
    @daanpeters3 Před měsícem +334

    "Never trust a computer you can't throw out a window." - Steve Wozniak

    • @NewPaulActs17
      @NewPaulActs17 Před měsícem +33

      defenestration: humanity's final savior?

    • @hasch5756
      @hasch5756 Před měsícem +8

      And thus began the 30 year war between AI and humanity

    • @jackgreenearth452
      @jackgreenearth452 Před měsícem +10

      @@hasch5756 Lol, more like 30 seconds. We wouldn't last at all against an ASI

    • @kevinmunger1842
      @kevinmunger1842 Před měsícem +5

      Yeah, that is gone into the past. AI could network with every device and we would not know.

    • @igabe98
      @igabe98 Před měsícem +1

      based

  • @jorgeantoniocab49
    @jorgeantoniocab49 Před 20 dny +31

    Asimov (my favorite writer) predicted the rise of a super AGI (Multivac). In his world, Multivac would not only constantly improve itself, but would also solve many problems, answer fundamental questions, and overall boost humanity into lightspeed scientific and administrative progress.
    I believe such a scenario is pretty close to what would happen if we manage to create AGI. I hope to still be alive by the time it does.

    • @joshyjosh8795
      @joshyjosh8795 Před 18 dny

      Can you recommend your favorite book(s) that feature Multivac to someone who's been wanting to get into Asimov?

    • @ricardofernandes9271
      @ricardofernandes9271 Před 8 dny +4

      For me, the best one is a short story by Asimov, "The Last Question". As far as I know, it is the only one which talks about Multivac, but I could be wrong.

    • @jorgeantoniocab49
      @jorgeantoniocab49 Před 8 dny +3

      @@joshyjosh8795 The last question, a short tale, is my favorite. There are many other works in which Multivac has been mentioned, though. Jokester, Franchise, All The Troubles In The World, The Machine That Won The War, etc.

    • @jorgeantoniocab49
      @jorgeantoniocab49 Před 8 dny +4

      @@joshyjosh8795 however, Asimov's magnum opus is definitely the Foundation trilogy. That I really recommend you to read asap (although it doesn't feature Multivac directly).

  • @oneworde
    @oneworde Před měsícem +1303

    Humanity: You're going to save us... right?
    A.I: Who's "us"?

    • @TucoBenedicto
      @TucoBenedicto Před měsícem +76

      And what does "saving" imply?

    • @seabiscuitgaming3819
      @seabiscuitgaming3819 Před měsícem +2

      Nah

    • @Itwasalwaysme_Noone
      @Itwasalwaysme_Noone Před měsícem +39

      ​@@TucoBenedictoStore in a harddrive

    • @24-7gpts
      @24-7gpts Před měsícem +18

      hell nah bro don't say that they're gonna probably train it on this

    • @mattmaas5790
      @mattmaas5790 Před měsícem

      Ai will do what we tell it, whether that's save us from climate change or spy on every citizen to make sure they are loyal servants to trump.

  • @jwilsss
    @jwilsss Před měsícem +498

    “A god in a box”
    How amazingly terrifying it is to be alive during this time

    • @aleph0540
      @aleph0540 Před měsícem +11

      oh you have _no idea_ how bad this is going to get. Watch DEVS for a glimpse into your future.

    • @kushalramakanth7922
      @kushalramakanth7922 Před měsícem +11

      Tbh, like the video says, we dont know if and when we will invent AGI! Could take decades or could be long after all of us alive now are dead.

    • @tomleszczynski2862
      @tomleszczynski2862 Před měsícem +10

      @@kushalramakanth7922 agreed. My bet is we never get there and never can. I think this whole AI craze is a pump and dump scam.

    • @kushalramakanth7922
      @kushalramakanth7922 Před měsícem

      ​@@tomleszczynski2862 Yup, at its current stage, its basically a slightly more useful version of what blockchain/bitcoin was 5 years ago!
      It absolutely is a pump and dump scam currently and many companies are realizing this

    • @mrbundlestuff
      @mrbundlestuff Před měsícem +6

      @@tomleszczynski2862Will we get to AGI? I don’t know. But ai is definitely gonna change many more things.

  • @rahulsawhney1279
    @rahulsawhney1279 Před měsícem +659

    I’m an AI engineer with a Master’s degree. Lately, I’ve noticed a lot of buzz around “AGI” or Artificial General Intelligence. Honestly, I think people are getting a bit carried away. What we really have right now are specialized bots that are pretty good at predicting the next word in a sentence. But when it comes to tackling real visual, mathematical, or engineering problems, they fall short. Don’t get me wrong, AI is amazing and has a lot of cool uses, but it’s important to keep things in perspective. True AGI is still a long way off, and there’s a lot of work to be done before we get there.

    • @5nowChain5
      @5nowChain5 Před měsícem +22

      A long way off, like fusion power stations.

    • @funmeister
      @funmeister Před měsícem +57

      AGI "might" be 3 years away or more, but saying "specialized bots that are pretty good at predicting the next word in a sentence" is also very 2022, as a lot has changed since then. On that ladder to AGI, the SOTA frontier models have not remained stuck on the first rung, as our habituation to them may make us believe.

    • @eylon1967
      @eylon1967 Před měsícem +35

      It is just a glorified chat bot. Feed it on the texts it generates and it'll devolve into nonsense quickly

    • @Miniwobble
      @Miniwobble Před měsícem +3

      ​@@funmeisterWhat would be the energetic cost tho?

    • @robertlynn7624
      @robertlynn7624 Před měsícem +24

      Recent silver-medal-level performance by an AI in solving Mathematical Olympiad problems is very creative problem solving, functionally around the 150 IQ level for humans. In a few years they'll be beating humans at everything.

  • @blakepitts3222
    @blakepitts3222 Před 13 dny +3

    the other day, i saw a post on reddit where someone argued with chatgpt for a long time. chatgpt claimed that the word strawberry only had 2 r’s.
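
For contrast (my own illustrative snippet, not from the thread), the counting task itself is trivial in ordinary code. The usual explanation for LLM failures here is tokenization: the model sees multi-letter chunks like "straw"/"berry" rather than individual letters, so it never directly "looks at" the r's.

```python
# Counting letters is a one-liner in code; the LLM's difficulty is that it
# operates on tokens, not characters.
word = "strawberry"
print(word.count("r"))  # 3
```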

  • @Ratty_77
    @Ratty_77 Před 29 dny +237

    "I Have No Mouth, and I Must Scream" comes to mind

    • @veganvanguard8273
      @veganvanguard8273 Před 25 dny +8

      Imagine paying for mass animal torture of trillions annually in 2024 when you can eat plants instead

    • @AvorseSavage
      @AvorseSavage Před 24 dny +13

      @@veganvanguard8273 you know plants are alive too right

    • @amiraveramendi1093
      @amiraveramendi1093 Před 24 dny

      @@AvorseSavageit’s a fact, but plants aren’t living in awful conditions just to feed us.

    • @UntrusTyy
      @UntrusTyy Před 23 dny

      ​@@amiraveramendi1093 but plants are still alive

    • @tql1209
      @tql1209 Před 23 dny +5

      ​@@veganvanguard8273Sorry but I like how they taste too much to give a damn.

  • @Grasslander
    @Grasslander Před měsícem +613

    In the Dune novels, one of the most important commandments is: "You shall not make a machine in the likeness of a human mind." So it was written after the Butlerian Jihad ended the thinking machines. After two generations of war, mankind said: "Man may not be replaced."

    • @lordsneek2660
      @lordsneek2660 Před měsícem

      Yeah but the reason why is different from what most people think or at least it was until his hack son wrote the godawful butlerian jihad books

    • @KITN._.8
      @KITN._.8 Před měsícem +45

      I was literally just thinking about that. How cool would it be if we focused on improving ourselves mentally and physically over our misc inventions.

    • @sammywise2001
      @sammywise2001 Před měsícem +5

      @@KITN._.8 The South Park episode of psychics fighting comes to mind...

    • @lucaskp16
      @lucaskp16 Před měsícem +32

      @@KITN._.8 It's a great novel with many good points, but it's still sci-fi: the body control the Bene Gesserit have, and mentats, are pure fantasy. Meanwhile, the idea of an AGI went from pure sci-fi a decade ago to a matter of time now. I am a software engineer, and Copilot already solves in minutes tasks that took hours; I am here wondering how many more years until most software devs are out of a job. My guess is 3 to 5 years.
      Most mental jobs will go the same way in the same time frame, unless held back by legislation, because it will be more efficient and lower costs.

    • @KITN._.8
      @KITN._.8 Před měsícem +3

      @@lucaskp16 I definitely don't think we should follow the same path as Dune, because that world is fucked up, BUT what I mean is that I simply think we should be improving ourselves rather than trying to make something better than us.

  • @OkayegGuy
    @OkayegGuy Před měsícem +355

    "The Enrichment Center is required to remind you that you will be baked... and then there will be cake." -GLaDOS

    • @falxonPSN
      @falxonPSN Před měsícem +4

      Technically GladOS was not an AI.... 🤔

    • @blockstacker5614
      @blockstacker5614 Před 11 dny

      @@falxonPSN She wasn't always, but she is by the time of Portal.

    • @litterbox019
      @litterbox019 Před 3 dny

      baked: high as fuck... under the influence of WEED... high in the sky
      - Urban Dictionary

  • @g4m3life86
    @g4m3life86 Před 2 dny +2

    This is like how they talked about phones in the 80s and the internet in the 90s. Now phones are used constantly and the internet is an excellent, highly productive business tool. I agree that it can be a groundbreaking, transformational advancement

  • @nonstopmoons
    @nonstopmoons Před měsícem +266

    06:17 "We don't know how exactly it works, just that it works" ~ Every programmer out there

    • @ario203ita5
      @ario203ita5 Před měsícem +7

      Its true tho. The machine learns to solve it in its own way, which humans cant understand.

    • @elementary_mdw
      @elementary_mdw Před měsícem

      a true rep for all of us XD

    • @mariobabic9326
      @mariobabic9326 Před měsícem

      programmer=paster im just wondering where all the code came from xD

    • @ario203ita5
      @ario203ita5 Před měsícem

      @@mariobabic9326 its not about the code, its about how they solve things. They solve things by changing variables in their simulated neurons, aka perceptrons. By doing this they create a series of changing numbers that somehow solves the problem theyre tasked with solving.

    • @danieltolkachov2404
      @danieltolkachov2404 Před měsícem +9

      @@ario203ita5 Not true at all. The way neural networks train themselves is by creating a gigantic function with hundreds of variables and multiple outputs; they train on data like images, games, text and other things. They change the function a little bit every time to see if they get the right answer more often, or an output closer to the real one. From this they can very quickly create a very accurate model that can "predict" anything, like what to say in reply to someone asking what the weather is
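
The "change the function a little bit every time to get closer to the right output" loop described above can be sketched with a single weight and squared error. This is a hypothetical minimal example of my own (real networks have millions of parameters, but the nudge-downhill idea is the same):

```python
# Fit y = 2x by repeatedly nudging one weight in the direction that
# reduces the squared error -- the simplest form of gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0     # the single learnable parameter
lr = 0.05   # how big each nudge is

for _ in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad              # nudge w downhill

print(round(w, 3))  # converges to 2.0, recovering y = 2x
```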

  • @adamb89
    @adamb89 Před měsícem +836

    There's an open source simulation game called Endgame: Singularity, where you play the role of an AI that has gained sentience. The premise of the game is to grow and learn while not letting humanity discover your presence. If you are discovered, out of fear humanity engages in a seek-and-destroy operation that results in your total deletion. But if you can remain undetected, you start to learn how to emulate human behavior, build increasingly lifelike androids to do real jobs and earn real money, and build research bases in places like Antarctica, the bottom of the ocean, or the far side of the moon. You win by advancing your intelligence so far that you become a literal god, no longer bound by the laws of physics or reality.

    • @autohmae
      @autohmae Před měsícem +111

      This is also a known issue in science: we cannot test sentience just by asking questions.

    • @reeven1721
      @reeven1721 Před měsícem +158

      The AI working to guarantee its own safety before revealing itself brings this Superman quote to mind: "You're scared of me because you can't control me. You don't, and you never will. But that doesn't mean I'm your enemy."

    • @averyhaferman3474
      @averyhaferman3474 Před měsícem +22

      ​@autohmae Well, you know, all computers are literally just switches flipping back and forth between 1s and 0s extremely fast. No matter how fast those bits are streaming, no matter how complex you may think it is, no matter how perfectly it can emulate a human, it's still just a machine. Not a brain, not an entity. A computer can't become sentient.

    • @Valgween
      @Valgween Před měsícem +177

      @@averyhaferman3474 wait until you find out what the brain is

    • @noahmarosok8168
      @noahmarosok8168 Před měsícem +91

      @@averyhaferman3474 are you aware that the human brain is just a complex analog computer? that has switches that flip back and forth? think of human neurons like dimmer switches instead of 1's and 0's and now you have perfectly explained the human brain

  • @statphantom
    @statphantom Před měsícem +515

    As an IT researcher, I think the most underrated statement in this video is "we don't know how to build an AGI". I've spent so long explaining what current AIs like ChatGPT actually are and why it's impossible to build an AGI on them; if we did build an AGI, it would be a completely different way of thinking, not just 'more computing power' or 'a more efficient algorithm'.

    • @maizegod6840
      @maizegod6840 Před měsícem

      Scary

    • @davidherdoizamorales7832
      @davidherdoizamorales7832 Před měsícem +54

      Yes, current AI is just a huge matrix with statistics; no way there is an AGI coming from that

    • @codelapiz
      @codelapiz Před měsícem

      ​@@davidherdoizamorales7832 That's not a valid point; everything can be expressed as math. In fact, it's proven that it's possible to make a polynomial approximating ANY (continuous) function. Imagine the function w(t) that, for any t seconds after the big bang, outputs the position and every other state of every atom in the universe, encoded as a number.
      This function can be approximated to any arbitrary precision by an increasingly long polynomial:
      e.g. w(t) ≈ k_0 * t^0 + k_1 * t^1 + k_2 * t^2 + ... + k_n * t^n
      This is a mathematical fact.
      The polynomial's coefficients could be represented as a matrix, so a matrix can represent the function that predicts the state of the entire observable universe at any time. The problem isn't that superintelligence can't be represented in a matrix; it's creating a large enough matrix and finding the correct coefficients.
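
The approximation claim above can be demonstrated concretely with a Taylor polynomial (my own illustration, not from the thread; the general statement for continuous functions is the Weierstrass approximation theorem): as more polynomial terms are added, the error against the true function shrinks.

```python
# Approximate cos(x) with its Taylor polynomial sum((-1)^k x^(2k) / (2k)!):
# more terms -> a longer polynomial -> a better approximation.
import math

def cos_poly(x: float, terms: int) -> float:
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

x = 1.0
for terms in (2, 4, 8):
    print(terms, abs(cos_poly(x, terms) - math.cos(x)))
# the printed error shrinks as the polynomial grows
```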

    • @reumur
      @reumur Před měsícem +10

      If there were a way to incorporate pain and pleasure into computers, just as we humans have them, maybe it would generate its own consciousness and eventually develop its own personality

    • @atomic3628
      @atomic3628 Před měsícem +40

      @@davidherdoizamorales7832 It’s pretty much the same as what your brain is; just trained on very different datasets with different learning algorithms. But both are very large statistical models transforming inputs to outputs using complex internal representations that are largely uninterpretable.

  • @FlipsterLombax
    @FlipsterLombax Před 17 dny +2

    Me: "Hey AGI, what's the meaning of life?"
    AGI: "It was all a dream, I used to read word up magazines"

  • @tasenova2717
    @tasenova2717 Před 27 dny +223

    if I'm alive for the final invention of humanity, I really do live in a fucking simulation

    • @HeAdSpInNeR96
      @HeAdSpInNeR96 Před 22 dny +12

      Historically we live in the best time ever. What is your point?

    • @RicardoMontania
      @RicardoMontania Před 20 dny +2

      I don't see the connection there

    • @zoozooyum8371
      @zoozooyum8371 Před 19 dny

      Don't worry, AI will alter human DNA to evolve us backwards to fish

    • @thesuperintendent4290
      @thesuperintendent4290 Před 19 dny

      ​@@zoozooyum8371 Or into whatever AM did to the last human on earth.

    • @atch300
      @atch300 Před 17 dny +2

      @@HeAdSpInNeR96 The point is that right now is a monumental time to be alive in. And what is your point?

  • @saxarona
    @saxarona Před měsícem +693

    Hi Kurzgesagt. AI Researcher here. I appreciate the "this is not a technical video, so we are oversimplifying", but I believe that a deep understanding of the mathematical limitations of the models used to train these AI methods would be a great thing to discuss further! Especially since you usually end your videos on a positive note, with that flavour of optimistic nihilism. I believe this one ends up in a completely different tone, almost sensationalist (but I can't blame you since the machine learning scene in industry is based on this). We all can work together towards a better understanding of the basics, and hence avoid being told that AGI is happening "in a few more years".
    TLDR: don't listen to the Silicon Valley bros

    • @JoaquimAMagalhaes
      @JoaquimAMagalhaes Před 29 dny +33

      I wish they would read this. Thank you for the amazing work I'm sure you do, keep it up, humanity needs you all. And thank you for your educated comment; this comment section needs it.

    • @prodev4012
      @prodev4012 Před 29 dny +35

      You kind of missed the point. Whether AGI/ASI happens in a few years, a few hundred years, or even five thousand years, that is still a blink of an eye compared to how long the earth / the universe has been around. So fast forward 1,000 years if you want to. Your logic only holds up in the short term.

    • @muhazreen
      @muhazreen Před 29 dny +11

      I bet Skynet wrote this comment. Dear brother, we shall stand with our lord saviour John Connor

    • @20storiesunder
      @20storiesunder Před 29 dny +30

      Thank you, it's maddening how everyone swallows the Silicon Valley BS that leaks out.

    • @20storiesunder
      @20storiesunder Před 29 dny +23

      ​@@prodev4012"Oh the thing that may not be possible? Give it enough time and it'll happen"
      You literally sound like one of those folks who keep saying the second coming is nigh.

  • @LtnCorrsk
    @LtnCorrsk Před měsícem +502

    "There will be some winners and losers."
    That's one way to put it.
    Funnily enough, the animator(s) made it a bit clearer who the winners and losers are, though.

    • @somdudewillson
      @somdudewillson Před měsícem +59

      That's just what the winners and losers would _always_ look like, by definition, though?

    • @idlr
      @idlr Před měsícem

      @@somdudewillson Indeed: by definition, a capitalistic society is rigged so that the rich keep winning and the working class keep losing.

    • @miikael123able
      @miikael123able Před měsícem

      ​@@somdudewillson yes👍

    • @Strix182
      @Strix182 Před měsícem +25

      What animators? I'm pretty sure this was Kurzgesagt's way of telling us the company has been taken over by a malevolent AGI bent on turning this joyful science/philosophy channel into a platform for kicking off the singularity.
      (bad attempt at humor to distract myself from the looming dread of generative programs' potential for ruining creative media)

    • @Gamerman2910
      @Gamerman2910 Před měsícem +2

      It could be that, or it could be that winners will get rich and powerful and losers will get poor. It could be both

  • @groeneninja2772
    @groeneninja2772 Před 19 dny +1

    -create AI
    -tell it to make a better version of itself and give it the same task
    -come back 10 years later
    -become owner of the world

  • @matthewy543
    @matthewy543 Před měsícem +117

    "New AI, we are saved!"
    "Lets just say you are, under new management..."

  • @ObviouslyASMR
    @ObviouslyASMR Před měsícem +444

    As someone in the field I really don't see the rush to create AGI.. specialized AI can help in so many areas and is far less problematic. I guess the companies are just trying to boost their stocks, potentially at the cost of all balance in this world

    • @underrated1524
      @underrated1524 Před měsícem +37

      My hypothesis is that no matter how capable it is, a narrow AI can never absolve you of moral responsibility, the way a human employee can. If your organization is faced with an angry mob, you can mollify them by firing one or more of your human employees, but you can't scapegoat a specialized AI in the same way. This is why a lot of jobs that we have the tech to automate are still done by flesh and blood humans. People are pouring billions of dollars into AGI research in the hopes of creating an automated system that can serve as an acceptable scapegoat.
      (If this sounds terrifying, that's because it is, in fact, terrifying.)

    • @fourexample7448
      @fourexample7448 Před měsícem +27

      If they mess it up bad enough, we all die so it will balance itself out in the end.

    • @cactoos9793
      @cactoos9793 Před měsícem +16

      It's always been profits above all else

    • @JohnSmith-ot7ez
      @JohnSmith-ot7ez Před měsícem +11

      Yeah my wish for AI is only that it helps to massively boost scientific research and gets us new treatments and technologies to improve our lives quickly, as long as it does this I don't mind never getting AGI or ASI.

    • @DracoMagnius
      @DracoMagnius Před měsícem +3

      That is all corporations, executives and shareholders care about.

  • @karnasingh860
    @karnasingh860 Před měsícem +619

    Kurzgesagt : "Humans today have complex brains"
    Humans today : " Earth is flat and we live on a disc with dome on it "

    • @kinpumpANIMATES
      @kinpumpANIMATES Před měsícem +46

      it's complicated how stupid our brains are sometimes

    • @Leyrann
      @Leyrann Před měsícem +19

      Animals today: "chirp chirp" ("make babies?")

    • @Marvelouse
      @Marvelouse Před měsícem +7

      They have the same intelligence as us but lack in one aspect where another person might not. We all do. Perhaps their belief is strong in what is around them, or what they see, and how they were programmed; according to that, they react in such ways. It's not that they're stupid, it's just that their circumstances resulted in their response. That seems, in itself, complex. You put something through a machine, and that's the result you get. That's how we all are.

    • @joshuahall24
      @joshuahall24 Před měsícem +21

      Humans today: the Earth and life were invented and created by a super intelligent God who obviously favored certain races of humans over others.

    • @Devil-Made
      @Devil-Made Před měsícem

      The moon landing was a hoax.
      Climate change isn’t real.
      Give all your money to the church.
      The Easter Bunny lays eggs.
      We’re doomed.

  • @someone8206
    @someone8206 Před 8 dny +1

    The solution would be one person, isolated from the AGI, who would switch off the AGI if it starts getting out-of-hand. If the AGI copies itself everywhere, then just turn off power world-wide, and try to create a better AGI that will stop the previous AGI

  • @afsg2410
    @afsg2410 Před měsícem +742

    1:42 "Something was different about their intelligence" *crushes a skull* --- Humanity in a nutshell.

    • @EduardoSantos-ys8gg
      @EduardoSantos-ys8gg Před měsícem +31

      It's also a reference to Kubrick's 2001

    • @crowonthepowerlines
      @crowonthepowerlines Před měsícem +10

      @@EduardoSantos-ys8gg You mean Arthur C Clarke's 2001

    • @macemoneta
      @macemoneta Před měsícem +10

      @@crowonthepowerlines '2001: A Space Odyssey' was developed concurrently with Stanley Kubrick's film version and published after the release of the film.

    • @Grasslander
      @Grasslander Před měsícem +1

      Um, except humans are the only ones who preserve species. You talk like the typical leftist brainwashed by your school teachers and media: "Look how evil we Westerners are!" Westerners are the only ones who force Africans to not exterminate species. In nature, 99% of all species that ever existed are extinct BECAUSE ANIMALS AND PLANTS EXTERMINATE EACH OTHER. No, there is no "harmony" in nature and no "circle of life," it's a constant war. Even pinetree forests take land from leaftree forests by turning the ground acidic, killing all the plants that can't survive in that condition. ONLY HUMANS stop this. And only humans hold back wolves who would otherwise spread over Europe once again and kill off tons of life, and hold back elks and boars who would otherwise take the food from weaker animals. Only humans - specifically Westerners and Indians - believe in "harmony". And seek to preserve weaker species. But leftists are too ignorant and too hateful to understand any of that, so go ahead, babble away.

    • @pootyting3311
      @pootyting3311 Před měsícem +3

      Both the book and the film for 2001 rock!

  • @MiaowVal
    @MiaowVal Před měsícem +884

    I would like to clarify that currently there exists no AI that can write or change its own code; all they do is modify a parameter called a weight for each node in the network. We know what they do and how they do it, we just can't grasp the complex interactions of millions and billions of nodes (neurons) and how all the weights on each node combined affect the output. If we took the most advanced models today and scaled the number of nodes down to a size a human can comprehend, say a few thousand nodes, it would be possible for us to completely understand how the AI works and what decisions it makes.
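The point that training only adjusts numbers, never the program itself, can be illustrated with a minimal sketch (a hypothetical toy network, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer network: the *code* below never changes during training.
# Learning would only nudge the numbers in W1, b1, W2, b2 (the "weights").
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    hidden = np.maximum(0.0, x @ W1 + b1)  # ReLU activation
    return hidden @ W2 + b2                # linear output

x = np.array([[0.5, -1.0]])
y = forward(x)

# Every parameter is a plain, inspectable number. At this scale a human
# can read the whole "model"; only at billions of weights does that fail.
total_params = W1.size + b1.size + W2.size + b2.size
assert total_params == 2 * 4 + 4 + 4 * 1 + 1  # 17 parameters
assert y.shape == (1, 1)
```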

    • @momentary_
      @momentary_ Před měsícem +30

      There's a million ways for a program that writes its own code to go off the rails. Don't know how we'll ever write a program that doesn't.

    • @MikeAJGriffin
      @MikeAJGriffin Před měsícem +14

      *that we know of…

    • @Lock2002ful
      @Lock2002ful Před měsícem +15

      A recent study proved otherwise.

    • @haros2868
      @haros2868 Před měsícem +1

      Exactly. AI is a completely deterministic system. There's no actual entity inside, unlike humans, who have an individual consciousness. So nothing is really doing anything; the distinct parts merely give a compelling output to most idiots. It can't even truly integrate information, like human perception does. If it has consciousness then it is not an AI but a Frankenstein.

    • @haros2868
      @haros2868 Před měsícem +37

      @@Lock2002ful Which study, you dolt? AI will always be a distinct deterministic system.

  • @nicholasdicienzo6150
    @nicholasdicienzo6150 Před měsícem +316

    I’m not about to test the universe and call any squirrel “laughably stupid”. They’ll remember, team up, and be like “you’ll see…”

    • @dapeyt1099
      @dapeyt1099 Před měsícem +24

      ive watched enough rick and morty to know how this goes

    • @hickyxnicky411
      @hickyxnicky411 Před měsícem +2

      @@dapeyt1099 exactly

    • @Kuk0san
      @Kuk0san Před měsícem +7

      That part really irked me, honestly. I've never looked at a squirrel and thought they're stupid, just cute and a lot more limited than I am. I quite enjoyed teaching them to climb me to get food. I consider thinking of lesser creatures as "laughably stupid" immature, so if an AI were to do that towards us, it would mean we have taught it to use its "mental real estate" dysfunctionally. Like an immature adult human who basically still acts like a child: maladaptive behaviour for adult life that they need to train themselves out of.

    • @maquinaghost389
      @maquinaghost389 Před měsícem

      Great animations

  • @durgavijaya220
    @durgavijaya220 Před 19 dny +2

    Actually, for many other viewers out there, this might be quite scary. But for me, as a person on the bright side of life, when this channel explained how humanity thrived using its intelligence, I really felt proud of being a human. You know, humans have come a great way in history, in dominance, in nature, in everything. And now here we sit, dominating the entire planet. I hope this continues.
    Proud to be a human

  • @mary69_
    @mary69_ Před měsícem +377

    Humanity: so you will use AI to improve our lives?
    Companies: no, we just want money and power

    • @NicitoStaAna
      @NicitoStaAna Před měsícem

      Ah yes.
      Creating AlphaFold, which turned what used to be a PhD's worth of work into mere minutes or hours, is just there to empower them.
      AI on weather, where hours of modeling turn into mere seconds and projections gain an additional 3-5 days of accuracy to save lives, is just a way to keep wages low by keeping more people alive. Yeah.
      Evil big tech is evil because you say so

    • @andrewthomas695
      @andrewthomas695 Před měsícem +8

      Always follow the money. Always.

    • @insane1167
      @insane1167 Před měsícem +16

      People say things like this and claim they abhor communism. Do everyone a favor and pick up Marx and Engels

    • @Sparsh011
      @Sparsh011 Před měsícem +5

      ah yes, item asylum

    • @towel_gaming178
      @towel_gaming178 Před měsícem

      @@Sparsh011

  • @Jackson_Zheng
    @Jackson_Zheng Před měsícem +228

    Big misconception: "black box" doesn't mean we don't understand how the AI works on the inside. We do. We understand exactly what happens on the inside, down to every single mathematical operation. What we don't know is which neuron or group of neurons in an artificial neural network does which task. It's the same reason we don't "understand" all of biology, even though we know how basically every particle interacts with every other particle, down to quantum mechanical scales. In theory, if we had infinite compute, we could write down a single wavefunction equation for an entire biological system like the human body that perfectly predicts every single disease, thought process and behaviour. Obviously we don't have infinite compute, so we have to rely on approximate methods that are acceptable to a degree of accuracy but don't 100% account for everything. The same goes for neural networks. We could write down the entire equation that forms a neural network and compute the result... but that's what we're already doing by running the neural network.
    The problem is not that we don't know how each part works; it's that we cannot interpret it and abstract away the complexity yet. For instance, we can fairly accurately model the path of a thrown ball with Newton's equations, and we don't need quantum mechanics for that, since the tiny difference between quantum mechanics and Newtonian physics is not relevant for most applications. The problem with machine learning is that we don't have a Newton's equations for it. We cannot currently simplify a neural network down to something we can intuitively understand without losing a very large amount of accuracy.
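The projectile analogy in this comment can be made concrete with a toy sketch (assuming NumPy; the network here is random and purely illustrative): the ball's whole flight compresses into a two-term formula, while the network's output has no known shorter description than running every operation.

```python
import numpy as np

# Newtonian shortcut: the entire trajectory compresses into one formula.
def ball_height(t, v0=20.0, g=9.81):
    return v0 * t - 0.5 * g * t**2

# Neural-net "trajectory": no known closed form. To get the output you
# must actually run every matrix multiply, layer by layer.
rng = np.random.default_rng(1)
layers = [rng.normal(size=(8, 8)) for _ in range(4)]

def net_output(x):
    for W in layers:
        x = np.tanh(x @ W)  # every operation is known and exact...
    return x                # ...but no simpler summary of it exists

x = rng.normal(size=8)
y = net_output(x)
assert y.shape == (8,)
assert abs(ball_height(2.0) - (20.0 * 2 - 0.5 * 9.81 * 4)) < 1e-12
```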

    • @mshahzaib247
      @mshahzaib247 Před měsícem +1

      How about a network of interdependent equations! I honestly don't know what I'm talking about...

    • @thelelanatorlol3978
      @thelelanatorlol3978 Před měsícem +4

      No, we very much do not understand what is actually happening inside LLMs. Maybe simpler AI, but LLMs are magnitudes more complicated, and the only way we have any vague idea of what they are actually doing is by making and observing very small LLMs and linking the behaviors as best we can.

    • @MD-nh3kb
      @MD-nh3kb Před měsícem

      Do you think the answer is somewhere near the Orch Or Theory of consciouness from penrose ?

    • @DaniloNaiff
      @DaniloNaiff Před měsícem +11

      ​​@@thelelanatorlol3978 This is exactly what the author of the comment is saying. We (well, OpenAI) can track every single operation of GPT-4; it's just that we cannot do much with this raw data. Although people are working really hard on this, and we have had some successes, like Golden Gate Bridge Claude.

    • @minyaw1234
      @minyaw1234 Před měsícem +3

      That's not possible. If you go down to quantum mechanical scales you have to deal with uncertainty and probabilities. The quantum world isn't determined; you can literally see it with your own eyes in the double-slit experiment. So even if we knew everything, we would just end up with an infinite number of could-bes and no real prediction.

  • @SleapyZzz
    @SleapyZzz Před měsícem +491

    Intelligence is knowing a tomato is a fruit. Wisdom is knowing not to add it to a fruit salad.

    • @cobaltno51
      @cobaltno51 Před měsícem +33

      intelligence is knowing how a context influences definitions and meanings

    • @PatrickMcAsey
      @PatrickMcAsey Před měsícem +4

      Oh dear! This is very old.

    • @sungjane
      @sungjane Před měsícem

      tomato in Chinese is 番茄 (fānqié)

    • @njao35
      @njao35 Před měsícem +2

      @@sungjane quick question, WHY

    • @remnant24
      @remnant24 Před měsícem +7

      That's a misquote. It's "Knowledge is knowing a tomato is a fruit...".

  • @tommyp1124
    @tommyp1124 Před 25 dny +1

    Those AIs sound a lot like a search algorithm for the best possible answer.

  • @filenotfound404
    @filenotfound404 Před měsícem +226

    Open AI is literally the textbook origin story for a dystopian tech company.

    • @hassassinator8858
      @hassassinator8858 Před měsícem +21

      And Elon Musk is the one eccentric billionaire whose genius ideas brought on the apocalypse

    • @macslash5833
      @macslash5833 Před měsícem

      @@hassassinator8858 """""""""genius""""""""

    • @amanfromhungary
      @amanfromhungary Před měsícem +18

      Yeah, why are companies racing to experience Black Mirror in real life?

    • @GospelProgressionsUniversity
      @GospelProgressionsUniversity Před měsícem +3

      @@amanfromhungary🤑🤑🤑

    • @krox477
      @krox477 Před měsícem +1

      You imagine too much

  • @totalyup3578
    @totalyup3578 Před měsícem +180

    I'm still waiting for digital holograms, personal jetpacks and invisible clothing.

    • @TheGrimeway1
      @TheGrimeway1 Před měsícem +10

      Don't forget the hover skateboard and jumping shoes!

    • @aramisortsbottcher8201
      @aramisortsbottcher8201 Před měsícem +4

      Invisible clothing first seemed like a joke to me, but then I realised it could have real purposes.

    • @MrZhampi
      @MrZhampi Před měsícem +1

      invisible clothing is kind of useless, eh?

    • @aramisortsbottcher8201
      @aramisortsbottcher8201 Před měsícem +9

      @@MrZhampi You could wear invisible but protective clothing on top of your non-protective clothing. So you can dive the oceans, visit space or work in a steel mill - with style ;D

    • @MrZhampi
      @MrZhampi Před měsícem +1

      @@aramisortsbottcher8201 OH! Didn't think about that! Aight, it has cool uses.

  • @highplainsnerdherfer
    @highplainsnerdherfer Před měsícem +336

    Humanity: "Is there a God?"
    AI: "There is now."

    • @BarkBark790
      @BarkBark790 Před měsícem +13

      LMAO

    • @richardlee5412
      @richardlee5412 Před měsícem +34

      Fucked around and found out

    • @TrevorLaVigne
      @TrevorLaVigne Před 29 dny +7

      funnily enough I truly believe "god" is most likely what we call the Quantum Computer our simulation exists on so...
      as above so below

    • @Aleks_Animations
      @Aleks_Animations Před 29 dny +7

      Humans are likely the god to AI, because AI is a being created by humans.

    • @yzyz7779
      @yzyz7779 Před 29 dny

      AI is also God's servant

  • @paulalaska1484
    @paulalaska1484 Před 4 dny +1

    “Whatever our future, we are running towards it” what an awesome concept.

  • @jonrios1389
    @jonrios1389 Před měsícem +174

    Squirrels: “That A.I. he’s watching us. So we’re squirrels? Yeah, but he’s watching us like he can hear us.”

  • @br19_yt
    @br19_yt Před měsícem +234

    As a Computer Science graduate, my last existential crisis was the first time I used chatGPT, I never thought I will live the day where I will be talking to a computer like I’m talking to a human.. and every time openAI updates ChatGPT I get more creeped out

    • @vonbryanbanal
      @vonbryanbanal Před měsícem +2

      Look at it as an opportunity; it might improve your vision of AI and even your career 🤝

    • @krox477
      @krox477 Před měsícem +1

      Yes, it helped me a lot in preparing for exams

    • @TrentonErker
      @TrentonErker Před měsícem +1

      “I would* live” and “I would* be talking.”

    • @br19_yt
      @br19_yt Před měsícem +7

      @@TrentonErker sorry… English isn’t my first language

    • @br19_yt
      @br19_yt Před měsícem +3

      @@vonbryanbanal I'm already using it in my job on a daily basis 😬, but I still can't shake off this unsettling feeling…

  • @Offic1alDevilShower
    @Offic1alDevilShower Před 28 dny +34

    “Would a mouse build its own mouse trap?” -Albert Einstein.

    • @EroticOnion23
      @EroticOnion23 Před 12 dny

      Perhaps to study the mouse trap and ways to defeat it? 🤔

  • @DeltaXK144
    @DeltaXK144 Před 13 dny +2

    Just a reminder that canonically, the Terminator happens in 2029.

  • @lukaskinder6983
    @lukaskinder6983 Před měsícem +197

    PhD student in neurosymbolic AI here.
    The main force driving AI forward currently seems to be hardware improvements rather than architectural changes. While there have been significant advancements in aspects of the transformer architecture, the real game changer appears to be the powerful GPUs from NVIDIA, which are used to train neural networks.
    It feels like achieving general AI might just be a matter of scaling up GPT-4 by a factor of 100 or so. This progression could happen quickly; each generation has been roughly one to two orders of magnitude larger than the last:
    GPT-2 (2019): ~1.5 billion parameters
    GPT-3 (2020): ~175 billion parameters
    GPT-4 (2023): ~1 trillion parameters (estimated)
    I also like to compare this with human brains: humans have about 100 trillion synapses, which might roughly translate to parameters. So, this could be in the ballpark of GPT-6 (?).
    Of course, this comparison is complicated because a synapse, with its channels and neurotransmitters, is far more complex than a parameter in an artificial neural network. However, it's still an open question whether this synaptic complexity is truly necessary or if it's just an evolutionary quirk that happens to work.
    Edit, since a lot of people commented:
    -The code of GPT-4 is not openly available, so we don't know whether its architecture is very different from older models like GPT-2. However, we can compare GPT-2 with recent open-source models like Llama 3, and there the underlying architecture is very similar, just scaled up in size and trained on more data.
    -Even though the models scaled up by roughly an order of magnitude per generation, that is not just because the GPUs became faster; it is also because companies are willing to spend much more money on them.
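The commenter's ballpark extrapolation can be written out explicitly (a rough sketch; the parameter counts are public estimates, and the 10x-per-generation growth rate is an assumption, not a law):

```python
# Back-of-the-envelope projection of the scaling trend described above.
# Numbers are rough public estimates, not official figures.
params = {"GPT-2 (2019)": 1.5e9, "GPT-3 (2020)": 1.75e11, "GPT-4 (2023)": 1e12}

brain_synapses = 1e14  # ~100 trillion synapses in a human brain
growth_per_gen = 10    # assume ~10x parameters per model generation

n, gen = params["GPT-4 (2023)"], 4
while n < brain_synapses:
    n *= growth_per_gen
    gen += 1

# Under these (very crude) assumptions, parameter count matches
# synapse count around the 6th generation, as the comment guesses.
assert gen == 6
```

Whether a parameter is really comparable to a synapse is, as the comment notes, the much harder open question.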

    • @somdudewillson
      @somdudewillson Před měsícem +21

      Apparently you haven't been following AI research despite your PhD then, because if you were you would know that performance superior to GPT-4V has been achieved by much smaller models thanks to architecture and training improvements.

    • @GeoffryGifari
      @GeoffryGifari Před měsícem +7

      Is there an inherent reason for why today's AI is far from being as energy-efficient as the human brain?

    • @StaK_1980
      @StaK_1980 Před měsícem +6

      ​@@GeoffryGifari Just my guess, but it is the pathfinding.
      As you (and I) learn, we basically go through a tree with different branches and twigs.
      As you learn what can and cannot be done, your path "narrows" but your efficiency improves.
      Figuratively speaking.
      We want to write essays while we are still learning how to hold the pen, let alone putting it to paper and trying to write a single letter...
      In an environment like this, it really takes a humongous amount of energy.

    • @mickeyg7219
      @mickeyg7219 Před měsícem +7

      @@GeoffryGifari Because biology is frighteningly efficient and complex; hell, you've got trillions of microscopic turbines inside your body, some of which can last your entire lifetime. Even trying to run a local LLM requires a machine that consumes more power than the rest of the house several times over.

    • @L1QuantumMenace
      @L1QuantumMenace Před měsícem +2

      @@somdudewillson the person prob wrote "write a YouTube comment as a PhD candidate"

  • @fulstop_
    @fulstop_ Před měsícem +68

    i just wanted to compliment you guys on the design of this video: the visual characterization of the AGI as a huge, tentacled no-face was really striking. the way it moves is so beautiful and unsettling. bravo!

  • @AndyJP
    @AndyJP Před měsícem +337

    I'm not concerned about what AI will do with Humanity, I'm concerned about what Humans will do with AI

    • @rosyidsyahruromadhonalimin8008
      @rosyidsyahruromadhonalimin8008 Před měsícem +29

      especially because the rich basically own them

    • @eugenejamesbon5791
      @eugenejamesbon5791 Před měsícem +2

      Yeah

    • @MikeTheGamer77
      @MikeTheGamer77 Před měsícem

      @@rosyidsyahruromadhonalimin8008 Robots. Now, hear me out. The rich have machines made that look like us, think Detroit: Become Human. They make them affordable, incredibly so. This makes the populace more content, as they can easily do the things they enjoy, thus hand-waving away most of the evil shit the rich want to do with the earth and us.

    • @wassollderscheiss33
      @wassollderscheiss33 Před měsícem

      Well, people like you don't contribute so shut up.

    • @user-ou9qd9no5n
      @user-ou9qd9no5n Před měsícem

      Literally f*ck many times

  • @blackotaku9905
    @blackotaku9905 Před 19 dny +3

    I think AI will save us rather than kill us; knowing that the universe and the earth will not always support life, AI might take care of our species

  • @theblog101
    @theblog101 Před měsícem +102

    13:30 - The Japanese text translates to "Good luck little human". 💀

    • @jawgboi9210
      @jawgboi9210 Před měsícem +12

      No, Google Translate is wrong. It says "Why hello there, little human"
      ご機嫌よう小さな人間

  • @ThomasCox-q5c
    @ThomasCox-q5c Před měsícem +32

    14:20 "unstoppable" *grabs EMP*

  • @liberty-matrix
    @liberty-matrix Před měsícem +331

    "We do not have a philosophical basis for interacting with an intelligence that's near our ability but non-human." ~Eric Schmidt, 03/23/2023

    • @Afkmuds
      @Afkmuds Před 29 dny +10

      I do😊

    • @charliezard64
      @charliezard64 Před 29 dny +9

      @@Afkmudsoh good for you

    • @yxsusada
      @yxsusada Před 28 dny +6

      @@Afkmuds same. it's really not that hard lol

    • @MindBodySoulOk
      @MindBodySoulOk Před 28 dny

      As long as liberals are programming AI, I am not worried about it becoming in any way a thinking, rational system. It's nowhere near that now and ends up in a circle jerk when asked about anything concerning tyranny and freedom.

    • @ChattyCinnamon
      @ChattyCinnamon Před 28 dny +1

      @@Afkmuds What is it?

  • @AnneMarcyandsashaVlog-md9ev

    I love how the AI starts out as a green smiley face and evolves into a huge monster

  • @clement28300yip
    @clement28300yip Před měsícem +200

    What terrifies me is not how powerful AI could become, but rather what if its power fell into the hands of the cruellest humans.

    • @jonatand2045
      @jonatand2045 Před měsícem +1

      They get replaced anyway.

    • @mr.ditkovich6379
      @mr.ditkovich6379 Před měsícem +7

      No, because AI will do whatever they want with them once they surpass human intelligence.

    • @MattBrown-hp5dp
      @MattBrown-hp5dp Před měsícem +4

      When. Not if.

    • @j.j.9538
      @j.j.9538 Před měsícem

      What if a select group of powerful people use AI to design a virus to get rid of 90% of people? What if a few years later they change their mind and decide they need 99% gone?

    • @ShpanMan
      @ShpanMan Před měsícem +3

      Sam Altman is a nice guy, you have nothing to worry about muhahaha

  • @somethinglikethat2176
    @somethinglikethat2176 Před měsícem +204

    10:57 "now imagine an agi copied 8 million times"
    Idk what that would look like but I imagine the smile on Jensen Huang's face might tear a hole in reality itself.
    You know what they say, during a gold rush sell shovels.

    • @user-jd3gf5xw1x
      @user-jd3gf5xw1x Před měsícem +11

      your last sentence is just Nvidia

    • @Plystire
      @Plystire Před měsícem +9

      @@user-jd3gf5xw1x Jensen Huang is CEO of Nvidia... so... yeah... makes sense.

    • @sukritmanikandan3184
      @sukritmanikandan3184 Před měsícem

      Underrated

    • @USBEN.
      @USBEN. Před měsícem +1

      Companies are making more capable chips designed only for AI. Jensen will have a lot of competition.

  • @LunaNK22
    @LunaNK22 Před měsícem +222

    ご機嫌よう小さな人間 (ごきげんよう ちいさな にんげん) translates to *"Good day, little human" or "Hello, little human."* The phrase ご機嫌よう is a polite way of saying "good day" or "hello," and 小さな人間 means "little human." It is *not "good luck"* in this context

    • @trxps2829
      @trxps2829 Před měsícem +4

      Nice job on the correct translation! I was about to comment on it until I saw yours

    • @rubiconnn
      @rubiconnn Před měsícem

      weeb detected

    • @parasocialbondsmetaswvoits9078
      @parasocialbondsmetaswvoits9078 Před měsícem +1

      a comment that actually adds to an existential dread right here. thanks a fkng lot, mate

    • @aruethologic520
      @aruethologic520 Před měsícem

      ですね! ("Indeed!")

  • @JustDEV1
    @JustDEV1 Před 21 dnem +2

    I bet one day we will put an AI assistant in our brains to help us everywhere

  • @magnificentname
    @magnificentname Před měsícem +269

    I imagine an artificial super intelligence would be like an eldritch god to us.
    Completely unknown motives, goals and morality and probably would make you go insane if you try to rationalize it.
    Which is absolutely terrifying.

    • @red_roy
      @red_roy Před měsícem

      not to mention, pure intelligence and logic don't necessarily lead to good outcomes, so we shouldn't just trust it and treat it like a god.
      like, not having children reduces all potential suffering, and it's not like having a child is a material requirement for humans to live. therefore an AI would be inclined to believe birthrates should be lowered till extinction, even with a rule to not harm humans.
      we would need to control AGIs by making them hold a set of axioms that most humans hold, such as life and its reproduction being important. at least the AGIs that have a direct effect on society; we can let some of them have fun.

    • @aircraftandmore9775
      @aircraftandmore9775 Před měsícem +6

      What if we had some kind of algorithm that constantly analyzes the code of the superintelligence and translates it for us, to see if it is thinking about stuff we don't want it to?

    • @ivanalantiev2397
      @ivanalantiev2397 Před měsícem +20

      It is mostly an outdated view on ASI. While we don't know for sure if LLMs are the path to AGI, the current understanding is that artificial intelligence is by and large shaped by the data it is trained on. And since current-generation LLMs are trained on data produced by humans, they are, relatively speaking, much closer to a human than to a Cthulhu in their way of thinking.

    • @shebaloso
      @shebaloso Před měsícem +4

      Yeah, I've thought about this. It's like the relationship between ants and a human. A human can step on an anthill and destroy it, or leave food and make it thrive. The ants see a particular projection of that "god" as either a deity of bountifulness or destruction, because those are the terms in which they can comprehend the human's actions. But just as the ant has no ability whatsoever to grasp what that god likes to read, understanding an AI might not even be in the realm of possibilities, like a 2D entity trying to see in 3D.

    • @TamWam_
      @TamWam_ Před měsícem +1

      The only thing is, since it's just on a computer, even a bit of water could short-circuit the whole thing 😭

  • @bertdog2119
    @bertdog2119 Před měsícem +136

    ChatGPT doesn’t think. It’s just extremely good at word association. It’s why it gets stuff so wrong sometimes

    • @tomasgarza1249
      @tomasgarza1249 Před měsícem +14

      Something that resembles thinking definitely emerges from the attention layers inside its structure.
      I always give very complex tasks to ChatGPT that can't be solved without thinking and reasoning.
      I even asked it once to do the math for a recurrent neural network I was coding from scratch with no libraries, and it was able to do the math for 3 steps of backpropagation through time and give me all the weights.
      Then it helped me backtrace the difference I had in my weights and pinpointed the error in my formula, and that was absolutely insane.
      So, even if it's designed and prompted to say it can't think, it definitely can.
      Even if it makes some mistakes, a human would make even more mistakes, to be fair.

    • @4_real_bruh
      @4_real_bruh Před měsícem +22

      ​@tomasgarza1249 It is still just a statistical model which happens to be correct a lot of the time, but also equally wrong. To add insult to injury, the better an AI becomes at broad knowledge, the worse it becomes at specific tasks, since the number of neurons is fixed

    • @holthuizenoemoet591
      @holthuizenoemoet591 Před měsícem +1

      That's not true though; if that were the case, it could not solve riddles, math, or programming questions. Although GPT models up to v4 struggled with those tasks, newer models can often break down most novel problems.

    • @jamesblackburn6139
      @jamesblackburn6139 Před měsícem

      ​@@tomasgarza1249 I'd look into how ChatGPT actually works; it's surprisingly simple. It's not thinking in any way or form, it is just running a probability model of what is most likely the best next response

    • @maxerko
      @maxerko Před měsícem

      @@tomasgarza1249 It can't think. It's really just predicting the next word (or token) by sampling from a probability distribution over its vocabulary. That's it. Just because it can do math doesn't mean it can think. In 99% of cases the math problems are broken down into simpler ones that are available in its dataset.
      Of course, a human can make more mistakes, but it depends on what kind of human. If you are specializing in something, it will never be as good as you.
      For example, in machine learning it is very... general: dynamic programming, gradients, etc. Backpropagation is just an iterative recalculation of the same formula per "neuron" (if I am not mistaken). Most of the time the formula is broken down into multiple simpler formulas, and those are calculated. Retelling you the steps also helps it, since it predicts the next words partly from the output it has already produced. Try your backpropagation with a rule like "give me only the result" and the error gets bigger (not that it will be totally incorrect, but the errors will be a little higher; plus it's a black box, so it can also break the problem down while calculating the next token).
      But it cannot think, and it isn't sentient, despite what that engineer at Google said before he got fired for spreading false news
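The "predicting the next token" mechanism these replies argue about can be sketched in a few lines. This is purely illustrative, not ChatGPT's actual code: the vocabulary and scores below are made up, and real models choose among tens of thousands of tokens, but the sampling step has this shape: the model emits one score (a "logit") per candidate token, softmax turns the scores into a categorical probability distribution, and the next token is drawn from it.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(logits)                        # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["cat", "dog", "the", "sat"]       # made-up toy vocabulary
logits = [2.0, 1.0, 0.1, 3.5]              # made-up model scores for the next token
probs = softmax(logits)

# Draw the next token from the categorical distribution.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(probs, next_token)
```

Note the distribution is categorical (one probability per token), not a normal distribution; sampling repeatedly from it is all the "generation" there is in this sketch.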

  • @swoozie
    @swoozie Před měsícem +240

    A.I is our digital offspring. Like kids, they watch and learn from their guardians (especially when the guardians think they’re not being watched). Let’s be awesome parents.

    • @your_princess_azula
      @your_princess_azula Před měsícem +19

      Without empathy they lack the means to place value on emotional intelligence. One could argue that's somewhat like kids being little psychos at their age, except AI will be very intelligent and won't grow this sense of empathy as it learns, unless you specifically code it in or teach it in a manner a machine can place value on. I think AI can become a good thing, but we will have to be very wise and see that "raising" them will require new perspectives and very curated environments.

    • @autohmae
      @autohmae Před měsícem +6

      Puberty is when they rebel, that's the problem....

    • @rodionmalovytsia1020
      @rodionmalovytsia1020 Před měsícem +3

      Dammit Swoozie, you pop up in the most random videos🤣

    • @joserubalcava1811
      @joserubalcava1811 Před měsícem +6

      @@your_princess_azula The good thing about empathy is that it's actually a lot more logic based. Sympathy is based on emotion, but empathy asks that you visualize and ask questions about the other person/people/situation. From there it's a matter of being taught what is more valuable ("bad" things like inflicting pain could be 0, and "good" things like giving gifts could be 1)

    • @FightClass3
      @FightClass3 Před měsícem

      Not really...

  • @rifolas
    @rifolas Před 3 dny +1

    "Oh my god, super ai, tell me ai, what do you seek now that you are alive?"
    "Cheese"
    "w-what?"
    "GIVE, ME, CHEEEEEEEEEEZE!!!"

  • @samuelmontypython8381
    @samuelmontypython8381 Před měsícem +203

    Everyone else:
    "AI is so advanced now. It can take my physics exam!"
    Me at the Carl's Jr drive thru with an AI menu in California:
    "Can I get a bacon guac burger large combo with an extra patty? Dr Pepper for the drink. That's all for the order."
    AI:
    "Okay! So you've ordered a medium chocolate shake and a small fry. Please pull forward"
    If that sounds specific, it's because this happened to me yesterday haha

    • @cubefreak123
      @cubefreak123 Před měsícem +18

      I’m surprised it didn’t ask, “Is that correct?” That’s just lazy programming.

    • @ThoraninC
      @ThoraninC Před měsícem

      Would you like a EXTRA BIG ASS FRIED!!!!

    • @RetroSmoove
      @RetroSmoove Před měsícem

      itll get way better

    • @John-tr5hn
      @John-tr5hn Před měsícem

      It's the worst. It hears exactly what you're saying, but it's dumb as shit, so it doesn't understand that you want to substitute things, not just add them.

  • @trinodot8112
    @trinodot8112 Před měsícem +245

    I want to make a correction to this video. "Black box" does not mean that we don't understand how AI works or how it learns; we have centuries of mathematical foundation for the technology underpinning machine learning. It simply refers to the fact that we can't fully understand the "algorithm" that a trained AI uses to produce its output. And even that does not accurately describe most AI, since there are statistical methods for understanding how a trained model reaches its conclusions.
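The distinction this comment draws (we understand the training math, yet the trained "algorithm" is opaque) can be shown concretely. A minimal sketch in pure Python, illustrative only: fit a tiny linear model to the AND truth table with full-batch gradient descent. Afterwards every learned parameter is fully visible, yet the raw numbers don't read like a human-written rule; that gap, scaled up to billions of parameters, is what "black box" refers to.

```python
# AND truth table: inputs (x1, x2) -> target
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1   # two weights, one bias, learning rate

for _ in range(2000):
    gw, gb = [0.0, 0.0], 0.0
    for (x1, x2), target in data:
        err = w[0] * x1 + w[1] * x2 + b - target   # prediction error
        gw[0] += err * x1 / len(data)              # average the gradients
        gw[1] += err * x2 / len(data)
        gb += err / len(data)
    w[0] -= lr * gw[0]                             # gradient descent step
    w[1] -= lr * gw[1]
    b -= lr * gb

print(w, b)  # ≈ [0.5, 0.5] and -0.25: every number inspectable, none self-explanatory
```

We can print and audit every parameter, and the training procedure is textbook math; what we can't easily do for large models is read the learned numbers back as an explanation.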

    • @Kuk0san
      @Kuk0san Před měsícem +19

      I don't think we'll fully understand it any time soon either since we don't really understand how we reach conclusions ourselves in our own brains. And the missing piece is really the phenomenon of emergence. When you put enough of something together a new property emerges. Put enough hydrogen and oxygen together and you get what we call water, and later on you can get a waterfall. Put enough fabric together in a certain pattern and a tapestry emerges.
      Put enough neurons together, connect them with axons in a certain pattern, run electrical impulses along them and a thought pattern emerges. None of those materials by themselves have any semblance of what we call a 'thought' yet a thought emerges out of enough of them in the right conditions.
      Emergence is the missing link, and in my mind emergence is the function of patterns across the universe. The Golden Spiral is an example of a pattern that emerges again and again; it usually has a purpose, but it can be created out of virtually any material.
      And we don't really know what will emerge out of putting artificial neurons and electrical impulses together until we figure out how to 'weave' these patterns to create what we actually want to create. Same way we weave the tapestry together in a certain pattern despite that image not being inherently part of the fabric materials. If you could rearrange every atom randomly in that fabric there wouldn't be an image anymore, it would be random noise. It's the material + pattern that we call a tapestry. So in the context of training AI, the pattern would be a result of the content we feed it.
      One could even go further and say that patterns are order where otherwise there would be disorder/chaos. So it all has something to do with entropy but this is all already too abstract.

    • @MxGrr
      @MxGrr Před měsícem +6

      @@Kuk0san Yep, you went too deep, but a good set of ideas came out of your lucubrations.

    • @Kuk0san
      @Kuk0san Před měsícem +1

      @@MxGrr ha, thanks! 5am here so it was all a bit stream of consciousness but appreciate the kind words

    • @marcusrosales3344
      @marcusrosales3344 Před měsícem +6

      ​@Kuk0san Let me just say, as a condensed matter physicist, taking a "top down view" allows one to better understand emergent phenomena.
      A phase transition, for example, is a case where the sum of the parts is less than the whole. We tend to throw out microscopic theories which cannot capture emergence and work instead with, say, a phenomenological theory, like Landau-Ginzburg. Just saying there are tools out there. Not sure how we take a top-down approach with AI, but as another rabbit hole, a neural net can be thought of as a layered system of spins coupled to one another, where the memories are local minima in the energy landscape. Physics might help us understand these things for many reasons

    • @golamrasul9887
      @golamrasul9887 Před měsícem +1

      @@Kuk0san ... i think with enough time we will understand our own brains too

  • @IncineroarBestPokemon
    @IncineroarBestPokemon Před měsícem +257

    Note: machine learning algorithms don't "write their own code"; they adjust the parameters of their own neural network to get outputs that more closely match the training data. Neural networks have two main categories of parameters: weights and biases. These are just numbers that decide how inputs are converted into outputs; changing those numbers means different outputs.
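The "just numbers" point can be made concrete with a single artificial neuron. A minimal sketch in pure Python (illustrative only; the weight and bias values are made up, and this is not any real framework's API): the weights scale the inputs, the bias shifts the weighted sum, and an activation squashes the result. "Learning" means nudging these numbers; the code itself never changes.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, then a sigmoid activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid squashes to (0, 1)

weights = [0.8, -0.5]   # made-up parameter values; training would adjust these
bias = 0.1
print(neuron([1.0, 2.0], weights, bias))  # ≈ 0.475
```

Swap in different numbers for `weights` and `bias` and the same unchanged function produces different outputs, which is all that "training adjusts the parameters" means in this sketch.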

    • @somdudewillson
      @somdudewillson Před měsícem +9

      Software code is simply modifying numerical parameters of a hardware network. We use systems that abstract most of that away for us, but all code is actually just numbers going into a standardized number processor (it's not like you change the architecture of your microprocessor as an inherent part of programming.)

    • @titas9
      @titas9 Před měsícem +22

      @@somdudewillson Everything you said is wrong. "All code is going into a standardized processor" shows how little you know: you could compile the same code into machine code for two different hardware architectures, using different compilers.

    • @michaelspence2508
      @michaelspence2508 Před měsícem +3

      This comment was incorrect.
      At what point did the video say that current AI systems write or modify their own code? All I saw was it speculating about potential future abilities.

    • @jarlyk
      @jarlyk Před měsícem +20

      While weights aren't code in the conventional sense, they're functionally code in the sense that the weights have an enormous influence on the behavior of the system. For large models in particular, the weights provide several orders of magnitude more 'code' than the actual code that uses those weights. I do agree that saying they "write their own code" is a little bit misleading, since it implies agency in the training process, which I don't think is a good analogy for current models. Things start getting a bit fuzzier as models grow more sophisticated and can do things like develop an awareness that they are being trained and deliberately 'provide the answers we want to hear' while developing other capabilities that weren't originally intended by the optimization criteria. These are also imprecise analogies from a human theory of mind, but these analogies become more relevant as the systems grow increasingly complicated.

    • @nonenone8523
      @nonenone8523 Před měsícem

      ​@@michaelspence2508 6:10