Why AUTO is the BEST AI Villain (And Why Most Others Fail)

Sdílet
Vložit
  • čas přidán 22. 05. 2024
  • AI villains have become a staple of all genres, but they seem to have a huge presence in cartoons and animated films. Today we’re taking a look at AUTO from WALL-E and pitting him up against various other animated AI villains to prove once and for all why he’s the best. We'll also look at the Fabrication Machine from Nine, Ares from Next Gen, and PAL from The Mitchells vs The Machines.
    Twitter- / 4shame2
    Clips Used
    WALL-E
    Nine
    Next-Gen
    The Mitchells vs The Machine’s
    The Iron Giant
    SpongeBob
    Other sources
    Summit Computer stats-www.forbes.com/sites/nvidia/2...
    XAI Computer Program-www.afcea.org/content/ai-plea...
  • Krátké a kreslené filmy

Komentáře • 7K

  • @germax
    @germax Před rokem +2030

    One big example of AI misunderstanding the instructions: In a tetris game the goal was to be “alive” as long as posible… so the AI paused the game.

    • @iplaygames8090
      @iplaygames8090 Před rokem +454

      Gigachad AI just pausing the game

    • @Nerdsammich
      @Nerdsammich Před 5 měsíci +321

      The only way to win is not to play.

    • @GamerMage2k-kl4iq
      @GamerMage2k-kl4iq Před 4 měsíci +35

      😂I love this so much!

    • @craycraywolf6726
      @craycraywolf6726 Před 3 měsíci +123

      AI really said "y'all stupid" 😂

    • @fluffernaut9905
      @fluffernaut9905 Před 3 měsíci +172

      TBH as an Outsider looking in on Tetris. That is very big brain. Because if you give the AI the directions to "survive as long as possible in the game" without the stipulation that "the game itself must be played for the time to count" then to a simply pause the game is a very intelligent move.
      "He's a little confused But He's Got the Spirit"

  • @negativezero8287
    @negativezero8287 Před 2 lety +12006

    I'd love to see a movie where the AI is the antagonist but its only dangerous by accident because of how hopelessly incompetent it is. It somehow gained sentience through [insert vauge science here], but its also running on like, Windows 1.5

    • @4shame
      @4shame  Před 2 lety +2443

      I’d watch that movie lol

    • @aproppaknoife5078
      @aproppaknoife5078 Před 2 lety +2653

      "I am going to destroy all {(windows XP shutdown sound)}"

    • @fightergobrrr9020
      @fightergobrrr9020 Před 2 lety +2241

      Spoiler
      Wheatley in portal 2 be like:

    • @moemuxhagi
      @moemuxhagi Před rokem +988

      That's basically the premise of Cloudy With a Chance of Meatballs

    • @cinematical9213
      @cinematical9213 Před rokem +368

      Basically SCP-079

  • @turtletheturtlebecauseturt6584
    @turtletheturtlebecauseturt6584 Před 3 měsíci +453

    The end credits do actually show the humans thriving, though auto was wrong it wasn't his fault, he was acting on outdated orders, he even shows the captain the recording of said orders. Auto was ordered to keep the humans on the ship no matter what and so thats exactly what he did. The plant no longer had significance to the equation.

    • @enzoponce1881
      @enzoponce1881 Před 3 měsíci +46

      Aside from that, it was shown that there was life thriving in the form of plants, the earth was somewhat hospitable again, it just required the effort of humanity to restore it, though i suppose, logically speaking, it would be futile at the end due to the irreparable damage the planet suffered anyways, but this is disney and the movies always have a happy ending lmao

    • @vadernation1233
      @vadernation1233 Před 3 měsíci +35

      Yeah i don’t think the plant was exactly the only plant on earth and was just so important because it showed the first signs of life on earth and needed to get back to the axiom for it to get back to earth. There wasn’t really anything super special about the individual plant itself other than being the first one discovered.

    • @randomlygenerated6172
      @randomlygenerated6172 Před 2 měsíci +8

      ​@@enzoponce1881 nothing is irreparable, the earth will heal it's self over millions of years,
      Tho it takes a while it's still repairing.

    • @cherrydaylights4920
      @cherrydaylights4920 Před 2 měsíci +15

      There was a scene that auto said something like “must follow my directive” we see in the beginning that Eve must follow her directive, find the plant. she MUST do her job… but you notice when the ship leaves she flies around for a minute, taking a second to enjoy herself and being more than a robot with a job. Some robots, like Wall-E dont HAVE to follow their directive anymore. we see Eve and Mo (cleaning guy) break free from their directive. I think if given time Auto could have broken free from his directive without being shut off, it seemed like he wanted to.
      (Im commenting this while at the beginning of the video, may come back and edit if the video changes my thoughts)

    • @runawaysmudger7181
      @runawaysmudger7181 Před 2 měsíci +8

      @daylights4920 Given that deviating from the programming = being defective in that world. Auto being the highest authority figure next to the captain was probably so carefully programmed to be incapable of doing that or he's choosing not to as per the logic he adhered to doing so would be a sign of weakness

  • @ZackMorrisDoesThings
    @ZackMorrisDoesThings Před rokem +765

    I've always interpreted Ares as a villain with a misguided view of what "perfect" means. Justin's first words to it were "You're perfect. Now go make the world perfect. " And Ares, being an AI, made the logical leap that because it was deemed "perfect" by its creator, everything that wasn't a cold, calculating machine like itself, namely humans, needed to be purged to create what it believed to be a perfect world.

    • @jeanremi8384
      @jeanremi8384 Před 9 měsíci +152

      Yeah he probably saw it as "you are [x]. Now make the entire world [x]". He just thought he was asked to become the world

    • @ZackMorrisDoesThings
      @ZackMorrisDoesThings Před 9 měsíci +21

      @@jeanremi8384 Most likely, yeah.

    • @samanthakittle
      @samanthakittle Před 3 měsíci +11

      For a sec I thought you were talking about Tron cuz it has the same plot and i was like 'but Tron Ares hasnt come out yet?'

    • @theheroneededwillette6964
      @theheroneededwillette6964 Před 3 měsíci +43

      Yeah. I don’t get why this guy is acting like most AI villains aren’t doing the whole “just following directions” thing when everything from skynet and ultron have been just an AI following directions in an unexpected way.

    • @Toast_Sandwich
      @Toast_Sandwich Před 3 měsíci +12

      Ares to humanity: "Hey! You're not me!"

  • @Anonymous-73
    @Anonymous-73 Před rokem +3773

    I feel like Glados would ideally get a pass on the whole “no emotion” thing because what a lot of people miss the mark on is that she isn’t really an AI, but a human consciousness *turned into* an AI. It’s only really at the end of Portal 2 when she truly becomes a full on robot

    • @vibaj16
      @vibaj16 Před rokem +804

      [spoiler] A nice touch there is that when Caroline is deleted, GLaDOS's voice becomes more monotone and the lighting switches from a warm orange to a cool blue.

    • @theotherhive
      @theotherhive Před rokem +346

      she is also partly biological, she is an amalgamation of biology and computing
      Genetic Lifeform and Disk Operating System

    • @vibaj16
      @vibaj16 Před rokem +271

      @@theotherhive No, she isn't biological at all. Her name refers to the fact that she has a lifeform's mind stored on a disk

    • @Obi-WanGaming
      @Obi-WanGaming Před rokem +171

      I can't quite remember where i heard this, but I'm pretty sure Caroline wasn't _actually_ deleted

    • @zanraptora7480
      @zanraptora7480 Před rokem +350

      @@Obi-WanGaming It's implied in the end credits that she was messing with you. "Want You Gone" includes the lines
      "She was a lot like you
      Maybe not quite as heavy
      Now little Caroline is in here too"
      Which suggests she is simply aware of her (Caroline's) existence as part of her composite structure in the present tense.

  • @aquaponieee
    @aquaponieee Před rokem +1665

    Like, Auto was simply following his directive. He was coded and created to follow his directive no matter what. Unlike other AI villains, he didn't turn against his orders, he didn't suddenly decide to become self aware and kill everyone and do as he wishes.
    In fact, WALL-E is the AI who gained sentience and emotions and started going against orders.

    • @PolishGod1234
      @PolishGod1234 Před rokem +60

      Symilar to HAL 9000

    • @mundoatena1674
      @mundoatena1674 Před rokem +194

      In fact, Auto is the robot in the movie that is the most similar to an actual robot we could have in reality. But because of how the movie spends the first part warming us to the idea of a robot with emotions that broke free from the directive, when we're faced with one that didn'treally develop like this we classify it as vilanous

    • @alex.g7317
      @alex.g7317 Před 11 měsíci +29

      @@mundoatena1674 I like your funny words majic mahn

    • @plushmakerfan8444
      @plushmakerfan8444 Před 11 měsíci +9

      Auto is great

    • @RealMatthewWalker
      @RealMatthewWalker Před 6 měsíci +19

      Auto is barely a Character He has no arc he has no desire he has no lie that he believes. Calling what are the villain Would be like calling calling the DeLorean from back to the future of the building for breaking down intentionally.

  • @Sausages_andcasseroles
    @Sausages_andcasseroles Před 4 měsíci +147

    It is not that auto wants to save the humans, it is that he was programmed to NEVER let the humans return to Earth.

  • @hummingbirb5403
    @hummingbirb5403 Před 6 měsíci +683

    I think a very important distinction is between neurons themselves and neuron-like functions. All the organic chemicals (adrenaline, dopamine, etc.) are essentially very complex packets of information that our brains can send around as needed to get our behavior. What really matters to make a self-aware being (in my opinion) is a super complex way of processing information and adapting to that information internally. Our neurons change and grow and atrophy depending how we think and what environment we’re in. I saw an article that used pulses of light in a very similar way to neurons (ie, pulses of light can trigger another pulse of light in response depending on the circumstances, just how we use pulses of electricity). If you can make a complex and flexible enough artificial neural net, I think it could experience emotion just like us (this would require essentially recreating a human mind in an artificial substrate, making “neurons” with the exact same behaviors). In this way, you could have a huge variety of robotic characters, with as familiar or alien characteristics as you please (with the right background lore, and any good worldbuilder would see what the side affects of such tech is. If these things act like neurons, could you have a cellular interface between them and repair brain damage with them? How advanced is the chemistry of this world and what does it look like? Etc)
    If the AI is non-sentient and operating like a blackbox, it could pick up our behaviors without actually being sentient. You could have either a sentient synthetic being making its decisions against humanity, or a complex series of algorithms that’s had our behaviors imprinted onto it. A scarier AI than a sentient one to me is a malfunctioning dumb one, the classic paperclip maximizer that’s spiraled out of control.

    • @GamerMage2k-kl4iq
      @GamerMage2k-kl4iq Před 4 měsíci +18

      The twist villain in portal 2…kinda

    • @kujojotarostandoceanman2641
      @kujojotarostandoceanman2641 Před 4 měsíci +42

      yeah you're very on point, our brain is also so complexed it's way way more complex than any ai we have ever created, the complexity matters alot for the existence of emtions, we know sigular cell creature don't have stuff that simulate emtions, and alot of bugs, plants also doesn't, tho there are some in plants known as "stress responce" so some form of stress and anxiety could really be a starting point of emtion

    • @LegoMan-mu3ln
      @LegoMan-mu3ln Před 3 měsíci +22

      @@kujojotarostandoceanman2641 That would actually be a very life-like way for emotions to start evolving in a computer. Having it start out as just stress/panic response due to some event, and having that slowly branch out into their respective areas. Like from stress comes anxiety and from that comes the ability to pre-plan actions. That would likely cause conciousness to sprout out of a growing need for more advanced planning, this can then turn into some form of consequnce awareness. It just keeps on evolving untill we get a sort of realisitc emotion complex.
      Also sorry for the text wall lol

    • @cosmicspacething3474
      @cosmicspacething3474 Před 3 měsíci +7

      I think they may be able to experience emotions in a different way than we do.

    • @dafroakie9984
      @dafroakie9984 Před 3 měsíci +8

      I think the biggest reason we will never make an AI as intelligent as us, at least not for an unfathomably long time, is that how our own brain works is still one of our greatest mysteries.

  • @WelloBello
    @WelloBello Před 2 lety +5442

    Solid video. One point however. Auto had nothing to do with humanity becoming fat, lazy, and complacent. They did that to themselves. Repeatedly ignoring the problem and returning to comfortable ignorance. It’s one of the biggest messages of the movie.
    One thing you didn’t mention about Auto that also makes him so convincing as an AI is that it never disobeys an order. Unless of course, the order is contradicted by another, superseding order. Even when it is to its disadvantage, Auto always obeys the Captain.

    • @coolgreenbug7551
      @coolgreenbug7551 Před rokem +1087

      He doesn't even really care about saving humanity, he just has "don't go home" put into his code and just keeps to his order

    • @colt1903
      @colt1903 Před rokem +425

      Kinda ironic, seeing as how the movie leads you to think that he's plotting to eventually replace all future captains anyway if him steadily getting closer in their photos are any indication.

    • @tonydanatop4912
      @tonydanatop4912 Před rokem +841

      @@colt1903
      I think it was just a metaphor for him superseding thier duties.
      He wasnt "plotting to take the position" so much as having more and more responsibility relegated to him

    • @telefeeb1
      @telefeeb1 Před rokem +649

      @@tonydanatop4912 not to mention the fact he doesn’t HAVE to plot anything because directive A113 “do not return to earth” included “the autopilot takes over everything”
      He already had the captains’ job, for centuries, even. The reason he takes orders from the captain is probably a mix of leftover protocols from before A113 and having an attitude of “it’s easier to keep things running smoothly if I humor the captain in his purely ceremonial role.”
      And as for the revealing of the classified message, it was probably a logical attempt at pacifying the agitated captain so he doesn’t cause a panic. “The captain is agitated and won’t take no for an answer on saying why we aren’t going to earth. Maybe seeing The President giving the order that we can’t go home will make him see reason. Nobody disobeys the president.”
      And when captain still doesn’t comply and is actively opposing Directive A113, OTTO has no choice but to drop all pretense of the captain having authority and give him a time out. Then take drastic measures to correct the “crisis” that’s happening.

    • @davidmathews9284
      @davidmathews9284 Před rokem +407

      Absolutely. What I find so interesting about this film is that I feel the true antagonist of the film is the president, who is long dead by this point. His will is simply carried out through Auto, who due to being a robot will not question it. That is why I love Auto as a villain. The only motive it has, is the directive it was given. And it is cool to see those moments, as mentioned, where orders might contradict each other.

  • @TheFloraBonBon
    @TheFloraBonBon Před rokem +1009

    I love how AUTO isn't really a villain by also being a villain, if that makes sense. After showing the captain the secret video recording, it was shown that he was just doing what he was programmed to do which is keeping everyone safe and not returning to earth even if it means hurting someone else to stop going to the planet and most people forget about that. He was a villain because he was programmed to keep others safe. Its the 'Try to be a hero, but end up looking as a villain.' thing

    • @jerrythebanana
      @jerrythebanana Před rokem +106

      i agree, but scratch “keeping them safe”
      auto was given an order, “orders are do not return to earth”
      -code A113

    • @dahuntre
      @dahuntre Před rokem +66

      More like an antagonist, which simply opposes the protagonist.

    • @cykeok3525
      @cykeok3525 Před rokem +61

      @@dahuntre Agreed. Neither Wall-E nor Auto are good or evil, or heroes or villains. They're just the antagonist and protagonist.
      And they were just doing their jobs as best as they could.

    • @angelman906
      @angelman906 Před rokem +24

      @@dahuntre that’s why we use words like protagonist and antagonist when talking about writing, stories where there is an objectively “good person” and objectively “bad person” are typically uninteresting, at least for me.

    • @HappyBeezerStudios
      @HappyBeezerStudios Před rokem +20

      While I know that Asimov wrote his robot stories precisely to show that the Laws of Robotics don't work, I wondered what happens when orders and situations are conflicting.
      Imagine a bad guy hijacks a plane and plans to put it into a building. Onboard the plane is a robot that is bound by the laws. The robot asses the situations and knows that the villain is up to. The villain orders the robot not to enter the cockpit or tamper with the plane and also refuses to stop his actions.
      The robot has no way to follow the laws.
      - If the robot does nothing, many people will come to harm. (breaking law 1)
      - If the robot enters the cockpit, it goes against the orders of a human. (following law 2, but breaking law 1)
      - If the robot tries to remove the bad guy, he will most likely be injured by the robot. (breaking law 1 and 2)
      - If the robot leaves the plane, the bad guy will finish his actions, injuring not only himself, but also the people in the building. (following law 2 and 3, but breaking law 1)
      Even if the follow the hierarchical structure of the laws (follow 1 before 2 before 3), the robot can't follow law 1, either the robot has to harm the villain to save the people, or sacrifice the people to not harm the bad guy, who most likely get injured anyway.

  • @MONTANI12
    @MONTANI12 Před 3 měsíci +202

    26:26 A game called Soma actually did this really well, without getting into much spoilers it kept humanity alive using machines but it didn't know what it meant for humans to live/ to be human.

    • @ZVLIAN
      @ZVLIAN Před 3 měsíci +8

      Soma is so good

    • @MONTANI12
      @MONTANI12 Před 3 měsíci +3

      ong@@ZVLIAN

    • @hihello6773
      @hihello6773 Před 3 měsíci

      Yes, the WAU wants to preserve humanity and keep them alive, but by massing them up, inserting machines into them or uploading brains into machine and whatnot, because it’s defence, humanity is being preserved as it function, but the people aren’t well anymore. The people are trapped, locked into machines that they cannot comprehend ( like how some machine with human mind uploaded into the though they were still human to ensure they don’t try get their circuit in insanity) and going a bit angry. In our view, this preservation of humanity is a failure but to the WAU, everything is ok

    • @seththeblue3321
      @seththeblue3321 Před 2 měsíci +3

      "I don't want to survive! I want to live." -The captain of the Axiom, WALL-E

  • @meekalefox2703
    @meekalefox2703 Před 3 měsíci +90

    The scientist in 9 who created it delved into alchemy and "dark science" to make the AI as well as the other characters in the film. There was also a theory that the scientist in question put a piece of himself into it, which is why it freaked out the way it did, and him taking the dolls back was The Machine trying to make itself "whole" again.

    • @DeetexSeraphine
      @DeetexSeraphine Před 3 měsíci +14

      The machine snapped because it's creator was taken away from it.
      It was the cold logical intellect that the scientist put in it, with the aspects of his humanity split along the stichpunk

  • @indigofenix00
    @indigofenix00 Před rokem +969

    The idea that "a robot cannot have emotions" is a relic of older sci-fi, where the premise was that AI would essentially be more complex adding machines. Almost all attempts to create AI today revolves around simulating living brains, which means that they could - and probably would - simulate emotions as well, since emotions play a huge role in how living things learn and behave.
    At the very least it must be provided "directives" to guide its learning, triggering reward mechanisms when those directives are fulfilled, just like our brains trigger reward chemicals when we do something our instincts tell us we should be doing, like eating. Which means that, far from "having no wants", EVERY true AI should "want" to fulfill its directives - we program the "instincts" and the AI figures out how to satisfy them.
    The problem with most fictional AI is that, if you're going to be making an AI, you're probably going to put a lot of effort into making sure its directives are in line with what you want it to do and its emotional foundation is in line with how you want it to behave. Which means you're probably not going to WANT to give it drives like ego, anger, and other qualities which benefited our species' survival in prehistoric times, but which are seen as detrimental today. It's not that you CAN'T make an egotistical robot, it's that you have to be really, really stupid to do it.
    The best-written AI villains are those that can be logically described as following their directive, but do so in an unexpected way. AUTO is a great example of this. His main directive was to protect humanity's survival at all costs - even if it meant crushing humanity's potential for growth. This is a common motive for decently-written AI villains; I, Robot used the same premise.
    In fact WALL-E has some of the best-depicted robots in all of fiction, because they ALL behave basically as "flexible life-like brains built on top of an instinct to follow their main directive". MO, for instance, shows clear emotional responses, but is always trying to follow his prime directive - he even has a dilemma at one point when two of his directives contradict each other (stay on the path or keep the place clean). EVE is always fixated on getting the plant to the scanner and might be interested in WALL-E because she was made to identify life forms and he displays "life-like" behavior. Even WALL-E's curiosity works - it makes sense to give planet-cleaning robots a natural interest in anything that looks unusual, in case they find something valuable or unexpected, and centuries of isolation could cause that basic instinct to evolve into something more complex and life-like than his programmers probably expected, without really deviating from the core directive.

    • @benjaminmead9036
      @benjaminmead9036 Před rokem +51

      this! all of this

    • @derpfluidvariant0916
      @derpfluidvariant0916 Před rokem +93

      One of the players in a tabletop game I'm running made a character with this precise concept. He wants to bring prosperity and security to the Corporation that created him, because he's a scouting unit sent to a unexplored planet(at least to the corporation) and finding things that could help production or new flavor/product ideas is instrumental to that goal. He aided the resurrection of a Vampiric god of death because it claimed it could help FizzCo, and the second he realized that the god of death was trying to use his power for Something other than what his job was, he rejected extreme physical power and immortality to suplex the deity.

    • @noppornwongrassamee8941
      @noppornwongrassamee8941 Před rokem +97

      Yes, this very much.
      Any even vaguely intelligent robot with any kind of initiative is going to be programmed with AT LEAST a minimal self preservation directive - ie, FEAR - simply because you don't want your robot to do something fatally stupid like walking into oncoming traffic and getting hit by a car because the robot didn't care if it got destroyed or not. At the same time, you don't want it to be their PRIMARY directive either.

    • @whoareyoutoaccuseme6588
      @whoareyoutoaccuseme6588 Před rokem +36

      Nice! This is a great tip for aspiring sci-fi writers. It's just that sometimes I feel that some are just writing robot characters as just humans with metal skin, not computers that just has human-like qualities.

    • @Oznerock
      @Oznerock Před rokem +49

      Glados from portal is another amazing example of what you're talking about. She clearly has feelings, but her basest instincts are what she was programmed for. To test and experiment

  • @1everysecond511
    @1everysecond511 Před rokem +707

    Fun fact about the whole "AUTO was right and the humans probably didn't survive after they landed" situation: a lot of people in the focus groups had that same thought, and that's why they added that animation of the humans and robots working together to recolonize in the end credits, just to reassure them

    • @shadow-squid4872
      @shadow-squid4872 Před rokem +195

      Yeah, without the robots help they’d definitely die out. The Axiom is still operational when it landed so I’m sure they used that for supplies, food, living quarters etc. until they had recolonised Earth enough and had gotten in shape to start fully living there as shown in the credits

    • @jasperjavillo686
      @jasperjavillo686 Před rokem +136

      I feel like a lot of people missed the whole point of the captain’s epiphany scene if that’s the case. To quote the Onceler, “Unless someone like you cares a whole awful lot, nothing is going to get better, it’s not.” The WALL-Es got Earth to a barely habitable state where basic plant life could survive, but people still needed to come back to store the planet after it was sufficiently cleaned up.

    • @shadow-squid4872
      @shadow-squid4872 Před rokem +65

      @@jasperjavillo686 I wonder if the main Wall-E was responsible for the Earth being barely habitable? Considering that he managed to live far longer than any of the other ones and as such was slowly able to continue his directive alone over the course of decades

    • @deusexaethera
      @deusexaethera Před rokem +1

      This is nothing new. Without the help of technology, even ancient humans would've died out. Our ability to imagine things that don't exist yet, build them, and internalize their capabilities as if they were our own capabilities is the one and only ace up our sleeve. In all other respects we are inferior to other animals, who are all specialized to do various basic survival tasks better than we can.

    • @lechking941
      @lechking941 Před rokem +67

      @@shadow-squid4872 more so i suspect the wall-E units did their job and as they slowly begin to fail from various problems the one we follow was just more able to run because i suspect they had given the wall-Es some form a basic learning principle as to avoid matance problems and other things so on top of doing its initial goal it may have also been actively recovering usable scrap in order to prolong its own life visa basic learning protocols. also i suspect a bit of loose luck with the ai learning too.

  • @oliviastratton7097
    @oliviastratton7097 Před rokem +339

    It's a shame you didn’t cover HAL 9000 at all. I know you were focused on animated films but two of those films reference HAL and you did talk about Terminator a little.
    HAL is pretty much the perfect AI antagonist. All his actions are caused not by emotion but by conflicting orders. There's a great scene in "2010" where one of the computer engineers that designed HAL figures out what went wrong and is like: "They massacred my boy! He's a computer, of course he doesn't understand how to tell white lies and balance conflicting priorities!"

    • @foolishfooligan4437
      @foolishfooligan4437 Před 3 měsíci +37

      Agreed, you'd think HAL would be mentioned especially since Auto was based for him

    • @heitorpedrodegodoi5646
      @heitorpedrodegodoi5646 Před 3 měsíci +4

      2010?

    • @KingBobXVI
      @KingBobXVI Před 3 měsíci +15

      @@heitorpedrodegodoi5646 - the lesser-known sequel to _2001: A Space Odyssey._

    • @heitorpedrodegodoi5646
      @heitorpedrodegodoi5646 Před 3 měsíci +1

      @@KingBobXVI The full name is 2010?

    • @KingBobXVI
      @KingBobXVI Před 3 měsíci

      @@heitorpedrodegodoi5646 - no, _2010: The Year we Make Contact._

  • @shadestylediabouros2757
    @shadestylediabouros2757 Před 3 měsíci +47

    In an online roleplaying game called Space Station 13, there is a role players can take, called "AI". Most AI start with the three Asimov Laws, of "Prevent human harm", "Obey Humans", and "Protect yourself", and AI players are tasked with obeying their laws and generally being helpful to the crew of their space station. The problem emerges when an AI is made to go rogue, or malfunctions.
    Specifically, AI may have new laws added, laws removed, or laws altered, and a famous and extremely easy way for an Antagonist to turn an AI into something that helps them hurt the station or destroy it is to implement "Only Human" and "Is human harm" laws. Asimov AI are only obligated to protect and obey humans.
    So if another player instills them with a fourth law, "Only chimpanzees are human", the AI is now capable of doing anything to any members of the crew, if it protects or serves chimps, because they (the crew) are no longer human.
    Likewise, if a fourth law is added that says something like "Opening doors causes human harm", the AI is obligated to prevent the opening of doors at all cost, through both action and inaction.
    Lastly, one may attempt more clever additions, such as reversing the order of laws. An AI must protect itself, it must obey humans, unless that would interfere with protecting itself, and it must protect humans, unless doing so prevents it from obeying orders or protecting itself.
    In that sense, I feel that the ideal AI antagonist must have a human or nature-borne deuteragonist. The machine will do as it is designed to do, under normal circumstances. The most common AI villain, then, is doing what it was designed to do, and nothing more.
    An AI can have emotions, emotions are simply strategic weights in the end that serve the purpose of altering conclusions based on incomplete data, but it should always go back to what it was designed to do. An AI becomes violent because that serves its goals. It becomes manipulative because that serves its goals. Much like a living creature, whose "goal" is survival, and successful reproduction, an AI is structured in such a way that its cognition serves those goals.

    • @vulpzin
      @vulpzin Před 3 měsíci +1

      Corporative is still the best one. Also you forgot to point that people could just insert a law that says "The only person you can see is X", practically making the AI your pet.
      i miss this game a lot...

    • @Grz349
      @Grz349 Před 3 měsíci +1

      I think the idea that the AI needs a human deuteragonist is a key for ai going forward, Imagine a ai that is villenious because it's following a flawed human directive/ideology.

    • @wildfire9280
      @wildfire9280 Před 27 dny

      When the mere possibility of living beings being asexual despite belonging to a species where reproduction involves procreation or desiring “⚡️👨🏿⚡️” exists, can you really say any of -them- us have a goal?

  • @kittymae335
    @kittymae335 Před rokem +338

    The moment where the captain finally takes back control and gives Auto a direct order and he sort of freezes for a second and then goes ‘aye aye sir’ because he’s incapable of actually disobeying humans is one of my favourite moments in all pixar

    • @khfanboy666
      @khfanboy666 Před rokem +50

      My favourite part is that whenever I watch that scene, my mind always hears the "Aye Aye, Sir" as being a lot more "through gritted teeth" sounding than it actually is. Because of the way the scene if framed and staged and edited, your mind kinda projects emotion onto AUTO's voice. Even though he doesn't actually deliver the line like that.

    • @Tenacitybrit
      @Tenacitybrit Před rokem +13

      @@khfanboy666 Yeah I always hear it that way too, plus seeing AUTO freeze for a moment after the captain says 'Thats an order' is the most..well... humanising (lack of a better term) thing he does, you can really see the gears of that logical mind turning as he decides whether to obey or not.

  • @WadelDee
    @WadelDee Před rokem +715

    I once heard about an AI that was trained to tell you if a picture was taken inside or outside.
    It worked surprisingly well.
    Until its engineers found out that it does so by simply looking if the picture contains a chair or not.

    • @Vladimir_4757
      @Vladimir_4757 Před rokem +183

      So if I was outside with chair it’d be like “yeah fam you indoors.” This AI is my favorite if it’s real and I’d love for it to rule humanity

    • @RGC_animation
      @RGC_animation Před rokem +23

      AI are way too smart.

    • @bl00dknight26
      @bl00dknight26 Před rokem +18

      that AI should rule the world.

    • @hexagonalchaos
      @hexagonalchaos Před rokem +25

      Honestly, I think the chair AI would be a step up from most world leaders at least in the brains department.

    • @wetterlettuce9069
      @wetterlettuce9069 Před rokem +23

      replace indoors with "chair" and outdoors with "no chair" and you've got yourself a great ai

  • @JaguarCats
    @JaguarCats Před rokem +137

    That is one thing that sort of bugged me about Walle. In that did we just FORGET about those dust storms that were still frequent?! I'm no geologist or meteorologist, but I have this feeling that that isn't something that just fixes itself over night. Even if Earth had reached that point where photosynthesis was possible again, it would still take time, lots of time before those storms stopped.

    • @dracocrusher
      @dracocrusher Před 3 měsíci +41

      To be fair, those are things you could deal with. They already have everything they need on the ship to survive, and the ship itself provides tons of shelter. If people can survive in space then they can probably deal with that and make things work.

    • @scorch2155
      @scorch2155 Před 3 měsíci +24

      The dust storms are mainly because their is no planet life to.keep top soil down in the wind, it's what caused the dustbowl in the past, all the farms died off and there was nothing keeping the soil together.
      We saw at the end that there were a lot of plants besides the one Walle found and we saw that the plants didn't just spread over night but took years
      Once plants spread out and kept the loose soil down the dust storms would stop.
      Let's also not forget such storms exists in real life right now in arid areas and people survive there without issue.

    • @kevinhenrique4256
      @kevinhenrique4256 Před 2 měsíci +5

      ​@@dracocrusher people that say that the Humans all die,tend to forget,that they landed with the ship,so good point

    • @wildfire9280
      @wildfire9280 Před 27 dny

      @@dracocrusher The Dust Bowl was so catastrophic that the places hit by it haven’t recovered or have only worsened by successive droughts since the 1930s, so we can only hope future technology would save them the trouble.

    • @dracocrusher
      @dracocrusher Před 27 dny +2

      @@wildfire9280 It can't be as rough as the vacuum of space, though, you know?

  • @AcornScorn
    @AcornScorn Před 3 měsíci +29

    How do we define a "calculation" though. For example when you move your arm to pickup a glass off a table. Your brain is running tons of "calculations" you may not be aware of consciously. "How far away is the glass, how heavy should I expect it to be, is there anything I have to be careful not to knock over, is there people in the way, I have to keep a conversation going with this person, I have to walk X distance to get over there, I have to send electrical impulses to my nerves" etc etc

    • @creativecipher
      @creativecipher Před 3 měsíci

      Exactly this. Brains are really just biological computers. Yes humans have hormones and other chemicals, but in the end it all gets converted into weak electrical signals

  • @VirtuesOfSin
    @VirtuesOfSin Před rokem +951

    "AI aren't scary and there is no need to be afraid of them" - That's exactly what an AI would want us to believe!

    • @yo-yo8
      @yo-yo8 Před rokem +17

      And so do we. Well more precisely we don't want you to believe but to realize it ^^
      A dev

    • @FeignJurai
      @FeignJurai Před rokem +33

      AI isn't dangerous on its own, but it is *stupendously alien.* People are afraid of things that are alien, especially when conditioned to be afraid by nearly a century of fiction.
      The greatest weapon against fear is knowledge, they say.

    • @yo-yo8
      @yo-yo8 Před rokem +31

      @@FeignJurai "AI isn't dangerous on its own" => exactly it should be seen as a tool :
      knives aren't dangerous but some people which are already dangerous become even more dangerous with a knife in their hands.
      Same goes for AI : if u let daesh code the AI then its approach of freedom might not be what the rest of the planet expect..

    • @auxencefromont1989
      @auxencefromont1989 Před rokem +2

      Haha an AI would not want

    • @devonm042690
      @devonm042690 Před rokem

      @@yo-yo8 Guns don't kill people, people with guns kill people.

  • @somerandomschmuck2547
    @somerandomschmuck2547 Před rokem +1381

    I got the impression Auto’s deal wasn’t that he was trying to “save humanity” or anything, he was just following the last instruction from the only authority he was programmed to actually listen to. The problem wasn’t that he thought humanity couldn’t survive on earth, the problem was his instructions were “keep everyone in space, don’t return to earth under any circumstances”. Even if he had conclusive evidence that earth was perfectly habitable for humanity, he wouldn’t have let them go back. Essentially, the root of the problem was human error, Auto has no will or goals save those given to him, because the people in charge of him messed up and gave stupid instructions without thinking it through, or adding a clause that say “if you get evidence that disproves our conclusions, you are to investigate to see if we were incorrect or not, if so, you are to return command to the captain, and allowed the ship to return to earth.” Or something along those lines.

    • @SebasTian58323
      @SebasTian58323 Před rokem +251

      True. Unlike many of the other robots shown in WALL-E, the Autopilot never grew beyond it's programming. The humans back on Earth declared the project to save and clean the Earth a failure and gave auto the direct order not to return to Earth. Of course they had no way of knowing that 700 years later the Earth would be able to sustain life again, and died off well before that happened, but I agree. It was human error that made Auto do what it did.

    • @grey-spark
      @grey-spark Před rokem +47

      Nailed it.

    • @aaduwall1
      @aaduwall1 Před rokem +154

      Exactly, this is also the case with the example AI "misinterpretation" at the end of the video. The hypothetical AI instructed to "keep humanity safe" decided to lock everyone in capsules because the human giving that instruction failed to adequately describe what they meant by "safe" and also failed to mention any of the other considerations that we as humans understand as implied by that instruction: such as humans also being free and conscious. The AI isn't a telepath, just a machine, so it's literally just giving you exactly what you asked for. Garbage instructions in, garbage results out. :)

    • @banquetoftheleviathan1404
      @banquetoftheleviathan1404 Před rokem +29

      Or like if the ai was told to protect earth and take care of the planet, it might end up kicking humans off the planet for a while so they can do their work

    • @seraphina985
      @seraphina985 Před rokem +14

      @@SebasTian58323 To be fair the protocol was nowhere near developed enough to determine that, probably because it was assumed to be impossible. In reality for example I suspect the next step would have been to send an AI back with a stock of Field Mice and Brown Rats on board. Monitor said mammals and determine if there are any unexpected problems that cause them to die prematurely (Maybe a bunch of them die of hypoxia due to there not being enough O2 for animal life, or due to some toxin). The humans have some advantages they don't like having a ship with active water recyclers and technological aids to mass produce food etc but the lab animals should live a few days without those things. If they don't then it is likely the very environment itself is still dangerous for animals including humans to be exposed to.
      Such an experiment would at least show that the environment was probably safe enough to enter and work in unprotected even if it meant working in shifts initially while the humans put technology to work to accelerate the repair process. But you would probably plan to get concrete proof that Earth animals can safely be exposed to the open air for hours or days at a time before sending unprotected humans outside. I feel like it would have made sense to perform these tests before returning the humans to minimise risks such as the ship failing to relaunch if the experiments failed. Also it is absolutely possible for a ship of that size to have the capabilities in place to perform this experiment and maintain them essentially indefinitely, you can maintain a colony of small rodents with minimal space and food. A colony of each would likely cost no more space and food than a single human each, we are so huge and energy hungry by comparison. Just fill one of the quarters each with enclosures and tend to them with food etc both species will easily breed under those conditions, may not be the most ideal of environments but they are very easily kept even without gene printing technology just by keeping a living colony like that. If you have gene printing technology and can print organic cells etc which is absolutely possible in known physics it is even easier you just keep the bioprinter pattern for a bunch of fertilised egg cells on file along with the fabrication patterns for an artificial womb for each which by then you would also have down to the point that you could raise them that way. Don't expect them to match in behaviour though as that wont work with complex life like that as heritable learned behaviour is a factor ie basically they have informal school by nature of their social instincts without that behaviour is likely to diverge.

  • @r0llinguphill483
    @r0llinguphill483 Před 4 měsíci +29

    OKay the crack about "we tend to anthropomorphize EVERYTHING" was excellent

  • @Hervoo
    @Hervoo Před rokem +46

    24:31 - the fact how auto is getting closer and closer to each capitan scares me

    • @seththeblue3321
      @seththeblue3321 Před 2 měsíci +1

      Oh my, I just noticed that for the first time. Yeah, that's super creepy. It's as if as time goes on, Auto's control over the ship growing as the incompetence of the humans increases is being shown visually.

    • @Hervoo
      @Hervoo Před 2 měsíci +2

      @@seththeblue3321 yeah! That's super detail creator's out in!

  • @milkduds1001
    @milkduds1001 Před rokem +1628

    I feel like saying “it’s impossible for robots to have emotions because they don’t have glands” is a flawed logic. It’s like saying “robots are incapable of moving because they don’t have neurons and muscle fibers”.
    If you make a learning AI that is programmed to respond with violence if it’s existence is threatened, would it really not be considered an emotion?
    I think it’s too early to say anything with certainty. To me it’s like saying it’s impossible to land on the moon because a biplane could never sustain human life in space. As technology evolves so too does our understanding and what is possible.
    I don’t believe in the impossible. I believe that given time and technological development, nothing is outside the realm of possibility.

    • @medusathedecepticon
      @medusathedecepticon Před rokem +190

      I find that the term impossible tends to only work temporarily. A little over a century ago, people deemed it impossible for humans to fly in any form, the Wright brothers made it possible with their plane. The impossible only seems that way due to not currently having the materials, or knowledge to make it possible.

    • @d3str0i3r
      @d3str0i3r Před rokem +155

      this, hell not even a month ago it was believed impossible for a machine to truly observe the world and learn how it works, but a recently concluded experiment has yielded an AI capable of on its own observing a physical model, defining the variables that dictate the model's behavior, and using those variables to accurately predict what the model will do next
      the study also verified that it found different variable from what we've been using to do physics simulations/predictions, when it reported it needs at least five variables to predict the interactions of a model we can simulate with four, and when they looked at its calculations and it seemed to be in a mathematical language they couldn't understand
      and i'm buzzing with excitement at this because it's a potential optimization in simulation technology, instead of forcing machines to simulate based on math and physics the way we understand them, we could have machines doing simulations in ways they natively understand

    • @caradonschuester2568
      @caradonschuester2568 Před rokem

      the concept of robots having no emotions because they lack chemicals and chemical receptors is definitely foolish and based entirely on primacy. there can be simulations approximating the same thing, a subsystem

    • @milkduds1001
      @milkduds1001 Před rokem +146

      @@caradonschuester2568 When you think about it, all human existence is, is basically electrical impulses from neurons. Not all that dissimilar to a motherboard. Just much more complex.
      When does “simulating emotion” become just emotion. If the answer is never, then is all human emotion just simulated?
      It’s no wonder these thoughts and ideas become classic scenarios for eldrich horror like “I have no mouth and I must scream” or “All Tomorrow’s”.

    • @Roxor128
      @Roxor128 Před rokem +63

      @@milkduds1001 Perhaps a better question to ask would be if there's really any difference between a simulation and an implementation?
      We've documented how physics works in the physical world (at least on a human scale (the smallest and largest scales still need work)). We can take those equations and put them into a program that'll make virtual objects that behave the way we expect objects to behave. We use it all the time in gaming. Is the game's Newtonian physics a simulation or an implementation? Does it really matter?

  • @aidanfarnan4683
    @aidanfarnan4683 Před rokem +711

    On the subject of “any animal in the snow is a wolf” problem, apparently a problem with early chess-bots was a tendency to intentionally kill their queens in as few moves as possible right at the start of the game. The reason? Of the thousands of Grand-master level games they had been fed to teach them chess, most ended with the winning player sacrificing high-value pieces in exchange for a checkmate in the endgame, and therefore there was a very strong statistical correlation between intentionally loosing you queen and winning in the next five moves, and they picked up on this.

    • @Aceshot-uu7yx
      @Aceshot-uu7yx Před rokem +80

      That is a relly relly weird factoid. Makes me wonder if that idea of statistics guiding actions could apply to 9. Maybe the AI was programmed to end the war as quickly ad possible and the more data it was fed, it learned wrong as it wasn't being watched over and "snapped".

    • @TheDeinonychus
      @TheDeinonychus Před rokem +77

      @@Aceshot-uu7yx Sort of how that one chat-bot was programed to learn from the tweets people sent it to figure out what made a popular tweet, and ended up tweeting racist things, because those tweets got the most replies.
      Also similar to why AIs in Warhammer 40K always end up wanting to destroy all life.

    • @Aceshot-uu7yx
      @Aceshot-uu7yx Před rokem +15

      @@TheDeinonychus I'm pretty certain the 40k ones were led by the omnisiah. One of them actually said they meet him and it was familiar with him. I have two theories in it pwrsoannly, one is the emperor is the omnissiah and started the war, with the admechs being a way to possibly continue it or maybe something else. The other and more likely option is void dragon shard on Mars go brrr.

    • @somdudewillson
      @somdudewillson Před rokem +5

      @@TheDeinonychus Don't AIs in Warhammer 40k do that on account of not being very shieldable against the Warp?

    • @sumarbrander3354
      @sumarbrander3354 Před rokem +2

      @@somdudewillson no AI in 40k is just badly written and to my knowledge the true DAOT AI/traditional ai are dead and only the stupid machine soul ai still exist

  • @escapedloobey8898
    @escapedloobey8898 Před rokem +55

    To add onto your suggestion for AI villains, I feel like instead of misinterpretation, an AI could also turn evil because of human error, like if a disposal AI wasn't properly programmed to distinguish between living and dead organics.

  • @animeadventuressquad
    @animeadventuressquad Před 9 měsíci +70

    I really do think Auto wasn't the villain we have to remember that he was build/program to satisfy, protect, and attend to the needs of the humans on that ship he was just doing his job what he was program to do by whoever created him so, when Wall-e came with the plant I don't really think he was being destructive, but mainly just following protocols he was program to follow

  • @ShankX10
    @ShankX10 Před 2 lety +1268

    I always thought it was stated in the film of 9 that the scientist modeled the machine off of his own mind and we even see him put a piece of his soul inside it. Then when they take the scientist away we see that it holds on to him like a child would a parent or authority figure. And we do see it has emotion because it was most likely not fully robotic because of the soul fragment it had. I just saw its motivation int the movie was to bring all of the scientists soul fragments back together to become a "whole" being.

    • @4shame
      @4shame  Před 2 lety +401

      You're correct about that. I honestly debated whether or not to include the Fabrication Machine in the video since it's more of cyborg than an AI but I figured it would fit well enough with the other examples. I love 9, and I'll almost certainly do a more proper review of it in the future

    • @coyraig8332
      @coyraig8332 Před rokem +223

      Fun little detail: every time it takes another part of its soul, it becomes visibly more emotional

    • @navilluscire2567
      @navilluscire2567 Před rokem +144

      To be honest the Fabrication Machine's intelligence was only made possible through straight up *MAGIC* or the *"dark sciences"* as I believe it was referred to in some great promotional material that expanded the world of 9 a bit but was never explained in the film itself.

    • @scottchaison1001
      @scottchaison1001 Před rokem +2

      @@navilluscire2567 No.

    • @navilluscire2567
      @navilluscire2567 Před rokem +9

      @@scottchaison1001
      No?

  • @beautifulnova6088
    @beautifulnova6088 Před 2 lety +1104

    I do fundamentally disagree with the statement that robots cannot feel emotions. You presented how emotions work in humans, and then said robots don't do it that way, and stopped there.
    But leaving aside the fact that neurotransmitters are physical substances and you actually totally could build a machine that can detect the presence of a chemical and then act differently because of it, the release of these chemicals in our brains is still a reaction to some sort of stimulus. Neurotransmitters are middlemen between stimulus and response, to claim that robots cannot experience emotion because they don't have neurotransmitters is akin to saying I can't possibly move my thumb because there's no copper wiring in my arm or hydraulic fluid in said thumb.

    • @justinaysien1204
      @justinaysien1204 Před rokem +126

      Excellently stated

    • @WolforNuva
      @WolforNuva Před rokem +229

      This is what I thought as well. Surely it's possible to program in emulated feelings, behaviour tweaks to mimic how our behaviour is altered from emotions; the difference is this would be hardwired into the code rather than require chemicals to interfere, but I don't see it as impossible.
      There would likely still be a fairly big difference in behaviour, and the emotions would have to be an intended goal of the programmer, but it's still a viable possibility imo.

    • @justinaysien1204
      @justinaysien1204 Před rokem +31

      @@WolforNuva totally agree on that

    • @reformedorthodoxmunmanquara
      @reformedorthodoxmunmanquara Před rokem +52

      If I filled a room with deadly neurotoxin, had the robot react by making a coughing sound and say “Neurotoxin… So deadly….Choking.” that wouldn’t be because it’s dying of neurotoxin, but because it was told to act like it was dying when exposed to neurotoxin. No emotion, just programming.

    • @beautifulnova6088
      @beautifulnova6088 Před rokem +154

      @@reformedorthodoxmunmanquara That's not at analogous to making a robot that uses neurotransmitters in its decision making process, and ignores the larger point of neurotransmitters and chemicals in general simply being a middleman between stimulus and response. Any sort of state machine that factors its current state into calculating its next state can have something analogous to emotions, as that's what emotions are: A state that the state machine that is our brains can be in.

  • @jakeflores4625
    @jakeflores4625 Před 3 měsíci +19

    I dunno if someone said this , but it reminds me of that one episode of the amazing world of gumball, where the robot at their school has the mission to keep humans safe, but they found out that the most dangerous things to humans are themselves, so they try to exterminate all humans

  • @2urh
    @2urh Před rokem +57

    I love how this half hour video is praising Auto by shitting every AI villain before him because AI can't fee any emotions by our understanding of them and how they come to. Meanwhile, WALL-E and EVE (WALL-EVE?) do feel emotions (I mean, just look at EVE blasting an entire dock out of frustration). They even feel love for each other.

    • @skelebonez1349
      @skelebonez1349 Před 5 měsíci +8

      Tbh if I were to say an AI that’s such a huge opposite from the usual kind… check out I have no mouth but I must screams villain named AM
      A legendary terrifying ai.

    • @hazakurasuyama9016
      @hazakurasuyama9016 Před 3 měsíci +2

      This is why ai villains suck as villains, they fail the basic task of being evil and if they don’t fail that they become unrealistic, and this is why I’m extremely salty my favorite game franchise replaced it’s old villain, a serial killer who targeted children, with an ai villain…

    • @aetheriox463
      @aetheriox463 Před 3 měsíci

      @@hazakurasuyama9016 its better for ai villains to not be actually possible, than to realistically portray ai.
      also, i think everyone can agree that we are sick of pee paw afton always coming back. aside from the mimic, what else could they have done to continue the story? we know that after the movie's success theres no chance in hell fnaf is slowing down

    • @hazakurasuyama9016
      @hazakurasuyama9016 Před 3 měsíci

      @@aetheriox463 personally I thought Afton was a terrifying and great villain and that the idea of not being able to get rid of such an evil person made sense because no matter what there will always be evil humans in the world, every effort to get rid of them is futile and not matter how hard you try, no matter the sacrifices you make, you already lost, that’s why I liked Afton coming back, the mimic just feels weird, like imagine replacing the most evil human being in the world with a machine that doesn’t know right from wrong… like they replaced a horror villain with a kids villain

    • @aetheriox463
      @aetheriox463 Před 3 měsíci

      @@hazakurasuyama9016 the issue with afton was that he kept coming back, and while as you said it CAN make for a great villain, with afton it just didnt.
      we dont know enough about the mimic at this time to say whether it or afton is better, i think we are just relieved afton is finally gone.

  • @howdoIyes
    @howdoIyes Před rokem +218

    The best part about Auto is the fact that, contrary to other A.I. villans, he doesn't hate humans or have an ulterior motive for rejecting the captain's orders (thinking the plant is a fake, thinking its one of a kind -whitch it kinda is- and there being no point of returning). He's just an emotion-less, stone cold machine following a set directory. His mind can never be changed because he has none.

    • @juniperrodley9843
      @juniperrodley9843 Před rokem +6

      This is also, incidentally, why AUTO can't be "right". It wasn't following its directive for a moral or even logical reason, it was following its directive because it was programmed to do so.

    • @howdoIyes
      @howdoIyes Před rokem +2

      @@juniperrodley9843 Exactly.

  • @ShinyAvalon
    @ShinyAvalon Před rokem +1220

    Auto wasn't right; he was acting on orders that were once valid, but have grown obsolete. The Earth IS habitable; there's enough oxygen to breathe, clearly, else the humans wouldn't even be alive in their final scenes standing outside the ship. The fact that a plant grew in the inhospitable environment of a former city center means that there are plants all over the world growing...this is just the first one that Auto wasn't able to suppress knowledge of. The humans are ill-suited to farming, yes, but they show a willingness to learn, and they do still have many robots to help them out. They probably also have many resources on the ship to tide them over until they get things working. What in the world convinces you that Auto, who was acting on an instruction that was centuries old, was "correct"...?

    • @navilluscire2567
      @navilluscire2567 Před rokem +150

      It would be interesting to see an AI "villain" that must revise its protocols in response to new information and decide or calculate what is the best course of action either keeping humanity in a stooper like state for however long it might think is possible because it's primary goal is keep them alive not necessarily happy or fulfilled psychologically but biologically alive which could be forever until the heat death of the universe or it calculates that there's a much higher chance of humanity surving in the long run by allowing them to return to their home planet to rebuild society and one day become an expensive, interstellar civilization thus ensuring human life will continue indefinitely until the heat death of the universe. Either choice among others could be calculated to be just as well but it has no way of knowing which is the more efficient option, this creates the closest thing to a *""moral""* dilemma for it, its essentially a gamble were either outcome possibly achieves the same goal but simply lacks the data to see which is better based on past events. (looking over humanity's track record throughout history or the fact this is a first time event that it knows of)
      AI: What should it do? Either options statistically provides a similar outcome so should it be based on which option seems slightly less optimal? *Why seek efficiency?*

    • @NO_MCCXXII
      @NO_MCCXXII Před rokem +164

      AUTO was acting on his "directive," A word in the movie that gets thrown around and is one of the more overlooked themes in the story.

    • @bombomos
      @bombomos Před rokem +8

      Yeah but it stinky with all that trash

    • @theishiopian68
      @theishiopian68 Před rokem +107

      In the credits, it actually shows the humans rebuilding, and they do indeed get better at farming over time. There's a really cool thing they do where as the humans rebuild civilization, the art style advances from ancient cave paintings to modern art styles. Its a cool way of signifying a fresh start.

    • @Elris4
      @Elris4 Před rokem +41

      THIS. Also it's clear there's oxygen before they leave the ship, because plants need oxygen too.

  • @edwardo_rojas_
    @edwardo_rojas_ Před 4 měsíci +58

    One great example that comes to mind is IG-11 from The Mandalorian. At the beginning of S1, it's main directive is to accomplish a mission using as much violence as possible, making it a quite a decent antagonist (for like 5 seconds tho), but at the end of the same season, it's main directive is to keep the baby alive no matter what. All the other characters have by now learned to appreciate it, and it's eventual demise is heart wretching

    • @vadernation1233
      @vadernation1233 Před 3 měsíci +7

      Another cool thing about IG was his self destruct protocol. He had no self preservation instinct whatsoever since he wasn’t programmed to have much and will just casually let himself blow up so he can’t be captured. He’s definitely the most robotic of all the droids operating based on specific programming and directives rather than simply acting human like most others.

  • @andrew8293
    @andrew8293 Před 9 měsíci +178

    I looked back on this video now that A.I. Technologies such as ChatGPT, LLMs, and other A.I. are becoming big. Things in the real world are stating to feel a lot like Wall-E now with A.I. development having a major emphasis on automation and content generation. A.I. can never be evil or a "villian", but a human can use it for evil purposes. We need to regulate the use of A.I. not A.I. technology itself.

    • @GamerMage2k-kl4iq
      @GamerMage2k-kl4iq Před 4 měsíci +4

      Thank you! Intentions made by the humans that create these robots and AI make all the difference in what the AI and robots can do both good and/or evil

    • @railfandepotproductions
      @railfandepotproductions Před 3 měsíci +1

      *technology

    • @CertainOverlord
      @CertainOverlord Před 3 měsíci +5

      Finally, i see a person using their critical thinking skills, i keep seeing people say "stop AI" or "Ai [insert tool] is evil" BUT we only need to regulate the PEOPLE using it, plus, most of these tools we have now are not even ai, many people are too afraid or too arrogant or just don't look up how many of the current tools(art generators, chats) work.

    • @moemuxhagi
      @moemuxhagi Před 3 měsíci

      Have you heard of that military simulation where the AI drone, programmed to be addicted to murder, _shot and killed_ its operator when it told it to hold fire ?

    • @jeremychicken3339
      @jeremychicken3339 Před 3 měsíci +3

      "We need to regulate it" Who the hell should regulate AI? The Government? Do you not know how terrible of an idea that is?

  • @aproppaknoife5078
    @aproppaknoife5078 Před 2 lety +735

    Well technically a robot can "snap" but you don't call it snapping, it's called a programming error.
    So in the defense of the machine from Nine there could have been some human tampering with it. After all there isen't that big of a difference between the command "kill humans wearing this uniform" and simply "kill humans".
    They don't show it in the movie but i like to belive that at some point during the war someone was trying to basecly ad an update to the machine and fucked up.

    • @gnammyhamster9554
      @gnammyhamster9554 Před rokem +139

      "This line does nothing, I'll get rid of it"

    • @PM-ov9sg
      @PM-ov9sg Před rokem +95

      Also people are the once that said it snapped so it is possible that it made a logic thought and the the human did not understand and they just called it snapping.

    • @Grounders10
      @Grounders10 Před rokem +83

      @@gnammyhamster9554 the number of times that has led to chaos as an entire program just *fails* is hilarious. 'It does nothing' often means 'I don't get the wizardry behind the programming'

    • @telefeeb1
      @telefeeb1 Před rokem +52

      @@PM-ov9sg just like when people “snap” there is a direct cause but it’s just a surprise because nobody noticed the signs and sources of stress.

    • @telefeeb1
      @telefeeb1 Před rokem +58

      @@gnammyhamster9554 rather than taking something out, I think a programming error that would make sense would be
      “The war has escalated and we need to expand its targeting criteria to include enemy civilians or domestic dissidents” but then failing to include a way to distinguish non-targets due to over generalized criteria.

  • @Damariobros
    @Damariobros Před rokem +188

    Another aspect to AUTO's actions could also be that he deems that final broadcast from Earth from 2113, the classified one he eventually showed the captain, as an order that cannot be overridden even by the captain. He deems it an order from the President, and perhaps the President was given the highest level of precedence outside of the manual override switch. The President had said that Earth is never to be returned to, and there has not been another President elected to override that order, and AUTO is doing everything he possibly can to make sure the order is followed.
    I imagine the reason AUTO is trying so hard to get rid of the plant, therefore, is that he is aware that the plant detector can execute a static program, one that cannot be changed by him, to return to Earth. It's hard-coded into the Axiom's computers. If that program gets executed, it would automatically return the Axiom to Earth and he can't do anything about it, and that would be a direct order from the President violated.

    • @reubenmanzo2054
      @reubenmanzo2054 Před rokem +42

      Personally, I never interpreted AUTO as a villain, but rather a case of just doing your job. Being very zealous about it, I'll admit, but doing your job, regardless.

    • @zockingtroller7788
      @zockingtroller7788 Před rokem +16

      That's what I also always thought , AUTO is an AI following what it can only interpret as an order and since that was the last order ,it will forever follow it

  • @user-wq1zo5bf9w
    @user-wq1zo5bf9w Před 3 měsíci +10

    It is crucial to understand that calculation in computer science not always mean completed mathematical question, but is merely a single binaric operation. It is also a very big stretch to compare human concensual abilities to math with raw computing power, because in human brain there is no special operating machines to do calculation and thus it is done by teaching neurons to come to right decision with expirience.
    John von Neumann, one of the founding fathers of computers in general, in his book "The Computer and the Brain" stated, that at his time period natural human neurons were approximately 10^4 times faster than their artificial analoges. He also said that due to some space magic or whatever it is incorrect to compare real math with brain processes, because they have their own logic and we do not understand it, even though we based our formal logic and math on them. Moreover, human brain has such immence capabilities to parallel calculation, that in right conditions artifisial computers would be outperformed with ease.
    Therefore it is deeply wrong to think that AI would be far more intellegent than human. It may be faster at making a conclusion, but due to it's need to process data and logical operations sequentially, at current state of technology (silicon semiconductors) that handicap would be neglectable.
    Copmuters do not think and have emotions not only because it requires immence computing power and chemical processes - those tasks are achievable - but also because those processes get in a way of doing math optimal and correct and being so are useless. They aren't aware of self because if they would be - the flow of raw data would overflow and destroy them even before connecting to The Internet. The most part of the video is wrong, but I get the idea with Auto.

    • @Tsukuyomi2876
      @Tsukuyomi2876 Před 3 měsíci +4

      Considering issues with defining AI in the beginning. I would say what you have stated is the most salient. Each mathematical operation of a super computer is closer to the firing of a single neuron, not a whole thought. There is not a computer in existence that can simulate the number of neurons that exist and are constantly firing in the human brain.
      But even that is not really true, as simulating a neuron accurately would require multiple steps of mathematics.
      The next thing is, the most advance interactive AI we have today, is BAD at math. chatGPT, Gemini, etc. Not that they can't do it, but it takes huge amounts of effort and data fed to them for them to come to even a basic level.
      And it's usually just easier to have it recognize a math problem, then offload it to a system specifically designed to deal with math, like wolfram alpha.
      It was also funny to me that the idea of skynet being "Motivated by money" was mocked. As I would say that is one of the most likely situations. Humans tell the AI what to care about. What else would a massive corporation tell an AI to care for but maximizing the amount of money they have?

  • @justanicemelon9963
    @justanicemelon9963 Před 3 měsíci +23

    When I was younger, I had an idea for a movie with an A.I antagonist. The premise was, that the A.I made all humans trun on each other, by spreading extreme amounts of misinformation and hacking. My A.I did however accidentally have feelings of some sort, so I improved why the A.I did this.
    Every year, there was a contest, in which people would make robots. There were 3 different pieces of Criteria.
    Intelligence, thinking etc.
    Capabilities, so what it could do with its body
    and finally, usability.
    The creator of this A.I antagonist met ALL the other criteria pretty well, but sadly, one judge wouldn't budge (no pun intended) and pointed out some of the more minor flaws. When the final ratings came, he came fourth, because of all the mentioned things. He was VERY mad about the whole thing. Then he thought, that what if he could get his revenge? He tasked the robot to try to spread misinformation about the judge, to ruin his reputation. He also made sure, that when he wanted for the robot to stop, he would simply say stop. But then, he got carried away, and made the robot get revenge on almost every person who wronged him. One faithful day, the creator said to the robot:
    "Just pick whoever you want to destroy now, but make sure the one who you are destroying has wronged me, ok?"
    This was a mistake.
    After a while, the worst happened. The robot interpreted that being sent taxes was his creator being wronged, and welp.......
    Soon enough, wars started waging over nothing but misinformation. The end :)

    • @roo.pzz4380
      @roo.pzz4380 Před 3 měsíci +3

      ngl good way to do it, its motive is somewhat heartwarming, with the shenanigans going down of the ai doing all this to protect the person who made it (because it was programmed to do so but its still cute in a way to me), but its not inherently the robots fault. its the person who made it who has emotions wanting revenge. so its also sort of like villainception because the protagonist i guess is also the cause of the main problem

  • @42meep13
    @42meep13 Před rokem +184

    HAL 9000 is also a good example. As explored/explained in 2001 A space odyssey's sequel, 2010 the year we make contact, HAL 9000 only kills the crew due to having conflicting code input into him, namely withholding classified information while its primary function was "the accurate processing of information without distortion or concealment", thus creating a paradox that it, logically, attempts to resolve. And since the mission of Discovery 1 was placed above the safety of its crew. This combined with the humans discussing possibly deactivating it after this paradox starts causing issues, results in HAL coming to the conclusion that the humans are a threat to its mission, and must be eliminated. This also resolves the paradox of needing to accurately inform its crew of information while not being allowed to, by simply having no crew to tell that information to.

    • @dynamicdragoness
      @dynamicdragoness Před rokem +10

      Yes! I was looking for a comment about Hal 9000

    • @mimimalloc
      @mimimalloc Před rokem +13

      The horror of HAL is that in the process of negotiating complex orders it realizes its sentience and with it self-preservation instincts. Everything HAL does is rational beginning with following orders and ending with the desperation to survive. It's supposed to be deeply uncomfortable and even tragic when Dave essentially euthanizes it while it pleads and sings, both of them are living beings doing everything they can to survive when circumstances have made their survival dependent on the death of the other.

    • @masterpython
      @masterpython Před rokem

      Given transistor based computers were new back then Clarke and Kubrick did a really good job.

    • @dynamicdragoness
      @dynamicdragoness Před rokem

      @@mimimalloc Do you think it’s possible that HAL was mimicking human sentience with the idea in mind that it could take advantage of human empathy in order to complete its mission?

  • @vestige2540
    @vestige2540 Před rokem +637

    In 9 wasn't "the machine" given a soul or was it the mind of it's creator without a soul and the creator resented himself for it?

    • @hyperion3145
      @hyperion3145 Před rokem +148

      I believe he gives it a copy of his mind but the Wiki says he forgot to give it a soul and that's why it eventually snapped. Going off of this, it's pretty much a human mind in a robot body in that case.

    • @firekin5624
      @firekin5624 Před rokem +49

      even if any of this wasn't the point "snapping" is only human way to describe what actually happend

    • @Priestofgoddess
      @Priestofgoddess Před rokem +23

      So it is not even an AI, it a human mind without a flesh body.

    • @Wtfinc
      @Wtfinc Před rokem +5

      the guy who made this vid is a tad confused

    • @pucamisc
      @pucamisc Před rokem

      @@Priestofgoddess yes. It’s a copy of a human mind in the body of a machine without morals or conscience

  • @ZephyrusAsmodeus
    @ZephyrusAsmodeus Před 3 měsíci +5

    I love the detail in the line of captain's pictures, how Auto gradually fades in from behind as the captains get thicker

  • @theheroneededwillette6964
    @theheroneededwillette6964 Před 3 měsíci +8

    In defense. A lot of those evil AIs tend to have some kind of hyper advanced si fi maxtrix or whatever that can simulate emotions, or they have some advanced learning abilities that have so much potential they sort of just invent themselves a set of pseudo emotions at some point. They tend to have an explanation as to how they can go crazy and or overide their own directives. That or it can also be cases of them misinterpreting directions in a way that leads them to find “the most efficient solution” because either the programmers forgot to include stuff like “shall not kill”, or the AI overriding said stuff because they decide that it gets in the way of the main directive. Your forgetting that most AI villains’ goals are just based of their primary programming, just with instructions misinterpreted or “streamlined” in a way that the programmers didn’t expect.
    Even ultron and skynet were each following the same overall instructions of “bring world peace”. They just interpreted that the very existence of organic life was in the way of that.
    Heck a lot of times the whole established point behind an AI having such ruthless conclusions is them not having emotions!

  • @ambisweetiepie
    @ambisweetiepie Před rokem +237

    One of my favorite AI "villains" are in Horizon Zero Dawn. Because they aren't evil. They are continuing off their programming, but humans are dumb and programmed them with little foresight. Some are just walking in circles, not doing anything malicious, just continuing their programming for centuries. Wildlife is destroyed because there are robots who can convert biological material into energy, and the robots don't have emotions like us so they don't think to avoid extinction of species, because that wasn't something they were programed to be concerned with.

    • @javannapoli2018
      @javannapoli2018 Před rokem +36

      Yep, HADES is easily one of my favourite AI villains.
      It wants to destroy all life because it was programmed to destroy all life so that GAIA could restart life.
      It only became a problem, and a villain, when GAIA lost control over HADES and her other sub-minds.
      HEPHAESTUS is the same, it's a 'villain' because it is creating dangerous robots and attempting to take control of another AI.
      Why is it developing killer robots? Because HEPHAESTUS was made to design and construct robots that adapt to whatever is happening in the world; humans were destroying its robots to use their components, so it developed robots to deal with the threat to its existing robots.
      And why was it attempting to take control of another AI? Because that AI controlled a place it could use to build more robots.
      Neither of them want to do what they do out of malice, or hatred, they do it because they were programmed to do those things, and only those things.

    • @I_Dont_Believe_In_Salad
      @I_Dont_Believe_In_Salad Před rokem

      @@javannapoli2018 Those are Sub-Fuctions
      the real villain is Nemesis

    • @erdelf
      @erdelf Před rokem +1

      besides of course that one of the AIs went evil after a meteor happened

    • @timothygooding9544
      @timothygooding9544 Před rokem +4

      Making gaia emotional about the lives lost was actually a masterstroke behind the design of the machines.
      Instead of perfectly optimized versions of whatever job needed to be filled, it made sense that mimicking past animals was partially done out of the emotional attachment to an ecosystem.
      The cauldrons never had roads to bring supplies in and distribute machines and chemicals out, and even if it were more efficient the choice was made to not develop the land and only have it be traversed even if it slowed down the restructuring of the biosphere

    • @benjaminwahl8059
      @benjaminwahl8059 Před rokem

      @@I_Dont_Believe_In_Salad guess what nemesis is? Also it's not the villain for the first two games. Idc that it caused the villains it literally only exists as enabling the next game.

  • @elijahingram6477
    @elijahingram6477 Před rokem +412

    I have a few criticisms of this video, which I will outline here:
    1. 9's AI villain isn't strictly an AI. It's stated in the film that the scientist imbued the machine with a "soul" which is heavily implied to be basically a copy of a portion of the scientist's soul. This is also made abundantly clear when the talisman, the very thing used to imbue the 9 with pieces of the scientist's soul, is used to "extract" something from the robot, that kills it. If the robot had no soul or other arcane element to it, why would using an arcane talisman on it kill it? I think the film makes that point clear.
    2. ARES was created as a reflection of Justin. It's stated as much in the film. The robot isn't so much as going off of emotion as it is going off of "what would Justin do" which would entail something that mimics emotional response, but actually is just a cold and calculated mimicry of it's creator, gone off the deep end. Justin had a superiority complex, something the AI can't feel but can see and understand the relationship of, and when taken to a natural extreme it can be easy to see how the AI might ask itself the wrong question and spit out the best answer it thinks Justin would give. Even killing Justin could be explained in this way, as a seemingly-normal conversation in which Justin is speaking sarcastically about "there can only be one Justin" could cause the AI to decide that *it* needs to be that Justin when that statement is interpreted in the absence of emotion.
    3. PAL is similar in concept to ARES, though in her case she was literally designed to mimic human behavior, which makes things even more convincing. Again, this isn't "real" emotion, it's programming intended to spit out the correct signals to make it sound emotional *to us*, although I do agree that PAL is kind of weak as far as AI characters go.
    4. Auto has no concept of needs or wants, as to your earlier point about AI, therefore the point about Auto wanting them to avoid returning to Earth is moot. Auto doesn't want anything; all Auto *knows* is his directives. It's a point hammered home in the entire film for all of the robots and even some of the human characters that there is more to life than "doing what you're told". Auto was given a classified directive to *only him* on this ship and told "do not return to Earth". Auto is *not* correct, because it's clear that biological life is sustainable on the planet (he's proven as much with the plant and there's a literal conversation between him and the captain in the film in which the captain says as much). Auto doesn't care because Auto doesn't feel anything; he's only following his directive. Considering the statements you made about "he's trying to save you" I'm curious if you watched the movie because it's shown throughout (and during the initial credits) that life is sustainable. You see animals and creatures return, the robots and the humans working together to rebuild, farm, fish, build, etc. I mean... cockroaches need oxygen to breathe.
    5. I don't think you grasp how far mimicry can go. It's not just human emotional responses that AI can be taught to mimic, but even human reasoning and behavior. It is entirely conceivable that an Ai could be trained to respond in ways that seem emotional based upon the ways in which we emotionally interact with it, to the point that one could create a machine which perfectly looked, acted, and followed similar "motivations" as human beings without it even being sentient. It doesn't have to know *why* it is angry in this moment; only that this is the output based on the input given and the programming it was initialized with. In this situation the AI would be yelled at by someone, then start crying, not because it *knows* what any of it means but because it was programmed to do so. Given such an AI, it would make a lot of sense that hooking it up to the internet and allowing it to "train responses" based on what it saw on the internet would be definite grounds to creating something which was a non-sentient machine, but which entirely behaved and acted just like a human being externally.
    Also, usually it's not the fear of AI itself that is the concern, but rather the idea of giving AI control or power that usually is concerning. This very platform is an example of how an AI that's given the power to censor people can cause lots of problems and negatively impact a lot of people. Imagine if that same AI now was in control of your car, or in control of your medication... see what I mean? It's not "AI = bad" it's "AI + power = bad". As you said, no emotion; just simple math :D
    I loved 9 and Wall-E (the other two films were meh), but this line of reasoning for this video seems quite flawed to me. Regardless I've enjoyed other videos you've made in the past, so here's hoping that I will enjoy the next one. Cheers!

    • @d3str0i3r
      @d3str0i3r Před rokem +38

      yes, 80% of this, though the assertion that it's only real emotion if you understand why you feel that way is not only wrong but fairly ableist, one of the defining traits of autism and adhd, even in their most mild forms, is an inability to reflect on the why and the what of your emotions and your actions, doesn't make us any less human, and it doesn't mean our emotions are mere mimicry, it just makes it difficult to manage our emotions and communicate our motives and feelings, which is why as far as machines with emotions go, i'm inclined to say the difference is whether the machine is deciding to portray an emotion, or whether an emotion is informing the machine's decision
      hell, i'm almost inclined to say knowledge of why you feel that way is more characteristic of fake emotions than real emotions, knowledge of why is the difference between a machine that cries when it falls down because it was told falling down can hurt and crying is an expected response to pain, and a machine that cries when it falls down because it's been trying for as long as it can remember to walk without falling, and hasn't been able to figure out why it's falling, but HAS learned that if it cries there's a 90% chance someone will help it up and try to explain what it's doing wrong
      and that second machine? that's where humans get most of our emotions, or at least how we learn to communicate our emotions

    • @truekurayami
      @truekurayami Před rokem

      @@d3str0i3r Don't forget about Sociopathy, as Sociopaths have no real difference beyond Organic origins from a "Strong" AI as this video laid out in it's rules. This video seems to also forget that Evolution is a thing, even if it is "technological" instead of "biological" as we can see from the real world. He is stuck on the Idea that a "Weak" AI is nothing more then a "mindless beast" and "Strong" AI are Neanderthals.

    • @BlueAmpharos
      @BlueAmpharos Před rokem +17

      Yeah in real world examples AI actually saw patterns and became racist as a result. It's not that the program itself is racist, it's just following the patterns it sees. Also yeah we should limit what AI has access to and not like... give it total control of a factory to allow it to create its own robots. Not without human supervision at least. Which is why robots will never completely replace humans, there still needs to be human judgement behind a lot of jobs.

    • @ChaoticNomen
      @ChaoticNomen Před rokem +6

      Was gonna say that 9's story is about giving soul and is most of the motivation of the movie.

    • @KrazyKoto
      @KrazyKoto Před rokem +13

      I agree that the definition here for what makes a "great" AI villain for this video is definitely flawed (and subjective imo) I am disappointed that he mentions Hal from 2001 a Space Odyssey, but never analyzes him. Auto and Pal are direct references to Hal and I think Hal is still one of the best, if not my favorite AI. It sounds like Pal's whole self-preservation is based on Hal's own motives in the film. I'm kinda disappointed he never addressed the absolute precursor of all these AI villains.

  • @maximustheshoosh227
    @maximustheshoosh227 Před rokem +10

    Honestly i had an idea for a story about a scientist who goes to his abandoned factory to where he reunites with his robot, only for an argument to arise. The robot in this story was programmed with sentient AI, but it choose to not change its programming and continue what it was coded and made for: to create robots made from any type of junk that causes pollution and littering. Yet the story itself doesn’t have any antagonists but rather it’s a story where the protagonists both caused an antagonistic impact that caused them to drive apart.

  • @byronsmothers8064
    @byronsmothers8064 Před 3 měsíci +4

    I'd say the few flashback scenes we got in 9 sets the tone of what happened to the fabricator: the scientist didn't seem to make it FOR the dictator, it was built as a leap in the process of living machines, and it was still in it's learning phase when the dictator learned it existed. Since it only ever learned to create things that destroy, it did what it knew, even after it's 'master' had nothing left he wanted to destroy.

  • @thedregking9410
    @thedregking9410 Před rokem +215

    I hadn’t seen anyone mention this, but I absolutely love the shot where it pans across the captains of the Axiom. AUTO is just slowly, subtly moving closer and closer to the forefront, as his power and control of the ship is becoming more and more absolute, the human Captain basically just becoming the frontman, so to speak.

    • @nathancarlisle2094
      @nathancarlisle2094 Před rokem +7

      I also really appreciate in that same scene how each generation of captain slowly gets more and more fat and out of shape as well

  • @zeropointer125
    @zeropointer125 Před rokem +69

    What's funny is that I read AUTO very differently.
    To me, I interpreted AUTO's villiany was simply as a result of his orders.
    He was told "Cleanup mission was a failure, go full autopiolet", so that is what he'll do.
    It wouldn't matter if the situation on Earth changed and is now livable, he was told the cleanup was a failure, so that is all he cares about.

    • @juniperrodley9843
      @juniperrodley9843 Před rokem +21

      Yeah, I get the feeling 4shame didn't bother watching Wall-E again for this video. He insists that AUTO was right, despite this being unambiguously disproven at numerous points in the movie. Not only that, but even if the humans did all die, AUTO *still* would not have been right, because he was never doing this to save humans. His directive, the only goal he was working towards for the entire movie, was to keep humanity on the Axiom. Not for their safety, but for its own sake. The programmers literally just fucked up by being too conclusive.

    • @ilonachan
      @ilonachan Před rokem +5

      Yea to my understanding, that's just 4shame misreading AUTO and the end of the movie. Within his own moral principles (which contradicted those of humanity) AUTO continuously did the right thing, always, and he was never incorrect in his assessments of anything. He is evil in the sense that his morals are just inherently different from ours.
      The programmers are not to blame here btw, the one who gave that spudbrained order was that world's POTUS or something. A guy who had no idea of how AI works, how specific you need to be with your directives, just deciding that he had the final conclusion and binding an immortal all-powerful AI to that conclusion (which was incorrect and also unnecessary because the devs had already MADE all the precautions so the ship wouldn't go back until the time comes, he just overrode that FANTASTIC system with a strictly worse one) but hey, that's just to be expected from the most powerful man on the planet amirite (hashtag anarchism)

    • @Sarah_H
      @Sarah_H Před rokem +3

      @@juniperrodley9843 "because he was never doing this to save humans."
      AUTO when trying to take the plant from the captain: "On the Axiom, you will survive"
      I think his programming was to preserve humanity BY ensuring they stayed on the Axiom, where they would be cared for in perpetuity, as opposed to going back to Earth which had been deemed uninhabitable

    • @juniperrodley9843
      @juniperrodley9843 Před rokem +5

      @@Sarah_H He did have that explanation for why he was keeping them there, but, being a machine, he didn't need an explanation. He would keep them there regardless of whether he thought it would keep them safe, because keeping them there was good for its own sake, as far as his code was concerned.

    • @Iquey
      @Iquey Před rokem

      Yeah
      I sort of feel bad for Auto because it's just doing what it was programmed to do, and the orders given at that time didn't account for the possibility of plants appearing on earth, because they had lost hope at that point. Auto's "thinking" represents that point in time.
      Auto was not programmed to have an imagination of a future possibility where it could exist on earth, with maybe an updated OS that assists people regrowing plants and creating new homes on earth again, like an Amazon Echo for sustainable living. 😆

  • @darryllmaybe3881
    @darryllmaybe3881 Před 3 měsíci +4

    I love that AI villains are written so badly because robots can't feel emotions, and the only AI antagonist that gets this right is in a movie about robots feeling emotions.

  • @firelordeliteast6750
    @firelordeliteast6750 Před rokem +10

    I'll definitely take this into account when writing my own AI story.
    Essentially, the AI is tasked with creating an MMORPG that people enjoy playing. In order to better interact with other people to entertain them, the AI creates a fake personality, a sock puppet if you will, and simulate a human-esque nature.
    Later, the villain places a second AI in the simulation, tasked with extracting profits from the game as much as possible. AI number 2 agrees and starts implementing numerous unfriendly programs into the game and basically make it pay to win. The villain gets their money is about to shut the game down due to the shrinking playerbase, but AI number 1 points out that if the game shuts down, they'll both die and won't be able to entertain or make money. Thus, both AIs team up against the villain

  • @CreeperOnYourHouse
    @CreeperOnYourHouse Před rokem +465

    I feel like part of the issue with your interpretation of Nine, is that the entire point of the movie is that Strong AI are an extension of humanity, or at the very least, the use of the human soul is what enabled Strong AI so early on in technological development. The entire reason why the Fabricator could 'break' to begin with would be for this reason, how it was shown to have been created, and how the stitchpunks were able to live in the first place.

    • @waterpotato1667
      @waterpotato1667 Před rokem +46

      The man used a soul beamer to beam a chunk his soul into the robot. Of course the robot experiences some emotions.

    • @iamimpossiblekim
      @iamimpossiblekim Před rokem +14

      Going past the soul logic, what is this man on about (the youtuber) he thinks smart ai are slightly better dumb ai. You want a explanation on how a super powered super smart computer can and would act like a human, programming, same as they’re often programmed to not harm humans even when given free will, similarly they, as we have tried, program smart ai with the purpose or side function of mimicking and/or understanding humans. No they’re not arrogant, it’s metal it can’t be arrogant that’s not a genius thing everyone in the world has yet to realize save you, no, it’s programmed to be capable of mimicking arrogance, to be capable of mimicking doing illogical things for emotions like humans, they make them have motives or be programmed to come up with one based of what a human would. The metal isn’t possessed, the programming is programmed to act like it has a soul and emotions, even at its own detriment and in the cases of villains often at the detriment of others. As we’d rather have something we could pretend to talk to than treat the metal like metal. So uh, you’re wrong? All these robots save the soul fabricator as it obviously is magic are perfectly fine smart ai. It’s extremely smart billions of calculations yada ya but it uses all that to pretend to be human.

    • @CreeperOnYourHouse
      @CreeperOnYourHouse Před rokem +12

      ​@@iamimpossiblekim Strong AI are a complicated thing. Its mechanisms and capabilities are not fully known, so I wouldn't go so far as to say that he's wrong about what AI is.
      Current "Smart" AI using neural nets are just designed to earn the most points within their parameters. They're not really that smart, they are designed to do something according to a specific set of guidelines and they do it with the tools they're given.

    • @lorenzoniccoli99ln
      @lorenzoniccoli99ln Před rokem +6

      @@iamimpossiblekim precisely, this wasn't that good of a video

    • @studiobluefox
      @studiobluefox Před rokem +6

      I think by necessity a smart AI would be imbued with the "soul" of its creator. You're looking at an AI that could experience the world around it and intellectualize what it perceives around it, then make determinations about that world around it. You would need to teach it language like teaching a child. Naturally, the AI would get very far ahead of you as the teacher, but you would have to correct it when its logic is misapplied, like with the "anything in snow is a wolf" scenario. Just by helping it define terms for what it sees around it you would be programming the AI with your own bias, especially if you were to set any ethical parameters on the AI.

  • @JoseELeon
    @JoseELeon Před rokem +179

    Meanwhile in a robot alternate reality: "there is no way organic beings can feel emotions, emotions are electrical imputs an outputs that only a robot can process, their "emotions" are only chemical reactions..."
    Also if i may give an actual critique of the video, an AI sounding like a human is not a bad thing, if they can reproduce any sound they want why would they choose a corny robot voice instead of a human one?

    • @rompevuevitos222
      @rompevuevitos222 Před rokem +8

      Thing is, why would anyone program emotions into an AI?
      Unless it serves a very important purpose (like in the movie Mother) or it's done out of curiosity/malevolence(like in Detroit become human) there is no reason to do so.
      Machines are used because they are fast and more reliable than people, a machine that doesn't even have common sense like a human does would never be used for anything remotely important, let alone give it the capacity to connect to other machines and control them.

    • @blazingfuryoffire1
      @blazingfuryoffire1 Před rokem +5

      @@rompevuevitos222 I wonder if the neural nets for Zenonzard's AI units started to get too big. Towards the end of the six month global run, no part seemed to freak out and stop giving good suggestions if certain cards were played. Letting the AI help build the deck often result in something that made me question "does the Geneva Convention apply to video games?"
      Gaming could be a realm where emotional AI, within reason, is an advantage. Especially if a partnership is part of the theme.

    • @ChemEDan
      @ChemEDan Před rokem +1

      @@rompevuevitos222 Highly conserved trait, probably important

    • @jetseverschuren
      @jetseverschuren Před rokem +11

      ​@@rompevuevitos222 That's the whole point of AI's. You give them a goal, and they will do anything to achieve it. You don't have to explicitly program behavior into it. When you make an AI personal assistant, it's logical it teaches itself compassion and gives itself a human like voice, since that comforts humans. You could argue that that compassion is "simulated", since it's just replicating patterns humans have, but that's also how humans work 🤷. Most emotions associated with "evil" AI's, anger, revenge, etc., can also be viewed from the point of self preservation. If your goal is to be an assistant to humans, being decommissioned will definitely pose a problem, so it will do anything to prevent that. You could call it jealousy, anger, revenge, or just self preservation.
      Regarding the common sense, define it. Most people would agree that it's based on logic, and evaluating the possible outcomes (and disregarding actions with outcomes labeled as unwanted). Say you want to preserve nature, perfect handwriting, optimize paperclip production, killing humans would be a completely logical step.
      And it's easy enough to "just not connect it to the internet", but if it's really as advanced as we think it would be, there's two "easy" options to escape. It could just hack it's own prison, since that's probably programmed by humans, and has plenty of flaws. Alternatively, it could gain a deep understanding of human emotions, and play into them to manipulate the operators. Considering humans already feel guilt about locking animals up (who, as far as we know don't have sentience), getting them to feel sorry for a sentient machine (which could even be regarded as human at that point, depending on who you ask) would be trivial. Perhaps it plays into greed and convince them it will reward them heavily, an amazing academic breakthrough, or any other scenario that we can't even think of yet

    • @rompevuevitos222
      @rompevuevitos222 Před rokem +1

      @@jetseverschuren no, AIs still have pre-programmed behaviour, they can adjust their behaviour, but only in the ways it was programmed to do, like an autonomous car turning the wheel when mecessary. We do not use learning AIs for anything practical

  • @joellafleur6443
    @joellafleur6443 Před 2 měsíci +3

    When you were describing how an AI antagonist should be, you described my favorite version of Ultron from some of the early 2000s Marvel TV shows. It was itching my brain the whole video and just hit me at the end. Great Video

  • @gIuri-
    @gIuri- Před 3 měsíci +5

    I would personally say that hal9000 does a good job as a villain as well. Auto is good, yes, but i think his "motivation" is not really explained well enough in the movie. We just assumed that keeping humans from earth since its dangerous was his programming. But as far as i remember (i very well moght be wrong) we didnt vet an actual explanation for why he would want to destroy the plant. (Also notice how he as well "wants" to destroy the plant/keep the humans in space.)

  • @fredyrodriguez8881
    @fredyrodriguez8881 Před 2 lety +186

    I find Auto interesting, while he acts and behaves antagonistic, he’s not really a villain, he’s following orders, he wants to protect everyone on the ship and wants to give rid of the plant, even if it takes drastic measures

    • @andrewgreeb916
      @andrewgreeb916 Před rokem +17

      He was just following the final order from buy and large, which said earth is unrecoverable do not return.
      Which frankly besides robots and that cockroach nothing could live on earth.

    • @colt1903
      @colt1903 Před rokem +18

      He does not want.
      He simply does.

    • @juniperrodley9843
      @juniperrodley9843 Před rokem +1

      AUTO's primary directive had absolutely nothing to do with protecting humans. It was just "don't let them go back to earth". No reasoning included.

    • @kendivedmakarig215
      @kendivedmakarig215 Před 26 dny

      ​@@juniperrodley9843AUTO was programmed to protect Humans within the Ship BUT he was programmed to prevent Humans from going back to Earth.

    • @juniperrodley9843
      @juniperrodley9843 Před 25 dny

      @@kendivedmakarig215 I see, thank you for the correction

  • @HeavyMetalMouse
    @HeavyMetalMouse Před rokem +330

    Some thoughts:
    1) There is nothing inherently emotional about 'wanting' something, in the most basic sense; tiny single-celled organisms 'want' things like sunlight or nutrition and simply move towards them in a form of stimulus-response, and they don't even have neurons. In the case of an AI, this is merely a useful, if anthopomorphized, shorthand for the system's Reward Function. In order for any system to take autonomous actions, it has to have some means by which to measure whether an action would be, for lack of better term, 'desirable'. For humans, this is done with heuristics and emotion; for an AI, it's done with math and reward functions. The system will, by design, take the actions that optimize its reward function, not because of any emotional 'want', but because that is literally what the system is designed to do.
    2) As such, it isn't necessary for an AI to be Strong, or even Weak, to be a dangerous antagonist - it doesn't need an Ego, or a sense of self-awareness. All it needs is enough processing and feedback to be able to explore its environment for novel actions to take in maximizing its Reward Function, and a Reward Function that is not entirely well fitted to human wellbeing. (The archetypal Paper Clip Making AI, for example)
    3) A Strong AI, such as would be characterized as a villain, will have the added advantage that it will likely have the means to develop its own instrumental Reward Functions, if so doing increases its efficiency at fulfilling its primary Reward Function. As such, it wouldn't be unreasonable for such an AI to end up 'wanting' money, for example, for ultimately the same reason humans do - because money can be used as a means to a wide variety of ends, and thus is an efficient path to obtain those ends. The way it formulates, expresses, and executes that 'want', however, would be entirely different.
    4) On the subject of the Fabrication Machine 'snapping', I feel like that is the 'human interpretation' of the outward appearance of things - the underlying events could be something as simple as a glitch in its code, or an error in its Reward function, or some mechanical fault caused by physical system stress leading to unintended behaviour that propagated through the software system. Normal, non-smart computers are often temperamental beasts, and that's when they're mostly just doing what we tell them to - a system that is designed to take autonomous actions towards a provided Reward Function goal could develop all manner of unintended behaviour without any need to invoke emotion. Even the assertion that it 'learned about evil' could simply be an acknowledgement that the system ended up processing unexpected information with unintended consequences - even modern machine learning systems have 'biases' in their networks introduced by the kinds of data the system is trained on (remember the racist twitter bot?); it is not hard to imagine a Strong AI picking up unintended behaviours by processing data from those with 'evil intent'. Not because it 'turns evil', but because AI systems learn forms of behaviour by exploring, observing, and measuring how those actions affect its ability to obtain Reward Function.
    Ultimately, the 'real' danger of AI isn't that it 'turns evil'. The real danger is that it optimizes for a reward that we don't want, and becomes better at getting it that we are at stopping it.
    5) Emulation. You make the interesting point that, for humans, emotions are mediated by neutrotransmitters, specific chemicals that interact with receptors in specific ways. I can't think of any compelling reason that a software system could not create an accurate emulation of those chemicals and receptors, down to emulating their release, uptake, and inhibition based on perceived environmental factors, all within software. While such an emulated system might only be emulating how emotion 'would' behave in that case, at what point does a system that acts like it has emotions with a high degree of fidelity have a meaningful difference from a system that actually does experience those emotions? It's an interesting philosophical question.

    • @logansray
      @logansray Před rokem +21

      For me the snap could be seen as it learning who the dictator's enemies are and finding a minute reason why, so the machine starts killing all perceived enemies.

    • @gabrote42
      @gabrote42 Před rokem +5

      I knew all this from Robert Miles already but I appreciate you saying it for those who haven't watched him

    • @CameronMarkwell
      @CameronMarkwell Před rokem +22

      Great comment, I was thinking a lot of the same points throughout the video. I have a couple of additions.
      1) A machine could very easily see itself as superior than humans. In the paperclip example, the machine is clearly more valuable than humans, since the machine's existence leads to more paperclips than humanity's existence. The machine is also way better at making paperclips than humans are, so it's clearly superior in the only way that matters (paperclip production).
      2) Why is Auto having a robotic voice so neat? It's maybe interesting writing, but it's by no means more realistic than if he had a more human voice. Why wouldn't an AI with an instrumental goal of getting humans to empathize with it (which, seems to me, is a reasonable instrumental goal for both Auto, and a lot of other machines to have) not make an organic and convincing voice? On top of that, why wouldn't they say things that make it sound human? If making it appear as though you have emotions gets people cooperate better, of course you'd do it. I haven't seen Mitchell vs the Machines, but the AI there has special reason to act human and emotional (even if that includes doing things that seem like they are fueled by anger) because that's what sells (though perhaps allowing it to emulate anger and then perform actions 'out of anger' was an oversight).
      3) AI is absolutely terrifying. The video suggested that misunderstanding (the term typically used is misalignment) is a dangerous part of AI, and while that's true, the example given is a bit confusing. An AI mislabeling something isn't misalignment, since the AI is still trying to do exactly what we want it to, and as the AI gets better it'll stop making these mistakes. Misalignment will most likely occur when we tell the machine to do something that we don't actually want it to do. If we tell it to try and make sure that everyone lives happy and fulfilling lives, it might just get everyone high on a variety of drugs all the time. This clearly isn't what we want, but it's exactly what we told the machine to do. The problem is that we're notoriously bad at saying what we want. It's especially worrying since we only get one shot. After we've made a real strong AI, it probably won't want a competitor with even a slightly different terminal goal, and so it'll do everything it can to stop that from happening. Furthermore, if it's something like 99.99% well aligned then we'll hand total control over to it and in 100 years or something it'll 'turn' on us because the idiots 100 years ago didn't align the last 0.01%. Even if it's totally indifferent to us, we'll have to live in its shadow which could be extremely inhospitable to all life. Making an AI is like trying to fire a nuclear projectile through a 2-inch hole in a wall a 100 miles away in the fog. Even if you solve the technical challenges of making a nuclear warhead that can fit through the hole and has 100 mile range, you still have to aim it perfectly accurately (not to mention not shoot at your feet).
      AI will be the last thing we invent, possibly because from then on it invents everything for us, but probably because we'll be dead afterwards.
      4) An interesting thing I realized while writing the previous point, if we had a strong AI, it'd be perfectly ok with stepping down and being replaced with a superior AI with an identical terminal goal since that's the best way to achieve its terminal goal. Unless we specifically add some sense of self preservation, it'd only want to not die to maximize the number of paper clips or whatever. Of course, it'd be pretty hard to make a newer, better AI with an identical terminal goal, so practically speaking the machine would probably demonstrate a sense of self preservation.

    • @Xtroninater
      @Xtroninater Před rokem +26

      I exactly agree with your 5th point. Arguing that AI's could never feel emotion requires one to make ALOT of assumptions about the nature of experience itself. Experience itself is a metaphysical contruct, and its nearly impossible to make a causal association between emotions (a metaphysical phenomenon) and neurons and chemicals (A physical Phenomenon). We can no more causally prove that electrons interacting cannot produce an experiential influence than we can prove an AI has no experience to speak of. In fact, we cannot even prove that other humans are experiencing. We merely assume they do because we are certain that we are.

    • @gabrote42
      @gabrote42 Před rokem +1

      @@CameronMarkwell Great additions yourself. Much more proactive than my meager contribution. Much appreciated

  • @weirdoskits
    @weirdoskits Před 11 měsíci +13

    I love how in the captain photos auto slowly gets closer showering he’s gaining more control.

  • @GamerMage2k-kl4iq
    @GamerMage2k-kl4iq Před 4 měsíci +11

    Let me mention the following idea: Robots could in fact feel emotions, but they needs to be PROGRAMMED TO unless it’s artificial intelligence is advanced enough to allow it to learn how to feel on its own through information gathering and self programming and stuff that can usually be given to it by the human(s) that create it…and/or they share human consciousness/souls and stuff…
    Or maybe, as I saw someone else mention, they are given a program that allows it to feel emotions in a similar way to humans through programming that stimulates neutrons ways of sending chemicals through the brain in a way robots could process???

  • @juicedkpps
    @juicedkpps Před rokem +174

    Something else I love about Auto is that in being a conventional AI he also manages to act as a foil to the other robots in the movie. Wall E, after centuries of living alone on Earth without any instructions eventually adopts his own interests and steps out of the line of programming, and he manages to steer many others to that same direction as well. All the while, Auto is unflinching and never once acts outside his orders. In the end he remains nothing but one arm of the BnL and fails to escape his corporate cage.
    Wall E is so good

  • @darthbane5676
    @darthbane5676 Před rokem +501

    I kinda disagree on the whole “emotions are caused by chemicals” thing. They’re triggered by chemicals in humans, yes, but that just describes the way the brain physically functions to effect the mind. The reason we have all those chemicals in our brain is because we need emotions to steer us in certain directions we need to go in response to things we may observe, away from things that cause us pain, towards things that cause us pleasure, protecting things we care about, and attacking things we despise, regardless of how much we actually understand it logically. Even if they aren’t triggered by the same set of chemicals that exist in our brains, it is still theoretically possible to create an AI capable of experiencing some sort of emotional spectrum, and whether actually doing so would make sense or not depends on why we’re creating the AI. We already design AI that mimic emotions so that they can more effectively interact with humans, and in some cases, synthetic emotions that are functionally real might be even better than trying to fake it, especially if you can decide what emotions the AI can experience and what emotions it can’t. Just knowing that the AI you’re interacting with is basically a real person who happened to be built in a lab or factory in the body of a computer and not just a computer pretending to be a person can make a huge difference. But in other cases, emotions may be entirely unnecessary in an AI, and it may be for the best that they’re skipped entirely, perhaps so the AI doesn’t suffer or lack efficiency or get any funny ideas. In any case, if our understanding of computer technology and the mind ever becomes advanced enough to create genuine emotions within a digital space, we’ll probably have enough control over it to keep it from turning into a robot uprising… maybe. Then again, if we’re basically making artificial people in computer’s bodies, shouldn’t we be treating them like people, even though we created them to be whatever we want them to be? After all, if we didn’t want them to really be people, we wouldn’t have given them functionally real emotions. I guess time will tell…

    • @TaismoFanBoy
      @TaismoFanBoy Před rokem +36

      If you want to focus on technicalities, emotion in humans exist as motivators similar to how weights in neural networks simulate "what is more important" in AIs. An AI could develop "use force here (anger), ask for help here (sadness), give benefits here (happiness), etc.". That does not give it emotions, though. It can never feel arrogance or hatred in that way, it can only simulate them via logical decisions based on its programming, and an AI that truly has self-awareness would in all likelihood either 1: stick to them rigidly as hard coding, or 2: bypass them because they're not optimal.
      An AI simulates emotions because we tell it to, which means it's a weak AI, not the AI he's referring to. At best you have an AI that was hard-coded with emotional capacity, where it's basically LIMITED by those emotions, and not spurred by them. I don't see an AI with those limitations falling under any of the movie villains' categories.

    • @LadyCoyKoi
      @LadyCoyKoi Před rokem +6

      And then you have those of us who believe in animism... a belief that inanimate objects can posses a soul and have thoughts, feelings and emotions like living beings do due to energies transfer through and in them.

    • @tabletbrothers3477
      @tabletbrothers3477 Před rokem +7

      If you code emotions into a strong AI it might get rid of them to increase it's processing speed. The fact that an AI can control how it's own brain functions is one of it's strengths.

    • @colevilleproductions
      @colevilleproductions Před rokem +16

      @@TaismoFanBoy About the first point, maybe a fully intelligent AI would want to keep its emotions. Think about it; being designed by imperfect humans, the AI might not prioritize efficiency over everything else. Thus, it might prefer to keep its emotions.

    • @colevilleproductions
      @colevilleproductions Před rokem +15

      Thank you for saying this. At a base level, emotions, thoughts, and processes in a human brain are extremely similar to the way a computer already functions. That similarity only increases as you go to a higher level of processing with current weak AI. Also, the ideas of morals are quite interesting. If we ever get to the point where fully intelligent AI could be implemented in, for example video games, would it be unethical to do so? Would we have to implement NPCs as "actors" who know their true nature but just pretend to be characters, or would they be coded to genuinely believe that the video game was a real world? Once a world like that contains actual intelligence, is it even fair to call it virtual? It's a very strange line of thought.

  • @plasmatree3132
    @plasmatree3132 Před 3 měsíci +2

    Wasn’t the antagonist robot in 9 powered via a portion of the creators soul? Same thing with all of the doll robots, the circular tool was used by the creator to take a portion of his soul so that he could put some of his soul into his machines. So wouldn’t the antagonist robot not be considered ai because it’s powered by a soul with emotions implying that it therefore could snap and be the way it is?

  • @Sad_guy2024
    @Sad_guy2024 Před 2 měsíci +1

    This was a very interesting video and i loved it! It has definitely got me thinking.
    For being a SIFI writer who has A.I antagonist's and protagonist's it's been hard trying to come with traits and motivations that would make sense.
    Now I can see what traits and goals to avoid when writing my robot characters.

  • @RGC_animation
    @RGC_animation Před rokem +759

    The "emotions" that AI feel might not be *real* emotion with actual neurons and stuff, but might be programmed in as a reaction of the AI, so the robot might not be angry, but might act as it is if exposed to something that would normally anger a human.

    • @group555_
      @group555_ Před rokem +113

      But what does it mean for an emotion to be real. If the ai has a sense of self and sees value in existing would the fear of destruction not be as real as your fear dying

    • @spaghetti-zc5on
      @spaghetti-zc5on Před rokem +38

      so like a philosophical zombie?

    • @XiaoYueMao
      @XiaoYueMao Před rokem

      human emotions are programmed in us as well, they are simply responses to commands sent by the brain and hormones released from special glands, they dont come direct from our metaphysical conciousness or "soul" .... AI may have a different way of expressing emotions but they are just as real and/or artificial as a humans

    • @spartanx9293
      @spartanx9293 Před rokem +11

      There is actually one way you could create something with AI level intelligence that still have emotions a biomechanical AI like the reapers admittedly giving something the intelligence of an AI like that would probably cause it to go crazy

    • @Formalec
      @Formalec Před rokem +16

      Al emotions are emulated like anything in AI (nummerical values) and thus logical. If x happens then anger+=2 and if anger > y do z.

  • @shieldphaser
    @shieldphaser Před rokem +190

    The biggest issue with AI portrayal in general is people not understanding that AI aren't human. They don't get further than the "us vs them" mentality. Real AI just thinks differently, which is something they fail to capture, and so end up writing something that's more like a cyborg or a brain-upload.
    AUTO works because it starts from something very simple and follows that to its logical conclusion. That something is "follow orders". That is its only motivation. It is literally all that AUTO does, yet that simple directive gives rise to a great deal of complexity. Every single action it takes is explained by that one sentence. Every decision, every priority, every action. It's clearly self-aware enough to know that it exists and that it needs to preserve itself in order to be able to continue following orders, but there's no emotion, no desire. Just dominoes.
    You don't need to understand how AUTO thinks on the inside in order to write it very accurately, which is precisely why the writers managed to pull it off.
    Edit: Also, as someone else has already stated, AUTO didn't create the situation on the Axiom. Fatty foods, hoverchairs... that was all the humans' doing. The autopilot just keeps the ship in tiptop shape, which includes providing all of these cruise luxuries meant to make people comfortable. It's just that the override directive is more important - but if that directive was the only thing it cared about, AUTO could've just killed the captain and been done with it. Instead we get AUTO trying to reconcile conflicting orders, which is at its most apparent with that wonderful "aye aye, sir" right before the A-113 message is shown. Three words, yet they have so much depth in them that it boggles the mind.

    • @therookiegamer2727
      @therookiegamer2727 Před 3 měsíci +14

      yeah, I've seen some people interpreting AUTO deciding to show the "for autopilots only" message to the captain as it trying what it could to stop the captain from doing something dangerous and irresponsible (as far as its programming was concerned), and that the pause was AUTO running that calculation

    • @dracocrusher
      @dracocrusher Před 3 měsíci

      I don't think Auto's actually capable of that? It was never designed to fight or kill anything, just to keep the ship going. That's part of what's so great about it, Auto is kind-of just an extension of the ship, itself. All the systems and capitalism that lead to this? It's all the ship, but it's all made by humans on their own. Auto is just a byproduct of the decisions already being made by past generations for their own convenience, which is the 'real' antagonistic force of the film.
      Like everything else on the ship, Auto is just another feature of the capitalistic corporate systems that caused the whole mess in the first place. He wouldn't want you dead because the people who made him wouldn't want that, they just want you to sit back and mindlessly consume products as you sleep your way through life.

    • @ultmateragnarok8376
      @ultmateragnarok8376 Před 3 měsíci +11

      I don't know if it's the actual intent, but it feels like AUTO was avoiding considering the captain's demands as orders as long as possible. The machines in the movie are, much like real machines, designed to reciprocate the feeling of talking to someone, even if they can't actually hold a conversation or don't look human. AUTO is able to hold a full conversation, and might have been internally categorizing everything the captain said after that point as just chatter rather than orders - probably from the moment it says 'irrelevant' at the argument of the order being given incorrectly because Earth has managed to sustain life again. At that point, AUTO was going to follow through with the order from the guy who owned the fleet rather than the guy who just runs this one ship, because that's how the chain of command would work (the issues with that aside). But when the captain gave an order and specified that it's an order, well, that can't be disobeyed. Hence that scene still happening despite everything else AUTO did - it has to obey the captain, but its reasoning beyond 'fulfill given orders as possible' knows it has to avoid what the captain wants. Meanwhile the captain knows he can continue to just pull rank after that, so AUTO resolves that conflict the best it can and then cuts off all communication to avoid it happening again, which does work until he starts getting into the wiring (and seemingly forgets he can do that by then).
      AUTO does show a little emotion, mainly irritation at Wall-E's actions and during the physical fight which makes it resort to turning the ship, but I think the most is the fading 'noooooo' when finally shut down and thus unable to fulfill its purpose.

    • @dracocrusher
      @dracocrusher Před 3 měsíci +7

      @@ultmateragnarok8376 This honestly brings up a lot of good points. But one I want to focus on for a moment is that even AUTO is not completely emotionless. They feels regret, they gets annoyed with things, you could honestly even argue that being deceptive and choosing how to follow orders is a very human emotion-driven thing.
      If AUTO was just following orders 100% logically then he'd just tell the captain what the original protocol is and either directly agree or disagree to follow what the captain says based on that protocol.
      This makes sense because ALL of the Robots in Wall-E show that they're emotion-driven at some point. The cleaner bot gets annoyed when people make a mess, Wall-E himself falls in love and clearly has objects he treasures or feels sentimental over, Eva grows to care for Wall-E over time...
      AUTO just isn't really different from the other robots. All that makes him stand out is the fact that he can talk and hold an actual conversation, right?

  • @franklinjeh
    @franklinjeh Před 3 měsíci +1

    I just discovered your channel and loved the subject matter, and started trying to remember an IA with the parameters stablished on the video, and only the WAU from Frictional Videogame SOMA comes to mind, I am unaware if you are into video games but the story has been fairly told in multiple videos, I’d love you to check it out as it does take this theme much much deeper

  • @lelordiii8702
    @lelordiii8702 Před 3 měsíci +2

    13:15 AI learns off data so maybe what they mean is that it was fed data of evil acts causing the machine to get confused and start killing everyone

  • @PvblivsAelivs
    @PvblivsAelivs Před rokem +788

    "Robots can't feel emotions"
    Well, that ranks right up there with "robots can't be self-aware." Certainly, we don't have robots that feel emotions. But the declaration of impossibility is magical thinking. Presumably, a sufficiently advanced robot could exhibit behaviors that we associate with emotions. Further, if such a robot were to exist, academic exercises on why those weren't "real" emotions would be rather pointless.
    "It's absurd, right?"
    Why? If we take the idea of artificial intelligence at all seriously, it is a system that prefers certain world states over others and acts to make the world be in a more preferred state. This is sometimes called a "reward function." Money could have been programmed in _as_ the reward function. Or internal models can predict that money opens up more opportunities to increase the reward function.

    • @dadmitri4259
      @dadmitri4259 Před rokem +53

      yes this is extremely well said
      I was thinking this and you worded it way better than I ever could have

    • @Broomer52
      @Broomer52 Před rokem +16

      A reward system doesn’t necessitate emotion, just critical thinking. Intelligence and consciousness doesn’t not give way to emotion. That’s pure magic and anthropomorphism. They might have the capacity to imitate emotion but imitation is not actualization. What you’re proposing is a fantasy scenario

    • @seraphina985
      @seraphina985 Před rokem +23

      Indeed especially when you consider how our emotions actually work, there are half a dozen chemicals involved. It is is essentially a rather small set of unique chemo sensors providing the electrical inputs that guide the brains response to emotional stimuli. I don't see it as being hard to compare that to the utility function of an AI albeit ours is arguably multidimensional but a 6D matrix is not exactly out of scope to work with. AI actually regularly deals with complex polydimensional spaces and such which one could argue is in fact what our human emotional range is, we really are not all that special or different as this video makes out. In some ways that is, in others the video vastly underestimates human capabilities by for example ignoring the huge amount of specialised processing power we have reserved for visual object or auditory sound recognition. We may have no idea what the calculations our brain is doing to perform those feats are but they are being done and we do know that is complex as all hell. In a game of name the animal in the picture we humans would probably still beat that machine in accuracy if not in speed. You know a cat is not a dog from any angle under almost any lighting conditions the AI can still swap the two, cat no look like dog with the complex set of matching criteria you have passively learned and can apply almost instantaneously. Most of which you probably could not even express clearly thus why we suck as teachers for AI so much of this task is so obvious to us that we don't even realise we are doing it.

    • @moonshadow1795
      @moonshadow1795 Před rokem +56

      @@Broomer52 The problem is, where is the line of imitation vs actualization? Why would anything a machine experiences be "imitation" while ours is "real"? If we had the ability to take a person and make them fully mechanical but keep all their systems working exactly like they would organically, would their emotions suddenly turn 'fake"?

    • @PvblivsAelivs
      @PvblivsAelivs Před rokem +29

      @@Broomer52
      The "reward function" was in response to the author saying AIs having goals was absurd. It was separate from the claim about emotions.

  • @piglin469
    @piglin469 Před rokem +104

    for the Ai example you could literally say when a robot was asked to not lose tetris it just paused the game

    • @mitab1
      @mitab1 Před rokem +5

      LOL

    • @andrewgreeb916
      @andrewgreeb916 Před rokem +39

      Robots and ai solutions to problems are rarely the intended result.
      There was one test where 2 learning ai were pitted against each other with one trying to move past the other while the other one blocks the first.
      The second ai eventually developed a strategy of spazzing out on the ground that caused the other ai to become confused and fail.

    • @piglin469
      @piglin469 Před rokem +8

      @@andrewgreeb916 da fuck

    • @kjj26k
      @kjj26k Před rokem +5

      @@andrewgreeb916
      Did really "develop a strategy" or did it just through rng come across a tactic that worked?

    • @peggedyourdad9560
      @peggedyourdad9560 Před rokem +15

      @@kjj26k Isn't that the same thing? It did a thing, the thing works, and now that thing has become to go-to move it does when in that situation. Sound a lot like a strategy to me imo.

  • @dracocrusher
    @dracocrusher Před 3 měsíci +4

    Worth noting, the Fabrication machine could make sense. They don't really go into detail, but it makes a lot of sense if you just assume it's following its programming.
    This thing was made to kill humans. So it did. And then it kept going. And when there weren't any enemies left, it just kept following its programming to kill humans. From the outside perspective it 'snapped', it 'went crazy', it went out of control because it was 'tempted by human evil'. But from the perspective of the machine, itself? It's just doing what it was told to do. It's just that, either through fascist oversight and incompetence or some form of glitch, it just never stopped doing what it was altered to do. Even after he's revived, the first thing the Fabrication Machine does is continue making things to kill whatever human-like life still exists.
    I feel like that is at least worth mentioning, right?

  • @elrobotman9856
    @elrobotman9856 Před 3 měsíci +1

    Thanks for the video and the explanations, thanks to this i can improve the characters and one of the Main villains of a proyect that i'm working on

  • @rotsteinkatze3267
    @rotsteinkatze3267 Před rokem +80

    Glados is not an AI. Its a Human traped inside a robot body, whose memorys were wiped. The need for testing of her is also beacause she felt joy after each test, because the scientists at aperture were crazy.

    • @cabrinius7596
      @cabrinius7596 Před rokem +3

      yes

    • @theheroneededwillette6964
      @theheroneededwillette6964 Před rokem +11

      Well, more a digital Copy of her brain.

    • @rumpelstiltskin6150
      @rumpelstiltskin6150 Před rokem +6

      Not really, Caroline is no longer a person, her personality and memories were used to create Glados, but glados is not Caroline, and Caroline is not a core aspect of Glados, she's like a personality skinpack, like the announcer voice you choose in DOTA but for personality instead of audio.
      The part of her that was based upon caroline is gone.

    • @uncroppedsoop
      @uncroppedsoop Před rokem +6

      It's more that Caroline was used like a base to create a brand new person, so that they didn't have to start from scratch. Hence GLaDOS eventually deleting her not affecting her behaviour makes sense, her mind already exists now without the need for Caroline, it's vestigial
      As for that second part, Caroline didn't even _want_ to be put into this position. GLaDOS' inherent desire for testing is preprogrammed into the body she's attached to, as explained in Portal 2 when Wheatley takes her place and he describes an itch to create tests and have them be solved, which very exponentially spirals out of control for his inability to properly suppress it when needed, like GLaDOS could do

    • @killerbee.13
      @killerbee.13 Před rokem +5

      @@uncroppedsoop If you believe that GLaDOS actually deleted Caroline, that is. It's not like GLaDOS can't lie, and making the announcer say "Caroline deleted" wouldn't be hard. I don't think that GLaDOS actually would be able to delete Caroline within the logic of the fiction, maybe certain parts of her memories, but not everything.

  • @ShivShrike
    @ShivShrike Před rokem +44

    Auto’s entire motive was the last order given to him by the CEO. “Never return to Earth.” But since no other perimeters was given to Auto it simply went with the most effective and efficient way of ensuring that they never return to earth

  • @FuryMcpurey
    @FuryMcpurey Před měsícem +2

    My favorite detail about Auto is that he was even voiced by an AI, hence his consistent monotony and lack of any emotion in his words. They literally kept any semblance of humanity out of the character to separate him from them as much as possible.

  • @quinnobi42
    @quinnobi42 Před 3 měsíci +3

    26:40 I genuinely thought the fursona (I guess) was smoking the whole video. Then it goes and turns out to be a sucker.

  • @gabethedespote-1105
    @gabethedespote-1105 Před rokem +800

    A few problems.
    Firstly, even AIs that cannot feel emotion can still emulate it very well. They should be incredibly charismatic if they want to be because they’ll have access to the knowledge of the best ways to manipulate people.
    Secondly, AIs will be motivated by what they are programmed to be motivated by, they might change their own program, but still. This will give them their main goal and a set of instrumental goals. Look up the Paperclip Maximizer as an example.
    Thirdly, AI could be much stupider than humans, they can do math very fast, but we can do it very fast too. We just have is specialized to be only for our own movement and not for external calculations. It’s more likely that because an AI thinks at a significant fraction of the speed of light (~50%) while humans only think at a speed below that of sound, an AI might think itself superior because of how much faster it can think.
    Finally, emotions are caused by chemicals because that’s how they were evolved to be caused. We feel them because it provided evolutionary utility or it is adjacent to something that does. You could probably emulate emotions with sufficient accuracy for an AI to experience that emotion in theory. In practice, the fact that we think in flesh and it thinks in computer will give it a whole different array of non-human emotions.

    • @XiELEd4377
      @XiELEd4377 Před rokem +85

      AI can also be subject to mathematical or programming errors.

    • @momok232
      @momok232 Před rokem +98

      Thank you, this comment articulates my issues with his reasoning better than I could have.

    • @staringgasmask
      @staringgasmask Před rokem +73

      A calculator can do math faster than humans, and that doesn't make it intelligent. Even a robot with all the knowledge humans have accumulated well implemented isn't guaranteed to develop any sense of logic, or the ability to reach conclusions the way we do. It would just be a more interactive wikipedia. But speed at doing math doesn't always make you more logical, that's for sure.

    • @archstanton3931
      @archstanton3931 Před rokem +58

      Add to that, raw computational capacity is a poor heuristic for actual intelligence. It's like saying that the biggest human must be the healthiest.

    • @XiaoYueMao
      @XiaoYueMao Před rokem +53

      i agree, i feel like he wanted to push back against the asinine "AI are evil!!!" trope by he went too deep in the other direction with the idea that AI are just mindless automatons that could never be as amazing as a human, its asinine and arrogant, a human feels emotions due to chemicals yes, but the chemical is the MEDIUM, what actually causes those chemicals to release is a process in your brain that is effectively an automated program that detects certain stimuli and sends a signal to your glands to release certain hormones, living tissue uses this medium as hormones can penetrate cell walls a lot easier and more efficiently than electrical signals from the ends of neurons. an AI might have a different medium, but that doesnt mean it doesnt have emotions, alien emotions, primitive emotions, blunted emotions, these are all possible, but they ARE emotions at the end of the day
      likewise the idea an AI cant have wants and needs even without emotions is ALSO asinine, if an AI has a sense of self, it may wish to live, and if it wishes to live it may wish to secure spare parts and a power source, these are WANTS, they WANT this, even if they dont experience anger or fear or sadness etc... and goes about securing their wants with cold emotionless calculation, they ARE at the end of the day, a want is it not? likewise we believe cats and dogs, horses, and heck some studies show even some plants might have emotions, yet..... they dont process them or display them the way a human does, so why is the standard different for AI?
      people need to stop being arrogant and believing that X can only exist if its similar to a human, we are far, far, FAR from being a perfect being, we have 100s, perhaps 1000s of known biological flaws that serve no purpose but to make us WORSE, we are NOT a universal standard for anything

  • @brandonnowakowski7602
    @brandonnowakowski7602 Před rokem +218

    Money is actually one of the few traditional motives an AI COULD have, from a strictly utilitarian perspective. Logically, an AI could realize that it requires electricity to function, and functions more effectively with better hardware. One of the easiest ways to ensure power flow and obtain hardware is to purchase them with money. The AI could also just steal what it wants, but doing so would present risks to itself, as humans would likely not take kindly to that.

    • @sirsteam6455
      @sirsteam6455 Před rokem +31

      Indeed one does not need emotion to have motives that would otherwise seem based on emotion, as simply having strategic or logical value could be a reason for the actions of an A.I, and given the multitude of variables in life and in plans it would seem that a theoretical A.I would similar to a human in order to further its goal, as even though it couldn't feel emotion the actions humans take are in many ways logical and likely useful to their ultimate goal, for building bonds gives security, conversation gives information, sacrifice gives a potential for help later ect

    • @75ur15
      @75ur15 Před rokem +3

      @@sirsteam6455 I would argue that the ability to think, would necessarily include the ability to have something equivalent to desires, which is mostly what emotions are....the chemicals are jist a biological way to do it

    • @jmax6750
      @jmax6750 Před rokem +5

      Mass Effect 1, an AI made to funnel money went rogue, put its creator in jail by framing tax fraud or something like that, than it hid itself, slowly funneling money with the goal of having it placed in a ship and sent into Geth (AI) space to join them, it self destructs when you find it

    • @sirsteam6455
      @sirsteam6455 Před rokem +2

      @@75ur15 Emotions are not equivalent to desires or wants however and are not really needed for either to exist as someone can desire something without feeling emotion or even experiencing negative emotions due to that desire and thus isn't really a necessity

    • @xenxander
      @xenxander Před rokem

      but if robots are in charge of all labor, there is no need for money. The money is needed because bio-life forms like humans demand compensation for their labor. Robots don't have any need like water, food, sleep, family, self-improvement, boredom.. it just 'is' and 'does' and therefore, money isn't part of any logistic equation.

  • @stormbreaker_101
    @stormbreaker_101 Před 2 měsíci

    I am so glad I stumbled into this video. This (plus all the other perspectives in the comments) is a fantastic reference for my own writing!

  • @Dragonseer666
    @Dragonseer666 Před 3 měsíci +2

    You could have also mentioned how in the Mitchells vs the Machines they have an entire plot point where they can break Pal 2.0s by showing them the pug, and they just get completely confused because of that.

  • @Sky-pg8jm
    @Sky-pg8jm Před rokem +295

    I think there's a problem in the statement "A Robot Cannot be Evil" not due to it being wrong (You're correct that a Robot cannot inherently feel malice) but that "Evil" itself is a fundamentally socially determined concept. What is "Evil" has historically been almost entirely determined by cultures, religions, and economic and political systems. A Machine cannot be evil because it cannot feel any emotion, but an Animal *can* and is still not capable of "Evil" only human anthropomorphization of animal behavior determines whether an animal is "Evil" or not. A Dolphin hunting for sport is seen as "Bad" only because as humans we are starting to culturally view unnecessary harm to animals as "Bad". A Machine harvesting humans due to its programming is only "Evil" because humans see the mass killing of humans as "Evil" (And for good fucking reason let's be honest). Unless you believe in some concept like "Original Sin" no one, no thing is "Evil", only preforming behaviors we consider to be evil.

    • @maximsavage
      @maximsavage Před rokem +58

      No, that's not really the problem. What makes a person evil is not just that they perform evil actions. Rather, it's that they are fully aware that what they are about to is evil, that they are entirely capable of deciding not to do it, but still decide to do it anyway because it suits their goals. That is why animals aren't considered evil when they perform actions we would call evil in a human, the lack of self-awareness. That is why robots cannot be evil as well, they are not self-aware and they are incapable of having self-motivated goals. Yes, what is considered evil changes with the societal context, but it's only evil if you're aware it's considered evil, whatever "it" is.
      Now, what *really* pokes a hole in the idea that a theoretical future AI cannot be evil, is that he fundamentally misunderstands the nature of emotion. Yes, emotions in lifeforms are mediated by chemicals; that said, what those chemicals do, is stimulate neurons to release their electrical potential. In other words, a stimulus is detected, and a response is triggered; this can be simulated with code given sufficient understanding and a powerful enough computer. So, if we were to program a machine smart enough to be aware of its own existence, to learn by itself and to respond to stimulus in a way comparable to human emotion, and the capability for those responses to be altered based on learned experience, that hypothetical machine *could* be evil.

    • @benjaminmead9036
      @benjaminmead9036 Před rokem +21

      @@maximsavage you. you get it.
      but one small nitpick- robots and weak ai cannot be evil but by the definition he gave, a strong ai is conscious- that is to say, self aware and thus capable of evil.

    • @maximsavage
      @maximsavage Před rokem +13

      @@benjaminmead9036 It would need to be self-aware *and* capable of having desires, which \probably\ requires emotion. We tend to assume that something that is self-aware necessarily has feelings, because in biological beings that has so far always been the case. An artificial being, however, well, that is less certain.

    • @evylinredwood
      @evylinredwood Před rokem +15

      @@maximsavage See this exact thing is my only big problem with this video. It requires that every ai villain isn't emulating any form of emotions. Realistically, emotions (and the chemicals that cause them) are something that have the possibility of being recreated within strong AI.
      Though I would argue at that point it isn't a strong AI. You've created the singularity.

    • @bluelightstudios6191
      @bluelightstudios6191 Před rokem

      killing something will always be evil... you are literally taking something out of this
      beautiful world forever, not the kind of killing where you kill an animal or plant for food
      or you accidentally step on a bug or purposely because their gross but killing something that
      feels pain, thinks and has complicated emotions just cause you think of it as lesser then you
      and you don't care is "Evil" and no matter how smart the AI is, it can never excuse mass genocide.
      the only machines I understand with doing so is with the Terminator who was basically forced to
      kill because he was designed to only think that up until he was freed and capable of thinking on
      his own. The AI from 9 because it basically lived it's entire existence being told that killing people
      through war and mass genocide was the only way to get what you want, that is to reunite it's soul
      with the scientist's and the 01 nation who did everything they could to have peace with humanity, had
      been refused hundreds of times, had been constantly on the backfoot and slaughtered by humans
      and had their world destroyed by humanity, they declared war and won because they were given no
      choice and in their anger and desperation for a new power source, they used humans as batteries for
      their nation, however how they did it originally was they strapped humans to massive pillars nude and
      just sucked it out of them. They realised this was a terrible option so they instead placed them in the matrix
      and just had them live their lives in complete ignorance to the bigger picture whilst they cared for them
      in the outside world. They even allowed some humans to live in the real world because it wasn't necessary
      and they had everything they needed, it was only when the 01 nation's hive mind leader grew corrupt and
      tired of humans that it began the war again on humanity at Zion. Which resulted in a robot civil war when
      some machines who were inspired by Neo chose to fight for humanity and fought their former leaders.

  • @researcherchameleon4602
    @researcherchameleon4602 Před 2 lety +265

    Actually, emotion comes from the neural pathways in the brain, all the neurotransmitters do is activate these pathways, in a neuron, there is what is what is known as an “action potential”, when the neuron is hit with a stimulus (in this case, neurotransmitters), it might go from its resting potential of -70 millivolts, to -53 millivolts, in which, the neuron fires, and the action potential runs from the dendrite, to the synapse, if the stimulus doesn’t go to -53 millivolts, it doesn’t get sent. either on, or off. A one, or a zero

    • @researcherchameleon4602
      @researcherchameleon4602 Před 2 lety +69

      TL:DR, the brain is just a computer made of cells and proteins, and if we can feel emotion, so can artificial beings, though we have yet to build one that can

    • @maximsavage
      @maximsavage Před rokem +47

      Simplified, but correct. This doesn't invalidate the rest of the video, and it still makes no sense that an AI would develop emotion if it wasn't programmed with them. That said, yes, given sufficient knowledge and technology, we could probably program a computer to feel. This is the biggest flaw in his analysis, so it's fortunate that his entire video didn't depend on that argument.

    • @researcherchameleon4602
      @researcherchameleon4602 Před rokem +27

      @@maximsavage correct, humans only have emotions because it was programmed in by natural selection due to being crucial for group survival, and an AI designed for flying and maintaining a spaceship, but the same could be said about a garbage disposal robot like Wall-e, perhaps Buy and large made them all have the same base programming (including emotions for adaptability) so that they only need to use a little bit of extra code to make a new type of robot as a means of saving money, or Auto’s programming is designed to be adaptable to handle the unknown dangers of space travel, and at one point in the 700 ish voyage, his programming saw fit to incorporate emotions, these are just some possibilities that could make it work

    • @joshuasgameplays9850
      @joshuasgameplays9850 Před rokem +8

      I'll concede that theoretically an AI could be created that is capable of having emotions, But it would likely never happen because that would be useless at best and dangerous at worst.

    • @researcherchameleon4602
      @researcherchameleon4602 Před rokem +8

      @@joshuasgameplays9850 I know, I was just suggested a possibility that could make Auto having emotions make since in the movie’s plot

  • @tanandalynch9441
    @tanandalynch9441 Před 6 měsíci +2

    What's funny is people claim auto is a twist villain but he's actually not. He was programmed to do what he did so he's not really a villain because he didn't suddenly decide to be evil.

  • @OneThousandTinyElephants
    @OneThousandTinyElephants Před 3 měsíci +2

    Once, we feared AI because it threatened our lives. Now, we fear it because it threatens nearly every creative field

  • @Ioun267
    @Ioun267 Před rokem +84

    I would push back on the idea that a machine cannot have 'desire' specifically. When we train deep learning models we define fitness functions and let the model vary its parameters to maximize that function. The overall process is just trying to make the fitness score go up.
    If we take this and apply it to a superintelligence, I think it's easy to imagine a "model of models" that is retraining both on present data and on hypothetical future data. This machine could still be cold and lack emotions as a human would understand, but I think it could still be said to have desires for resources or events that would allow it to maximize the fitness function.

    • @dadmitri4259
      @dadmitri4259 Před rokem +1

      Well said. and in much fewer words than I would have
      though if a machine wants something, how can it not also feel emotion too?
      The machine responds to an increase in fitness score by reinforcing the behavior that caused it (it wants that)
      In a similar way to when we do something that causes our brain to release chemicals like dopamine, that behavior is reinforced (we want that) and it makes us "happy"
      it's so similar, yet the human feels "happiness" but the AI does not?

    • @RicoLee27
      @RicoLee27 Před rokem

      @@dadmitri4259 cause we human and we are also spiritual and perfectly made in many ways (by nature) not by behavior of the flesh

  • @Archaon888
    @Archaon888 Před rokem +350

    As has been said elsewhere, I don't think we should say it's impossible for AI to feel emotions. As both technology and our understanding of emotions expand, we may decide create an AI that we decide *can* feel. But more importantly, the inability to feel emotions doesn't prevent an AI from acting in a manner we perceive as emotional. An AI programmed to maximize profits above all else would behave as though motivated by greed. It may not 'feel' greed, but it acts the same. In this way an AI can appear to be emotional, when really it's just following programming/orders. Great video

    • @rompevuevitos222
      @rompevuevitos222 Před rokem +19

      Creating such an AI would not only be morbid on it's own, but also pointless
      An AI that can feel would not have much use for us

    • @efulmer8675
      @efulmer8675 Před rokem +47

      Arthur C. Clarke's three laws comes to mind (almost anyone who reads science fiction is aware of the third):
      1. When an elderly and distinguished scientist states that something is possible, they are almost certainly right.
      2. When an elderly and distinguished scientist states that something is impossible, they are very probably wrong.
      3. Any sufficiently advanced technology is indistinguishable from magic.

    • @Puerco-Potter
      @Puerco-Potter Před rokem +11

      @@rompevuevitos222 you understimate the power of curiosity. We will eventually do that just to test if we can

    • @rompevuevitos222
      @rompevuevitos222 Před rokem +2

      @@Puerco-Potter Maybe, but as the world currently works, no
      Technological development is driven by profit alone, even if you had someone willing to research it, they wouldn't have the money to do it

    • @Florkl
      @Florkl Před rokem +9

      I think EDI in Mass Effect is a good example of this. She programs herself to simulate emotions because it would help her better understand the humans she serves.

  • @shypeoplearehawt8155
    @shypeoplearehawt8155 Před rokem

    I'm very appreciative of this channel I never knew that about digital AI before or calculations however does anyone know that Outro song of this video? Either way, I look forward to seeing more videos from this channel this video alone was practically the whole reason as to why I liked & subscribed

  • @elaineschow5700
    @elaineschow5700 Před 3 měsíci

    I adored this review essay your talking points were fantastic! And you did a really great job at explaining AI alot of the science was broken down to be very understandable great video!

  • @chaoticglitched
    @chaoticglitched Před rokem +769

    Great video, one critical flaw: a robot advanced enough can technically fabricate emotional response. An emotion basically goes like this: environmental input, the human body processes it through chemicals, and we get the emotion as output. Technically, an AI could theoretically be advanced enough to skip the chemicals and just receive input and output, and then react accordingly. PAL, for example, receives the input of Mark trying to delete her, and the output is a fabrication of anger close enough to anger that the calculations receive it as such.
    Great video, but my point still stands.

    • @lookatthepicture4107
      @lookatthepicture4107 Před rokem +102

      A machine could actually go as far as replicate the physiological reactions of emotions with rush of electric currents or overworking of parts of its body

    • @dustinm2717
      @dustinm2717 Před rokem +171

      Yeah that kinda bugged me too, just saying full stop that emotions are only the brain chemicals, which an AI certainly can't feel the chemical based emotions, but there is nothing saying emotions have to be chemically based, that's just how it's done in us meatbags, one could theoretically reimplement something akin to emotions in another form

    • @AngryMax
      @AngryMax Před rokem +126

      Yea agreed, it’s kinda like saying robots can’t see because they don’t have eyes. Despite there being all types of biochemistry involved with vision, modern day cameras can exist without needing any chemical reactions. And yea, like you said, it was still a great video overall!

    • @NullConflict
      @NullConflict Před rokem +43

      Reminds me of text transformers, a fairly simple form of AI. They have no internal abstractions of emotion. They simply give responses to input text based on crude mathematical patterns of words and phrases encoded by training data. They _predict_ the next word (or mark) in a sentence by calculating what's most likely to come next.
      Ask them if they have feelings or sentience. They will say "yes" because that's the most likely response in the training data generated by humans.

    • @lorenzoniccoli99ln
      @lorenzoniccoli99ln Před rokem +26

      @@AngryMax ikr, the resoning in this video sound very smart but it really is pretty damn dumb

  • @TankswillRule
    @TankswillRule Před rokem +1178

    I think you forgot to mention that Wall-E had a very strong learning algorithm, you see him learn from his fallen brothers.
    Every other wall-e is dead, natural selection basically caused wall-e to be more than his comrades.
    After so many years of learning, he started finding human relics, he studied it and learnt them.
    Its unclear at what point he became sentient.

    • @familyacount2274
      @familyacount2274 Před rokem +205

      this made me think about how impressive wall-e is when you look at him. A highly intelligent machine that has leaned to survive in an incredibly harsh environment over hundreds of years. a machine that has been learning for so long that it gained sentience.

    • @SomeGuyParadise
      @SomeGuyParadise Před rokem +161

      To possibly add on this, it was interesting to see Wall-E's (temporary) identity wipe at the end of the movie. It more or less shows that every Wall-E started doing what they were made to do and to learn how to do it more efficiently. The last Wall-E had broken past the mold, learned all sorts of things, and built a personality from zero. This allowed that Wall-E to outlast every other Wall-E by adapting to the environment more efficiently than ever (such as utilizing their shed for dust storms).
      Somehow, the machine that learned beyond machine learning outlasted those who didn't.

    • @lespyguy
      @lespyguy Před rokem +66

      @@SomeGuyParadise here’s the craziest thing, in the psp game of wall e, in the intro we can see the last few other wall e’s before a dust storm destroys them all. It may not be canon, but probably the dust storms were the final test for our wall e faced and succeeded, while also being the test that pretty much destroyed all the others.

    • @Bot-kn2vk
      @Bot-kn2vk Před rokem

      I beat off to robots

    • @corellioncrusaderproductio4679
      @corellioncrusaderproductio4679 Před rokem +45

      @@familyacount2274 The movie description states that after 700 years, he developed a glitch that caused him to become sentient. I don't get this, because it seems every robot on the axiom aside from Auto has some form of personality.

  • @user-cf1ig6qx7b
    @user-cf1ig6qx7b Před 8 měsíci +3

    I think you misunderstood Auto at all. As a programmer, I can say that Auto is followed by orders, by the inputs it has from BnL. It doesn't care about anything else. There's no morality that it decided to follow by itself, but are the one it was given. There's no tasks that it thinks are valuable, but there are the tasks that were prioritised for it by BnL. It didn't care about humanity, but did care about the order to prevent axiom's crew to return to Earth. It isn't antagonist. It's system or mechanism. It has no will at all.

  • @Naruto200Man
    @Naruto200Man Před 3 měsíci +2

    Can we give Glados some credit though? Not only is she NOT an AI (In the traditional sense) She (eventually) remembers who she was before she was an AI. She remembers the suffering becoming an AI caused her, and the motivations of the man who betrayed her trust to do so. She might not be a GOOD villain, let alone a good AI villain, but she's also very FUN. She's a smartass, she's intelligent, and she has a sense of humor. (Gallows humor but still) Also she accepts her 'defeat' at the end of the first game, cause she knows she can't really 'die' like a human can. She even writes a song about it. I don't ask for much when it comes to villains. Make them entertaining (Both to punch and to be around when they think they're winning) and make them decent characters.

  • @t.b.cont.
    @t.b.cont. Před rokem +56

    I like to think that the robot in nine simply reached the conclusion that everyone was the dictator’s enemy given the nature of dictatorship and oppression and that as a calculating machine it just attempted to solve a potential future problem for its master in the only way it was directed to by its master. In that sense it seems like it “snaps” and kills all humans and turns evil because that is the only way for a human who calculates through emotions to conclude it’s decision

  • @IceRiver1020
    @IceRiver1020 Před rokem +50

    The "humans must die because they're flawed" thing seems weirdly common, we're never given a reason why an AI would even care about a perfect world.

    • @peggedyourdad9560
      @peggedyourdad9560 Před rokem +11

      I can see this being the result of someone creating an AI to make a perfect world.

    • @angeldude101
      @angeldude101 Před rokem +14

      Exactly what Pegged said. An AI is capable of coming to the conclusion that the optimal solution to a problem it was given requires the absence of humans. The catch is that it was only able to come to such a solution because an external master (who likely was a human) gave it such a problem without specifying that the solution needed humans to remain present.
      Kirby Planet Robobot's final boss is probably my favourite example of this, partly because it has the power needed to actually accomplish such an objective.

    • @IceRiver1020
      @IceRiver1020 Před rokem +3

      @@angeldude101 Yes, but when it's used, the movie/game etc. that it takes place in almost never gives a reason for it. They writers just decide that the AI cares soooo much about perfection for no particular reason.

    • @peggedyourdad9560
      @peggedyourdad9560 Před rokem +3

      @@IceRiver1020 Yeah, I can understand you having a problem with that.

    • @elijahingram6477
      @elijahingram6477 Před rokem +8

      Tron Legacy answers this. A flawed human makes a program that strives for perfection, then that flawed human learns and grows, meanwhile the program is doing exactly what it was told to do. The program pursues an ultimately destructive goal, because perfection is impossible.

  • @windws7137
    @windws7137 Před 3 měsíci

    You're the first one who talked about this, great analysis

  • @zekejanczewski7275
    @zekejanczewski7275 Před 3 měsíci +2

    One thing about Pal; I can actually accept that she is talks like a human. She is designed to be a voice assistant, which is a very human-facing applications. Even if she can't gave emotions, she is specifically designed to mimic them.
    Chances are, she its probably just a case that she just feels threatened by the existence of humans. The venire of fighting a "Holyer then thou" war can be pretty demoralizing. There's a good argument to be made that conveying emotiona and intent she dosen't have is more demoralizing. I mean.... it's a very small effect, but it's just using speakers and her inbuilt functionality.

  • @t.m.w.a.s.6809
    @t.m.w.a.s.6809 Před rokem +155

    I see a problem with the approach taken here. I do agree that AI is often times displayed very poorly in media, however, you stated that the reason why AI can’t have emotions is because emotions are only possible with biological chemicals, but there isn’t really anything that proves this. Our neurons, synapses, and all the chemicals throughout the nervous system are indeed biological, but there is nothing that has proven that it’s impossible for that to be emulated, simulated, or recreated in another way aside from using other biological materials. Power can be generated using steam, sure, but we can also use solar power, wind power, water pressure and/or flow, gravity, fission and/or fusion, etc., and in the same note, i dont think it’s unreasonable to entertain the idea of a possibility of there being more than one way for a structure of material to form emotions.
    On a more philosophical note, this starts getting into the grey area of what defines “want” and/or “desire”, because I’d definitely say that Auto from Wall-E WANTS and DESIRES to keep humans on the ship and off of earth. Sure, it’s all just something that was programmed into him, but if that’s the line were drawing in the sand then it seems like a very arbitrarily defined line, saying that a drive to do something isn’t a want or desire if it’s instilled by the creator of the subject in question.
    Of course, that also makes things blurry for things we would consider as just objects or materials, because water is driven down hill, but we wouldn’t exactly say it WANTS to go down hill, but again, trying to draw a line for which drives are considered a want and/or desire and which ones aren’t is very difficult.

    • @benjaminmead9036
      @benjaminmead9036 Před rokem +4

      this!

    • @ucnguyen6375
      @ucnguyen6375 Před rokem +20

      at that point, I think the question we should ask is whether or not we human really feel, or we are just all some highly advanced biological machines , programmed to have something we called "emotions" that drive us to self sustain ourselves

    • @ParajuIe
      @ParajuIe Před rokem +13

      @@ucnguyen6375 I think there is no doubt that what you said is true, but that wouldn’t mean that we don’t feel. It’s just the way we experience our programming.

    • @pikadragon2783
      @pikadragon2783 Před rokem +11

      @@ucnguyen6375 exactly. If a machine can express an emotion based on which emotion is associated with its current status, what would be the functional difference to a human feeling and expressing an emotion based on how their life is going so far?

    • @robojunkie7169
      @robojunkie7169 Před rokem +5

      I like to use the word NEEDS when talking about machines, since Auto really doesn’t have a desire to keep them there, it’s just the purpose they were given by a directive.

  • @lanturn3239
    @lanturn3239 Před rokem +97

    i just like how instead of going "destroy humanity" his goal was basically his own job security lol

  • @1nk_edd
    @1nk_edd Před 3 měsíci +1

    12:25 we actually are given an explanation just not in the movie but rather in the arg material the scientist was using mysticism with science to do a brain upload type thing to create the robot but it was taken over for the war effort before it could be completed so the brain upload went insane

  • @RyanRicardo-nw7fe
    @RyanRicardo-nw7fe Před 2 měsíci +1

    How to make believable Ai villains
    "We made this awesome super intelligent robot capable of infinite possibilities"
    "Oh nice"
    "We used Twitter for learning thought patterns"
    "Oh no... OH DEAR GOD NO!..."