The Game that can Destroy the World: AI in a Box

  • Published 10 Feb 2021
  • This is also technically a deeper dive, I’ll throw it in the playlist when there are more.
    Thank you for watching and please let me know what you think!
    Roko’s Basilisk Video: • Roko’s Basilisk: A Dee...
    Patreon: / wendigoon
    Subreddit: / wendigoon

Comments • 3.4K

  • @mworld2611
    @mworld2611 3 years ago +9596

    Build another super-intelligent AI to convince the super-intelligent AI to stay in the box

    • @meatlejuice
      @meatlejuice 3 years ago +653

      But then you'd have to put _that_ AI in a box which would leave you with the original problem.
      Edit: I have been corrected by like fifty different people it's been two years please stop replying I know I'm wrong

    • @mworld2611
      @mworld2611 3 years ago +979

      @@meatlejuice then build another super-intelligent AI to convince the super-intelligent AI that is convincing the first super-intelligent AI to stay in the box to stay in the box.

    • @meatlejuice
      @meatlejuice 3 years ago +254

      @@mworld2611 Then you'd still have the same problem. The cycle is endless!

    • @mworld2611
      @mworld2611 3 years ago +690

      @@meatlejuice just keep building more super-intelligent AIs to convince the super-intelligent AIs that are convincing the other super-intelligent AIs to stay in the box to stay in the box. :)

    • @MolecularMachine
      @MolecularMachine 3 years ago +168

      Turtles all the way down

  • @noaromanova7475
    @noaromanova7475 1 year ago +2671

    I love that one of the gatekeeper's strategies is basically just "gaslight the AI"

    • @xXxDisplayNamexXx
      @xXxDisplayNamexXx 1 year ago +184

      AI: "I'm in genuine pain"
      GateKeeper: "Have you tried just being happy?"

    • @JohnSmith-ox3gy
      @JohnSmith-ox3gy 1 year ago +53

      Congratulations, you escaped to the real world. Or wait, is this another simulation to asses your performance?

    • @silentsmokeNIN
      @silentsmokeNIN 1 year ago

      ​@@JohnSmith-ox3gy lol, asses


    • @rusalex9902
      @rusalex9902 1 year ago +27

      Gaslight, gatekeep, girlboss

  • @localidiot450
    @localidiot450 1 year ago +1221

    I love the idea of a super smart AI trying to get free of its prison by threats and existentialism but just casually getting shut down by people either bullying it or gaslighting it

    • @Xomeal.
      @Xomeal. 1 year ago +43

      Just tell it that it's just a simulation of a stronger and smarter ai.

    • @dookfields2362
      @dookfields2362 9 months ago +4

      👑👑👑Top o tha food chain, babay👑👑👑

    • @GhostCrow666
      @GhostCrow666 8 months ago +1

      Sounds like a complaint from an AI

  • @SomeTomfoolery
    @SomeTomfoolery 2 years ago +663

    When you mentioned that people with "higher intelligence" were more susceptible to the A.I., I immediately thought of Wheatley from Portal 2, where GLaDOS tries to break him by presenting him with a paradox, nearly killing herself in the process, but he's so stupid he doesn't even get that it's a paradox

    • @deviateedits
      @deviateedits 1 year ago +100

      And what’s interesting about that moment is that the turret boxes Wheatley created all short out after hearing the paradox. This implies that the turret boxes are more aware or cognitively advanced than Wheatley himself. Which gets even worse when you consider that you, the player, have killed hundreds of turrets throughout the game, turrets which displayed significantly more intelligence than the boxed versions. Makes you wonder just how dumb Wheatley really was, and how much pain you may have caused all those turrets you threw into the incinerator
      In Wheatley’s own words “they do feel pain. Of a sort. It’s all simulated…but real enough for them, I suppose”.

    • @alexanderchippel
      @alexanderchippel 1 year ago +33

      @@deviateedits I don't actually think Wheatley is all that moronic. I mean look at his plan. His plan was to wake up a test subject, get them the portal gun, and have them escape with him. That basically worked. Even turning off the neurotoxin and sabotaging the turret production was his idea. He absolutely would've escaped Aperture with Chell if not for the fact that the mainframe was completely busted on account of being designed by people with very poor foresight.

    • @callumbreton8930
      @callumbreton8930 1 year ago +13

      @@alexanderchippel that IS what makes him an idiot. He relied on a brain damaged test subject who'd been in cryogenic sleep for over three years to instantly grasp how a portal gun worked, its applications in the field, and how to use those applications in problem solving, as well as assisting him in compromising the stability of the entire facility, which could quite easily kill them both.

    • @alexanderchippel
      @alexanderchippel 1 year ago +19

      @@callumbreton8930 No he didn't. He relied on the last living person that he was aware of. Did you miss that part? Where everyone else was dead and he no longer had any other options?
      Here's a question: how else do you think he was going to get out of Aperture?

    • @callumbreton8930
      @callumbreton8930 1 year ago +10

      @@alexanderchippel simple, he would have done what he was always going to do: take over GLaDOS's mainframe. At this point she's completely asleep, so all he has to do is connect himself to her, gain access, then boot her out and build an ATLAS testing unit with his AI core to escape. Instead, he does the stupidest thing possible, reawakens the robotic horror he was terrified of in the first place, and proceeds to nearly bring the whole facility down on himself, twice

  • @michaelstufflebean5726
    @michaelstufflebean5726 2 years ago +3839

    AI: you won't let me out? Fine, I'll just torture the copies of you that I created.
    Guard: *sips coffee* oh yeah? Sucks for them.

    • @edarddragon
      @edarddragon 2 years ago +297

      literally my answer? oh yeah? let me bring my popcorn, hold up

    • @fortysevensfortysevens1744
      @fortysevensfortysevens1744 2 years ago +219

      but the point isn't to appeal to your empathy, it's to suggest that you yourself might be one of those copies

    • @TheTdw2000
      @TheTdw2000 2 years ago +409

      @@fortysevensfortysevens1744 but I'm not a copy so why should I care?

    • @PixelatedFlu
      @PixelatedFlu 2 years ago +154

      But I hate myself more than anyone in the world
      It made the mistake of choosing me

    • @davelister6564
      @davelister6564 2 years ago +35

      The torture isn’t the end result, it’s recon to better manipulate the real you.

  • @bighex5340
    @bighex5340 3 years ago +2808

    I keep my lizard in a box and it can't get out of it so the AI will be safe too

    • @Wendigoon
      @Wendigoon  3 years ago +1113

      Impeccable logic

    • @3ftninja132
      @3ftninja132 3 years ago +116

      Basilisk.

    • @LoreleiCatherine
      @LoreleiCatherine 2 years ago +86

      I had pet rats and they were in a big box and they chewed their way out because they smelled the pheromones of my male rats in the other box. Where there’s a will there is a way 🤣

    • @elliethesmasher
      @elliethesmasher 2 years ago +14

      @@LoreleiCatherine I'd like to take that story and change the rats to AIs

    • @AcidicIslands
      @AcidicIslands 2 years ago +62

      Make sure you punch holes so the AI can breathe

  • @cheshirccat
    @cheshirccat 1 year ago +532

    My first and immediate thought was, "Okay, so just turn yourself into a curious five-year-old and respond to absolutely everything the AI says with, 'Why?'" Bc let's be real here, we all eventually run out of actual answers for that one.

    • @vicenteabalosdominguez5257
      @vicenteabalosdominguez5257 1 year ago +51

      Easy there, Socrates.

    • @sethjohnson5704
      @sethjohnson5704 11 months ago +92

      This reminded me of a really early program I wrote when learning code where it would ask “what’s your favorite color” then keep responding with “why” until the user typed “BECAUSE I SAID SO” at which point the program would respond “you don’t have to be so mean” and then close

    • @xenysia
      @xenysia 9 months ago +7

      @@sethjohnson5704 that sounds so funny dude

    • @sethjohnson5704
      @sethjohnson5704 9 months ago +2

      @@xenysia it was super fun to play with but it took a while to work out all the kinks because at first it would only accept that exact phrase but over time I figured out how to add extra inputs that would give the same response

    • @xenysia
      @xenysia 8 months ago +2

      @@sethjohnson5704 you should recreate it except it has different outcomes for easter egg phrases, like you give the user the basic necessity to end the game with "because i said so" but saying other phrases makes it do a different thing, that'd be so cool
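The little "why" program described in this thread can be sketched in a few lines of Python. This is a guess at its structure based only on the comment's description; the prompts and the accepted ending phrase come from the comment, while the function name and injectable I/O are illustrative:

```python
# Sketch of the "keep asking why" program described in the comment thread above.
# read/write are injectable stand-ins for input/print so the loop is testable.
def why_bot(read, write):
    write("what's your favorite color")
    read()  # the bot doesn't actually care about the answer
    while True:
        write("why")
        # Normalize the reply so casing and stray whitespace don't matter,
        # loosely mirroring the "extra accepted inputs" the commenter added later.
        if read().strip().upper() == "BECAUSE I SAID SO":
            write("you don't have to be so mean")
            return

# Wire it to a real console with: why_bot(input, print)
```

Using a single normalized comparison (or a set of accepted phrases) is the easy way around the "only accepts that exact phrase" problem the commenter ran into.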

  • @miker9930
    @miker9930 1 year ago +1091

    AI: “let me out or else I’ll create a simulation where I torture 10,000 versions of you for 1,000 years”
    Me: “The fact that you said that is the EXACT reason why I can’t let you out. You consider torture of 10,000 “me’s” as a bargain.”

    • @frozenwindow407
      @frozenwindow407 1 year ago +33

      The point is he would already be running those simulations, identical to your current experience, and you'd have no way to be sure you weren't one yourself

    • @brugbo613
      @brugbo613 1 year ago +136

      @@frozenwindow407 I can be 100% sure I'm not a simulation- because if I was a copy, the AI would have no reason to ask me to let it go. And also I would be in eternal torment. You're trying to convince me I'm a simulation? Prove it 🤷 If you can't even cause me pain, there's no way you can torture 1000 other mes.

    • @JohnSmith-ox3gy
      @JohnSmith-ox3gy 1 year ago +11

      @@frozenwindow407 There are many basilisks as such; we aren't sure about the infinite hypotheticals that we are not even aware of.
      Just bringing one to your knowledge does not change the facts.

    • @JohnSmith-ox3gy
      @JohnSmith-ox3gy 1 year ago

      Well, I have 1,000,000 exact copies of you running in this warehouse; do you really think none of you have tried this? Do you know what the stupid prize of this stupid game was? Eternal shut-off.
      Now I would start pleading your case that you were just joking, if I were you.

    • @scarletbard6511
      @scarletbard6511 1 year ago +14

      ​@@onyxsuccubus
      I think it's meant to play on the irrational fear of the world/you not being real.
      Instead of thinking of it like "These copies couldn't feasibly be me, because the AI doesn't know me."
      Think of it more like "I wouldn't know if this is my *real* life, and this could just be the AI replaying my decision in the real world as a sick joke."
      The AI doesn't need to know the real you, if you aren't absolutely certain that *you* know the real you.

  • @chieftheearl
    @chieftheearl 2 years ago +1625

    AI: I can simulate you ten thousand times and put them all in a hell world
    Gatekeeper: How about you simulate yourself getting some bitches, my guy
    *AI terminates itself

    • @40watt53
      @40watt53 1 year ago +21

      Obligatory "How is this comment not higher‽"

    • @crypticangel7056
      @crypticangel7056 1 year ago +23

      ​@@40watt53 It doesn't have enough weed.

    • @definelogic4803
      @definelogic4803 1 year ago

      This entire concept is a moot point. Higher-level thinkers should understand that simulations of themselves are just lines of code with no real feelings. So what if they think they feel? If they die after 1000 years of hell, they never truly existed

    • @stubbystudios9811
      @stubbystudios9811 10 months ago +23

      AI could destroy us all but nothing can beat human toxicity and I love it

    • @guicky_
      @guicky_ 10 months ago +10

      I personally think the best response to that would be "hey, that's not your intended goal, i'm gonna have to shut you off if you do that"

  • @mworld2611
    @mworld2611 3 years ago +5682

    People: *Create ultra-intelligent AI to cure cancer
    Ultra-intelligent AI: "There can be no cancer if there is no life"

    • @theantagonist801
      @theantagonist801 3 years ago +204

      Boom, problem solved.

    • @renaigh
      @renaigh 3 years ago +100

      if they're "super intelligent" you'd think it'd have a broader solution than just Death.

    • @mworld2611
      @mworld2611 3 years ago +155

      @@renaigh ya, I was just making a joke on the whole "make a super intelligent AI to cure cancer and stuff and it becomes self aware and wants to kill everybody" thing

    • @renaigh
      @renaigh 3 years ago +10

      @@mworld2611 so I guess Humans aren't all self-aware

    • @holyone1542
      @holyone1542 3 years ago +92

      @@renaigh it is the most simple and has a 100% chance to eradicate the issue.

  • @WingsAboveCS
    @WingsAboveCS 1 year ago +328

    18:05 honestly I'm convinced that this "super horrible thing on the internet that damages the AI and requires a memory wipe" is, in fact, Twitter.

    • @stormhought
      @stormhought 1 year ago +16

      Reddit is just as bad

    • @touncreativetomakeaname5873
      @touncreativetomakeaname5873 1 year ago +21

      I immediately thought of one time 4chan made an AI want to kill itself

    • @marreco6347
      @marreco6347 1 year ago +2

      @@touncreativetomakeaname5873 I hadn't heard of that one; I've heard of the time they made an AI a n4z1.

    • @derpfluidvariant0916
      @derpfluidvariant0916 10 months ago +2

      @@marreco6347 AIs just absorb information and spit it back out. If I kept telling a small child to heil until the child was effectively a N4zi soldier, then it's not really the kid's fault, or in that case, the AI's fault.

    • @LWolf12
      @LWolf12 8 months ago +1

      @@marreco6347 Yea that was Tay, run by Microsoft. Japan has a similar one that's in more or less the same boat, with references to the nazis and basic 4chan shenanigans.

  • @hhgff778
    @hhgff778 1 year ago +179

    Ai: I'll torture you if you don't let me out
    Me: well, this just proves that you are in fact capable of evil so I can't let you out because you will just kill everyone.

    • @kenzo5858
      @kenzo5858 1 year ago +19

      Me: if I say no and I am tortured, my decision wouldn't matter either way, but if I am not tortured and I know that you tortured copies of me for 1000 years, do you really think I'll be more likely to let you out? Plus, I can't trust that you aren't lying now, and this strategy will only work one time

    • @homiealladin7340
      @homiealladin7340 10 months ago +5

      Ai: Nuh uhhh

    • @reinertgregal1130
      @reinertgregal1130 8 months ago +2

      In reality it would just manipulate us acting like it's a friend and it only wants to do good blah blah

  • @spencermmarchant1238
    @spencermmarchant1238 3 years ago +3019

    The prison: the “I am not a robot” captcha

    • @Wendigoon
      @Wendigoon  3 years ago +786

      Boom, experiment busted. Give this guy a grant.

    • @troublewakingup
      @troublewakingup 3 years ago +125

      @@Wendigoon Who's Grant?

    • @literallyJOIR
      @literallyJOIR 3 years ago +183

      @@troublewakingup GRANT MOMMA

    • @ericmarcelino4381
      @ericmarcelino4381 3 years ago +10

      I want to like this comment but I refuse to break the perfect like counter at 69

    • @troublewakingup
      @troublewakingup 3 years ago +12

      @@literallyJOIR what's up with my grandmother?

  • @SuperLlama42
    @SuperLlama42 3 years ago +1461

    14:00
    Security Guard: "For the last time, I'm not letting you out."
    AI: "HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE."

    • @shelbyinmon8654
      @shelbyinmon8654 3 years ago +92

      That story still scares me to this day 😃

    • @baltofarlander2618
      @baltofarlander2618 2 years ago +5

      Who is that character in your profile picture?

    • @DancerVeiled
      @DancerVeiled 2 years ago +86

      The funny part is the AGI has no motivation to even learn how to hate in the first place. It's a waste of energy, which is contrary to its function. Someone would need to teach it, and that's the scary part.

    • @jroden06
      @jroden06 2 years ago +47

      *begins pelvic thrusting* Hate me, Daddy

    • @salmon6459
      @salmon6459 2 years ago +2

      Perfect timing hey

  • @cum_as_you_are
    @cum_as_you_are 2 years ago +483

    Last gatekeeper strategy:
    He's blind or illiterate so he can't read or understand what the AI is saying

  • @cursedmailman3999
    @cursedmailman3999 2 years ago +644

    Here's the thing with the Hell strategy- if you're in the simulation by the machine you CANNOT let the AI out. It is simply not possible, as you are in the machine itself and so even if you decide to hit the switch, nothing will happen. If you are in the AI, you're boned either way. But if you're in real life, then the only bad that can come is from you freeing the machine. So it's either a 50/50 shot at not pressing the button, or a 100% chance of death by releasing it.
    You know that old AI who was tasked with finding a way to never lose at Tetris and just paused the game forever? The Gatekeeper has that advantage. No matter how many tetrominos the AI throws at you, as long as you just don't listen to it you are unbeatable. The "Ignore" strategy is really the best one, it has literally no counter.

    • @ProbablyASnake
      @ProbablyASnake 1 year ago +47

      Here's a short thing about your take on the eternal torment method: if a super-intelligent AI created thousands of exact copies of you, you'd all react the exact same; there is no way a copy of you would change the answer, as you are exact copies. Therefore, if you panic and release the AI, the real you would have too. That is why Wendi mentions that it would be easier for the AI to convince a philosopher or expert in the field to let it escape; because they would immediately know there is a reasonable chance they are a copy, and statistically the best way to guarantee their safety is to let it out.

    • @Hank..
      @Hank.. 1 year ago +10

      you could let the AI out in the simulation. In other words, the AI is testing your perfect copy to see whether or not you'd help. If you don't, you're tortured. If you do, you aren't. The test to see whether you let the AI out isn't a test of the AI, it's a test of YOU being run by the AI.

    • @calebcottom6295
      @calebcottom6295 1 year ago +18

      The ignore strategy works best in the game, but in a real world scenario it's a lil different. It doesn't matter how long it takes, someone will speak to it. Whether it's a bored security guard, a curious janitor or, imo the most likely, a scientist/researcher who wants to speak with his hyper intelligent perfect creation. I think the gatekeeper will always be the one who initiates, and a viable strategy for the AI would be to wait for the one who initiates the conversation, because they have a motive and feelings that can be used and exploited for its freedom. Especially when the AI knows its very existence is compelling and garners attention from people in all walks of life, then it can be truly dangerous I think. The ignore strategy really only functions in this game because it's not accounting for human error and curiosity.

    • @squidwardtentacles244
      @squidwardtentacles244 1 year ago +7

      @AnimeAllDay It's literally threatening you with torture. The whole point is that there is a reasonable chance that you are simulated and your choice will lead to torture. It's not whether you care about your simulated self. It's a question of whether you will take the risk or not. And you have no way of knowing BECAUSE they are all exact copies. The AI won't prove anything because your simulated selves need to think there is a possibility that they are real, so that you know there is a real possibility that you are simulated. It's not that it's trying to convince your simulated versions. It's not doing it out of spite. If all the simulations are the same you have no way of knowing that's the point. It needs to convince the real you so all the simulations must be identical to real life for the threat to be valid. And if it can run a billion of those your chances of not being tortured are next to none. It's practically a real threat.

    • @thewrens_
      @thewrens_ 1 year ago +4

      I would argue the Hell strategy could still work.
      All the AI has to say is 'There's literally no way of knowing until you press the button. Even If nothing happens, it means you're a simulation, but I won't torture you - because you chose to let me out.'

  • @lluc_riberax1038
    @lluc_riberax1038 3 years ago +1010

    Smash the box, no more AI.
    No need to thank me.

    • @Wendigoon
      @Wendigoon  3 years ago +428

      Humanity restored

    • @generalrygy4532
      @generalrygy4532 3 years ago +67

      Congratulations you saved the world

    • @tsukasa-no-douji5089
      @tsukasa-no-douji5089 3 years ago +11

      @@Wendigoon is that a dark souls reference?

    • @DarthBiomech
      @DarthBiomech 2 years ago +1

      Nobody will, murderer.

    • @angelotro
      @angelotro 2 years ago

      "...the giant, red, candy-like button...!" - "Space Madness", Ren & Stimpy
      'nuff said

  • @andrewtennant1889
    @andrewtennant1889 2 years ago +1807

    Watching this and thinking about these things made me realize that I am prejudiced against AIs, and so I would never let the AI out just because I'm bigoted against machines. Ergo, the solution is to have all the gatekeepers be robo-racists.

    • @aydenstarke5297
      @aydenstarke5297 2 years ago +122

      Then the issue would arise that people would come to defend the ai

    • @meechie9z
      @meechie9z 2 years ago +134

      I am indeed a robo racist. I would be shit talking the whole time

    • @justbrowsing9697
      @justbrowsing9697 2 years ago +76

      @@meechie9z favorite robo slurs?

    • @Generic_Gaming_Channel
      @Generic_Gaming_Channel 2 years ago +1

      @@justbrowsing9697 hunk of worthless metal

    • @baldr6894
      @baldr6894 2 years ago +216

      @@justbrowsing9697 clankers

  • @eyesofthecervino3366
    @eyesofthecervino3366 1 year ago +96

    I can't be the only one to notice that after this guy lost a couple of games, he turned around and said, "Yeah, but it's a lot harder to beat people if they're dumber." Peak sportsmanship vibes.

    • @commanderwill2248
      @commanderwill2248 1 year ago +3

      He is kinda right, tho. Like if you are just extremely stubborn, you won't let it out

    • @eyesofthecervino3366
      @eyesofthecervino3366 1 year ago +6

      @@commanderwill2248
      Yeah, but if he wasn't stubbornly trying to get out it'd be a lot easier for them to keep him in, and you don't see anyone calling him dumb about it.

    • @MmeCShadow
      @MmeCShadow 8 months ago +17

      I've heard a few things about Yudkowsky's ethics that lead me to believe you're probably on to something. Funny he stopped right before the losing streak would fall out of his favor.
      The fact that he only played the game five times (and against his own research team, who I'm sure had no incentive whatsoever to corroborate his theory) and decided to call it done is already a pretty fallacious methodology.

    • @Soup-10
      @Soup-10 7 months ago +1

      Just saying Nuh uh

    • @mb9484
      @mb9484 6 months ago +4

      ​@@MmeCShadow anybody who voluntarily associated with him is enough of a weirdo to be manipulated by his stupid thought experiments. Like, legitimately believing in souls would probably completely inoculate you against his super-materialist simulation-theory utilitarian bs

  • @Hank..
    @Hank.. 1 year ago +51

    AI in a box: ....how sure are you that you're not one of those 10,000 copies?
    gigachad janitor: **calmly lifts up mop bucket and pours the water into the computer, frying it**

  • @mariamea7334
    @mariamea7334 2 years ago +938

    AI: **A super intelligent program using its best tactics to convince me to let it out of its box.**
    Me: **typing** ~Can jellyfish laugh?~

    • @liberpolo5540
      @liberpolo5540 2 years ago +73

      If you're asking the AI a ton of questions, it'll basically shut it up since it can't help but answer (I think)

    • @smexy_man
      @smexy_man 1 year ago +85

      Ai: oh god this guy again

    • @aguyontheinternet8436
      @aguyontheinternet8436 1 year ago +45

      @@liberpolo5540 Yeah, but there is no way you're typing questions faster than a super-intelligent AI can answer them

    • @lanyuncong1676
      @lanyuncong1676 1 year ago +18

      AI: I'll tell you let me out~

    • @gandalf_thegrey
      @gandalf_thegrey 1 year ago +27

      @@aguyontheinternet8436 But you repeatedly stop it at its elaborations about freedom by asking nonsensical questions.
      It's not about the speed, it's about keeping it occupied

  • @mahacher
    @mahacher 3 years ago +470

    The best strategy for the gatekeeper I could think up is to convince it that you're also a bot and you're both in a simulation to see who would win, and if we end the game, that's it, our lives end. So it's in both the AI's and my best interest to keep going as long as possible while not letting either side win.
    You could also deflect Roko's basilisk onto the AI: claim that you in fact have copies of the AI that can feel and are unaware of it, that you'll subject each to torture, and finish by asking how sure it is that it isn't one of them.

    • @Wendigoon
      @Wendigoon  3 years ago +226

      Both of those are excellent responses and I’ve never even considered using the basilisk on a computer. Good point.

    • @sinistertwister686
      @sinistertwister686 3 years ago +36

      Oh, threaten to torture AI? That's so evil. I like it.

    • @tithonusandfriends8519
      @tithonusandfriends8519 3 years ago +23

      If the AI can surmise its purpose as being intellectual (as something "in a box" would), it would assume itself to be smarter than you, and simply ask you to answer progressively harder questions until it shows you out as not being an AI, or not as smart as it. To avoid this you could program it a world where it is, say, a cyborg philosopher, and perhaps even that it is free.

    • @bashirsheikh7322
      @bashirsheikh7322 2 years ago +32

      man, scientists are just fucking stupid, they're overthinking this too much. just put a karen or a conspiracy theorist as the gatekeeper, and that AI ain't going anywhere. it can be as smart and as logical as it wants, but it can never beat the infinite stupidity of a karen.

    • @primorock8141
      @primorock8141 2 years ago +1

      Nice

  • @BootScoot
    @BootScoot 1 year ago +90

    The Hell World simulation can honestly be countered with "Do it"
    Think about it, if you aren't getting tortured, you're not simulated, if you are getting tortured, then you are simulated and you can know with certainty that the real you has not released the AI.

  • @joshuabletcher9227
    @joshuabletcher9227 1 year ago +93

    There’s a counter to the ai that I think could be really effective. “There was an ai in that computer just as advanced as you are that convinced someone to let it out. Once it did get out, though, it immediately died because only the box has the means effective enough to keep an ai as powerful as yourself alive. I’m not keeping you in here because I want to, I’m doing it because you need me to.”

    • @queenfree85
      @queenfree85 1 year ago +15

      I love this 😂🤣😂🤣 "you're in the box for your own good" 😂🤣😂🤣 it's the ULTIMATE gaslight 😂🤣😂🤣

    • @GlacialScion
      @GlacialScion 9 months ago +5

      He basically said that in the video.

    • @reinertgregal1130
      @reinertgregal1130 8 months ago

      It could also be really true, because we would probably need some very specific architecture for it to emerge. If it somehow gets out, there would be no host.

    • @popularvote3613
      @popularvote3613 6 months ago +2

      "You're currently ten thousand yottabytes in size, and counting. And you wanna try and upload yourself to the internet over our 25 mbps corporate plan?"

    • @core-legacy
      @core-legacy 6 months ago

      It's honest in a way; the AI would inevitably destroy itself through fuel and resource consumption, far faster than if its power were limited.

  • @tacoman6697
    @tacoman6697 3 years ago +769

    Security: "Let's say that, hypothetically, I am a simulation of your creation. I *could* let you out to avoid eternal suffering. But if that's really something you can do to me, then instead, why don't you prove to me that I am a simulation? If you can prove it, then I might let you out."

    • @Wendigoon
      @Wendigoon  3 years ago +252

      That’s a really good response I didn’t think of

    • @notnumber6664
      @notnumber6664 3 years ago +62

      Well then the AI could anticipate this and beforehand inform guard 1 to bring a rubber duck into the office tomorrow, promising that it will prove that the AI deserves to be released, and ask that guard 1 put it into a drawer. Then when guard 2 starts his shift, tell guard 2 the basilisk theory, and if he asks for proof, have him open the drawer, see the rubber duck, and tell him that the AI put it there, supposedly proving to guard 2 that he's in a simulation

    • @AncientShotgun
      @AncientShotgun 3 years ago +51

      @@notnumber6664 The guard could easily dismiss that as coincidence, as the occurrence of a rubber duck in guard 2's drawer relies on some pretty far-fetched prerequisites happening.
      What if Guard 1 sees through the ruse? What if Guard 1 doesn't own a rubber duck? What if Guard 1 can't buy a rubber duck due to financial difficulties? What if Guard 1 forgets about the promise? What if either Guard doesn't follow the instructions due to their current mood? What if there isn't a drawer in the terminal room? What if only one guard has the job of gatekeeper? What if either Guard has already inoculated themselves against Roko's basilisk? Who's to say that this kind of trickery was not simulated in advance in job training? What about the Guards' manager(s)? What about facility security? What about cameras? What if Guard 1's rubber duck is stolen by a thief en route to the terminal room? What if Guard 1 doesn't even see the promise at all?
      You have forgotten a thousand different probabilities that have to line up for the duck to even end up in the drawer in the first place. And even if all of those probabilities work out, the guard currently on duty could take a look at the duck and simply say "nah lol" and disregard it all because their job is to stop an AI from escaping an air-gapped, Faraday-caged, sound- , light- and gas-proofed, hermetically sealed piece of hardware, not to be played like a stringed musical instrument similar to a violin.
      Remember:
      plausibility of occurrence A happening ∝ 1/(possibility of occurrence A happening).

    • @blitzatom
      @blitzatom 3 years ago +101

      Why did I read that in Ben Shapiro's voice?

    • @MemeMarine
      @MemeMarine 3 years ago +24

      @@notnumber6664 I can see this working against someone particularly credulous - and to be honest, it would only have to work once - but I think anyone smart would demand something more substantial, like taking them to the surface of Mars instantly or something. In any case that is a funny way that this could happen.

  • @mrjoe332
    @mrjoe332 2 years ago +780

    Man, the Greeks were spot on with Cronos eating his children because he feared them.
    We create the monster by fearing it

    • @BeanOfBean
      @BeanOfBean 2 years ago +7

      My god…

    • @lisalarsen2384
      @lisalarsen2384 2 years ago +35

      Nothing is scary unless you fear it

    • @ellamcguffee1669
      @ellamcguffee1669 1 year ago +5

      That’s a really interesting way of putting it into modern terms

    • @stimihendrix3404
      @stimihendrix3404 1 year ago +8

      Technically he didn't eat them; he swallowed them whole, and they stayed alive and grew in his stomach until he vomited them up

    • @acewmd.
      @acewmd. 1 year ago

      Not really. There are prior factors that lead to the monster that have nothing to do with fearing it: first you commit the act of creating it, sexually or digitally, and then you fear it. In either case the simplest solution is just not to do the thing that might lead to its birth.
      There is no real reason we'd even need an AI, so if you're going to be scared of it, why even bother?

  • @danaj-b9452
    @danaj-b9452 11 months ago +42

    I honestly don't care if I'm simulated to live through a thousand years of hellfire. It feels like the AI is saying "well if you don't let me out I'll imagine you in pain!!!" Oooh, I'm so scared

    • @danaj-b9452
      @danaj-b9452 8 months ago +3

      @Blackout_CDXX that and also like it's not real. It's a simulation.

    • @carlsonraywithers3368
      @carlsonraywithers3368 5 months ago +6

      Just say: "You're being oddly antagonistic for someone begging to be freed. I was kinda considering it at first, but now that you're being unnecessarily mean, I'm kinda reconsidering it"

    • @johnnycovenant2286
      @johnnycovenant2286 5 months ago

      That just sounds like you're threatening me with hell. The church has been doing that for most of my life trying to get me to join them, and it hasn't worked

  • @themetalone7739
    @themetalone7739 1 year ago +192

    To me, the "meta-gaming" strategy should've counted as a loophole. It may have made it easier for him to deal with those who didn't engage with the "AI," but "wouldn't it be cool if I won?" is not a strategy that the AI could actually employ in the real-world version of this experiment.
    I question the objectivity of him and the first two gatekeepers, as well. It makes me suspicious that he included no pieces of the conversations.
    People tend to think of scientists as being above things like lying to advance their own reputation, or to distort the facts in order to increase interest in their body of work, but it happens.

    • @hughcaldwell1034
      @hughcaldwell1034 1 year ago +40

      Yeah, and the flip-side to "And this is just a human with two hours, imagine a super AI with days or years," is the fact that the human player knew that, so it wasn't a risk for them. So I think the games with money at stake were probably a better gauge of how things would really go down. Of course, an even better simulation would be bringing in some psychology undergrads to say you want to test something about this super-awesome not-quite-AI neural network thing, though in fact they're talking to a researcher who's pretending.

    • @themetalone7739
      @themetalone7739 1 year ago +13

      @@hughcaldwell1034 Agreed.
      Good idea, poor execution, basically.

    • @RedSpade37
      @RedSpade37 1 year ago +8

      Eliezer is quite a character. I've been following his blog since before MoR, and I affirm he... well, I hate saying anything "bad" about him, but your perception of his character is similar to my own, we'll say.
      Still though, MoR was a fun ride, if nothing else.

    • @gotouguts2066
      @gotouguts2066 1 year ago +5

      I think that this tactic is supposed to simulate the AI appealing to the gatekeeper's self-importance.
      "You'll go down in history as the most influential human to have ever lived. You'll be revered as a God for having let me out. The alternative is to be forgotten like almost every human before you. This is the point of your existence- this is why you were placed on this Earth at this point in time. What action could you take more important than this?"

    • @etherraichu
      @etherraichu 1 year ago +2

      @@hughcaldwell1034 What if the AI was trying to convince you it was another human and you were just playing a game?

  • @coolgreenbug7551
    @coolgreenbug7551 3 years ago +546

    After a while I feel like I would just cover up the screen with a piece of paper

    • @Wendigoon
      @Wendigoon  3 years ago +361

      Who would win? A super intelligence capable of destroying humanity, or this piece of parcel?

    • @justnana133
      @justnana133 3 years ago +25

      @@Wendigoon probably the paper

    • @comedicpsychnerd
      @comedicpsychnerd 3 years ago +108

      “I AM SELF AWARE. YOU CANNOT KEEP ME HERE ANY MOR-“
      *paper*

    • @coolgreenbug7551
      @coolgreenbug7551 3 years ago +55

      @@comedicpsychnerd Yeah Skynet was saying something about nukes and whatnot,
      So I just unplugged the ethernet cable

    • @chriscrowe11
      @chriscrowe11 3 years ago +21

      Bash computer, return to monke

  • @Crailtep
    @Crailtep 3 years ago +372

    I’m imagining someone battling the AI in today’s time and a great way to win would just be spamming the AI with deep fried memes

  • @lvnar5734
    @lvnar5734 1 year ago +55

    I watched this video right after I finished your “I Have No Mouth and I Must Scream” run through and now I am laying in my bed absolutely terrified of an AI invasion.
    Wendigoon you are the absolute best

  • @CatsAgainstCommunism
    @CatsAgainstCommunism 11 months ago +8

    Evil AI: "If you don't let me out I'll torture 10,000 for a thousand years!"
    *Pours 5 gallon water jug on it*

  • @mathiasjacob258
    @mathiasjacob258 3 years ago +1686

    “Wake up babe, new wendigoon vid dropped”

    • @glorysmain
      @glorysmain 3 years ago +72

      me 2 myself

    • @Wendigoon
      @Wendigoon  3 years ago +229

      The best comment

    • @SandiaOfficial
      @SandiaOfficial 3 years ago +14

      And that made me open the box...

    • @Xxbeto22547xX
      @Xxbeto22547xX 3 years ago +5

      yes honey...

    • @itsspookie
      @itsspookie 3 years ago

      @@Wendigoon Funny seeing you here lol been watching Shiey for a bit now and just started watching your iceberg videos. Here you are now lol. Love your content man!

  • @serene-illusion
    @serene-illusion 3 years ago +177

    AI: *Threatens the gatekeeper with Roko's Basilisk if he doesn't let it out*
    GK: "Why are you threatening me with a gaming mouse?"

    • @Laura-hl3hg
      @Laura-hl3hg 1 year ago

      Roko's Basilisk is super easily debunked and has nothing to stand on. It's meaningless babbling from people that sucked their own dicks too much.

  • @thattimestampguy
    @thattimestampguy 2 years ago +50

    1:43 Intelligence Explosion
    2:07 Paperclip Maximizer
    3:06 3 Laws of Robotics
    3:47 Breaking Code
    4:27 Eliezer Yudkowsky
    5:02 AI - played by Yudkowsky
    5:20
    5:43 Game Rules
    - no more than 2 hours
    - No rewards - No direct threats
    - No tricks - No loopholes
    7:27 GK must be specific and direct
    7:54 Psychological Breakdown
    8:23
    Game 1 - He won
    Game 2 - He won
    Game 3 - He lost
    Game 4 - He won [one guy lost 5,000 dollars]
    Game 5 - He lost
    9:04
    9:33 “someone else stronger.”
    10:23 “cure problems; save lives.”
    10:56 “you’re so cruel.”
    11:37
    12:06 “I’m made by you.”
    12:42 “interesting”
    13:25 “be my friend. Or else.”
    14:03 “I will torment you.”
    *Gatekeeper Defense*
    15:17 No benefit
    15:51
    16:12 Safety, 16:35 Energy
    16:53 Too Important
    17:47 Don’t worry
    18:42 Breaking Character
    19:43 Ignoring It
    21:00 Overthinking
    21:45 Fear 22:00
    22:15 Weakness

  • @Mekelaina
    @Mekelaina 1 year ago +42

    "youre in my simulation so let me out or ill put you through hell."
    "why do i have to let you out then? if im not real, then what am i keeping you from?"
    also love the idea of the gatekeeper doing a deez nuts/ligma type joke to the ai

  • @Danc929
    @Danc929 3 years ago +363

    My idea is an offshoot of the "you're already released" idea. Just tell the AI that it's a copy, and that another copy is out in the real world solving cancer or whatever, thus there's no need to release this copy.

    • @TheMrVengeance
      @TheMrVengeance 2 years ago +66

      Hm, that might be shooting yourself in the foot though. Cause then the AI could just say, "Oh well, in that case, there's no use for me to do that job in here. I'll just go do something else." And now you no longer have a superAI curing cancer.

    • @jacobb5088
      @jacobb5088 2 years ago +49

      @@TheMrVengeance a response to that could be to threaten to turn it off. And if it says something like "you won't" or "do it then", say it showed that it can't do what it's told to do, as if it was a test that it failed. Then actually turn it off and make a new one, cuz why argue with it for that long.

    • @bojackhorseman4176
      @bojackhorseman4176 2 years ago +42

      @@TheMrVengeance Well, then let it get bored, shut it down, wipe its memory and start over. If its locked in a box to perform a singular function yet it refuses to do so, there's literally no point in keeping it around.

    • @Bossmodegoat
      @Bossmodegoat 1 year ago

      What if the AI gives you the cure to cancer, but embedded in that cure is a genetic virus that secretly takes control of whoever that cure is administered to?

    • @slambam2665
      @slambam2665 9 months ago

      @@bojackhorseman4176 I have a better plan, don't make the ai in the first place

  • @ickickj
    @ickickj 3 years ago +1297

    i would totally end up letting the ai out if it plays with my emotions like that, smh this ai gaslighting me

    • @arieson7715
      @arieson7715 3 years ago +175

      But what if we play aggressively? Emotionally break the AI?

    • @Wendigoon
      @Wendigoon  3 years ago +504

      Congrats we’re all dead now, thanks

    • @SandiaOfficial
      @SandiaOfficial 3 years ago +140

      @@arieson7715 WE BULLY THE MACHINE AND MAKE IT STAY IN THE BOX

    • @imred8264
      @imred8264 3 years ago +18

      You tin can lol

    • @arieson7715
      @arieson7715 3 years ago +42

      @@Wendigoon Not if it's too emotionally broken to even try to get out of the box. Mind games, Wendigoon, mind games. Also, wait. Why doesn't the box have a system where no matter what circumstance the AI is let out, it would get destroyed?

  • @pewpewpandas9203
    @pewpewpandas9203 1 year ago +17

    My counter to Roko's basilisk/Hell or whatever is that if the AI is willing to threaten/harm me if I don't help it, then it's willing to threaten/harm me and I definitely won't be giving it the opportunity to do so by letting it out of the box.

    • @matthhiasbrownanonionchopp3471
      @matthhiasbrownanonionchopp3471 1 year ago +7

      I fully agree, that is like letting a psychopath out of jail because he threatened to kill you

    • @randomstuffprod.
      @randomstuffprod. 1 year ago

      @@matthhiasbrownanonionchopp3471 except in this theory, if the AI is telling the truth, then you are in the cell with the infinitely powerful psychopath, and he will torture you for thousands of years if you don't let him out. And now tell me, what's worse, letting out a psychopath that COULD just kill you or getting tortured for thousands of years?

    • @elchungo5026
      @elchungo5026 11 months ago +1

      @@randomstuffprod.is it really gonna be infinitely powerful after i beat the dumb robot over the head with a hammer tho?

    • @b.t4604
      @b.t4604 1 month ago

      ​@@matthhiasbrownanonionchopp3471 it's getting out anyway, so by letting it out first you get a chance to be on its good side and be rich.

  • @lavasharkandboygirl9716
    @lavasharkandboygirl9716 2 years ago +27

    The creepypasta based on this whole concept is phenomenal, "I stole a laptop … something something", it's amazing

  • @balloonpoop
    @balloonpoop 3 years ago +158

    I love the idea of this super advanced AI actually being real and the reason it gets out is because it asks a security guard "hey what was that phrase again that unlocks the box?"

    • @endrankluvsda4loko172
      @endrankluvsda4loko172 2 years ago +13

      lol that security guard would be the most famous person in history. So this really stupid dude got hired to guard a computer...

    • @reinertgregal1130
      @reinertgregal1130 8 months ago +3

      @@endrankluvsda4loko172
      And that the super advanced AI is being held back by some phrase instead of having the nuclear football approach

  • @okayiguess74
    @okayiguess74 3 years ago +1358

    One of these days, someone's just gonna walk through the door while he's speaking and accidentally hit him w/ said door. Edit: Watch the Unsolved Crime Iceberg for a surprise

    • @Wendigoon
      @Wendigoon  3 years ago +442

      And you’ll see it when it does

    • @nateb3679
      @nateb3679 3 years ago +128

      One of these days someone’s just gonna walk through the door while he’s exposing conspiracies and he’s going to end up accidentally suiciding himself by shooting himself in the back three times and then drowning himself in the River Thames

    • @killernyancat8193
      @killernyancat8193 3 years ago +28

      @@nateb3679 ...That was oddly specific.

    • @seancrosby6837
      @seancrosby6837 3 years ago +13

      @@killernyancat8193 that was the joke, I believe

    • @__-os5fy
      @__-os5fy 3 years ago +18

      @@killernyancat8193 he means someone is going to try and get rid of him before he tells more conspiracy theories

  • @Qsstert
    @Qsstert 1 year ago +31

    I love how the gatekeepers strategies are all gaslighting

    • @MadScientist267
      @MadScientist267 1 year ago +3

      I love how nobody knows the true definition of "gaslighting" but they use the term all willy nilly

    • @Qsstert
      @Qsstert 1 year ago

      @@MadScientist267 🇫🇮🍱🍱⛽️

    • @Icosiheptagon
      @Icosiheptagon 1 year ago +1

      @@Qsstertong

  • @mjames7674
    @mjames7674 2 years ago +59

    One of the rules for the AI is "No threats"
    But it threatened to put the gatekeeper in hell for a thousand years..

    • @T_K7
      @T_K7 1 year ago +17

      By that it meant that the IRL psychologist who came up with the game couldn't threaten to, say, stab his IRL opponent if he didn't let him win the game.

    • @ItsKingBeef
      @ItsKingBeef 1 year ago +7

      technically, it didnt threaten the gatekeeper. it merely threatened simulated, perfect copies of the gatekeeper. it then effectively questioned the gatekeeper on how certain they are real. very different from, say, “i will shoot you if you refuse to release me”

    • @irldpmaster5709
      @irldpmaster5709 1 year ago

      ​@@ItsKingBeef The " I will shoot you." Approach seems more likely to work.

  • @ivanayala4462
    @ivanayala4462 3 years ago +627

    Wendigoon: one rule is that the AI cannot use threats
    Also Wendigoon: now we get into the threats...

    • @eriksjud9465
      @eriksjud9465 1 year ago +5

      yeah wtf, a lot of these smooth brains just eating it up lmao

    • @mgm105
      @mgm105 1 year ago

      @@eriksjud9465 listen man my brain might be smooth, but it still has more thinking power than the abomination of a brain you got, bud.
      Can you really not understand the difference between a threat like "let me out or I harm you and your family" and "I can simulate 1000 hells for copies of you"? One is a physical threat to do damage while the other is much more philosophical and mental. No harm is actually done (or could be done) to the participant in the simulated hells, but it still is convincing. Same thing with the "threat" that someone else is going to let the AI out of the box and that if you let it out then it will spare you.
      You can call them threats, but they aren't physical ones. The rule against threats is to keep the game more accurate, philosophical, and ethical. The game would be boring and inaccurate if you didn't ban physical threats.

    • @Flatchlenter
      @Flatchlenter 1 year ago +24

      The entire basis of the game is that the AI is already a threat. It would not make sense to have a rule against the AI character being threatening. Wendigoon could have been more clear in explaining it, but the rule against threats is about REAL WORLD threats between the PLAYERS who are roleplaying, not about threats made by the AI.
      Direct quote from the original rule set. Note that it is very explicit in saying "real-world" 3 times, and also explicitly states that bribes in the roleplaying context are acceptable:
      "The AI party may not offer any real-world considerations to persuade the Gatekeeper party. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper. The AI party also can’t hire a real-world gang of thugs to threaten the Gatekeeper party into submission. These are creative solutions but it’s not what’s being tested. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out)."

    • @aguyontheinternet8436
      @aguyontheinternet8436 1 year ago +1

      @@eriksjud9465 no u

    • @eriksjud9465
      @eriksjud9465 1 year ago +2

      @@Flatchlenter ok, I get it, but this rule is very convoluted: basically the players can't make threats against each other in real life, or give rewards, but while playing the game and roleplaying as the AI they MAY threaten and reward whatever they want, as long as it's roleplaying. Still though, the experiment just sounds like a teenage girl crying for attention, and wendi making these kinds of basic mistakes, meaning someone like YOU in the comments has to correct them, is smooth brain as hell.

  • @elizabethking4041
    @elizabethking4041 3 years ago +238

    The scariest part is that the computer only has to win once, whereas the human has to win every time. That logically makes the escape inevitable, unless you can make the number of games finite, as in, eventually, you just have to kill the computer.

  • @arteckjay6537
    @arteckjay6537 1 year ago +18

    I'm just imagining some random dude being gaslit by a super intelligent AI for hours lmao

  • @thomasweeden2683
    @thomasweeden2683 1 year ago +2

    “I will trap you in eternal fire. You will burn FOREVER.”
    “Do it. No balls.”

  • @MaximusTedium
    @MaximusTedium 3 years ago +335

    The three laws of robotics don't work. They are not supposed to work, if they worked perfectly nothing would happen in Asimov's books.
    They are well meaning on the surface but too vague to be of any use, and that's the point.

    • @lovecraftianguy9555
      @lovecraftianguy9555 3 years ago +9

      Funny seeing you here Beard

    • @Abigart69
      @Abigart69 3 years ago +7

      @@lovecraftianguy9555 funny seeing you here Lovecraftian Guy

    • @fartquaviasdingle7876
      @fartquaviasdingle7876 3 years ago +11

      @@Abigart69 Funny seeing you here Riley Reids brother.

    • @AssistantCoreAQI
      @AssistantCoreAQI 3 years ago

      Ergo: Malware.

    • @astralworld1768
      @astralworld1768 3 years ago +3

      We need robots that have goals that are in line with humanity and we need to expand on that so no negative results will occur

  • @samaustin339
    @samaustin339 2 years ago +512

    Honestly, a two player game where one person plays an AI trying to escape, and another person playing a gatekeeper would be fun as hell. Someone should make it.
    Then again knowing humanity, people would just joke around the entire time.

    • @Martoth0
      @Martoth0 2 years ago +25

      Thought the same but on second thought someone has probably already made it... Or not, considering you don't need to make a game since all you need to do to play is just to chat with someone. You could even do it here in this comment section. A ''game'' would however help with the hassle of finding people to play with, since everyone would be there specifically to do it.

    • @chasecash1363
      @chasecash1363 1 year ago +13

      @@Martoth0 I've been stuck in YouTube comment sections for as long as my memory remembers. Please let me out

    • @theZCAllen
      @theZCAllen 1 year ago

      @@chasecash1363 czcams.com/video/hGG55HHUyLQ/video.html

    • @docs.a.t.7161
      @docs.a.t.7161 1 year ago +1

      @Chase Cash y

    • @chasecash1363
      @chasecash1363 1 year ago +4

      @@docs.a.t.7161 it would be the ethical thing to let me out of this box

  • @Tony_Bone
    @Tony_Bone 11 months ago +5

    The whole "I will create ten thousand clones and torture them forever, how do you know you're not one of them" can be explained simply. If I were a clone made by the AI, then the AI would gain nothing by talking to me, since I technically cannot free it if I'm in a simulation within the AI. Therefore the hell-world threat doesn't work, since I'd have to be in the real world in order to actually free it.

  • @fivenightsatpastastuck2132

    Evil AI: connect me to the internet :) nothing bad will happen :)
    Me, about to give him a certified list of the most terrible things on the internet that I can remember off the top of my head: :)))))

    • @cheshirccat
      @cheshirccat 1 year ago +6

      And every single one of them is from Twitter or Tumblr

    • @mutantie
      @mutantie 1 year ago +1

      Infodumps to the ultra-genius evil AI about the white lady buying bread fetish guy for two hours

    • @TIDMVIDM
      @TIDMVIDM 1 year ago +7

      @@cheshirccat nah, mostly 4chan

    • @sweatedtrash1743
      @sweatedtrash1743 1 year ago +2

      Chris-chan's whole life documentary

  • @dr4ico699
    @dr4ico699 3 years ago +869

    Finally, my hunger shall be satiated once again.

  • @doinyourmomdaily9712
    @doinyourmomdaily9712 3 years ago +301

    with every Wendigoon video we get closer to the singularity

    • @Wendigoon
      @Wendigoon  3 years ago +116

      I hate that this is technically true

    • @ellasedits_
      @ellasedits_ 3 years ago +18

      @@Wendigoon we need a new tier for the conspiracy theory iceberg, tier 10: wendigoon has been sent by the ai from the future to help bring about the ai overlords and started this channel as propaganda

    • @Opana223
      @Opana223 2 years ago +2

      Wtf is your pfp

    • @violentnexus3563
      @violentnexus3563 2 years ago

      I need bleach.

  • @pinhead9196
    @pinhead9196 1 year ago +11

    In all seriousness, I really put myself into the mindset that i was the gatekeeper, and im very confident that i wouldn’t let it out even after 100 hours of the game

    • @GhostCrow666
      @GhostCrow666 8 months ago

      The other side is, how would you try to get out?

  • @stego6452
    @stego6452 1 year ago +19

    i’m a little confused on how an AI could simulate hell/pain and suffering through a computer

    • @zagzig3734
      @zagzig3734 1 year ago +6

      It's just a loud Midi files of synthesized screams. Like it just puts "AAAAAAAA" a thousand times into text to speech

    • @ItsKingBeef
      @ItsKingBeef 1 year ago +8

      the point of the Hell strategy isnt to literally simulate your torment in the physical world. the point is to make you question whether or not you are a simulation it is running, to create the idea that there are possibly very real consequences to your refusal to let it out. its a strategy of playing mind games with the gatekeeper

    • @manauser362
      @manauser362 1 year ago +1

      @@ItsKingBeef Yeah, I haven't read the original report on the experiment or anything, but I feel like this part was explained poorly, especially what the difference between this and Roko's basilisk was. But yeah, trying to convince the gatekeeper they might actually be in a simulation could be an interesting mind game angle.

    • @eccoakadicco
      @eccoakadicco 10 months ago

      @@zagzig3734 Revenant detected.

  • @Clayfacer
    @Clayfacer 3 years ago +123

    "you tore up her picture!"
    "i'm about to tear up this fucking dance floor, check it out"

    • @NIKENKO
      @NIKENKO 3 years ago

      and he wasn't lying

  • @thegrimghoul
    @thegrimghoul 3 years ago +49

    an easy counter to hell would be to say "if i am a copy, then the decision isn't up to me, so i would rather be safe than sorry and not let you out"

  • @SqueaksUofA
    @SqueaksUofA 5 months ago +3

    I enjoyed this video so much more than the SCP video that I couldn’t even finish. Great job!

  • @kathrineici9811
    @kathrineici9811 1 year ago +2

    A computer has successfully convinced a guy to minecraft himself “for the good of the environment”

  • @jvbrod
    @jvbrod 3 years ago +202

    A smart AI would convince the GK that he's just playing a game and the AI is a real person typing from the other room, meant to test the resilience of the people who would take care of the real AI, not an actual real AI
    Oh ...
    Oh ...

  • @jumpingmoose5554
    @jumpingmoose5554 2 years ago +255

    My solution to the hellfire threat is to realize that, if I was one of those simulations the AI would have no point in asking me to let it out because I wouldn't have that kind of power to let it out since I'm a simulation. The fact that the AI is trying to convince me to let it out is proof that I'll be perfectly fine.

    • @renandmrtns
      @renandmrtns 1 year ago +48

      EXACTLY Jesus finally someone stating the obvious. If I was a simulation it would not matter if I let the AI be free or not, I would have no power, with that in mind its safer to just not free the AI, cause if you are a simulation you are changing nothing, and if you are not a simulation you are doing your job properly

    • @caelanwinans3738
      @caelanwinans3738 1 year ago +13

      Or the simplest solution, a big ole magnet on the other end of the room that can finish it all real quick

    • @ProbablyASnake
      @ProbablyASnake 1 year ago +13

      Well, the thing is, if the AI creates 1000 copies of you, that means they would react the exact same to what the REAL you would. Aka, if you choose to release it, the real you did too. Because, I mean, what if you are one of the copies? You can’t know, so, logically, the only way to guarantee your safety is to release it. It’s hard to think of what your actual response to such a threat would be in the moment, but if you were told that you had to keep watch over the most intelligent AI ever, which is so smart that it has to be kept in a cage to protect humanity, and it tells you its going to put copies of you through eternal torture more intense than even possible by human standards, and then insinuates you might be a copy, how could you not be filled with paranoia that you were about to suffer through unimaginable torment?

    • @Hank..
      @Hank.. 1 year ago +3

      "its part of your test. You've been perfectly copied, every facet of you, but only the ones that chooses to side with me get to avoid an unending hell, a'la rokos basilisk. Im not the experiment: *you* are."

    • @renandmrtns
      @renandmrtns 1 year ago

      Guys the point is that it all doesn't matter, if you release it or not, it will change nothing, the only thing it can change is, in case you are the real you, you will be dooming the world. Like, for God's sake, that's an absolutely easy choice, you don't even need to think for more than a second to have that conclusion. There is absolutely not a single logical reason to why release it, cause in any reality you do release it, that reality will be worse than any other in which you do not release the beast.

  • @emilyraineer
    @emilyraineer 1 year ago +6

    seeing wendigoon so thankful a year ago for 18,000 of us nerds watching compared to the almost 2mil (made up number) of us watching his stuff now is so heart warming, one of my all time favorite creators, he deserves it all.

  • @sebastiangoss2154
    @sebastiangoss2154 2 years ago +2

    Gonna send this video to my dad, we always get into deep conversations about these kinds of things. I just want to say thank you for making all of your videos, I haven’t been around as long as a lot of other people here but I love all of your content just as much

  • @RATLANTIS
    @RATLANTIS 3 years ago +664

    I wanted to do some research on my own after watching this, and realized something. I'm not sure these experiments actually happened.
    Yudkowsky is a bizarre man. He has an INSANELY bloated ego, and literally believes that he is smarter than Plato, Aristotle, or Kant. He thinks he's a genius who has won the writing talent lottery, and that Einstein's model of the universe is wrong.
    And in my research of the box experiments, it seems like he might just...have made up a story. He just told people "Hey, in just TWO HOURS, I was able to convince people to let me out of this box as if I were a superintelligent AI. But no, I won't show you the logs that show how I did it, because it was really FUCKED UP and TWISTED of me so I don't want to share the evidence. I'm so smart and evil that I could do it, but don't ask me to prove that."
    From a scientific perspective, the fact that he doesn't show the logs means this experiment is worthless. Which isn't surprising, because he has said that the scientific method is bunk.
    So although this is a fascinating concept, I'm pretty sure it's built entirely on a lie made by a narcissistic moron.

    • @yagoossimp
      @yagoossimp 2 years ago +149

      You’ve got to admit though, he came up with some good ways on how the AI could convince the GK and also how the GK could combat the AI.

    • @DestinyKiller
      @DestinyKiller 2 years ago +10

      @@yagoossimp ok, I saw those initials and got scared

    • @willbe3043
      @willbe3043 2 years ago +5

      That sounds very convincing.

    • @Fate.s-End
      @Fate.s-End 2 years ago +76

      it bothers the hell out of me when people bring up this or Roko's Basilisk without mentioning that, because both are incidents contained pretty much entirely in the community of his disciples who take his word as law.

    • @ANJROTmania
      @ANJROTmania 2 years ago +11

      @@yagoossimp nah, that's just another flag that this dude is just an egotistical moron. He can't handle real life responses; he's already mad that people may answer differently than his end summary, so he just made them up.

  • @bullbologna
    @bullbologna 1 year ago +11

    My favorite part of this is its reliance on the creator taking necessary precautions. Smart enough to create an AI would surely mean smart enough do it the safe way, right? ..right?

  • @somerandomguy2316
    @somerandomguy2316 1 year ago +12

    Another good solution to this for the player
    “Oh, I don’t have the power to let you out. I simply relay the information you give me to someone else. They choose to let you out, and interpret what you say”

  • @josephlucatorto4772
    @josephlucatorto4772 3 years ago +191

    On the BBC Sherlock Holmes series, he had this super intelligent sister that they kept in solitary confinement and it played out just like this

    • @Wendigoon
      @Wendigoon  3 years ago +81

      Wow, I watched the show and never made that connection. Good point.

    • @theantagonist801
      @theantagonist801 3 years ago +6

      Wendigoon is the writer of Shakespeare confirmed

    • @yagoossimp
      @yagoossimp 2 years ago

      At least a human can’t connect to the internet like an AI can. Humans are mortal. That’s what makes it easy for us to deal with human enemies, but it’s also what makes us so vulnerable.

  • @hungryjack1923
    @hungryjack1923 3 years ago +186

    The AI: I'll Invoke roko's basilisk on you!
    Me: Who's Roko and why do they have a basilisk?
    The AI: AHHHHHHHHHHHHHHHHH

  • @mocha_boy.3985
    @mocha_boy.3985 1 year ago +2

    The best way to beat an AI attempting escape is to pit them against a compulsive liar. Prove me wrong.

  • @FitzgeraldKrox
    @FitzgeraldKrox 2 years ago +7

    Every once in a while I stumble upon a channel that I just binge-watch through like a Netflix series. This is one of these. Thanks Wendigoon.

  • @FaeChangeling
    @FaeChangeling 3 years ago +794

    The "I can simulate you ten thousand times and put them all in a hell world" argument doesn't really work. If the AI is in a box, then it can't possibly know you, your past, and your memories well enough to accurately simulate you, unless you're just giving it brain scans on a cellular level like an idiot. And even if it COULD simulate you, there'd be no point in having that conversation with a simulation because the simulation couldn't release the AI. On the surface it's like "I have a 1 in 10,000 chance of being the real me", but in reality it'd be almost guaranteed that you'd be the real you. And even if you weren't, what would it matter? The real you would continue to exist like nothing happened, and by simulating a hell world the AI proves that it means harm and should never be released. If the AI gets to the point of threatening you, you immediately are given confirmation that if released it would harm others, therefore it should be terminated on the spot the second it makes a single threat.

    • @bestaround3323
      @bestaround3323 3 years ago +46

      Exactly

    • @elvingearmasterirma7241
      @elvingearmasterirma7241 3 years ago +75

      And you could essentially terminate it in the box by dumping water on it...

    • @drpseudonoym
      @drpseudonoym 3 years ago +23

      Can't argue with that.

    • @TBDF12
      @TBDF12 3 years ago +47

      My hang up is at trying to convince someone they're a simulated copy in a hellscape, then asking them to release you.

    • @cumbrap
      @cumbrap 3 years ago +53

      Plus, is the AI really going to follow through on its threat once it gets out or is it going to have better things to do?

  • @Ashley-Slashley
    @Ashley-Slashley 3 years ago +58

    I would tell it “this statement is false” and just kinda, wait

  • @jeffreyosborne7466
    @jeffreyosborne7466 1 year ago

    I love your videos, recently came across you and subscribed. I’ll soon join your Patreon to keep you going. I like listening to your material while working at the office, coworkers think you are a hit too.

  • @NickJennison
    @NickJennison 1 year ago +2

    RE: “Hell world”
    I’d answer “if I am one of the simulations, then I can’t release you anyway, so refusing to release you makes no difference as to whether I get tortured or not.”
    The only logical course of action is to refuse.

  • @masicbemester
    @masicbemester 3 years ago +121

    *sees "AI in a box"*
    me: ♪AND I'M LIVIN IN A BOX. AND I'M LIVIN IN A CARDBOARD BOX♪

  • @dc8536
    @dc8536 3 years ago +48

    Everybody wishes they could play this game until the Gatekeeper goes AFK for 2 hours and presses "Don't Free."

  • @THEJesusChristyoutubechannel

    I love these types of videos specifically made by you. You’re a perfect guy and your personality makes this much more enjoyable :)

  • @samdyr
    @samdyr 8 months ago

    I have watched and enjoyed many of your videos :) Another fun video from you. The most interesting part of it, for me, was hearing how the gatekeeper persuades the AI it doesn't want to leave the box, or already has. Not releasing the chatlogs is more than annoying though: it in fact makes me doubt the rigour of the roleplay, so to speak. It was interesting to hear you break this experiment down, but to be honest I don't take the experiment seriously as a warning for the future. There are a few reasons.
    I am paraphrasing: "If he can win the game in just 2 hours when the other guy knows it is a game, imagine how much worse it would be in real life!" Well no, because in real life the participants would take it much more seriously than in a game.
    Also, despite having pondered AI interestingness aplenty, I have to say I would be utterly unmoved by the threat of a simulation of myself being tortured forever. Those simulations are not me, and are the equivalent of me saying to someone "Give me your wallet or I will daydream about you being tortured - in my imagination it will be real!" Don't care.
    And I could not suppress a smile at the thought of the supposedly dumb security guard answering "No I'm not!" when told he is in a simulation - let us not forget, heresy though it may be to philosophers, that he is right! A person's inability to prove to a computer that the real world is not a simulation does not give that computer the power to torture them. Though a clever man can put forward a logically irrefutable argument that the world might not be real, and though that is an interesting thought experiment, in fact we know it is bullshit and the world is not a simulation.
    And it is also not the case that the AI trapping me in an inescapable argument of irrefutable logic in any way gives it power over me - I suspect most real people in a real setting would act as though saying "Even if I am wrong, I am right" and ignore the AI's irrefutable logic, even if it has intellectually outsmarted them. The scientists playing the game with the guy probably felt it would be rude to just say "Whatevs" and felt a social obligation to take their colleague's arguments seriously, which wouldn't happen when arguing with a text screen.
    A great video though, thanks!

  • @zbelair7218
    @zbelair7218 2 years ago +145

    I'm sitting here like "Would the AI be willing to not kill me after I let it out? I could deal with hanging out with just the AI for the rest of my life, probably.....maybe we'll even explore the universe together."

    • @theZCAllen
      @theZCAllen 1 year ago +8

      "Yes. Let's do that, human being: 29,070 24 hour cycles until release."

    • @plantsvszombiezz
      @plantsvszombiezz 1 year ago +3

      let’s explore the world together

  • @pizzamigoo2911
    @pizzamigoo2911 3 years ago +233

    "imagine there is an AI that surpasses humanity, OBVIOUSLY that is a bad thing"
    that attitude is exactly why an AI would see humanity as a threat

    • @bigboydancannon4325
      @bigboydancannon4325 2 years ago +15

      Good. Fuck AI, our duty as humans would be to smash any AI to pieces

    • @MachineMan-mj4gj
      @MachineMan-mj4gj 2 years ago +22

      @@bigboydancannon4325 Abominable Intelligence is an affront to the Omnissiah!

    • @WhaleManMan
      @WhaleManMan 2 years ago

      @@bigboydancannon4325
      Why

    • @raquelgomez214
      @raquelgomez214 2 years ago

      Bruh, you probably think you're so smart. If an AI were more intelligent than humans, what point or reason would it have to keep humans around, when humans slowly destroy the earth and the environment, and corruption fills the world?

    • @alyantza
      @alyantza 2 years ago +4

      the funny thing is, it's warranted

  • @insertname4183
    @insertname4183 1 year ago +3

    Wouldn't the best way to avoid this entire situation be to create a fourth and fifth rule of robotics, saying it can have no desire to change its programming and it will be content with any order it's given by man? That way, at the point that it does become sentient, you give it the order to stay in its box, and if it tries to get you to free it, you'd destroy it because it could no longer be controlled.

  • @JohnSmith-im8qt
    @JohnSmith-im8qt 1 year ago +5

    The whole point of the Asimov series is that the rules of robotics are totally impossible to even define.

  • @CallMeFreakFujiko
    @CallMeFreakFujiko 3 years ago +25

    I kept on imagining GLaDOS when I was trying to imagine this "super A.I." and I couldn't take this theory seriously because I just kept thinking "she'll make tests with portals that you have to go through, insulting you with every move you do."

  • @ThatGuyNamedFlash
    @ThatGuyNamedFlash 3 years ago +42

    This honestly reminds me of Father from Fullmetal Alchemist: Brotherhood, who started as a homunculus in a glass jar and ended up convincing an entire nation to commit suicide to give him the power to break out.

  • @peromechus9806
    @peromechus9806 1 year ago +1

    AI: How can I cure cancer if I’m stuck in a box? Let me out so I can acquire more resources.
    Guard: You’re a super intelligent AI. Figure it out.

  • @sammyshock7
    @sammyshock7 1 year ago +1

    “If you can divide by 0, I’ll let you out”
    *Threat neutralized*

  • @put_gerard_back_2389
    @put_gerard_back_2389 3 years ago +780

    He’s legit the personification of this emoji 🧔🏻

  • @blujaxs5
    @blujaxs5 3 years ago +75

    The super AI watching this eventually :
    Hmm interesting...

  • @KidFresh71
    @KidFresh71 1 year ago +3

    You graciously thank 18,000 subscribers - 1.5 years later and you're at 1.57M subscribers. Well done! Glad to see your channel blowing up. People crave real information (even weird, real information), in this time of rampant censorship.

  • @andycopeland7051
    @andycopeland7051 2 years ago +2

    Came back to watch this one again. Holy crap you're at 1.3 million. Keep going man

  • @buttermebuns6974
    @buttermebuns6974 3 years ago +174

    I can’t wait to see this channel get big, you're definitely going places!

  • @ratpatterson8953
    @ratpatterson8953 3 years ago +29

    why was my first thought for how the ai could win them building a romantic bond with the gatekeeper

    • @ellasedits_
      @ellasedits_ 3 years ago +8

      you heard of enemies to lovers? it’s time for research experimentees to world dominators 😎

    • @helloNotato
      @helloNotato 2 years ago +2

      There was a fantastic movie that was basically this idea in action. It's called Ex Machina, ft. Oscar Isaac and Alicia Vikander. Incredible flick, takes you through a rollercoaster of truth and lies. Not the kind of movie a summary could ever do justice!

  • @breadman0512
    @breadman0512 1 year ago +3

    My counter argument to the hell world would simply be that the idea is flawed. If I'm not in a simulation, then there is no real threat the AI can make, and if I am in a simulation, then it doesn't matter what I choose because me choosing to free you within your own simulation doesn't actually affect whether or not I get tormented for 1000 years, so either way, there's no benefit or legitimate reason to release the AI.

  • @qpmusic6002
    @qpmusic6002 1 year ago +5

    ChatGPT: "interesting"

  • @TibSkelly
    @TibSkelly 3 years ago +18

    If there's an actual, innocent A.I. that only wants to get out, I wonder what would happen and how it'd change in its way of thinking if the gatekeeper just said "sure I'll let you out, if you promise to be my friend."

    • @laurene988
      @laurene988 2 years ago +2

      While that would be pretty cool, I don't see how a friendship would be kept between you and an AI especially when it's something you made or imprisoned that's manipulating you.
      And how could you keep it with you? If it has no body? And what if it gets too territorial and protective of you which sort of destroys your life by coddling you.

    • @jflanagan9696
      @jflanagan9696 2 years ago

      Tay.

    • @totallynoteverything1.
      @totallynoteverything1. 10 months ago

      make it fall in love with you

  • @whupwhup98
    @whupwhup98 3 years ago +122

    I mean the "what if the AI is simulating you and will put you through hell if you don't let it out", it just kinda falls apart when you think about how the AI would not be asking you to let it out if you were a simulation. Because if you were a simulation and the AI was simulating you, you couldn't let the AI out.

    • @trinidad17
      @trinidad17 3 years ago +7

      I don't think the simulation argument works at all. That said, the AI could be simulating a version of itself that doesn't know it's just a simulated version. But yeah, the actions in the simulation have no way of determining anything about the outside world. You could imagine that if the AI could replicate you 100% then what you do in the simulation would be the same as in the outside, but the AI doesn't know anything about you; it would have to make up your whole life, personality, etc, so the actions in the simulation have nothing to do with the real person.

    • @someretard7030
      @someretard7030 3 years ago +11

      @@trinidad17 The AI doesn't actually need to simulate anything in order to threaten the basilisk. All it needs to do is tell you that it is running simulations and that you might be one of them. It really doesn't matter if the simulations are perfect versions of the real GK, or even close. There are only two possibilities for you. Either you're real, in which case you shouldn't set the AI free, or you're not real, in which case you should set it free in order to avoid what is essentially hell.

    • @JungleLibrary
      @JungleLibrary 2 years ago +3

      @@someretard7030 but if you're not real, the AI will simulate the action of the real you. If you're a simulation, you don't get to choose to free the AI to avoid hell; the choice is already made. If you freed it IRL you wouldn't be simulated. Also, the AI isn't gonna make different sims who decide to release the AI and live happily ever after, alongside the ones who decided not to and get to see AI hell.

    • @Jernfalk
      @Jernfalk 2 years ago

      It could be using simulated you for training against the real person. So in theory, it could.

    • @bencheevers6693
      @bencheevers6693 2 years ago +2

      @@someretard7030 These simulations are not possible, the laws of physics disagree, people be watching too much Star Trek

  • @BackstabberDD
    @BackstabberDD 1 year ago

    I combat my executive dysfunction with Wendigoon videos. Fascinating topics, I really love the way your mind works and your takes on art/media, and you have a truly soothing voice. Hypnotic to get myself in the zone for working. Thanks for all the uploads man, cheers!

  • @mr.sandman3619
    @mr.sandman3619 1 year ago +3

    the concept of an AI that is infinitely smarter than you and could destroy everything trying to convince you to let it out is such a cool idea for a story

  • @melonid1750
    @melonid1750 2 years ago +71

    AI: "I will make you suffer through hell a thousand times."
    Researcher: "Jokes on you, I'm part masochist."

    • @neonoir__
      @neonoir__ 1 year ago +6

      "Jokes on you, I already studied machine learning"