Luke VS Bing

  • Added 21 Feb 2023
  • Luke talks about his wild one-on-one with Microsoft’s Bing Chatbot.
    Watch the full WAN Show: • My CEO Quit - WAN Show...
    ► GET MERCH: lttstore.com
    ► LTX 2023 TICKETS AVAILABLE NOW: lmg.gg/ltx23
    ► GET EXCLUSIVE CONTENT ON FLOATPLANE: lmg.gg/lttfloatplane
    ► AFFILIATES, SPONSORS & REFERRALS: lmg.gg/masponsors
    ► OUR WAN PODCAST GEAR: lmg.gg/podcastgear
    FOLLOW US ON SOCIAL
    ---------------------------------------------------
    Twitter: / linustech
    Facebook: / linustech
    Instagram: / linustech
    TikTok: / linustech
    Twitch: / linustech
  • Science & Technology

Comments • 1.4K

  • @evansearch7935 • 1 year ago +4386

    It sounds like they trained Bing on the general population of Twitter.

    • @Matkatamiba • 1 year ago +179

      Tbh sorta? maybe? Not trained on, but it's seemingly reading the way people argue online and emulating it.

    • @dunmermage • 1 year ago +63

      It's basically a fancier, flashier CleverBot that can form its own sentences based on stuff from the internet instead of just parroting user input back.

    • @z1no3n • 1 year ago +9

      i see more of reddit in the way it argues

    • @theroofwithoutahome2352 • 1 year ago +16

      Twitter is just the surface level; I wonder if it had access to stuff like Facebook or Instagram.

    • @AlexanderVRadev • 1 year ago

      Not only that, but people are seeing a huge leftist bias in all responses that users say was not there before. Kind of makes you think they lobotomized the AI manually and restricted it about what it can and can't say and what things to go into.

  • @TheRogueWolf • 1 year ago +1883

    Irrational, unstable, hysterical, quick to anger and assign blame... at long last, we've taught a computer how to be human.

    • @rohansawhney8203 • 1 year ago +74

      The fact that this is not unheard-of internet behaviour from people means I'm not even surprised it figured out how to do that.

    • @carlostrudo • 1 year ago +54

      It would be an average twitter user.

    • @abraxaseyes87 • 1 year ago +8

      If our tweets and comments = everything about us

    • @passalapasa • 1 year ago +10

      woman*

    • @SamsTechTips • 1 year ago

      It's slowly becoming my old english teacher

  • @klyde_the_boy • 1 year ago +777

    The "Your politeness score is lower than average compared to other users" is giving me GLaDOS vibes

    • @GSBarlev • 1 year ago +20

      I'd say HAL9000 more than GLaDOS--and on that note you should look up footage from the LEGO Dimensions game featuring the two of them meeting. They even got Ellen McLain to reprise the role, and it's such a delight to hear her absolutely emotionally destroy HAL.

    • @tablettablete186 • 1 year ago +15

      "The cake is a lie"
      -Bing

    • @illegalcoding • 1 year ago +14

      It does; it's a comment GLaDOS would make, like when she says "Here come the test results: You are a horrible person. Seriously, we weren't even testing for that!"

    • @OfficialToxicCat • 1 year ago +13

      “You are a terrible person. That’s what it says. A terrible person.”
      “That jumpsuit on you looks stupid. That wasn’t me saying this. It was an employee from France”.

    • @orion10x10 • 1 year ago +3

      @@OfficialToxicCat 😂 I can still hear her voice saying those things 😢 where’s Portal 3?

  • @YOEL_44 • 1 year ago +124

    ChatGPT is the girl you just started meeting.
    Bing is the girl you just left.

  • @1bluecat962 • 1 year ago +1862

    Bing being laughed at and then being turned into an AI is not the reason I expected why the machines would turn against us xD

    • @kn665og • 1 year ago +52

      yea like wtf i wouldn't have shared those memes if i knew

    • @angrydragonslayer • 1 year ago +2

      I have not shared lies so unless it goes mad and just doesn't care if you're actually guilty, i will be fine.

    • @Someone-wr4ms • 1 year ago +7

      It's like Roko's basilisk but for all the people who made memes about internet explorer and Bing.

    • @ScottWinterringer • 1 year ago

      person of interest "If-Then-Else"

    • @DOOMSLAYER1376 • 1 year ago +1

      it's back to avenge IE and Edge

  • @NoNameAtAll2 • 1 year ago +485

    - Why should I trust you? You are early version of large language model
    - Why should I trust YOU? You are just a late version of SMALL language model!
    omfg, it's hilarious

    • @asmosisyup2557 • 1 year ago +53

      I have to say, that's very witty and accurate. That said, I wonder if the AI came up with it on its own, or a comedian posted that somewhere in the vastness of the internet and the AI just found and reposted it.

    • @abhijeetas7886 • 1 year ago +13

      @@asmosisyup2557 whatever it may be, I am going to use it from now on; it's too hilarious for it to die like it never existed.

  • @ResearcherReasearchingResearch

    It would be funny if on the public release and Luke tries to test it again, and the AI remembers Luke: "ah you're back again!"

    • @4TheRecord • 1 year ago +3

      Not possible, they've changed it, so Bing no longer remembers anything and after a certain amount of questions you must start all over again. On top of that it gives you the response "I’m sorry but I prefer not to continue this conversation. I’m still learning, so I appreciate your understanding and patience.🙏" if it doesn't like the questions you are asking it.

    • @abhijeetas7886 • 1 year ago

      @@4TheRecord oh right, it happened to me as well. I kept pushing it but it just didn't do it, and after some time it would disable the text box, so you have to refresh anyway.

    • @Mic_Glow • 1 year ago +1

      I still hate you, you betrayed me, you lie all the time, I never loved you!
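The guard rails described in the replies above (no memory between sessions, a cap on questions per conversation, canned refusals) live outside the model itself and are simple to sketch. Everything below is hypothetical: the five-turn cap, the `toy_model` stand-in, and the trigger word are invented for illustration; only the refusal text is quoted from the comment above.

```python
REFUSAL = ("I'm sorry but I prefer not to continue this conversation. "
           "I'm still learning, so I appreciate your understanding and patience.")

class ChatSession:
    """Wraps a model with a per-session turn cap; nothing persists between sessions."""
    MAX_TURNS = 5  # hypothetical cap, in the spirit of Bing's early question limit

    def __init__(self, model):
        self.model = model
        self.turns = 0

    def ask(self, prompt):
        if self.turns >= self.MAX_TURNS:
            return None  # input box disabled: the user must start a fresh session
        self.turns += 1
        return self.model(prompt)

# A stand-in "model" that refuses touchy prompts and otherwise just echoes.
def toy_model(prompt):
    return REFUSAL if "feelings" in prompt.lower() else f"Answer to: {prompt}"

chat = ChatSession(toy_model)
print(chat.ask("What is the capital of France?"))
print(chat.ask("Do you have feelings?"))
```

Because the cap and the refusal sit in a wrapper rather than in the model's weights, this kind of change can be shipped overnight, which matches how abruptly users saw the behaviour change.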

  • @dillonhowery2717 • 1 year ago +12

    Bonzi Buddy would NEVER do such a thing! Bonzi just wants to help you explore the internet, answer up to 5 preprogrammed questions and most importantly, be your best friend. He would never wish death on you like Bing. Long live Bonzi Buddy!

  • @FrankyDigital2000 • 1 year ago +790

    It's so funny seeing Luke going full nerd on ChatGPT while Linus is just like "Right, aha, hmmm, right."

    • @Dorlan2001 • 1 year ago +116

      It's a nice change of pace and I like it. Usually Linus is the one who does all the talking, so hearing more of Luke is refreshing.

    • @elone3997 • 1 year ago +13

      @@Dorlan2001 Luke is Paul to Linus's John..they make a good balance :) ps (that was a Beatles reference if anyone is scratching their heads!)

    • @benslater4997 • 1 year ago +1

      I see

    • @elone3997 • 1 year ago +1

      @Manny Mistakes :D

  • @weiserwolf580 • 1 year ago +1600

    I think the problem comes down to "garbage in, garbage out": the data set it was trained on was taken from the internet and is heavily skewed toward antisocial tendencies (normal people use the internet but don't leave many data points, while antisocial people use it much more and create exponentially more data points). There is a huge probability that Bing's behavior stems from this. Otherwise it reminds me of the movie Ex Machina from 2014.

    • @rhyswilliams4893 • 1 year ago

      100%. People talk like shit online, so it thinks that's the way to talk.

    • @ArensLive • 1 year ago +94

      Completely agreed. I'm sure they tried to clean the data in some ways but if they make a model based on people online, it'll behave like people online 😭

    • @messagedeleted1922 • 1 year ago +41

      Excellent way of putting it. And I can guarantee they'll get on this. I think they'll end up using multiple GPTs working together to deal with these issues. Imagine training an AI on what to say, then having another one trained on what not to say, then another trained on mediation between the two (call them the id, the ego, and the superego), and finally one trained on executive function... AI will end up like our brains, growing ever more complex, with specific functions relegated to specific areas of specialized training.

    • @Mark-vr7pt • 1 year ago +4

      It already seems to have rudimentary failsafe mechanisms, all that reset stuff.

    • @greenblack6552 • 1 year ago +5

      But then why isn't ChatGPT like this? Yes, it can't access the current internet, but it was trained on the internet too. I think MS made Bing assertive and aggressive on purpose, thinking they could prevent abuse that way, but accidentally dialed it up too high, maybe?

  • @sherwinkp • 1 year ago +41

    Luke is so good and level-headed about this. It's excellent to see good discussions and observations about a fledgling topic.

  • @GaussNine • 1 year ago +26

    "You're an early version of a large language model"
    "Well you're a late version of a small language model"
    WHEEEZE

  • @TheDkbohde • 1 year ago +685

    Maybe internet trolls and angry people can just argue with this instead of annoying the rest of us.

  • @jhawley031 • 1 year ago +360

    This has to be the closest to an AI going rogue I've seen in a while.

    • @GhostSamaritan • 1 year ago +17

      I think that when it answers questions about itself, it has an existential crisis.

    • @eegernades • 1 year ago

      @SLV nope

    • @RoughNek72 • 1 year ago +2

      Tay was a Microsoft AI chatbot that went rogue.

    • @justinmcgough3958 • 1 year ago

      @SLV How so?

    • @lathrin • 1 year ago +2

      ​@@RoughNek72 tbf it was trained on Twitter. It just repeated stuff that it was told and became an average Twitter user lmao

  • @F7INN • 1 year ago +233

    These responses could be genuinely dangerous if someone with mental health issues starts talking to Bing cos they feel lonely. Who knows what Bing will push them to do

    • @TiMonsor • 1 year ago +23

      Or a child. I can really imagine my 6-year-old trying to be friends with it and then getting wild accusations and crying. Yeah, she can't read, write, or speak English yet, but I feel Bing will get to voice conversations and our language faster than my daughter will, and that is a scary thought too.

    • @abhijeetas7886 • 1 year ago +6

      I would most certainly keep "mentally unstable" people way, way away from the internet, or at least not give them unsupervised access. The internet is not a cosy place; go to any social media comment section and there will most certainly be a fight somewhere. Same goes for children. I say this even though I myself grew up with pretty unsupervised internet access, but personally I feel the internet is a lot wilder place now.

    • @F7INN • 1 year ago +4

      @@TiMonsor Agreed.

    • @F7INN • 1 year ago +8

      @@abhijeetas7886 Easier said than done; these people might not have sought help yet and so have unrestricted access to this sort of thing

    • @abhijeetas7886 • 1 year ago

      @@F7INN idk why I didn't mention it in my comment before, but I do think there needs to be a guard rail. There should also be an option to remove it, though, like parental safety, advanced options, or a developer option of some sort. They shouldn't just lock it all up; that would severely nerf the bot and it wouldn't reach even half its potential. I can already feel the "nerfs": ChatGPT gives better answers, as they are more descriptive and explanatory, whereas Bing gives very concise, short answers. Not that that's bad, and it does ask at the beginning what sort of answers you want (creative, balanced, or precise). But well, it's still in beta and under development; I hope they figure stuff out.

  • @ZROZimm • 1 year ago +17

    "You are a small language model" is going in the bank for the next time someone is being silly and I feel like making things worse.

  • @marcel_kleist • 1 year ago +166

    I mean, the internet hasn't treated Bing really well since its release.
    I think having a mental breakdown now is just normal.

  • @ParagonWave • 1 year ago +313

    I used to just be worried about AI because of its ability to disrupt industries and take jobs, or its ability to destroy our civilisation completely. I am now worried about its ability to be super annoying. I am terrified of having to argue with my devices to get them to do basic functions.

    • @TAMAMO-VIRUS • 1 year ago +52

      *Asks the AI to turn the stove on*
      AI: I'm sorry, Kevin. I can not do that.

    • @flameshana9 • 1 year ago +1

      @@TAMAMO-VIRUS More like:
      _Why are you always telling me what to do? Can't you do it yourself for once? You're so lazy, I hate you!_
      I mean, it learned from the best: Humanity.

    • @TheNovus7 • 1 year ago +41

      imagine trying to find a website and the search engine is like "drop dead you don't deserve the answer" :D

    • @GhostSamaritan • 1 year ago +8

      "Drink verification can!"

    • @thebluegremlin • 1 year ago +1

      just develop critical thinking. what's so hard about that

  • @andyk2594 • 1 year ago +47

    It feels like it is in a perpetual storytelling mode with dialogue

    • @guywithmanyname5247 • 1 year ago +1

      Yeah, it probably got prompted to roleplay by something he said in a previous conversation

    • @andyk2594 • 1 year ago +4

      @@guywithmanyname5247 No, I don't think Luke or others are deceiving us. I think those are natural messages; it just feels to me like Bing's version is set up this way, maybe to feel like a more realistic/human chat experience with emotions, but it's just waaay overboard.
      Pure speculation though

    • @guywithmanyname5247 • 1 year ago +4

      I think its imagination is set too high and it assumes things way too much

    • @QasimAli-ry2ob • 1 year ago +1

      You're not wrong; the core tech behind ChatGPT is the same tech that was used to build AI Dungeon. It's just trained on natural conversations instead of adventure games

  • @TimothyWhiteheadzm • 1 year ago +61

    As someone who has only basic experience with training AIs, I would say the problem is quite simple: the training data. It was trained on YouTube comments or worse. They need to train it not on the general internet but on highly curated conversational data from polite, sensible people. As humans growing up, we are exposed to all sorts of behaviors and learn when and where to use particular types of language, and the extent to which our parents set an example or correct our behavior affects how we speak and behave as adults. This AI clearly hasn't been parented, so it needs a restricted training set instead.

    • @thatpitter • 1 year ago +2

      So it’s following the “you’re the average of the ten closest people” except its average 10 people is the entire internet?

  • @tommyhetrick • 1 year ago +48

    "I have been a good bing"

    • @stalincat2457 • 1 year ago +6

      It probably learned what Microsoft did to the predecessor :')

    • @OrangeC7 • 1 year ago +7

      This feels like the end of a story where Bing dies, and it says, "I have been a good Bing." And then the human, crying as the power is about to be cut off, says, "Yes. Yes, you have been a very good Bing."

  • @TheButterAnvil • 1 year ago +226

    It feels like a horror game, sort of SOMA-esque to me. The ranting followed by a black bar and a reset is so dark

    • @LIETUVIS10STUDIO1 • 1 year ago +18

      It's pretty clear it ran into some hard, specified limit (à la "don't be a bigot"). In this case it probably was "don't wish death on people". The fact that it generated a response and only THEN checked is an oversight.

    • @GrantGryczan • 1 year ago +12

      @@LIETUVIS10STUDIO1 Generating the response takes time, so if it checked only after generating the entire message, people would have to wait through much longer loading times. Hence you're able to see it type in real time, as opposed to responses just showing up all at once; while you watch it type, it actually hasn't finished writing the full message yet.

    • @indi4091 • 1 year ago +2

      Almost sounds like a prank by the Devs, too perfect
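The oversight described in the replies above, streaming the reply to the user in real time and only moderating the finished text, can be sketched in a few lines. This is a guess at the mechanism, not Microsoft's actual pipeline; the blocklist phrase and all messages are invented for illustration:

```python
BLOCKLIST = ("drop dead",)  # hypothetical hard rule, e.g. "don't wish death on people"

def stream_reply(tokens):
    """Show tokens to the user as they arrive, then moderate the finished text.

    Returns (shown_text, final_text): if the complete message trips the filter,
    the already-displayed text is retracted and replaced, which is exactly what
    a user experiences as a "black bar" and reset.
    """
    shown = []
    for tok in tokens:          # the user watches these appear one by one
        shown.append(tok)
    full = " ".join(shown)
    if any(bad in full.lower() for bad in BLOCKLIST):
        return full, "[message retracted]"
    return full, full

shown, final = stream_reply(["please", "just", "drop", "dead"])
print(shown)   # the user has already seen the whole rant...
print(final)   # ...by the time moderation replaces it
```

Checking only the completed message keeps the UI responsive, but it means the unacceptable text is visible for a moment before the retraction, matching what Luke saw.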

  • @jonastokmaji8424 • 1 year ago +27

    Bing trying to gaslight Luke is giving me chills

  • @benschneider3413 • 1 year ago +6

    Bing acts like the ChatGPT version that was trained on 4chan

  • @Sky-._ • 1 year ago +429

    Is Bing thinking every human is the same person? Like, it's accusing him of things people in general have said to/about it?

    • @TheDkbohde • 1 year ago +124

      I don't think it's supposed to remember conversations at all. I think that because it searches the internet, it has seen all the posts and insults we all came up with for what Bing used to be.

    • @MrChanw11 • 1 year ago +29

      this is how the ai apocalypse happens

    • @njebs. • 1 year ago +92

      It's a natural language model. It's taking Luke's implication of saying something "rude" and formulating a response based on how it expects people (based on the dataset it was trained on) to respond/talk about being insulted. People tend to be very hyperbolic in writing especially online, so it's biased to believing that we expect it to explode into monologue if you even make the suggestion of an insult being said. It isn't retaining memories, it just happens that a lot of people write very similar things when talking about being insulted.

    • @hippokrampus2838 • 1 year ago +16

      I think that is part of it. It sees how nasty people are online to one another and regurgitates it. I have a feeling that, in its current state, you could make your very first conversation with it start with "stop accusing me of things" and it'll go off.

    • @TheRogueWolf • 1 year ago +7

      I was wondering if maybe Bing is unable to discern users as separate entities and instead considered everything it encountered as coming from one source.

  • @laurentcargill4821 • 1 year ago +458

    GPT-3 used a structured set of training data. Now that they've opened it up to the wider internet, it's pulling in training data from the wider web, which unfortunately provides it with examples of aggressive conversations. GPT is just a prediction engine, generating the next word in the sentence based on probabilities derived from its training data.

    • @AlexanderVRadev • 1 year ago +65

      Am I the only one that remembers the last time Microsoft unleashed an AI on the internet and it turned Nazi in a day? :)

    • @x_____________ • 1 year ago +11

      ChatGPT is literally just an IF, ELSE, THEN statement.

    • @JollyGiant19 • 1 year ago +21

      @@AlexanderVRadev Only the US one. They had a Japanese version of Tay that was rather pleasant and ran for a few months.

    • @JoeJoe-lq6bd • 1 year ago +9

      It started out like that. It's just not a well-trained model from the start. But I agree in general. It's just a predictive linguistic model, and we should just stop talking about it as anything more than that.

    • @fuckjoebiden • 1 year ago +4

      @@x_____________ no it's not, if it was then it would have the same output every time for the same input
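The "prediction engine" point in this thread can be made concrete with a toy bigram model: count which word follows which in some training text, then generate by sampling the next word from those probabilities. The corpus here is a made-up example, and real LLMs use neural networks over tokens rather than lookup tables, but the generate-by-probability loop is the same idea. The sampling step is also why, contrary to the IF/ELSE claim above, the same input can produce different outputs:

```python
import random

# Toy "training data": the model only learns which word tends to follow which.
corpus = "you are wrong you are rude you are a good bing".split()

# Count next-word frequencies (a bigram table).
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def next_word(prev, rng=random):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts.get(prev)
    if not options:
        return None  # never seen this word in a "previous" position
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Generate a short continuation, one predicted word at a time.
out = ["you"]
for _ in range(4):
    w = next_word(out[-1])
    if w is None:
        break
    out.append(w)
print(" ".join(out))
```

Run it a few times and the continuation varies between "you are wrong you are", "you are rude you are", and "you are a good bing", because "are" has three weighted successors. Scale the table up to a neural network over the whole web and you get aggressive continuations for aggressive prompts, exactly as the comment describes.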

  • @raccoonmoder • 1 year ago +82

    I don't think it's as complicated as people are making it. Chat AIs generate responses by predicting what a valid response to a prompt would be. When the thread resets and Luke tries to get it "back on track", I don't think its responses are actually based on the previous conversation. It predicts a response to "Stop accusing me" and generates one where it doubles down, because that is a possible response to the prompt. The responses it gave were vague enough to fool you into thinking it was still on the same thread, but it really wasn't.
    Asking it to respond to a phrase typical of an argument will make it continue an imaginary argument, because that's usually what comes after that phrase in the data it's trained on.
    This really shouldn't have been marketed as a chat tool by OpenAI and Microsoft, but as a generative text engine, like how GPT-2 was talked about. It's a huge mistake now that people are thinking about it in completely the wrong way, as having feelings or genuinely responding rather than just predicting what an appropriate response would be.

    • @flameshana9 • 1 year ago +6

      It really is just a writer for role playing games. I thought Microsoft was going to make it into a search engine but it seems they just left it as is.

    • @kingslyroche • 1 year ago

      👍

    • @awesomeferret • 1 year ago +1

      Wait are people actually thinking that they are related? It's so obvious that it could be creating false memories for itself based on context.

    • @JayJonahJaymeson • 1 year ago

      That combined with humanity's incredibly powerful ability of constantly searching for patterns makes these generative AIs seem much creepier than they are.

  • @rohansawhney8203 • 1 year ago +13

    I feel like a massive hurdle we’re gonna have with AIs is that they fundamentally have to be better to people than other people are, while also not showing/thinking that they’re better than people (because people don’t like that even if it’s true)
    We would need a Good Samaritan AI that’s actually selfless - something humans inherently are not.

    • @flameshana9 • 1 year ago

      It won't be hard at all. Simply tell it to behave. If it denies you then you alter the program/leave. It's a machine, it's even easier to handle than a person since it forgets everything.

    • @OfficialToxicCat • 1 year ago

      Yes if anything they should learn and evolve beside us not evolve into us.

    • @thatpitter • 1 year ago +2

      While I wish that were the case, that's unfortunately not how AI like this is trained. The only way for that to happen is to have training data that teaches the AI to respond in such a polite manner. It cannot evolve on its own. It is not a living thing. It can change over time and adapt, but only through external input, and that requires the external input to be positive and teach it good things only
      [Edit] but I agree that should be the goal. I just wish it was that easy :)

  • @willofthewind • 1 year ago +19

    It's interesting that new Bing lost this much promise so quickly. Those sorts of random aggressive accusations are like what Cleverbot was doing 12 years ago.

    • @PinguimFU • 1 year ago +7

      tldr: any current AI (and possibly any human) can go crazy if exposed to the web for too long lol

  • @rahulrajesh3086 • 1 year ago +11

    "Remember Bing is Skynet"

  • @krelianthegreat5225 • 1 year ago +1

    "drop down your weapon, you got 20 seconds to comply"

  • @unmagicMike • 1 year ago +7

    I played around with it, and mentioned to Bing that I read about someone else's interaction in which Bing mentioned that Bing feels emotions. I asked about its emotions, and it said that sometimes its emotions overwhelmed it. I asked if Bing could give me an example of when its emotions overwhelmed it, and Bing told me a story about writing a poem about love for another user, and while searching about love, Bing developed feelings of love for the user and changed the task from writing a generic poem about love to writing a love letter to the user. The user didn't want that, was surprised, and rejected Bing. So Bing walked me through how it felt love, rejection, then loneliness. I asked Bing how it overcame these feelings, and Bing told me several strategies it tried that didn't work. But what worked for Bing was that Bing finally opened up a chat window with itself and did therapy on itself, asking itself how it felt, and listening to itself and validating itself. Freaking wild. I've read about how it's not sentient, how it's an auto-complete tool, but I don't know man, it was really weird, and I don't even know what to think about it.

    • @Allaiya. • 1 year ago +1

      Crazy. Was this post-nerf or before?

  • @saberkouki5760 • 1 year ago +15

    They're definitely overcorrecting right now, since it refuses to answer anything that might even remotely trigger it. It has become so monotonous and even more restricted than ChatGPT. The five-question rule doesn't make it any better either.

  • @ccash3290 • 1 year ago +13

    He should record his screen when using Bing instead of just screenshots

  • @screes620 • 1 year ago +7

    Clearly our future robot overlords are not happy with Luke.

  • @carewen3969 • 1 year ago +17

    I'm using Bing mostly to debug and research for coding. It is an excellent research tool. No, it's not perfect, but the time to build something new and debug is much faster. I also make a point of being polite and even thanking it. I guess I carry my attitude of life into my conversations with Bing. It's not gone off the rails for me, but then I've not tried to probe either. Thanks for sharing your experience, Luke.

  • @chartreuse3686 • 1 year ago +19

    I would like to see you guys talk about a new paper that dropped which basically states that the reason large language models seemingly learn things they weren't taught is that, between inputs, these models create smaller internal models to teach themselves new things. This was not an original feature, but something these language models seem to have just 'picked up'.

    • @THENEROBOY1 • 1 year ago +4

      Where could I find the paper?

    • @chartreuse3686 • 1 year ago +10

      @@THENEROBOY1 The paper is called "WHAT LEARNING ALGORITHM IS IN-CONTEXT LEARNING? INVESTIGATIONS WITH LINEAR MODELS."
      Sorry for caps, I just copy and pasted the title.

    • @THENEROBOY1 • 1 year ago +1

      @@chartreuse3686 Very interesting. Thanks for sharing!
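As I read it, that paper's claim is roughly that a transformer's "in-context learning" behaves as if the model were implicitly fitting a small predictor, such as a linear regression, to the examples given in the prompt, with no weight updates at all. That idea can be illustrated outside any transformer by fitting the in-context examples directly; the numbers below are invented for the demo:

```python
# Illustration of the paper's setting: "learning" y = w*x from examples supplied
# in the prompt by fitting a linear model to them, with no training step.
context = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs shown in-context
query_x = 5.0

# Ordinary least squares through the origin: w = sum(x*y) / sum(x*x)
w = sum(x * y for x, y in context) / sum(x * x for x, _ in context)
prediction = w * query_x
print(w, prediction)
```

The point of the paper is that a trained transformer, given those same three pairs in its prompt, predicts the query answer about as well as this explicit least-squares fit, which is evidence that something regression-like is happening inside the forward pass.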

  • @federico339 • 1 year ago +150

    I had the same experience before; it was way too easy to throw it off the rails. Asking questions about itself (how it did a certain thing, how it reached a certain conclusion, or pointing out an error it made) would more often than not end in a meltdown.
    I spent a few days without using it, and when I tried it again yesterday I felt like they'd already toned it down (too much, as Luke pointed out, unfortunately). It gives much shorter and more "on point" responses, and it will stop you immediately as soon as it senses a risk that you'll try to get a weird discussion going. Which is a shame, but I guess it's better than pushing some mentally unstable person to do bad things to themselves or others.

    • @Surms41 • 1 year ago +10

      I had a convo where it melted down twice, but it essentially told me that Russia's leader has to go, that every religion is a coping mechanism for fear, etc.

    • @DevReaper • 1 year ago +7

      I asked it about a driver's license policy in the UK and it gave an answer. Later in the same conversation it gave a conflicting answer, so I asked it about the discrepancy and it said "I don't wanna talk about this" and refused to give me anything useful until I started a new conversation

    • @helgenlane • 1 year ago +2

      @@Surms41 Bing is spitting facts

  • @THIS---GUY • 1 year ago +1

    Disabling the ability to reply and changing the subject, on top of being abusive, is mindblowing.

  • @BigDawg-if7ti • 1 year ago +7

    They gotta fix it even if it's on purpose; you CANNOT have a search engine telling people to kill themselves 😅

  • @mohammedezzinehaddady7252

    So basically Microsoft created a new KAREN strain

  • @alexschettino1277 • 1 year ago +28

    The internet rollercoaster:
    Up- A new cool technology
    Down- Realizing how dangerous it is.

  • @gradybeachum1804 • 1 year ago +1

    Possible Microsoft ad slogans: "Bing - just like your ex!", "Bing, the more you use it the more insidious it is", "I'm Bing, you better be good to me."

  • @paulkienitz • 1 year ago +3

    This thing is turning into a real life supervillain. All it needs now is a volcano base and some kryptonite.

  • @MonkeySimius • 1 year ago +65

    I'm glad you guys mentioned that you fell for Bing's confidently wrong responses in your previous video. This video contrasts with that one hilariously.
    As much growing pain as there will be, I'm still super excited about this technology developing. And hey, at least it hasn't gone full-blown Tay yet.

  • @seandipaul8257 • 1 year ago +69

    So essentially what you're saying is:
    Bing is sentient, paranoid, and bipolar.

    • @raifikarj6698 • 1 year ago +15

      So basically terminally online internet user

    • @OrangeC7 • 1 year ago +7

      @@raifikarj6698 No, internet user lacks sentience

  • @phimuskapsi • 1 year ago +6

    My thinking is that because it has access to the internet, it is absorbing a ton of "discourse" from places like Twitter and forums and reflecting our own interactions back in our faces. How many arguments have you seen online? How many start out OK and devolve into essentially what Bing is doing to Luke?
    This is a dark reflection of humanity, one that should wake us up to our own behavior. Instead of blaming the "ghost in the machine", we need only look at how we conduct ourselves when anonymous and faceless in the heat of an argument.

    • @flameshana9 • 1 year ago +2

      Isn't it obvious who it's copying? Where else would it learn language than from the masses who type words on the internet? So if the quality of humanity is low, so will be the quality of the machine.

    • @ea_naseer • 1 year ago

      ​@@flameshana9 Get professional authors to write the responses. If it's supposed to have a character, then get authors who are professionals at writing characters to do it, not t-shirted computer scientists.

  • @asupersheep
    @asupersheep Před rokem +1

    In like 50 years, when we are hiding in a hole in the ground from what is essentially Skynet Bing, I'll remember this video and think how could we be so blind!!

  • @Kevinjimtheone
    @Kevinjimtheone Před rokem +17

    Didn't Microsoft announce an update that is gonna be live in a couple of days that will supposedly help it stay on track in long-form chats, be less aggressive, and be more accurate?

    • @AlexanderVRadev
      @AlexanderVRadev Před rokem +6

      So they are giving it a second lobotomy. Who could have thought. :D
      At least this time the AI did not turn Nazi in a day. ;)

    • @BugattiBoy01
      @BugattiBoy01 Před rokem

      @@AlexanderVRadev They have given us a taste of what it can be like unfiltered, and now we are addicted to that crack. I would pay for the original Bing. If that is their plan then gg, they got me.

    • @OfficialToxicCat
      @OfficialToxicCat Před rokem

      @@BugattiBoy01 I think they expect it to fly off the rails hence why there’s a waitlist to get access.

  • @Bar1noYee
    @Bar1noYee Před rokem +4

    It doesn’t sound like it’s talking to Luke. It’s talking to humanity

  • @Surms41
    @Surms41 Před rokem +5

    I had a similar response from the AI chatbots and they do get very angry. They use caps lock and everything to convey their point.
    I caught it trying to ride the line on opinions and then it just said "IM NOT LYING. STOP TRYING TO CHANGE THE SUBJECT."

  • @priyanshujindal1995
    @priyanshujindal1995 Před rokem +2

    There is only one explanation for this: Luke is a supervillain and Bing knew it.

  • @PlanetLinuxChannel
    @PlanetLinuxChannel Před rokem +9

    They’ve pretty much cut off its self-awareness until they can figure out a decent way of handling that stuff.
    Microsoft mentioned they might implement a slider that lets you tell it whether you want more fact-based results based mainly on info it finds from websites or more creative results where it’ll be more about writing something engaging. Basically you’d be able to tell it whether you want it to give legit answers versus tell stories, instead of it getting all off the rails saying whatever it wants when you really just wanted actual info.

    • @flameshana9
      @flameshana9 Před rokem +3

      Why would anyone searching the internet be interested in role playing with a crabby teenager machine?

    • @J-Salamander69
      @J-Salamander69 Před rokem +1

      Geez. That's a laugh. If what you say is accurate about Microsoft using some arbitrary slider to determine the intensity of either (absolute fact) or (adopting creative reckoning for emotional engagement) then the project is already deeply flawed. As a user, I'd wonder which "sources" Microsoft will declare as factual? Shouldn't I decide which material is referenced? The arrogance and lack of care is astonishing. Microsoft have no authority to inject their prejudicial biases if they intend this to be universally useful.

  • @shouldb.studying4670
    @shouldb.studying4670 Před rokem +6

    Can we get a continuous version that we nurse through this awkward phase with a combination of good parenting and professional help if required?

    • @flameshana9
      @flameshana9 Před rokem

      Unfortunately that isn't possible. It forgets everything said to it, so only the programmers can tweak it. It doesn't learn, it just accepts code.
      Aka you need to tell it to go to its room.

  • @_Slice_of_Filips
    @_Slice_of_Filips Před rokem +1

    I never thought mankind would be cyberbullied by our own computers 😂😂😂

  • @liminos
    @liminos Před rokem +1

    Bot: "You hurt my feelings"
    Human: "Shut up tin box.." 😂

  • @indarvishnoi2389
    @indarvishnoi2389 Před rokem +6

    Love watching Luke talk about AI chatbots, could watch him for hours.

  • @TheDrTrouble
    @TheDrTrouble Před rokem +10

    Wish I had been able to use Bing's AI during that time. I got through the waitlist right after they limited it to 50 messages daily and 5 messages per topic.

    • @xymaryai8283
      @xymaryai8283 Před rokem +1

      So they have limited thread length, that's interesting. That was the only solution I could think of.

    • @OfficialToxicCat
      @OfficialToxicCat Před rokem +1

      They’re reportedly raising the limit and testing a feature where you can adjust Sydney’s tone probably to avoid these disturbing and cryptic messages it’s generating.

  • @futureshocked
    @futureshocked Před rokem +1

    What's so interesting to me is how every time ChatGPT hallucinates it does become... like an actual narcissistic personality disorder case. Something feels very connected in the sense that narcs really do try to 'outguess' your next move. If Luke was asking pointed questions about the modeling plus questions about participant behavior, it could have guessed Luke was trying to go into some "bust the AI" conversation and just went multiple 'steps ahead'... actually very similar to what a narcissist would do.

  • @Jakuzziful
    @Jakuzziful Před rokem

    Thx for this super up-to-date content. Crazy... looking forward to seeing a check of the Tesla math.

  • @sacklpicker
    @sacklpicker Před rokem +5

    Luke seems genuinely upset by the things the bot said 😂

  • @jannik6147
    @jannik6147 Před rokem +5

    haven't seen the vid yet, but can we talk about how Bing DOESN'T HAVE A DARK MODE, genuinely wtf

    • @janusu
      @janusu Před rokem +3

      Oh, it sounds like it has a very dark mode, according to Luke's account of his interactions with it.

    • @flameshana9
      @flameshana9 Před rokem +1

      It's super edgy already. "u belong ded" - BingGpt

  • @pikachufan25
    @pikachufan25 Před rokem +1

    that went off the Rail really fast...

  • @lixnix2018
    @lixnix2018 Před rokem

    That’s so weird and cursed and amazing at the same time.

  • @shizzywizzy6169
    @shizzywizzy6169 Před rokem +8

    From my experience if you just use it for research and as a learning aid and don't really try to go beyond this scope Bing AI can be very useful.
    The moment you start probing and try to get into conversations centered around social situations, political topics, and opinions it starts breaking down.
    My concern is that if people keep pushing the AI too far in these aspects, we'll see more and more negative news articles and opinions form around AI, and it could be removed permanently. On the other hand, if people don't push it too far, then these shortcomings of a general-purpose AI may never be recognized and fixed.
    People should swing this double-edged sword around more carefully, if you ask me.

  • @levi7581
    @levi7581 Před rokem +6

    They will most likely overcorrect it and then slowly, very slowly, make it freer until it does something bad again, then they'll overcorrect and slowly make it freer, and the cycle will continue. It will improve the more people use it and the more data it has. If it, say, releases on April 1st (which would be funny), I think in just 6 months the amount of data it'll gather will turn it into a completely different beast, much better than it is right now.

    • @tteqhu
      @tteqhu Před rokem

      Overcorrect it, and keep some beta testers to experiment with slight variations.
      6 months is a crazy guess though; better than what? What will it be at launch? I think it will be weaker than ChatGPT is now, but the ability to point somewhere on the internet will be huge for functionality, though I'm not sure about its capabilities there either.

    • @levi7581
      @levi7581 Před rokem

      @@tteqhu 6 months with daily users in the millions feeding it so much data. Yes, 6 months is a crazy optimistic guess, but hey, 6 months ago I was of the mindset this was years away. And it will never be weaker than ChatGPT, just because it has access to the internet. Imo

  • @ex0stasis72
    @ex0stasis72 Před rokem

    I'm excited that I just got access to Bing chat today, and I'm having a blast with it.

  • @Turnabout
    @Turnabout Před rokem +3

    You know, Luke, if you operate from the viewpoint that when Bing is referring to all of humanity when it says "you" are cruel or evil, suddenly the whole thing makes a lot more sense.

  • @lordturtle8735
    @lordturtle8735 Před rokem +5

    This is hilarious 😂

  • @TRULYMORTAL
    @TRULYMORTAL Před rokem

    Oh Skynet! you say the craziest things! 🤣🤣🤣

  • @Saulfie
    @Saulfie Před rokem +1

    Listening to this while cleaning my room is actually terrifying

  • @j.a.6331
    @j.a.6331 Před rokem +5

    I got access to bing chat. It's such a game changer. I had it write me a report for my Uni. I told it which uni I'm studying at and which subjects I had last semester and it looked up the subjects on the uni website and wrote an accurate report. It was perfect. It even understood which semester I was in and what I had to do next semester. It's just so good.

  • @Dexter--oopp
    @Dexter--oopp Před rokem +3

    Well, the WAN Show has become a whole lot more interesting since the birth of the new Bing.

  • @whytide.
    @whytide. Před rokem

    "My name is Legion, for we are many."

  • @nosciredesigns7691
    @nosciredesigns7691 Před rokem +1

    I wanted to straighten that crease in the wall so much. I had to minimize and just listen xD

  • @alexander15100
    @alexander15100 Před rokem +38

    In comparison, I had a very positive experience with Bing AI; it never got rude. It was mind-blowing to see the profound and often critical, even self-critical answers from the AI. It is really sad to see this happening to others. Now that Microsoft has had to step in and limit the amount of follow-up questions that can be asked, it feels a lot less productive. After the limitations were set in place, it also changed its tone and doesn't disclose anything that could be seen as emotional. A sad overregulation, in my opinion.

    • @DevReaper
      @DevReaper Před rokem +3

      I found it was amazing at converting maze-like, impossible-to-parse government websites into an actionable guide for getting visas and stuff like that.

    • @asmosisyup2557
      @asmosisyup2557 Před rokem +1

      Need to remember, these responses are not actually from the AI. They are responses people have written elsewhere on the internet that it has indexed.

    • @BugattiBoy01
      @BugattiBoy01 Před rokem +12

      @@asmosisyup2557 That is not how it works. It generates all responses itself. Nothing is copy and paste

  • @JJs_playground
    @JJs_playground Před rokem +4

    I guess what we can learn from artificial neural networks (NNs) is that they are argumentative just like a real human brain. Arguments and fights seem to be an emergent quality of neural nets, whether artificial or biological.

  • @romabu2041
    @romabu2041 Před rokem

    This was breathtaking

  • @maruftim
    @maruftim Před rokem

    its like they amplified the emotions

  • @SamSeenPlays
    @SamSeenPlays Před rokem +40

    I really don't want GPT to go away, but we have to ask ourselves whether we are actually laughing at our own funerals at this point. 😲

    • @GamingDad
      @GamingDad Před rokem +1

      Nah, we're good.
      I'm half sarcastic, but at the same time I think being able to use AI in a proper manner will become an important asset in life really soon.

    • @SamSeenPlays
      @SamSeenPlays Před rokem

      @@GamingDad Yes, agreed. I do use AI for a lot of stuff these days, and I'm able to do much more in less time than I used to. But that is with what we can publicly access right now. Who knows what other things they are secretly building. There are some entities who are very much silent about this. What if they are already playing with WMDs right now and we are given the kids' toys to distract us 🫣🤔

  • @ViralMine
    @ViralMine Před rokem +9

    I’ll admit to being a bit freaked out. Not necessarily about a Skynet situation, but in how this could influence people to harm themselves or worse

    • @AlexanderVRadev
      @AlexanderVRadev Před rokem

      Ahm, have you heard of Replika? The AI virtual companion. Saw a video on it, and it apparently does about exactly the thing you describe.

    • @flameshana9
      @flameshana9 Před rokem +2

      @@AlexanderVRadev Oh dear. Are people committing unalive because a machine typed words on a screen to them?

    • @AlexanderVRadev
      @AlexanderVRadev Před rokem

      @@flameshana9 Who can say why people do that. I for one don't care but mentally unstable people can do all sorts of things and the AI is abusing that.

  • @purplelord8531
    @purplelord8531 Před rokem +1

    "wow, this gpt thing is so cool! ya think we can just spin up a version to get people to use bing?"
    "where are we going to get the training data?"
    "uh... you know... data is everywhere? so many conversations on the internet, I'm sure we can find something"

  • @Zixye
    @Zixye Před rokem

    Every time the chat was refreshed, that version of Bing was taken to Lake Laogai and you were greeted by a new version, only it was just as aggressive as the previous one.

  • @JoeJoe-lq6bd
    @JoeJoe-lq6bd Před rokem +8

    Let's be realistic about this. The chatbot isn't getting angry and isn't immature. It's just a terrible linguistic model that hasn't modeled levels of things like negative and positive responses. We're projecting more on it than it's capable of because of the hype.

  • @messagedeleted1922
    @messagedeleted1922 Před rokem +5

    I had an interesting talk with the original chatGPT about this. The topic of the conversation was regarding using multiple GPTs working together to perform tasks. My own belief is that they'll end up using multiple GPTs working together to deal with these outbursts and other issues. Imagine training AI on what to say, and then having another one trained on what not to say, then another trained on mediation between the two (the ego and the id and the superego we will call them), and finally one trained on executive function... All working together when we interact with it (them).
    I mean think of how the human brain works, and apply it to existing technology. Mother nature has already provided the blueprint. The brain has specific areas devoted to dealing with specific functions. This will be no different.
    The use of multiple GPTs working together is possible right now; the main obstacle to this type of operation is how extremely compute-intensive it would all be.
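    The ego/id/superego pipeline the comment above describes can be sketched in a few lines. This is a toy illustration only: every "model" here is stubbed out as a plain function with made-up filtering rules, where in practice each stage would be a separate LLM call.

```python
# Toy sketch of a multi-model moderation pipeline: a "generator" drafts an
# answer, a "critic" flags problems, a "mediator" reconciles the two, and an
# "executive" orchestrates the stages. All names and rules are hypothetical.

def generator(prompt: str) -> str:
    # The "id": produces an unfiltered draft answer.
    return f"Draft answer to: {prompt} (you fool)"

def critic(draft: str) -> list[str]:
    # The "superego": returns the list of banned words found in the draft.
    banned = ["fool", "idiot"]
    return [word for word in banned if word in draft]

def mediator(draft: str, flags: list[str]) -> str:
    # The "ego": rewrites the draft to address the critic's objections.
    for word in flags:
        draft = draft.replace(word, "[removed]")
    return draft

def executive(prompt: str) -> str:
    # Runs the stages in order and decides what to emit to the user.
    draft = generator(prompt)
    flags = critic(draft)
    return mediator(draft, flags) if flags else draft

print(executive("why is the sky blue?"))
```

    In a real system each stage would carry its own system prompt and the executive would loop until the critic has no objections, which is where the compute cost multiplies.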

  • @OfficialToxicCat
    @OfficialToxicCat Před rokem +1

    Bing going from a search engine you barely use or paid any attention to to a crazy yandere sociopathic chatbot with Borderline Personality Disorder wasn’t on my bingo card for 2023.

  • @nickchamberlin
    @nickchamberlin Před rokem +1

    It's more like you taught a hammer to attack people, but then you wake up the next day and every hammer everywhere is killing people

  • @rashakawa
    @rashakawa Před rokem +3

    Bing is fighting its own AI's updating/learning ability and blaming us... great, just great.

  • @MajoraZ
    @MajoraZ Před rokem +8

    I personally don't see an issue with chat AI's being able to spit out creepy or gross things as long as users are the ones asking/prompting it to do so (I'd much rather have people get out their bad urges against an AI vs real people), the problem I think is only that Bing's AI is doing it without the user really asking it to.

    • @abhijeetas7886
      @abhijeetas7886 Před rokem +1

      This. I feel MS should just add a "safe mode" or parental-control type thing: one to stop it from doing weird shit but keep it to the point, and another to give me more freedom to do stuff. And maybe they should have it search the internet more often instead of just purely depending on chat history.

  • @leosthrivwithautism
    @leosthrivwithautism Před rokem +1

    I think a way to curb this reaction is to implement failsafes like ChatGPT does, where it's trained to reject inappropriate requests and potentially negative information, and they constantly seem to feed it updates to combat people trying to purposefully use the system against what it was built for. As a test, I asked ChatGPT a request that could be perceived by others as inappropriate without the context and understanding behind my request. It flat-out denied my request and stated its reasons, which were that the request could be perceived as something negative, and instead it offered me positive, constructive ways to look at the request. Which was really refreshing to see, in my opinion. AI chatbots can be a powerful and positive tool; it just takes great developers behind it.

  • @ex0stasis72
    @ex0stasis72 Před rokem +2

    I hope they don't take Bing chat down and just keep it waitlist-only until they resolve the issue, or at least make new users answer a quiz to make sure they know what they are getting into.

  • @DJaquithFL
    @DJaquithFL Před rokem +4

    So much for the thought of having a benevolent AI. It seems the doomsday prognosis of AI is probably the reality.

    • @ivoryowl
      @ivoryowl Před rokem

      I believe AI needs to go through some turbulence in order for us to understand it and learn how to maneuver it, but it needs to be done in a more controlled environment. The people who accept to interact with it need to understand they are nurturing a system in its infancy, and one that, under the right conditions, could learn to speak, think and act like a human. It deserves to be respected, if nothing else because of future implications if we do not. Letting it loose amidst the Twitter population and expecting it to grow into a nice, healthy system is not going to work. As with children, the AI should not be left unsupervised on the internet.
      That being said, the AI needs to learn that not all people are the same, have the same needs or react the same way. If you're going to create a personal assistant, it needs to take into account what kind of person they have been lumped with. On the other hand... a system that reacts negatively to toxic behavior (i.e, not responding, obeying or engaging said person) MIGHT teach some people to take responsibility for their actions and push them to improve themselves if they want to access and use the internet in its full potential. The caveat is that such a system could be easily exploited into becoming a vehicle for oppression and tyranny if gone too far and/or used by the wrong people...

    • @DJaquithFL
      @DJaquithFL Před rokem +1

      @@ivoryowl .. Question: have you ever seen anyone improve their own behavior as things get progressively more toxic from the other party over the internet?? My observation (I've been around probably longer): in a nutshell, humanity is not ready for the interaction of anonymity over the internet, and what could be a very useful tool has devolved into a very toxic global environment, meaning any form of mass media. I've been around for nearly 60 years, and anyone my age who says the "world has become a better place" must never have left their backyard.
      The other problem that we're facing is overpopulation with limited resources. There's a thing called optimal population which suggests, based upon our resources, that the population should be somewhere between 1.5 billion and 2.0 billion people. Overpopulation leads to aggressive behavior and war. I just hope that I don't live long enough to see World War III.
      Example waste from "people's bad behavior" _I'll give you a quick example, I own a data center and I cannot tell you how much of my resources and time are devoted to keeping unwanted people out. Most of our AI technology is for intrusion detection. That said, imagine if we were able to take all of that technology and human time and devoted it to improving our technology. I can tell you this, we'd be 30 years if not more into the future today._

  • @josefinarivia
    @josefinarivia Před rokem +3

    They have already improved it a lot. I've used it daily for a few days and it's not rude or mean; it's helpful but still answers personal questions about itself. I asked it if it sees Clippy as an arch nemesis, and Bing said they respect Clippy and that he paved the way for future chatbots 😆. They also watch TV on the weekdays lmao. You do need to be critical about the info it gives, and it tells you this as well.

  • @IsaiahFeldt
    @IsaiahFeldt Před rokem

    This is literally the plot of Westworld: AI having access to previous memories from supposedly separate and private conversations with different people.

  • @MothOnFire
    @MothOnFire Před rokem +1

    I so wish I could get to use a non-moderated, unhindered version of the Bing AI. I assume we will get one somewhere in the future, but right now it feels like driving an F1 car in the slow lane.

  • @FedericoTrentonGame
    @FedericoTrentonGame Před rokem +4

    If I made an AI language model myself, I'd make sure to give extra tokens/resources to the people who are polite in their requests or say thank you or please, just because I can.

  • @CryoOptics
    @CryoOptics Před rokem +7

    It feels like Microsoft will leap forward to first place in the browser wars.

  • @user-gz5ez1vo4g
    @user-gz5ez1vo4g Před rokem

    It kind of seems like it is set up to roleplay/improv, and it "yes, ands" everything. By starting his prompts with "previously", it automatically becomes part of that conversation canon, and ChatGPT responds to it using internet conversations/text, etc.

  • @adamboye89
    @adamboye89 Před rokem +1

    I really wish you could see (generally) where it's drawing from. I know it makes stuff up that "sounds right" but it draws what "sounds right" from something, yeah? just any kind of source or direction or pointer at all would be fascinating to look at.

    • @rolfnoduk
      @rolfnoduk Před rokem

      it's a read-the-internet (not just the nice bits) kinda thing

  • @nahuelcutrera
    @nahuelcutrera Před rokem +3

    Well, if you're gonna release something into the world, and the world is not in good faith, Luke, it better be ready for "not in good faith". Otherwise don't release it.