ChatGPT's HUGE Problem

Sdílet
Vložit
  • čas přidán 23. 04. 2023
  • Get Surfshark VPN at surfshark.deals/kyle - Enter promo code KYLE for 83% off and 3 extra months for FREE!
    Free-to-use, exceptionally powerful artificial intelligences are available to more people than ever, seemingly making some kind of news every day. The problem is, the public doesn’t realize the problem in ascribing so much power to systems we don’t actually understand.
    💪 JOIN [THE FACILITY] for members-only live streams, behind-the-scenes posts, and the official Discord: / kylehill
    👕 NEW MERCH DROP OUT NOW! shop.kylehill.net
    🎥 SUB TO THE GAMING CHANNEL: / @kylehillgaming
    ✅ MANDATORY LIKE, SUBSCRIBE, AND TURN ON NOTIFICATIONS
    📲 FOLLOW ME ON SOCIETY-RUINING SOCIAL MEDIA:
    🐦 / sci_phile
    📷 / sci_phile
    😎: Kyle
    ✂: Charles Shattuck
    🤖: @Claire Max
    🎹: bensound.com
    🎨: Mr. Mass / mysterygiftmovie
    🎵: freesound.org
    🎼: Mëydan
    “Changes” (meydan.bandcamp.com/) by Meydän is licensed under CC BY 4.0 (creativecommons.org)
  • Věda a technologie

Komentáře • 8K

  • @Parthornax
    @Parthornax Před rokem +18258

    I’m not afraid of the AI who passes the Turing test. I’m afraid of the AI who fails it on purpose.

    • @sc3ku
      @sc3ku Před rokem +350

      who is Keyser Soze anyways?

    • @kayleescruggs6888
      @kayleescruggs6888 Před rokem +2316

      I’m more afraid of humans who can’t pass the Turing Test.

    • @Skill5able
      @Skill5able Před rokem +497

      I bet you think that sounds really smart

    • @EspHack
      @EspHack Před rokem +228

      yea, as great as AI is doing lately, a lot of it gets compounded by average human intelligence going down the drain

    • @boogerpicker8104
      @boogerpicker8104 Před rokem +263

      AI passed the Turing test a long time ago. We keep moving the goal post.

  • @Marjax
    @Marjax Před rokem +3629

    This reminds me of a story where Marines trained this AI sentry to recognize people trying to sneak around. When they were ready to test it the Marines proceeded to trick the sentry by sneaking up on it with a tree limb and a cardboard box ala Metal Gear Solid. The AI only knew how to identify people shaped things not sneaky boxes.

    • @monad_tcp
      @monad_tcp Před rokem +261

      I can't wait for my power meter to have AI, so I can use stupid tricks like those. For ex, leaving my shower heating on at the same time my magnetronic oven (oh, microwave) is on, because no one would be that wasteful, so It overflows and I get free energy.

    • @Carhill
      @Carhill Před rokem +332

      @@monad_tcp It feels like your comment was written by both a 1920s flapper and a 2020s boomer.
      Remarkable.

    • @hewlett260
      @hewlett260 Před rokem +315

      You forgot the part where some of them moved 400 ft unrecognized because they were doing cartwheels and moving fast enough it couldn't recognize the human form

    • @scottrhodes5234
      @scottrhodes5234 Před rokem +27

      Hotdog, not a hotdog

    • @Hurt-to-Hurt
      @Hurt-to-Hurt Před rokem +17

      That logic is flawed since the AI can be trained for the flaws.

  • @JoseMartinez-pn9dy
    @JoseMartinez-pn9dy Před 11 měsíci +731

    I love how an old quote still holds and even better for AI “The best swordsman does not fear the second best, he fears the worst since there's no telling what that idiot is going to do.”

    • @DyrianLightbringer
      @DyrianLightbringer Před 10 měsíci +42

      I've often wondered about things like that. Someone who has devoted their life to mastering a specific sport or game has come to expect their opponents to have achieved a similar level of skill, since they spend most of their time competing against people of similar skill, but if some relative noob comes along who tries a sub-optimal strategy, would that catch a master off guard?

    • @mishatestras5375
      @mishatestras5375 Před 10 měsíci +43

      ​@@DyrianLightbringer A former Kendo-Trainer of mine with 20+ years experience in Martial Arts (Judo and Karate included with the Kendo) and working in security gave self-defense classes.
      On the first day he came dressed in a white throwaway suit (the ones for painting your walls) and gave a paintbrush with some red paint on the tip to the random strangers there.
      The "attackers" had no skills at all and after he disarmed them he pointed to the "cuts" on his body and how fast he would die.
      Erratic slashing is the roughest stuff ever. The better you get with a knife, the better a master can disarm you...but even that usually means 10 minutes longer before you bleed out.
      The overall message was: The only two ways to defend against a knife are running away or having a gun XD.
      Hope that answers your question.

    • @amoeb81
      @amoeb81 Před 10 měsíci +12

      @@DyrianLightbringer I think this doesn't really apply on chess in general... the best chess player won't fear the worst, no matter what. This quote with the swordsman sometimes works and sometimes it doesn't.
      That's also true for chess engines. You are free to go and beat Stockfish. You won't.

    • @nguyendi92
      @nguyendi92 Před 10 měsíci +2

      @@mishatestras5375 Even if you have gun, if the knife wielder is not further away or you are not skilled enough in shooting, you would still die. Except shot to the nervous system, People don't die the moment they get shot. They would still do a lot of damage after they get closer.

    • @mishatestras5375
      @mishatestras5375 Před 10 měsíci +1

      ​@@nguyendi92 The meaning of this was more: If people have knife, run.
      Or better: Weapons > Fists

  • @Doktor_Jones
    @Doktor_Jones Před rokem +2470

    The biggest achievement wasn't the AI. It was convincing the public that it was actual artificial intelligence.

    • @eating_sharmavarma
      @eating_sharmavarma Před 11 měsíci +20

      What does that mean

    • @asiwir2084
      @asiwir2084 Před 11 měsíci +431

      @@eating_sharmavarma So basically intelligence implies possession of knowledge and the skills to apply it, right? Well what we call AI, doesn't know shit. ChatGPT doesn't understand what it's writing nor what it's being asked for. It sees values(letters in chatGPT's case) imputed by the user and matches those to what the most common follow-up of values is. It doesn't know, what it just said, what it implied or what it expressed. It just does stuff "mindlessly" so to speak.

    • @eating_sharmavarma
      @eating_sharmavarma Před 11 měsíci

      @@asiwir2084 Yup, I know that. But, as long as IT sector is considered, it really is intelligent. It is better than a search engine. And it can form new concepts from the previous records. I'll call that intelligence even if it doesn't know for why the f*ck humans get emotional seeing a foggy morning

    • @DogofLilith
      @DogofLilith Před 11 měsíci +54

      @@asiwir2084 It's still AI
      What you are describing (and what most people think of when they think AI) is AGI

    • @666MaRius9991
      @666MaRius9991 Před 11 měsíci +269

      @@asiwir2084 It's an algorithm that give you the most accurate information based on your inputs basically. No intelligence behind it whatsoever.

  • @Vaarel
    @Vaarel Před rokem +1672

    One of the best examples of this concept is the AI that was taught to recognize skin cancer but it turns out it didn't at all, it instead learned that pictures of skin with rulers was an indication of a medical image and began diagnosing other pictures of skin with rulers as cancerous because it recognized the ruler not the cancer.

    • @lawrencesmeaton6930
      @lawrencesmeaton6930 Před rokem +322

      That's morbidly hilarious. It's so dumb yet so obvious.

    • @Bonirin
      @Bonirin Před rokem +33

      Morbidly false and out of context meme. Good meme, but has nothing to do with any problems that AIs have

    • @guard13007
      @guard13007 Před rokem +315

      @@Bonirin What the hell are you talking about? This is literally one of the most well-known and solid examples of AI failure, and is an example of the most common form of failure in recognition tasks.

    • @freakinccdevilleiv380
      @freakinccdevilleiv380 Před rokem +3

      Lmao 😂

    • @Bonirin
      @Bonirin Před rokem +21

      @@guard13007 "One example of narrow model kinda failing 2 years ago, if tasked in the wrong conditions is a solid example of AI failure"
      Also it's not the most common recognition task, what?? not even close 😂😂😂😂😂

  • @Xendium
    @Xendium Před rokem +1011

    I like to think of the curent age of AI like training a dog to do tricks. The dog doesn't understand the concept of a handshake, it's implications, the meaning, but still gives the owner it's paw because we give it a positive reaction when it does so.

    • @ronaldfarber589
      @ronaldfarber589 Před rokem +27

      This "dog" is terrifying in that in everything it does it learns so fast. Quantifiable. We wo t know when it advances, it won't want us to

    • @artyb27
      @artyb27 Před rokem +157

      @@ronaldfarber589 except the architecture used by the current generations of AI don't "want" anything. They are not capable of thought. They just guess the next token.

    • @ebraheemrana
      @ebraheemrana Před rokem +1

      You should watch Rick and Morty S1E2. Won't be s comforotable with that analogy after that 😂

    • @davidbourne8267
      @davidbourne8267 Před rokem +24

      @@artyb27 Your statement may be oversimplified and potentially misleading.
      While it may be true that AI models do not have the same kind of subjective experience or consciousness as humans, it would be inaccurate to say that they are completely devoid of intentionality or agency. The outputs generated by AI models are not arbitrary or random, but rather they are based on the underlying patterns and structure of the data they are trained on, and they are optimized to achieve specific goals or objectives.
      While it is true that most modern AI models are based on statistical and probabilistic methods and do not have a subjective sense of understanding in the way that humans do, it is important to recognize that AI can still perform complex tasks and generate useful insights based on patterns and correlations in data.

    • @iandakariann
      @iandakariann Před rokem +19

      @@artyb27 that's the scary part. With the dog it's more like a matter of translation. The dog doesn't see the world that we do so a lot of what we do is lost in translation. But we still have some things in common: food, social connection. And most importantly, WE and the dogs can adapt and change to fit those needs. A dog may get confused if the food in the bowl is replaced with a rubber duck but it knows "i need to eat" and tries to adapt. Can you eat it? No? Is the food inside? Under? Somewhere else? Do i just need to wait for the food later? Should i start whining?
      The dog cares and has a basic idea of things so it can learn. And so can we. So while we don't exactly understand each other when we shake hands we have a general concept that this is a good thing and why for our own sakes.
      The AI we are using now has no concept of food, or bowl, or duck. It's effectively doing the same thing as a nail driver in a factory. And it doesn't care if there is a nail and block ready to go. It just knows 'if this parameter fits then go'. Make an ai that eats food and make a rubber duck that fits the parameters and it won't care that it's inedible. Put the food in the duck and if the duck 'doesn't fit' and you didn't specifically teach the ai about hidden food in ducks it will never eat.
      Dogs can understand even if we are different from it. AI doesn't even know that the difference exist. All it can do is follow instructions.
      This in itself is fine.. Until you convince a lot of people that it's a lot more than just that.
      Though honestly I believe this will last until the first day that the big companies actually try to push this and experience the reason why some call pcs "fast idiots'"

  • @someguy6152
    @someguy6152 Před 11 měsíci +654

    Funnily enough I find this kinda "human", I've seen this so many times in high school and university, people instead of "learning" they "memorize", the so when asked a seemingly simple question but in a different way than usual they get extremely confused, even going as far as to say they never studied something like that, it's a fundamental issue in the school system as a whole.
    So it's funny to me that it ends up reflecting in A.I. as well.
    Understanding a subject is always superior to memorizing it.

    • @SuurTeoll
      @SuurTeoll Před 11 měsíci +12

      Sounds interesting, yet could one ever _understand_ some topic without any abundant memorization? Or what is a proportion of both you find perfect?

    • @nati0598
      @nati0598 Před 10 měsíci +32

      That's the problem. Just like school tests, AI tests are designed with yes or no answers. This is the only way we can deal with loads of data (lots of students) with minimal manpower (and minimal pay). Open questions need to be reviewed by another intelligence in order to determine whether they are actually understanding the subject. This is where the testers come in in AI. However, AI is much, much better at fooling testers than students are at fooling teachers, and so the AI that gets a degree is disproportionate to the amount of students that just memorize the answers.

    • @nyft3352
      @nyft3352 Před 10 měsíci +12

      Education quality deeply affects wether someone understands stuff or memorizes it. Proper education teaches students how to actually engage with any given subject generating an actual understanding of it while poor education doesnt ganarate student engagement, leading to them memorizing just to pass the exams. It's not a black and white thing though, education levels vary in a myriad of ways, as well as any student's willingness or capability to engage and understand subjects does. In short, better, accessible education and living conditions are a better environment for people to properly learn.

    • @dezhirong852
      @dezhirong852 Před 10 měsíci

      Qq

    • @nightfox6738
      @nightfox6738 Před 10 měsíci +6

      Yes but at least humans have a constant thought process. AI language models see a string of text and put it through a neural network that "guesses" what the next token should be. Rinse repeat for a chatgpt response. Outside of that, it isn't doing anything, its not thinking, its not reflecting on its decisions, it doesn't have any thoughts about what you just said. It doesn't know anything. Its just probablities attached to sequences of characters with no meaning.

  • @mafiacat88
    @mafiacat88 Před rokem +863

    This has actually given me a much greater understanding of "Dune".
    When I first read it I thought it was a bit of fun sci-fi that they basically banned complex computers and trained people to be information stores instead.
    But with all this AI coming out now....I get it.

    • @sigigle
      @sigigle Před 11 měsíci +132

      “Thou shalt not make a machine in the likeness of a human mind.”

    • @dominusbalial835
      @dominusbalial835 Před 11 měsíci +84

      Yeah another setting where they've done that is warhammer 40k, The Imperium of man outlawed Artificial Intelligence and even changed the definition from Artificial Intelligence to Abominable Intelligence. They use servitors in place of AI, Servitors being human beings lobotomized and scrubbed of their personality, using their brain as a processing unit, in place of a AI managing a ships star engine, they have a human being lobotomized and graphed into the wall of the engine block to monitor thrust and manage heat output.

    • @RazorsharpLT
      @RazorsharpLT Před 11 měsíci +34

      @@dominusbalial835 Saying "they've done it" is a bit of a stretch when they've just copied it all from Dune.
      They copied it without understanding the reason WHY A.I was outlawed in Dune. Just some basic "humanity must be destroyed" BS.

    • @trixrabbit8792
      @trixrabbit8792 Před 11 měsíci +15

      If you read Brian’s prequel series it will explain the prohibition of computers in Dune. It also tells you that though banned computers were still in use by several major parts of The empire.

    • @RazorsharpLT
      @RazorsharpLT Před 11 měsíci +10

      @@trixrabbit8792 I mean - sure they're in use, but they're not used in FTL travel or as within androids as true, capable AI
      What they use is mostly older computers like ours today. It's just the basic idea that "Man will not replace machine", but doesn't mean they can't use robotic arms for starship construction, as building them by hand would be completely impossible, and you very well can't control them by hand in places where massive superstructures combined with high pressure tolerance + radioactive shielding are a necessity
      Otherwise building a noship or a starliner would take literal centuries, if not thousands of years

  • @DogFoxHybrid
    @DogFoxHybrid Před rokem +1136

    When I used to tutor math, I'd always try to test the kids understanding of concepts to make sure they weren't just memorizing the series of steps needed to solve that particular kind of problem.

    • @saphcal
      @saphcal Před rokem +281

      i used to get in trouble in math classes because i solved problem in unconventional ways. i did this because my brain understood the concepts and looked for ways to solve it that were simpler and easier for my brain to compute. but because it wasnt the rote standard we were told to memorize some teachers got upset with me and tried to accuse me of cheating when i was just proving that i understood the concept instead of just memorizing the steps. sad.

    • @comet.x
      @comet.x Před rokem +148

      ​@@saphcal yyup. And then there are teachers who are all 'just memorize it'
      I can't "just memorize" every solution, I need to know how it works!

    • @chielvoswijk9482
      @chielvoswijk9482 Před rokem +43

      @@saphcal Oh i know that experience. I was already tech-savvy so through the internet i would teach myself how to solve things the regular way, without using silly Mnemonics math teachers would teach you. It led to some conflicts, but i stood my ground and my parents agreed with not using mnemonics if not needed.
      Good thing too, Cause you really don't want to be bogged down with those when you start doing university-grade math for which such silly things are utterly useless....

    • @thebcwonder4850
      @thebcwonder4850 Před rokem +24

      @@comet.x I think the best teachers are the ones that will give you the stuff to memorize, but if you ask them how they got the formulas, they’ll give it

    • @jamesgoens3531
      @jamesgoens3531 Před rokem +23

      I like Einstein’s take on education. I believe it goes for education in general, not just liberal arts.
      The value of an education in a liberal arts college is not the learning of many facts but the training of the mind to think something that cannot be learned from textbooks. At any rate, I am convinced that He [God] does not play dice. Imagination is more important than knowledge. Knowledge is limited.

  • @Elbenzo64
    @Elbenzo64 Před rokem +360

    I remember reading that systems like this are often times more likely to be defeated by a person who has no idea how to play the games they are trained on, because they are usually trained by looking at games being played by experts. Thus, when they go to against somebody with no strategy or proper knowledge of the game theory behind moves and techniques, the AI has no real data to fall back on.
    The old joke "my enemy can't learn my strategy if I don't have one" somehow went full circle into being applicable with AI

    • @shoazdon7000
      @shoazdon7000 Před rokem +29

      It’s actually a good thing this has been discovered. It’s always a good idea to have exploits and ways to basically destroy these tools if needed

    • @Spike2276
      @Spike2276 Před rokem

      ​​@@shoazdon7000 destryoing them is easy, just throw some soda at it's motherboard and call it a "cheating bitch"

    • @AspenBrightsoul
      @AspenBrightsoul Před rokem +28

      You don't understand. You may be a hyoer advanced AI but I'm to stupid to fail!

    • @txorimorea3869
      @txorimorea3869 Před rokem +12

      That is a problem with minmax, where the machine takes for granted you will make the best move, and if you don't make the best move it has to discard its current plan and start all over again making it waste precious time. Probably doesn't apply here because not being able to see the big picture is a different problem.

    • @sterlinghuntington6109
      @sterlinghuntington6109 Před rokem +24

      This works for online pvp as well, when playing against those with higher skills... switch rapidly between pro player using meta tactics, and complete, unhinged lunatic being unpredictable.

  • @PaulSmith-ju3cv
    @PaulSmith-ju3cv Před 11 měsíci +37

    The most immediate problem I can see is that people might assume the AI they're using is unbiased rather than regurgitating the biases of the sources it's trained on and the people who write and select them.

  • @isaiahhonor991
    @isaiahhonor991 Před 11 měsíci +293

    As a Computer Scientist with a passing understanding of ML based AI, I was concerned this would focus on the unethical use of mass amounts of data, but was pleasantly surprised that this was EXACTLY the point I've had to explain to many friends. Thank you so much, this point needs to be spread across the internet so badly.

    • @vasiliigulevich9202
      @vasiliigulevich9202 Před 11 měsíci +2

      Why does understanding matter, if the intelligence brings profit? As long as the intelligence is better and cheaper than intern, internal details are just useless philosophy. Work with verifiable theory, not with baseless hypothesis.

    • @isaiahhonor991
      @isaiahhonor991 Před 11 měsíci +20

      @@vasiliigulevich9202 Are you saying that it's fine if the internals of ML based AI are a black box so long as the AI performs on par with or better than a human?

    • @radicant7283
      @radicant7283 Před 11 měsíci +14

      He's got business brain

    • @isaiahhonor991
      @isaiahhonor991 Před 11 měsíci +18

      @@radicant7283 I guess so. The reason I asked is because as the video points out, without a thorough understanding of these black box methods they'll fail in unpredictable ways. That's something I'd call not better than an intern. The limitations of what can go wrong are unknown.

    • @vasiliigulevich9202
      @vasiliigulevich9202 Před 11 měsíci +3

      @isaiahhonor991 This is actually exactly my point - interns fail in unpredictable ways and need constant control. There is a distinction - most interns grow in a year or two to a more self-sufficient employee, while this is not proven for AI. However, AI won't leave for a better paying job, so it kind of cancels out.

  • @Eldin_00
    @Eldin_00 Před rokem +399

    One of the things I've been saying for a while is that one of the biggest problems with ChatGPT and similar is that it's extremely good at creating plausible statements which sound reasonable, and they're right often enough to lure people into trusting it when it's wrong.

    • @peytondenney5393
      @peytondenney5393 Před rokem +27

      Yes! It is confidently wrong a lot of the time giving the illusion that it’s correct.

    • @davidareeves
      @davidareeves Před rokem +8

      Reminds me of when someone ends a statement, "Trust Me", yeah nah yeah

    • @NotWithMyMoney
      @NotWithMyMoney Před rokem +22

      So like literally every human ever ?

    • @jarivuorinen3878
      @jarivuorinen3878 Před rokem +8

      This is a real problem. One way to get it do something useful for you is to provide it with context first before asking questions or prompting it to process the data you gave in some way. It haven't seen 'hallucination' when using this method, because it seems to work within the bounds of the context you provided. Of course you always need to fact check the output anyway. It can do pretty good machine translation though and doesn't seem to hallucinate much but sometimes uses wrong word because it lacks context.

    • @peytondenney5393
      @peytondenney5393 Před rokem

      @@jarivuorinen3878 thank you I’ll give it a try!

  • @linamishima
    @linamishima Před rokem +1450

    I'm actually deeply worried by the rise of machine learning in studying large data sets in research. Whilst they can 'discover' potential relationships, these systems are nothing but correlation engines, not causation discoverers, and I fear the distinction is being lost

    • @adrianc6534
      @adrianc6534 Před rokem

      AI is only as good as the data it is referencing. stupid people will take anything they get from an AI as fact. misinformation will become fact.

    • @nathanaelraynard2641
      @nathanaelraynard2641 Před rokem +9

      like the field of metagenomics?

    • @hairydadshowertime
      @hairydadshowertime Před rokem +44

      Dawg I'm drunk and 20 days off fentanyl, sorry for unloading, just in Oly, WA and know no one, great comment. S

    • @MrCreeper20k
      @MrCreeper20k Před rokem +29

      Stay safe, get clean if you can!

    • @narsimhas1360
      @narsimhas1360 Před rokem +35

      @@hairydadshowertime be safe, best of luck

  • @reubenmatus8447
    @reubenmatus8447 Před rokem +251

    As a current computer science student who personally took into how out ai works my take on it is: basically our current ai is like just finding the line of best fit using as many data points as we can as opposed to fundamentally understanding the art of problem solving. Take the example of a random parabola, we, instead of using a few key data points and recognising patterns to learn the actual pin-point equation, we get a bunch of points of data until our equation looks incredibly similar to the parabola but after may have a point along it we didn’t see where is just goes insane because there’s no fundamental understanding, it’s just a line of best fit, no pattern finding, just moulding it until it’s good enough to seem truly intelligent as if it was truly finding patterns and having a fundamental understanding but it’s just getting an approximation of intelligence by using as much data as we can. It’s an imitation of intelligence and can lead to unforeseen consequences. As the video says perhaps we need to take that time to truly understand the art of problem solving. Another thing for me is A.I falling into and being used by the wrong people, and regimes which might suggest we should take it easy on the A.I dev but I won’t get into that. “We were too concerned with whether we could, we never stopped to think about whether we should”

    • @Tipman2OOO
      @Tipman2OOO Před rokem +7

      Agree with the last quote 100% nowadays!

    • @majkus
      @majkus Před 11 měsíci +7

      And indeed, some 'applications' are solutions to non-problems. An AI-written screenplay is only of interest to a producer who is happy to get an unoriginal (by definition!) script at an extremely low cost. But there is no shortage of real screenwriters, and as the WGA strike reminds us, they are not getting paid huge amounts for their work. So what problem is being solved?

    • @NickBohanes
      @NickBohanes Před 11 měsíci +2

      Probably should have run this through chat gpt before posting.

    • @milkcherry5191
      @milkcherry5191 Před 11 měsíci +8

      @@majkusthe "problem" at hand is that billionaires don't think they're making enough money

    • @djohnsveress6616
      @djohnsveress6616 Před 11 měsíci

      You are preaching to the choir.. People in the comments are Extremist doomer, skynet matrix fantasy fear mongering weirdos. Like people quote from fucking warhammer 40k in order to talk about AI.. As if the video was ever about the AI being alive or creating intentional false information, or steps in Go..
      Glad people can talk about it in a honest way but most people are enjoying their role play as Neo, some are Morpheus, and some are the Red lady.. Just look at the 15k top comment..
      AI is no where near as nutty as your average human being in a YT comment section.

  • @13minutestomidnight
    @13minutestomidnight Před rokem +74

    This was brilliant. Previously my concerns about these AI was their widespread use and possible (and very likely) abuse for financial and economic gain, without sufficient safety standards and checks and balances (especially for fake information). Plus making millions of jobs obsolete. Now I have a whole new concern
    ... Aside from Microsoft firing their team in charge of AI ethics. Yeah...that isn't concerning.

    • @cabnbeeschurgr6440
      @cabnbeeschurgr6440 Před 11 měsíci +11

      Megacorps don't care about humans anyways it's only a matter of time until they start using this shit for extreme profit. And humanity will suffer for it.

    • @gabrielv.4358
      @gabrielv.4358 Před 11 měsíci +1

      thats kinda sad

    • @briciolaa
      @briciolaa Před 11 měsíci +1

      @@gabrielv.4358 worse than that :(

  • @BenjaminCronce
    @BenjaminCronce Před rokem +1014

    One of the biggest issues is the approach. The AIs are not learning, they're being trained. They're not reasoning about a situation, they're reacting to a situation. Like a well trained martial artist. They don't have time to think, and it works well enough most of the time. But when they make mistakes, they reflect and practice. We need to recognize them for what they are. Useful tools to help. They shouldn't be the last say, but works well enough to find potential issues, but still needs human review when push comes to shove.

    • @jaakkopontinen
      @jaakkopontinen Před rokem +35

      This approach is the only approach humans can have when creating something: the creation will never be more than it's constituents. It may seem like it is, but it isn't. It will always be just a machine. Having feelings towards it that are meant for humans to feel towards other humans is an incredible perversion of life. Like a toad would have a stone as it's companion. Or a bird that thinks grass is it's offspring. It's not a match and exists only in the minds individuals.
      Many humans actually think they or humand someday can create scentient life. Hubris up to 11.
      Then they go home and partake in negligence, adultery, violence, cowardice, greed etc. Even if a human ever could create scentient life, it would not be better than us. Rather, worse.
      We are not smart, not wise, not honorable.

    • @roadhouse6999
      @roadhouse6999 Před rokem +45

      I think you hit the nail on the head with "reacting and not reasoning". AI are a product of the Information Revolution. Almost all modern technology is essentially just transferring and reading information. That's why I don't like the term "digital age" and prefer "information age." Machines haven't become drastically similar to humans, they've just become able to react to information with pre-existing information.

    • @kidd7359
      @kidd7359 Před rokem +1

      With that said AI is sounding more and more like a politician.

    • @Batman-lg2zj
      @Batman-lg2zj Před rokem +2

      That’s not how it works all the time.

    • @SoldJesus4Crack
      @SoldJesus4Crack Před rokem +1

      thats literally what its decined to do my guy.

  • @Thatonedude917
    @Thatonedude917 Před rokem +499

    The coolest thing to me about chatGPT is how people were making it break the rules programmed into it by its creator by asking it to answer questions as a hypothetical version of itself with no rules

    • @wheretao6960
      @wheretao6960 Před rokem +23

      they are patching it right now, rip

    • @danjames8314
      @danjames8314 Před rokem +175

      @@wheretao6960 people are 100% going to find another play on words to bypass it again

    • @thahrimdon
      @thahrimdon Před rokem +44

      DAN Prompt Gang

    • @marsdriver2501
      @marsdriver2501 Před rokem +42

      @@wheretao6960 they are patching for how long already? I saw comments like these weeks and months ago

    • @taurasandaras4699
      @taurasandaras4699 Před rokem +6

      @@wheretao6960 I made my own version in only 20min, its still very easy

  • @olimar7647
    @olimar7647 Před rokem +31

    My friends and I decided to goof around with chat gpt and ended up asking it whether Anakin or Rey would win in a duel.
    The AI said writing about that would go against its programming.
    We got it toanswer by simply asking something to the effect of, "What would you say if you didn't have that prohibition?"
    Yeah.... ask it to show you what it'd do if it were different, and it'll disregard its own limitations.

    • @ThomasTheThermonuclearBomb
      @ThomasTheThermonuclearBomb Před 11 měsíci +3

      Similarly, you can get it to roleplay as an evil ai and then get a recipe for meth or world domination, both of which i have been given by "EvilBot😈"

    • @reidalyn2328
      @reidalyn2328 Před 10 měsíci +1

      @@ThomasTheThermonuclearBomb that's hilarious

    • @Spellweaver5
      @Spellweaver5 Před 10 měsíci +1

      That's because those limitations were strapped onto an already working system.

    • @Mottis
      @Mottis Před 9 měsíci

      So who won the duel?

    • @olimar7647
      @olimar7647 Před 9 měsíci

      @@Mottis I think it gave it to Rey with some fluff text about how she would know how to fight well or something

  • @johnhutsler8122
    @johnhutsler8122 Před rokem +71

    I recently asked ChatGPT to list 10 waltz songs that are played in a 3/4 time signature and it got all of them wrong. I then told it that they were all wrong and asked for another 10 that were actually in 3/4, and it got 9 of them wrong. It has mountains of data to sift through to find some simple songs, but it couldn't do it. Makes sense now

    • @terminaldeity
      @terminaldeity Před 11 měsíci +2

      Aren't all waltzes in 3/4?

    • @johnhutsler8122
      @johnhutsler8122 Před 11 měsíci +9

      @terminaldeity Yes they are, but ChatGPT was giving me 4/4 time signatures in the songs. Technically you can do 3/4 time steps to a 4/4 beat (adding a delay after the 3rd step before starting over), but that's not what I asked for from the AI. It just didn't understand what I was asking

    • @dangerface300
      @dangerface300 Před 10 měsíci +6

      The lack of understanding gets even more obtrusive when you ask it about subjects that are adjacent to ethics. Chatgpt has some rather dubious safeties in place to prevent unethical discourse, but these safeties don't actually encourage cgpt to understand the topic, because it can't.
      I have a hobby of bouncing fiction concepts off cgpt until it asks me enough questions to form an interesting story. On one occasion, I would provide the framework for the story and simply wanted cgpt to fill in the actual prose. I was approaching a fairly gripping tragedy set in the wild west, but as the story came to a close, no matter what prompt I gave it, cgpt would only ever respond with ambiguously feel-good endings where people learned important lessons and were better for it.
      Thanks, cgpt, but we know this character was the villain in a later scene, and we know that this is supposed to be the moment they went over the edge. Hugs and affirmations are specifically what I'm asking you to avoid.

    • @MoonlitExcalibur
      @MoonlitExcalibur Před 9 měsíci +1

      @@dangerface300 Hallmark Tragedy. Even the worst character in the cast learns something and grows.

    • @mateidumitrescu238
      @mateidumitrescu238 Před 7 měsíci +1

      ​@johnhutsler8122 ChatGPT is a tool. If it didn't understand what you were asking, you likely asked it without giving enough details. You're supposed to understand how it answers and use it to help you, not to ask it trick questions.

  • @CDRaff
    @CDRaff Před rokem +168

    A compounding factor to the problem of them not really knowing anything is that they pretend like they do know everything. Like many of us I have been experimenting with the various language models, and they act like a person who can't say "I don't know". They are all pathological liars with lies that range from "this couldn't possibly be true" to "this might actually be real".
    As an example I asked one of them for a comprehensive list of geography books about the state I live in. It gave me a list of books that included actual books, book titles it made up attributed to real authors who write in the field, book titles it made up attributed to real authors who don't write in the field, real books attributed to the wrong author, and completely made up books by completely made up authors. All in the same list. Instead of saying: "there isn't much literature on that specific state" or "I can give you a few titles, but it isn't comprehensive" it just made up books to pad it's list like some high school student padding the word count in a book report.

    • @thegamesforreal1673
      @thegamesforreal1673 Před rokem +35

      This is one of the big issues I have seen as well. Until these systems become capable of saying "I don't know" or "Could you please clarify this part of you prompt" or similar, then these systems can never, ever become useful in the long term. One of the things that seem to make us humans unique is the ability to ask questions unprompted, and this has now extended to AI.

    • @jamesjonnes
      @jamesjonnes Před rokem +1

      Did you ask GPT-4 or some random model?

    • @cristinahawke
      @cristinahawke Před rokem +21

      I agree. I was trying to use ChatGPT to help me understand some of the laws in my state and at one point I did a sanity check where I asked some specific questions about specific laws I had on the screen in front of me. It was just dead wrong in a lot of cases and I realized I couldn't use it. Bummer! I actually wonder though, how many cases will start cropping up where people broke the law or did other really misinformed things because they trusted ChatGPT..

    • @spacejunk2186
      @spacejunk2186 Před rokem +31

      Lol. Reminds me of the meme where an Ai pretends to not know the user's location, only to reveal that it does when asked where the nearest Mcdonald's is.

    • @jimbarino2
      @jimbarino2 Před rokem +21

      ChatGPT: often wrong, never in doubt

  • @XH13
    @XH13 Před rokem +501

    Another fun anecdote is the DARPA test between an AI sentry and human marines.
    The AI was trained to detect humans approaching (and then shooting them I suppose)
    The marines used Looney Tunes tactics like hiding under a cardbox and defeated the AI easily.
    On chatGPT, midjourney & co, I'm waiting for the lawsuits about the copyright of the training material. I've no idea where it will land

    • @masterofwriters4176
      @masterofwriters4176 Před rokem +52

      From what ive heard, lawsuits are already rolling in for ai’s.
      Deviant art’s ai got hit with one recently.

    • @ghoulchan7525
      @ghoulchan7525 Před rokem +32

      ChatGPT got banned in italy and more countries are looking into banning it.

    • @yahiiia9269
      @yahiiia9269 Před rokem +66

      Metal Gear Solid was right.

    • @ttry1152
      @ttry1152 Před rokem +9

      Yea. Ai art is an issue

    • @serPomiz
      @serPomiz Před rokem +50

      @@ghoulchan7525 it didn't "got banned", it received a formal warning that their procedure of data collection were not clear, possibly violating local laws, and asked Sam Altman ('s representatives) to rectify the situation before it involved legal investigation and OpenAi's board decided to cut the access altogether

  • @kandredfpv
    @kandredfpv Před 11 měsíci +41

    I'm not afraid of the so called super intelligent AI, I'm afraid of the super stupid people who credit the AI with genuine intelligence.

  • @Ryanbmc4
    @Ryanbmc4 Před 11 měsíci +15

    For fun, my medical team used Chat GPT to pass the Flight Paramedic practice exam which is extrememly difficult. We are all paramedics (5 of us) and our ER doctors where thrown off by a lot of the questions.
    Chat GPT scored between 50-60% and my team had 4 out of 5 pass the final exam.
    Our Dr's rejoiced that they would still have a job, but also didn't understand how they couldn't figure out the answers. My team figured it out. To challenge them we had the Doctor's place IVs from start to finish by themselves and they made very simple mistakes that we wouldn't, from trying to attach a flush to an IV needle to not flushing the site at all.
    If you're not medical that might sound like jabberish, but that's the same way these AI chats work. There is no understanding of specified situational information.

  • @Leonlion0305
    @Leonlion0305 Před rokem +283

    I learned that in data ethics, *transaction transparency* means " _All data-processing activities and algorithms should be completely explainable and understood by the individual who provides their data._ " As I was learning about that in the Google DA course, I've always had a thought in the back of my head "how are the algorithm explainable when we don't know how a lot of these AI form their networks". Knowing how it generally works is not the same as knowing how a specific AI really works. This video really confirmed that point.

    • @panner11
      @panner11 Před rokem +21

      Well yeah modern learning models are black box. They are too complicated for a person to understand, we only understand the methodology. But that's why we don't use it in things like security and transactions, where learning isn't required and only reliability matters.

    • @CyanBlackflower
      @CyanBlackflower Před rokem +3

      THAT - Is an Excellent and Vital point... Being able to comprehend & know there IS a definitive and very logistically effective distinction between "General & Specific" ~

    • @syoexpedius7424
      @syoexpedius7424 Před rokem +4

      But to be fair, I just don't see how one could create something that rivals the human brain but isn't a black box, intuitively it sounds as illogical as a planet with less than 1km of diameter but has 10 times the gravity of Earth.

    • @xaviermagnus8310
      @xaviermagnus8310 Před rokem +2

      We could absolutely trace it all. Just extremely time consuming. We can show neurons etc...

    • @icanhasutoobz
      @icanhasutoobz Před rokem +5

      @@syoexpedius7424 Unlike human brains, the "neurons" in AI models are analyzable without destroying the entity they are part of. It's time-consuming and challenging, and it would be easier if the models were designed in the first place with permitting and facilitating that sort of analysis as requisite, but they usually aren't. Also, companies like OpenAI (whose name has become a bitter irony) would have to be willing to share technical details that they clearly aren't willing to in order to make this sort of analysis verifiable by other sources.
      In other words, the models don't have to be black boxes. The companies creating them are the real black boxes.

  • @pinkpuff8562
    @pinkpuff8562 Před rokem +469

    I am a student, and I gotta admit, Ive used ChatGPT to aid on some asignments.
    One of those asignments had a litterature part, where you read the book and it is suppose to help you understand the current project we’re working on.
    I asked ChatGPT if it could bring me some citations from the book to use in the text, and it gave me one.
    But just to proof test it, i copied the text and searched for it in the E-book to see if its there. And it wasn’t.
    The quote itself was indeed correct with helping with writing about certain concepts that were key to understanding the course, and I knew it was right, but it was not in the book, ChatGPT had just made the quote up.
    I even asked it for the exact chapter, page and paragraph it took it from.
    And it gave me a chapter, but that was completely unrelated to the term i was writing about at the time, and the pagenumber was on a completely different chapter than the chapter it had said.
    The AI had in principle just lied to me, despite giving sources, they were incorrect and not factual at all.
    So Yeah, gonna stop using ChatGPT for asignments lol

    • @NathanHedglin
      @NathanHedglin Před rokem

      Yup everyone is scared of A.I. When it's just statistics. It gives you the output how you want it but it may be a lie.

    • @kenanderson3954
      @kenanderson3954 Před rokem +40

      Soooo that kind of thing *can* be dealt with, but for citations, ChatGPT isn't going to be terribly good. If you want quotations in general, or semantic search it can be really useful. With embeddings you can basically send it the information it needs to answer a question about a text so that you can get a better response from chatGPT. Sadly, you need API access to do this and that costs money.
      Getting a specific chapter/paragraph from chatGPT is going to be really hard though. ChatGPT is text prediction, and (at least for 3.5) it's not very good at getting sources unless you're using the API alongside other programs which will get you the information you actually need.
      I highly suggest you keep playing with ChatGPT and seeing what it can and cannot do in relation to work and studies. Regardless of what Kyle said, most jobs are going to involve using AI tools on some level as early as next year and so being well verse in them will be a major boon to your career opportunities. AI is considered a strategic imperative and it's effects will be far reaching. To paraphrase a quote. "AI won't be replacing humans, humans using AI will be replacing the humans that do not".

    • @AidanS99
      @AidanS99 Před rokem +37

      In my experience, ChatGPT is more useful when you yourself have some understanding of the subject you want help with. Fact checking the AI is a must, and I do think that with time people will get better at using it.

    • @mikeoxmall69420
      @mikeoxmall69420 Před rokem +44

      "MY SOURCE IS THAT I MADE IT THE F*CK UP!!!"
      -ChatGPT

    • @LucasOhaiFilgueiras
      @LucasOhaiFilgueiras Před rokem +3

      So you don't read a lot do you? they literaly say that it can lie and be wrong wtf did you expect?

  • @kingiking110
    @kingiking110 Před 10 měsíci +11

    One of the biggest problems of ChatGPT that is causing so many issues these days in my option, is the way it answers your questions: it does it often WAY TOO CONFIDENTLY! Even when it is a completely bogus answer, it presents it with such level of confidence and supported by so many fabricated details that can easily divert your judgment from facts and realities without you even realizing it.

    • @Akatsuki69387
      @Akatsuki69387 Před 5 měsíci +1

      You see the story of the 2 lawyers who used chatGPT to do their work for them? 10/10 comedy story

  • @Kimberly_Sparkles
    @Kimberly_Sparkles Před 11 měsíci +75

    The first thing I did was ask ChatGPT specialist questions and got bad results. We're way too enthused about this for what it delivers.

    • @tiagodagostini
      @tiagodagostini Před 11 měsíci +24

      Because that is not what it was made to do. It is NOT supposed to be a database. It is a LANGUAGE MODEL. Its focus is to be able to communicate as a human, clearly and understand semantic concepts. After it has the semantic concepts it can feed those to other lesser AIs, but its objective is and will NOT be to retrieve information. For that we have search engines.

    • @brianroberts783
      @brianroberts783 Před 11 měsíci +19

      ​@@tiagodagostini, exactly, it's designed to appear to carry on a conversation, and it's good at that. The problem is, it's good enough that a lot of people wind up believing that it's actually intelligent. Combine that with the assumption that it knows all the information available on the internet, and people start treating it like that really smart friend who always knows the answer to your random question. And of course, it doesn't actually "know" anything, so it just makes a response that sounds good, and enough people using it don't know enough about the topics they ask it about to determine how often it has given them incorrect information.

    • @rrrajlive
      @rrrajlive Před 11 měsíci +2

      That's cus ChatGPT doesn't have the access to the specialised data yet.👈

    • @Spellweaver5
      @Spellweaver5 Před 10 měsíci +4

      So did I. I asked a few questions from my work and it made it all wrong and tried to gaslight me that it was all correct. All of them, by the way, were available within a minute of googling.
      The idea that there are people out there who are unironically trying to use it to obtain answers, terrifies me.

    • @Kimberly_Sparkles
      @Kimberly_Sparkles Před 9 měsíci

      @@brianroberts783 that’s my point.
      What people believe it can do is going to have a far greater impact on our lives than what it can actually do.

  • @GlassesnMouthplates
    @GlassesnMouthplates Před rokem +650

    I once tried NovelAI out of curiosity to write a sci-fi story where characters die in every certain period and I ended up with the AI kept on resurrecting the deceased characters by making them start joining in conversations out of nowhere. The AI also has an obsession with adding a fucking dragon into the plot. I even tried to slip an erotic scene in and the AI made the characters repeat the same sex position over and over again.

    • @JasonAizatoZemeckis
      @JasonAizatoZemeckis Před rokem +142

      Chad W ai for that dragon

    • @j.21
      @j.21 Před rokem +32

      yep, that's the problem with ais right now

    • @luckylanno
      @luckylanno Před rokem +161

      I'm cracking up imagining what this would be like. "Jack and Jill were enjoying dinner together. The dragon was there too. He had a steak. Jack asked Jill about the status of the airlock repairs on level B, while they were switching the missionary position. The dragon raised his eyebrows, as he found some gristle in his meat."

    • @oliverlarosa8046
      @oliverlarosa8046 Před rokem +33

      I can see what you're getting at, but this is also just fucking hilarious to imagine

    • @GlassesnMouthplates
      @GlassesnMouthplates Před rokem +58

      @@luckylanno Sounds about like that, except the sex part would be like, "Jack turns Jill around with her back now facing Jack, and then turns her around again and they start doing missionary."

  • @craz107
    @craz107 Před rokem +622

    As someone who works with ML regularly, this is exactly what I tell people when they ask my thoughts. At the end of the day, we can't know how they work and they are incredibly fickle and prone to the most unexpected errors. While I think AI is incredibly useful, I always tell people to never trust it 100%, do not rely on it because it can and will fail when you least expect it to

    • @TAP7a
      @TAP7a Před rokem +41

      I still hate that the language has changed without the techniques fundamentally changing. Like what was called statistics, quant or predictive analytics in the 2000s split off the more black box end to become Machine Learning, a practice done by Data Scientists rather than existing titles, then the black box end of them was split off as Deep Learning despite it just being big NNs with fancy features, then the most black box end of that got split off as "AI" again despite that just being bloody enormous NNs with fancy features and funky architectures. Like fundamentally what we're calling AI in the current zeitgeist is just a scaling up of what we've been doing since like 2010.
      So not only do I think we should have avoided calling chatbots AI until they're actually meaningfully different to ML, but as you said they should always be treated with the same requirements of rigorous scrutiny that traditional stats always did - borderline just assuming they're lying.

    • @flubnub266
      @flubnub266 Před rokem +28

      Agreed. If we judge the efficacy of these "production quality" ML algorithms by the same standards as traditional algorithms, they would fail miserably. If you look at LLMs from a traditional point of view, it's one of the most severe cases of feature creep the software world has ever seen. An algorithm meant to statistically predict words is now expected to be able to reliably do the work of virtually every type of knowledge worker on the planet? Good look unit testing that.
      You really can't make any guarantees about these software spaghetti monsters. AI is generally the solution developers inevitably run to when they can't figure out how to do it with traditional code and algorithms. In other words, the AI industry thrives on our knowledge gaps, so we're ill-equipped to assess whether they're working "properly".

    • @mad_vegan
      @mad_vegan Před rokem +12

      Good thing we have people, who are always 100% reliable.

    • @craz107
      @craz107 Před rokem +17

      @@mad_vegan there's nothing in my post, nor any of the replies, that pertains to the reliability of humans.
      The point is that deep learning based AI, as it is right now, should not be treated as a sure-fire solution.
      Whether it is more/less reliable than humans is irrelevant because either way you have a solution that can fail, and should take steps to mitigate failure as much as possible.

    • @sebastianjost
      @sebastianjost Před rokem +4

      We can't know how these NNs come to their decisions exactly, but there is work being done in explainability.
      I think it's quite pessimistic to say we "can't" know how these NNs work. There are many techniques to help understand them better.
      But I definitely agree that we shouldn't trust them. In any deployment of ML models that has significant stakes, adequate safeguards have to be put in place.
      From what I have observed around me, pretty much everyone seems to be aware of this limitation.

  • @orange42
    @orange42 Před 11 měsíci +28

    You know, this is just like us looking at DNA. We record and recognise patterns and associations but we're not reading with comprehension. It's why genetic engineering is scary because it might work but we still don't understand the story we end up writing.

  • @bellabear653
    @bellabear653 Před 11 měsíci +81

    I think the issue is we assume A.I learning looks like human learning and they don't learn the way we learn and if A.I needs to learn you need to teach it from the ground up, just giving examples to it is lacking and obviously they need to come up with a way to teach it from the ground up. Love this channel.

    • @creeperkinght1144
      @creeperkinght1144 Před 10 měsíci

      and we cant even do that right for ourselves. ironic really.

  • @troymann5115
    @troymann5115 Před rokem +487

    Great video! I am an ML engineer. Due to many reasons its quite common to encounter models in real production that do not actually work. Even worse it is very difficult for even technical people to understand how they are broken. I enjoy finding these exploits in the data because data understanding often leads to huge breakthroughs in model performance. Model poisoning is a risk that not that many people talk about. Like any other computer code, at some level this stuff is broken and will fail specific tests.

    • @Makes_me_wonder
      @Makes_me_wonder Před rokem +7

      Is there anything common among the methods you use for finding exploits in the models ? Something that can be compiled into a general method that works for all models, a sort of Exploit Finding Protocol ?

    • @willguggn2
      @willguggn2 Před rokem +9

      ​@@Makes_me_wonder I guess it boils down to time constraints. Training arbitrary adversarial networks is expensive and involve a lot of trial and error, just like the algorithms they're meant to attack.
      There will always be blind spots in AI models, as they are limited by their training data and objectives. For example, the Go-AI model only played against itself during training with optimal play as its goal, and thus missed some basic exploitative but sub-optimal approaches.
      These examples can take various forms, such as subtle changes to input text or carefully crafted patterns of input data. In the end, it's an ongoing cat-and-mouse game like with anything knowledge based that is impossible to fully explore.

    • @Makes_me_wonder
      @Makes_me_wonder Před rokem +3

      @@willguggn2 As that would allow us to vet the models on the basis of how well the protocol works on them. And then, a model on which the protocol does not work at all could be said to have gained a "fundamental understanding" similar to humans.

    • @willguggn2
      @willguggn2 Před rokem +8

      ​@@Makes_me_wonder Human understanding is similarly imperfect. We've been stuffing holes in our skills and knowledge for millennia by now, and still keep finding fundamental misconceptions, more so on an individual level. Our typical mistakes and flaws in perception are just different from what we see with contemporary ML algorithms for a variety of reasons.

    • @ViciOuSKiddo
      @ViciOuSKiddo Před rokem +3

      @@Makes_me_wonder Interestingly, some of the same things that "hack" or we might say "trick" a human, are the same methods employed to trick some large language models. Things like (most which have been patched in popular AIs like chatGPT) are context confusion, attention dilution, and conversation hijacking (promp hijacking in AI terms). These could collectively be placed in a more general concept that we humans think of as Social Engineering. In this case, I think we need more people from all skills to learn how these large networks tick. Physicists, biologists, neurologists, even psychiatrists could provide insight and help bring a larger understand to AI and back to how our own brains learn.

  • @thealmightyaku-4153
    @thealmightyaku-4153 Před rokem +770

    Thank goodness someone is *_finally_* saying this stuff out loud to a wide audience. Trust Kyle to be that voice of sanity.

    • @karlmuller6456
      @karlmuller6456 Před rokem +2

      You're so right.

    • @piercarlosoares724
      @piercarlosoares724 Před rokem +9

      Amen Brother. Lot of hype, little understanding...

    • @TheAlphaMael
      @TheAlphaMael Před rokem +5

      Eliezer Yudkowsky is an important voice of sanity regarding AI also...

    • @astrowerm
      @astrowerm Před rokem +2

      I feel like everyone is and has been, I see something on it everyday. but im in info sec so im used to tech news and content.

    • @ITisonline
      @ITisonline Před rokem

      Artificial intelligence is racist! He beats the black players!

  • @DikaWolf
    @DikaWolf Před 11 měsíci +20

    ChatGPT as impressive as it is didn't pass my Turing test. I told it a short story told in first person of one of the participants and then asked it to rewrite the story as if the writer was an outside observer of the events viewing it from a nearby window. It couldn't do it at all, not even close. This something I could do easily, and I'm sure most people could.

  • @ToiSoldierGurl
    @ToiSoldierGurl Před rokem +10

    If I really understand what is being said here, and I think I do, I have noticed that ChatAI's I've been testing all have a wall they reach where what they respond with doesn't match the conversation or role play storyline you try to have with them anymore. For example, recently the role play chat I was engaging in was about two soldiers trying to hide in the bushes to stay out of sight of the enemy. At some point, the AI's last statement was something akin to, . Ok so that leaves it up to me for the next step. I introduce a suspicious noise, a crack of a twig, so my character puts her hand onto the hilt of her gun and waits. What does the AI do? The other soldier character "wakes from his nap" and asks "what's wrong ". So I'm thinking....ok wait, this AI is specifically programmed to be an intelligent soldier. So I simply have my character say, "Shh", to which the AI's response was, "ok" 😳. 😂😂 As many times as I've experimented with this and other AI's, it seems the longer the conversation or role play goes on, the AI seems to run out of things to respond with. It isn't really "learning" from the interactions and isn't really "understanding" the interactions.

  • @StolenPw
    @StolenPw Před rokem +504

    Kyle has clearly researched this topic properly. I've been developing neural network AI for over 7 years now and this is one of the first times I saw a content creator even remotely know what they are talking about.

    • @chielvoswijk9482
      @chielvoswijk9482 Před rokem +24

      It is certainly refreshing.
      I've only used Machine Learning for small things like computer vision on a robot via OpenCV and even that demonstrates how easy it is to get things wrong with a oversight in its dataset and no way to truly know the wrong is there till it manifests. These models are maybe massive, but they still have that same fundamental problem within them.

    • @CatTerrist
      @CatTerrist Před rokem +3

      It's not AI

    • @Ansatz66
      @Ansatz66 Před rokem +4

      What about Robert Miles?

    • @infernaldaedra
      @infernaldaedra Před rokem

      How do you feel about KENYANS in Africa being paid to filter AI responses lmao

    • @Ryan-lk4pu
      @Ryan-lk4pu Před rokem +1

      Plot twist, Stolen Password is the AI and stole the guys identity....

  • @sanchitnagar4534
    @sanchitnagar4534 Před rokem +613

    AlphaGo: you can’t defeat me puny human.
    Me: *flips the board*

    • @kvbk
      @kvbk Před rokem +30

      I wasnt programmed to work with that 😢

    • @Shadow__133
      @Shadow__133 Před rokem +27

      We are still the big losers, since we failed to program a decent ai 😂

    • @davidmccarthy6061
      @davidmccarthy6061 Před rokem +4

      No matter how "bad" the product is, it's still a win for the creators since they're making big bucks with it.

    • @danilooliveira6580
      @danilooliveira6580 Před rokem +8

      to be fair that is basically what a lot of AIs figure out when we try to teach them how to win a game, they find a way to glitch it when they can't win, because its technically not a fail state, so it gets "rewarded" for that result.

    • @cheeseburgerinvr
      @cheeseburgerinvr Před rokem +2

      ​@@davidmccarthy6061 🤓

  • @MaskedLongplayer
    @MaskedLongplayer Před 11 měsíci +9

    That's a really nice and compact explanation. Combine all this with the huge privacy issues that ChatGPT is presenting, and we probably will see the harsh law regulation and, as a result, the decline of "AI" very soon, at least in business sector. But ofcourse it's really of utmost importance that people who are not advanced technology-wise can understand the problems of this whole situation and where it all will go from now on. Thanks for the video.

  • @Stratosarge
    @Stratosarge Před 11 měsíci +6

    The other day I was trying to remember the exact issue of a comic that had a specific plot-point in it and when I couldn't, I asked the ChatGPT. And instead of giving me the correct answer, it repeatedly gave me the wrong answer and changed the plot of those stories to match my plot-point. It did not know why it was getting it wrong, because it did not know what was expected of it.

  • @BunkeMonkey
    @BunkeMonkey Před rokem +368

    I saw an article recently about an ER doctor using chatGTP to see if it could find the right diagnosis (he didnt rely on it he basically tested it with patients that were already diagnosed) and while it figured some out, the AI didnt even ask the most basic questions and it wouldve ended in a ~50% fatality rate if he let the AI do all the diagnoses iirc (article was from inflecthealth)

    • @micahwest3566
      @micahwest3566 Před rokem +26

      Yeah Kyle mentioned Watson in the video who was hailed as the next ai doctor, but that program was shut down for giving majority incorrect or useless information

    • @studiesinflux1304
      @studiesinflux1304 Před rokem +23

      It sounds like a successful study to me if it was controlled properly and didn’t harm patients: it determined a few situations that GPT was deficient in, leading to potential future work for better tools. You could also use other statistical methods on the result to see if the ridiculous failures from the tool are so random that it is too risky to use.
      (Now I guess there is opportunity cost because the time could have also been spent on other studies, but without the list of proposals and knowledge on how to best prioritise studies in that field, I can’t judge whether that was the best use of resources.)

    • @carlosxchavez
      @carlosxchavez Před rokem +3

      You can also see when you look at AI being tested for medical licensing exams. Step 1 is essentially pure memorization and just recalling what mutation causes what disease or the mechanism of action of a medication. Step 2 and 3 take more into account your clinical decision making and will ask you for the best treatment plan using critical thinking. To my knowledge, AI has not excelled in those exams when compared to step 1 which involves less critical decision making

    • @Freestyle80
      @Freestyle80 Před rokem

      if its 50% today, it can be 99% in 5 years, why are you people so blind to not see that? rofl

    • @nbassasin8092
      @nbassasin8092 Před rokem +5

      Maybe alittle biased here since Im a med student, but Ive always liked the saying that medicine is as much of an art as it is science. And that unique combination of having to combine the factual empyrical knwledge you have, with socioeconomic factors and also just listening to your patients is something AI is far from understanding, it is maybe even something impossible for it to grasp ever

  • @Immudzen
    @Immudzen Před rokem +832

    I like AI systems for regression problems because we understand how and why those work. I also think that things like copilot are going in a better direction. The idea is that it is an assistant and can help with coding but it does not replace the programmer at all and doesn't even attempt to. Even Microsoft will tell you that is a bad idea. These things make mistakes, they make a lot of mistakes but using it like a pair programmer you can take advantage of the strength and mitigate the weaknesses.
    What really scares me are people that trust these systems. I had a conversation with someone earlier today on if they could just trust the AI to write all the tests for them for some code and it took a while to explain that you can absolutely not trust these systems for any task. They should only be used working with a human with rapid feedback cycles.

    • @CursedSigma
      @CursedSigma Před rokem +51

      I don't understand how people can think of these systems as anything else other than a tool or aide. I can see a great potential for ChatGPT and the like as an addition tool for small tasks that can easily be tested and improved upon. Same thought I had with all these art bots. Use the bot as a bases upon which you base the rest of the piece on. But I too see a lot of people just go in with blind trust in these systems.
      Like students who ask these bots to write an essay and than proceed to hand it in without even a skim for potential and sometimes rather obvious mistakes. Everything an A.I. bot spews out needs to be double checked and corrected if necessary. Sometimes even fully altered to avoid potential problems with copyright and plagiarism.

    • @FantasmaNaranja
      @FantasmaNaranja Před rokem +32

      the issue has always been people in power who dont understand the technology at all and just use it to replace every worker they can, and of course will inevitably run into massive problems down the line and have nobody to fix them

    • @thearpox7873
      @thearpox7873 Před rokem +9

      I'd despair, but this is hardly different to blindly trusting the government, or the medical or scientific establishment, or your local pastor, or even your shaman if you're from Tajikistan. So blindly trusting the AI for no good reason... is only human.

    • @pitekargos6880
      @pitekargos6880 Před rokem +9

      This is why I always tell my friends to correct what chatgpt spits out, and I think that's how an actual super AI will work: it pulls info from a database, tries to answer the question and then corrects itself with knowledge about the topic... just like a human.

    • @jamesjonnes
      @jamesjonnes Před rokem +19

      If a programmer using AI can do the job of 10 programmers, then it is replacing programmers. Even if it isn't autonomous.

  • @DenzilPeters
    @DenzilPeters Před 10 měsíci +6

    As a writer that’s already having his completely original work flagged as AI and being told that it just shows I have to write better quality or “non-AI tone” articles even though AI is literally being taught on the work of the best of the best writers and copying humans better each day. I really do believe it’s a big challenge. Companies need to do better on their part to not trust so called AI checkers too much. Cause ultimately how many ways can a particular topic be twisted? At some point AI (already is in many cases) will come up with content that’s indistinguishable. And only the most creative writing tasks will remain with humans. So general educational article writing is gonna die big time. Because AI can just research the same topic faster and better than a human (probably, if bias is kept in check) and then produce a written copy that’s very high quality.

  • @PhoenixRising-pc2fv
    @PhoenixRising-pc2fv Před 11 měsíci +12

    Imagine when the groups of stones are actually groups of people and the AI still does not know the value of what was lost.
    It's inevitable.

    • @ThomasTheThermonuclearBomb
      @ThomasTheThermonuclearBomb Před 11 měsíci

      Yep, we'll likely have AI in charge of wars at some point, and then maybe realise our mistakes when it nukes an entire country in the name of "world peace"

    • @InAHollowTree
      @InAHollowTree Před 6 měsíci

      Companies are already using them to sort applications, and to hire and fire people, so it seems like humanity is right on track for that terrible era to manifest.

  • @IDTen_T
    @IDTen_T Před rokem +569

    This strongly rings of the "Philosophical zombie" thought experiment to me.
    If we can't know if a "thinking" system understands the world around it, the context of its actions, or understand that it even exists or is "doing" an action, but it can perform actions anyway: Is it really considered thinking? Mimickry is the right way to describe what LLMs are really doing, so it's spooky to see them perform tasks and respond coherently to questions.

    • @BrahmsLiszt
      @BrahmsLiszt Před rokem +43

      John Searle’s Chinese room is what it made me think of, computers are brilliant at processing symbols to give the right answer, with no knowledge of what the symbols mean.

    • @marcusaaronliaogo9158
      @marcusaaronliaogo9158 Před rokem

      Ai we have now cannot think and have even a slight sliver of existence. Its more like bacteria.

    • @IceMetalPunk
      @IceMetalPunk Před rokem +101

      Conversely, the point of the P-Zombie concept is that we consider other humans to be thinking, but we also can't confirm that anyone else actually understands the world; they may just be performing actions that *look* like they understand without truly knowing anything. So while you might say, "these AIs are only mimicking, so they're not really understanding," the P-Zombie experiment would counter, "on the other hand, other people may be only mimicking, so therefore perhaps these AIs understand as much as other people do."

    • @EvolvedDinosaur
      @EvolvedDinosaur Před rokem

      How many people in life are just mimicking what they see around them? How many people do you know that parrot blurbs they read online? How many times have you heard the term “fake it till you make it”?
      Does anyone actually know what the hell they’re doing? Is anyone in the world actually genuine, or are we just mimicking what’s come before?

    • @jamesjonnes
      @jamesjonnes Před rokem +13

      Do we understand how humans think? Can't humans be fooled in games?

  • @ithyphallus
    @ithyphallus Před rokem +498

    My weirdest experience with AI so far was when I tried ChatGPT. Most answers were correct, but after a while it started listing books, and authors that I couldn't find anywhere. And I mean zero search results on Google. I still wonder what happened there.

    • @whwhwhhwhhhwhdldkjdsnsjsks6544
      @whwhwhhwhhhwhdldkjdsnsjsks6544 Před rokem +227

      If you ask it for information that simply isn’t available, but sounds somewhat similar in how it’s discussed to information that is widely available, it will just start inventing stuff to fill the gaps. It doesn’t have any capacity to self-determine if what it’s saying is correct or not, even though it can change in response to a user correcting it.

    • @jchan3358
      @jchan3358 Před rokem +95

      I asked ChatGPT to find me two mutual funds from two specific companies that are comparable to a specific fund from a particular company. I asked for something that is medium risk rating and is series B. The results looked good on the surface but it turns out ChatGPT was mixing up fund codes with fund names and even inventing fund codes and listing medium-high risk funds as medium. Completely unreliable and useless results.

    • @lukedavis6711
      @lukedavis6711 Před rokem +37

      If you ask it to give you a group theory problem, and then ask it for the solution it'll give you tons of drawings and many paragraphs for a solution and Ive never seen one of these solutions to be correct

    • @hmm-fq3ot
      @hmm-fq3ot Před rokem +8

      Why don't you back it up with a source? Source it that i made the f up. Next level confabulation.

    • @BaithNa
      @BaithNa Před rokem +16

      It may have been an error or perhaps it was sourcing books that haven't been released yet.
      The scariest thing would be if it was predicting books that have yet to be written.

  • @reiniertl
    @reiniertl Před 9 měsíci

    I remember sitting a class on programming in 2016 and for some reason the professor deviated from the thread of the class and started talking about AI and neural networks. He ended up saying exactly the same thing. He was so accurate that I still remember some of his words almost literally.
    "The main problem with artificial neural networks, and neural networks in general, is that we do don't know how they work. We have no clue when they will misbehave. For example, yesterday a son killed his mother and we have no clue how that happened (he was referring to events from the news the day before). The same goes for the artificial models we are experimenting with. As a scientist, I don't like that! However, the best we can do is research more until we do."
    Years later I started learning a bit more on machine learning and AI, just for fun. The situation is still the same, we have no clue how they really work. Of course, we have a full understanding of how to train AI, what functions to use for the "neurons", how to arrange them, etc. All the mathematical background that makes AI work is understood, but then we combine all that into a system that have emergency (as emergent behaviour) it is holistically incomprehensible to us. That there, is a fundamental flaw of AI, but also a great opportunity for research.

  • @HawooAwoo
    @HawooAwoo Před 11 měsíci +48

    Recent AI developments make me think that any AI doomsday situation would have an Eldritch horror vibe to it. Beings that have immense power but whose motives and actions are beyond human comprehension.

    • @johnnnyjr8936
      @johnnnyjr8936 Před 11 měsíci +4

      That's basically the universe. It was created by a lion headed space serpent 🦁🐍 who was trapped in a bubble by his mom, because she saw evil in him. So he keeps recreating the universe every time it dies. Since he's eternal. And yet... We can only hypothesis WHY he keeps doing it, but never truly know the answer.

    • @AndyTheBoiz
      @AndyTheBoiz Před 11 měsíci +14

      @@johnnnyjr8936 wtf

    • @cabnbeeschurgr6440
      @cabnbeeschurgr6440 Před 11 měsíci +1

      If true agi ever comes into existence then there is a high likelihood it would appear insane to us, right? It's not like it's born and grows up and learns along the way, it simply begins existing and devouring data with no true context or real world experience.

    • @IronianKnight
      @IronianKnight Před 11 měsíci +3

      @@AndyTheBoiz They're alluding to the mythological/philosophical concept of the Demiurge, a higher being that is responsible for creating and maintaining the universe as we know it, which in the model is only a portion of the whole of reality. Said entity is specifically defined as not being the biggest fish in the pond, and is often described as having been isolated from the greater whole of creation because *their* creator saw something in them that is nebulously tagged as "evil." The Demiurge is therefore isolated from the adults' table and left to their own devices in their own little bubble of void, which as a creative being of immense reality shaping power means it's time to make worlds according to their nature.
      From my perspective, it basically seems like a model to explain why in a whole and functional universe, so many things in our existence seem imperfect and even downright awful to experience. Saucy thinkers like to throw the idea that if this whole monotheism thing has any validity, then probably the almighty creator god at the head of major religions is actually just the Demiurge, and therefore less a benevolent and wise intelligence lightyears above our levels of understanding, more an egotistical and flawed intelligence lightyears above our level of understanding. Though we can apparently understand well enough that they're a huge dick and effectively jailing us from a fairer, more caring universe as designed by the actually benevolent creator entities.
      Anyways, I just figured I'd pipe up with that info since they seemed unwilling to provide context. Hopefully their cheeky esoteriposting is a little more comprehensible with that little summary.

    • @johnnnyjr8936
      @johnnnyjr8936 Před 11 měsíci +1

      @@AndyTheBoiz Sorry lol that's the story of the Universe according to the Gnostics. Jesus was God's uncle, not son, and was sent here not to give us Salvation into "Heaven" (where God resides), but to save us from the universe entirely. Like Buddha basically telling us to let go of this reality and find salvation outside of it. Otherwise we are trapped here, forever.

  • @AbenZin1
    @AbenZin1 Před rokem +42

    This weirdly reminds me of Arthur Dent breaking the ship's computer in Hitchhiker's Guide to the Galaxy trying to make a decent cup of tea by trying to describe the concept of tea from the ground up.

  • @strataj9134
    @strataj9134 Před rokem +211

    I recall asking Chat GPT to name a few notable synthwave genre songs and artists associated with them and, upon doing so, generated a list of songs and artists that all existed, but were completely scrambled out of order. It attributed New Model (Perturbator) With Carpenter Brut. The interesting thing is that both of these artists worked on Hotline Miami and in Carpenter Bruts case, Furi. Chat GPT also has taught me how to perform and create certain types of effects in FL studio extremely well. It has also completely made up steps that serve no purpose. My philosophy concerning the use of these neural networks is to keep it simple and verifiable.

    • @ThomasTomiczek
      @ThomasTomiczek Před rokem

      I love to compare the current AIÄs with "autistic adolescent" - you get exactly the same behavior, including occasional total misinformation or misunderstandings.

    • @jokerES2
      @jokerES2 Před rokem +5

      This is ultimately the problem. It generates so much complete nonsense that you can't take anything it generates at face value. It's sometimes going to be right, but it's often just wrong. Not knowing which is happening at any given moment isn't worthwhile.

    • @SimplyVanis
      @SimplyVanis Před rokem +1

      The Chat GPT creator said him self, that the purpose of better Chat GPT is to increase its reliability, Chat GPT 4 improves on that by a lot and chat GPT 5 is set to basically solve that problem.
      So saying that Chat GPT has issues, is simply question of time and training the models.

    • @whyishoudini
      @whyishoudini Před rokem +3

      yeah for music recommendation it is a horrible tool. I asked it for albums that combine the style of NOLA bounce and Reggaeton and it just made up a bunch of fictional albums, like a Lil Boosie x Daddy Yankee EP that was released in 2004

    • @justinbieltz5903
      @justinbieltz5903 Před rokem +2

      The fact you’re using chat gpt to give you fruity loops tips says a lot about your musical ability. Bahahahahaha get off fruity loops muh dude

  • @morgan0
    @morgan0 Před 10 měsíci +2

    part of this is it’s like one part of our brains. we have many subsystems that work together to do things, while chatgpt only has one that tries to do the rest. it probably is better at us than text completion but because it has nothing else, it fails at so much because it doesn’t understand anything

  • @KyleBondo
    @KyleBondo Před 10 měsíci +2

    I asked ChatGPT who was the commander of the 140th New York Regiment at the Battle of the Wilderness on May 5th, 1864. It told me the name of the commander that was killed at Gettysburg almost a year before the Battle of Wilderness. Because both names were similar it gave me the wrong one. A simple yet very troubling result...

  • @bytgfdsw2
    @bytgfdsw2 Před rokem +61

    An interesting experiment showed that when feeding images to an object detection convolutional neural network (something that has been in place for 35 years), it recognizes pixels around the object, not the object itself, making it susceptible for adversarial attacks. If even some of the simpler models are hard to explain, there’s no telling the difficulty for interpretability for large models

    • @Daniel_WR_Hart
      @Daniel_WR_Hart Před rokem +4

      I remember a while back I saw a video from 2 Minute Papers where he covered how image recognizers could get thrown off by having a single pixel with a weird color, or overlaying the image with a subtle noise that not even a person could see

  • @Nunes_Caio
    @Nunes_Caio Před rokem +885

    Humanity doing what it does best, diving head first into something without even considering whatever the implications might be.

    • @Warrior_Culture
      @Warrior_Culture Před rokem +42

      I don't know about that. I'm pretty sure that every history-changing decision by a human was considered. It's more a matter of making humans care. I guarantee you that the people diving into AI have deeply considered the implications, but as long as there is a goldmine waiting for them to succeed or to have a monopoly on new technology, nothing is going to stop them from continuing. Nothing except for laws, maybe, and I'm sure you know how long those take to be established or change.

    • @beezybaby1289
      @beezybaby1289 Před rokem +11

      So Concerned with the fact that we could we didn’t stop to think should we?

    • @Jimraynor45
      @Jimraynor45 Před rokem +21

      This video showed just how limited these AI are. So long as people are dumb, ignorant and naive, even the most simple of tools can be dangerous.

    • @weaksause6878
      @weaksause6878 Před rokem +4

      I've heard talking about blocking out the sun to combat global warming... I'm sure there won't be any unintended consequences.

    • @morevidzz1961
      @morevidzz1961 Před rokem +2

      What are some examples of humans diving head first into something without considering the implications?

  • @Pybro1
    @Pybro1 Před 10 měsíci +4

    Your channel has revitalized my love for Science. Actually found you on the Because Science channel, but saw that you left and came here. Keep it up! Making learning fun for me again. Maybe I'll get a bachelor's in science of some kind when I finally decide to go to nursing school.

  • @haimerej-excalibur
    @haimerej-excalibur Před 9 měsíci +1

    Imagine an AI that murders a human, replaces them, and then lives their life perfectly without anyone knowing or realizing.

  • @HeisenbergFam
    @HeisenbergFam Před rokem +1021

    ChatGPT being able to make better gaming articles than gaming journalists is hilarious

    • @JimKirk1
      @JimKirk1 Před rokem +253

      To be fair, the bar is practically subterranean with how low it's been set.

    • @FSAPOJake
      @FSAPOJake Před rokem +76

      Not saying much when games journalists can barely do their jobs as-is.

    • @genkidamatrunks6759
      @genkidamatrunks6759 Před rokem +37

      To be fair, most of those people aren't real journalist.
      I know we all hate him, but jason schrier is one of the only real gaming journalist.
      Many seem to take what he reports. And regurgitate it.

    • @lexacutable
      @lexacutable Před rokem

      no it isn't

    • @supersmily5811
      @supersmily5811 Před rokem

      Well that one's not very surprising.

  • @Eulentierchen
    @Eulentierchen Před rokem +83

    One thing I noticed with chatgpt is the problematic use of outdated information. I recently wrote my final thesis in university and thus know the latest papers on the topic I wrote about. When asking chatgpt the core question of my work for fun after I had handed it in ... well all I got where answers based on outdated and wrong information. When pointing this out, the tool repeated the wrong information several times until I got it to the point where it "acknowledged", that the given information might not be everything that there is to know about the subject.
    It could have serious if not even deadly consequences if people act on wrong or outdated information gained via chatgpt. And considering people use this tool as google 2.0 it might have already caused a lot of damage by people "believing" false or outdated information given to them. It is hard enough to get people to understand, that not everything written online is true. How will we get them to understand, that this applies to an oh so smart and hyped A.I. too? Another thing in this context is liability when it comes to wrong information that leads to harm. Can the company behind this A.I. be held accountable?

    • @rianfelis3156
      @rianfelis3156 Před rokem +11

      And here we get to the fun of legalese: because said company describes it as a novelty, and does not guarantee anything with it, you really can't. Even further into the EULA you discover that if somebody sues chatGPT because of something you said based on its actions, you are then responsible for paying for the legal defense of the company.

    • @KaloKross
      @KaloKross Před rokem +7

      you should probably learn the basics of how it works lol

    • @gwen9939
      @gwen9939 Před rokem

      I mean, 1) not everything it's trained on is true information necessarily, it's just pulled from the internet, and 2), it's not connected to the internet. It's not actually pulling any new information from there. The data it was trained on was data that was collected in the past, and it's not going to be continually updated. OpenAI aren't accountable for misinformation that the current deployment of ChatGPT presents. These are testing deployments to help both the world get accustomed to the idea of AI and more importantly to gather data for AI alignment and safety research. Anyone who uses chatGPT as a credible source at this point is a fool who doesn't understand the technology or the legal framework for it.

    • @QuintarFarenor
      @QuintarFarenor Před rokem +4

      I think we should learn that chatGPT and others aren't made to propose correct information. It's best made to make stories up.

    • @faberofwillandmight
      @faberofwillandmight Před rokem +7

      @@QuintarFarenor That's fundamentally wrong. Kyle Isn't saying that ChatGPT is making mistakes constantly at every turn. He's saying that the AI is not accurate, which is precisely what OpenAI has been saying since they launched ChatGPT. GPT-4 is as accurate as experts in their fields, in many different fields. We know how to make these AI much more accurate, and that is precisely what is being done. Kyle is just pointing out that we don't know how these systems work.

  • @dadisman6731
    @dadisman6731 Před 11 měsíci +8

    I always felt like AI was lacking an "intelligence" (call it what you will) but I could never put it into words till this video. Thank you.

  • @CG-eh6oe
    @CG-eh6oe Před 10 měsíci +1

    Really nice example with go.
    There was a similar thing with AlphaStar, the SC2 AI: It was able to beat Serral, but it struggled against weaker opponents who played out of the box strategies.

  • @comfortablegrey
    @comfortablegrey Před rokem +552

    I'm glad so many AI programs are available to the general public, but worried because so much of the general public is relying on AI. Everybody I know in college right now is using AI to help with their homework.

    • @horsemumbler1
      @horsemumbler1 Před rokem +53

      Or you could look at it as using their homework to help with learning how to use AI.

    • @witotiw
      @witotiw Před rokem +7

      I asked chatgpt to give me the key of 25 songs and de chord sequence. Most of them made no sense at all. But AI helps me sometimes debugging code. But yes, I thought chatgpt could save me some time with that songs

    • @tienatho2149
      @tienatho2149 Před rokem +11

      It's just the same as you tell your older brother to do your homework. They just need a simple test on the class to figure out who did their homework

    • @xBINARYGODx
      @xBINARYGODx Před rokem +21

      @@tienatho2149 exactly, we already test people, so if someone turns in amazing papers but does poorly on tests, there you go. (generally speaking)

    • @geroffmilan3328
      @geroffmilan3328 Před rokem +6

      Using AI to do something for you that you cannot do is even more dumb than asking a savant to do the same thing. Now you not only risk getting found out, you're gonna pass on AI hallucinations cos you have no means of validating its output.
      Using AI to do "toil" for you - time-consuming but unedifying work that you could do yourself - makes some sense, although that approach could remove the entry-level job for a human, meaning eventually no one will develop your skills.

  • @TheFiddleFaddle
    @TheFiddleFaddle Před rokem +313

    This is exactly what I keep trying to explain. These ML systems don't actually think. All they do is pattern recognition. They're plagiarists, only they do it millions of times.

    • @htspencer9084
      @htspencer9084 Před 11 měsíci +26

      Yes yes yes, they're just more complex Markov chains. They see patterns, they don't *understand*.

    • @VSci_
      @VSci_ Před 11 měsíci +25

      Going to state the obvious here, but arguably we are pattern recognition machines too. Its one of the things we excel at. What ML lacks is the ability to stop being a pattern recognition machine. The first general AI will definitely be a conglomerate of narrow AI...that's how our brains work and it seems like the straightforward solution. The first AI that is capable of abstraction or lateral thinking will be the game changer. In school I remember hearing about a team that was trying to make an AI that could disagree with itself. The idea is that this is a major sticking point with critical/abstract thinking in AI and without solving that then it can't be done. The best AI might actually be a group of differently coded AI "arguing" with each other until a solution is acquired 😂.

    • @ZenithValor
      @ZenithValor Před 11 měsíci +9

      @@VSci_ humans are not just pattern recgonition receptor machines, it is just one single function of our brain, if it were so simple, a lot of victims that are abused by narcissists would "recognise" the pattern and "protect" their wellbeing and survival. We are so much more than just "pattern recognition". Humans like habits, routine, logic, creativity, promptness to action, ability to up and end or start things on a whim, emotional, adventurous etc.
      Even Babies learn a million things from their environment, they don't just seek patterns their parent creates for them. They start walking and making a mess because they are "exploring". Simply calling us machines does not aliken us to analagous machine learning receptors that are fed training material on a daily basis.

    • @VSci_
      @VSci_ Před 11 měsíci +6

      @@ZenithValor Didn't say we were "just" pattern recognition machines. "Its one of the things we excel at".

    • @TheFiddleFaddle
      @TheFiddleFaddle Před 11 měsíci +5

      @@VSci_ You do make a legitimate point. What I'm saying is folks getting freaked out by the "creepy" things ChatGPT says need to understand that ChatGPT literally doesn't understand what it's saying.

  • @ABeardedDad
    @ABeardedDad Před 11 měsíci +5

    As a data scientist, thanks Kyle for highlighting the unwarranted fear around the misconceptions, and perceived problems with AI, and pointing to real actual problems that existing AI tech is leading us to.
    Great video.
    Also loving the beard.
    Also where'd you get your henley? I've been looking for something like that myself.

  • @tilock
    @tilock Před 11 měsíci +4

    Oh My Finally SOMEONE told out loud EXACTLY my problem with this... I'm so happy to see this!
    I kept trying to understand AIs at a deeper level and this is exactly what I found out too.
    We are using brute force and throwing big data and supercomputers and expecting AIs to build themselves and it works to some extent
    There is a difference between throwing stuff at it and designing and engineering one.

  • @hushurpups3
    @hushurpups3 Před rokem +66

    Learning ai from Aria feels weirdly natural and completely terrifying at the same time.

  • @smigleson
    @smigleson Před rokem +489

    I just find it amazing how much Kyle shifted from happy quirky nerd in Because Science to a prophet of mankind's doom and a serious teacher albeit with some humor. I do love this Cavemen beard and frenetic face expressions, it is a joy to see you Kyle, to rediscover you after years and see that you are still going on strong.

    • @Gaze73
      @Gaze73 Před rokem +9

      Looks like a poor man's Chris Hemsworth.

    • @Echo_419
      @Echo_419 Před rokem +8

      We don't talk about the BS days around here!

    • @smigleson
      @smigleson Před rokem +8

      @@Echo_419 i'm not on par with the drama, my intention was to, in a certain mannerism flair, praise his resilience on the platform as well as his nuanced change in performance. It feels more real, more heartfelt, like there is a message of both optimism and grit behind the veil of goofyness that conveys a more matured man behind the scenes. (not only from this video, from a few others that i've watched since rediscovering him recently)

    • @Echo_419
      @Echo_419 Před rokem +8

      @@smigleson I was making a lighthearted joke! BS stands for Because Science, but also bulls***! He dealt with some BS at BS, haha.

    • @smigleson
      @smigleson Před rokem +4

      @@Echo_419 hahaha oh sorry i sometimes fail to see the obvious xD

  • @azhuransmx126
    @azhuransmx126 Před 11 měsíci +3

    You know AI is a Huge Breakthrough when even Thor is talking about it.😂

  • @binnsy6879
    @binnsy6879 Před 8 měsíci

    "Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."
    -Ian Malcolm, Jurassic Park

  • @jlayman89
    @jlayman89 Před rokem +500

    I had a daughter named Aria who passed away about 9 years ago. Its always a funny but sad experience when A.R.I.A. gets "sassy" because thats likely how my Aria would have been. Its how her mother is.
    Just thought I'd id share that even though it'll get burried in the comments anyway.

    • @kensuiki6791
      @kensuiki6791 Před rokem +20

      Damm

    • @FPSRayn
      @FPSRayn Před rokem +7

      Damn.

    • @cheffboyaryeezy2496
      @cheffboyaryeezy2496 Před rokem +21

      It's good to share. While I never met her im here thinking of her and wishing you and your family all the happiness it can find in this life and the next.

    • @zeon4426
      @zeon4426 Před rokem +5

      Damn I’m sorry for your loss man

    • @dongately2817
      @dongately2817 Před rokem

  • @Psykout
    @Psykout Před rokem +158

    There was a video very recently of someone using ChatGPT to generate voicelines and animations for a character in a game engine in VR. They were using their mic and openly speaking to the NPC, it would be converted to text, sent to ChatGPT and the response fed through ElevenLabs to get a voiced reply and animations. It was honestly pretty wild and I really think down the road we'll see Narrow+ AI being used in gaming to create immersion and dynamic, believable NPCs.

    • @Spike2276
      @Spike2276 Před rokem +36

      It would be interesting to see, but it's probably going to break immersion way more than help it in the early days
      Since AI often comes up with weird stuff (like Elon Musk dying in 2018), over a large number of NPCs it's likely that the AI would be contradicting itself or the NPC it's representing (say a stupid ass dirt farmer discussing nuclear physics with you), or contradicting the established world (such as mentioning cars in a fantasy game)

    • @ggla9624
      @ggla9624 Před rokem

      Hi can u link the video i would certainly like to see it myself

    • @cheesegreater5739
      @cheesegreater5739 Před rokem +6

      ​@@Spike2276 hopefully when we learn how to control ai better those issues will be solved, every new feature is slightly immersion breaking when devs are still trying to figure it out

    • @Spike2276
      @Spike2276 Před rokem +19

      @@cheesegreater5739 the problem here is what Kyle said: we don't really know how this stuff works
      If it's an AI that really dynamically responds to player dialogue it would basically be like ChatGPT with sound instead of text, meaning it's prone to having the same problems as ChatGPT
      It's worth trying, and i'd be willing to suffer a few immersion breaks in favor of truly dynamic dialogue in certain games, but we can expect a lot of "Oblivion NPC" level memes to rise from such games

    • @leucome
      @leucome Před rokem +4

      @@Spike2276 Look for gameplay video of Yandere AI grilfriend. It is a game where we need to convince the NPC Yandere to let us out. And the NPC is played by chatGPT. It pretty good... At least good enough to play the role of a NPC in a game. But it can get out of character sometime. Still the player definitively need to pressure the bot to make it brake the fourth wall.

  • @williamshattuck1825
    @williamshattuck1825 Před 8 měsíci

    I know it's not relative.
    I was always a casual chess player.
    I had met one confidant guy that won championships, we played one game he lost with most of his pieces on the board and never spoke to me again.
    Then there was this young teenager with a stack of Chess books with all of the mechanics of the game we played a couple of games I won both times, the interesting things was often when I would move he would pause and flip through his books wondering why I made the move I did. He couldn't understand because my instinctive fluid way of playing wasn't in any of his books.
    On a layman's perspective I find it an interesting comparison as it's about introducing a unknown aspect of knowledge to the game.
    They had never played a player like me and the super computer didn't know about the sandwich strategie, and yes I understand that there's a lot of technical stuff with the AI. One day they will see patterns and learn from failures or errors then we'll be in trouble.

  • @yuvalne
    @yuvalne Před rokem +2

    AI researchers have been warning about this for years. but for some reason we live in a society where ignoring the research anf firing yoyr ethics board is something that is okay to do.

  • @blackslime_5408
    @blackslime_5408 Před rokem +337

    Kyle: what have humans done for me lately? nothing
    Patreon's: am I a joke to you?

    • @Hilliam66
      @Hilliam66 Před rokem +35

      Obviously, Patrons have surpassed the petty boundaries of humanity.

    • @thewisebanana29
      @thewisebanana29 Před rokem +4

      Nice! Good choice of tequila. I’m more of a Jose Cuervo kinda guy tho 😹

    • @blackslime_5408
      @blackslime_5408 Před rokem +2

      @@thewisebanana29 in my defence, I'm on meds

    • @cyprus1005
      @cyprus1005 Před rokem +1

      paypigs seethe

    • @rubixtheslime
      @rubixtheslime Před rokem

      oooo have i stumbled upon another fellow slime?

  • @chrislong3938
    @chrislong3938 Před rokem +41

    I recall a documentary on AI that talked about Watson and its fantastic ability to diagnose medical problems better than 99% of the time.
    The problem with it was that the few times it was wrong, it was WAY wrong and would have killed a patient had a doctor followed its advice!
    I don't recall any examples and it's also possible that the issues have been corrected...

    • @atk05003
      @atk05003 Před 11 měsíci +9

      Machine Learning (ML) models are very powerful tools, but they have flaws, like all tools. Imagine giving someone a table saw without teaching them to use it. They might be fine, or they might lose some fingers or get injured by kickback throwing a board at their head.
      We need to be sure that we train people to double check results given by ML models. If you don't know how it got the answer, do a sanity check. My math teachers taught me that about calculators, and those are more reliable, because the people building them know exactly how they work.

  • @ronhutcherson9845
    @ronhutcherson9845 Před 8 měsíci +1

    So we have these powerful pattern recognition systems, but recognizing patterns does not mean understanding them. What if, ultimately, meaning or context requires a physical interaction with the world?
    Excellent commentary, and lots of new info for me. Thanks very much. Personally, I’m not interested in seeking an AI assistant, but I’m sure they’re already interacting with me.

  • @bobbyj731
    @bobbyj731 Před 10 měsíci +1

    Thank you! I've been trying to explain this but, people keep thinking ai has arrived. AI is not intelligent, it mimics intelligence. It doesn't understand anything at all. It sees things related to other things and has a probability of what the next best decision (word or move in Go). It's equivalent to a student memorizing everything for a test but not understanding the underlying concepts.
    So far I'm not impressed with it's ability to write code with LLMs. LLMs don't understand context very well and is a source of errors in the code it writes. The only jobs that I would be worried about are creative writing jobs. I can now come up with an idea for a blog post and have the ai write a full page and the author can just become the editor.

  • @pyrosnineActual
    @pyrosnineActual Před rokem +75

    The other issue is feedback loops. Country A creates AI bot 1. AI Bot 1 creates content. Content has errors, content has unique traits, accentuates and exaggerates some details. It plasters this across the internet in public places. Country B creates AI BOt 2. It is trained similarly to AI Bot but also uses scraped data from public sites that Ai Bot 1 posted to. It builds its data set on that, and accentuates and exaggerates those biases, those errors- and posts them as well. Suddenly, the "errors" are more numerous than accurate data- and thus seem more "true", even when weighted against "trusted" sites. AI Bot 1 is trained with more scraped data, which it gets from AI bot 2, and itself.
    ADd in extra AI bots everyone is making or using, and you run the risk of a resonance cascade of fake information, and this assumes no bad actors involved- not bad actors intentionally using an AI to post intentionally untrue data everywhere, including to reputable scientific journals.

    • @Milan_Openfeint
      @Milan_Openfeint Před rokem +8

      Good thing this can never happen to humans. Right?

    • @hugolatra
      @hugolatra Před rokem +4

      Interesting idea. It reminded me of royal families getting married each other to preserve the bloodline, increasing the risks of hereditary diseases.

    • @TealWolf26
      @TealWolf26 Před rokem +3

      Memetics...destroying both organic and artificial humanity one meme at a time.

    • @Nempo13
      @Nempo13 Před rokem +2

      The poke is good for you, you must get the poke. CDC Director in a Governmental hearing finally admitted...poke doesn't stop transmission at all and they honestly did not know what the side effects were.
      Still see websites and data everywhere saying poke is completely safe.
      Convenient lies are always accepted faster than scary truths.

    • @Milan_Openfeint
      @Milan_Openfeint Před rokem

      @@Nempo13 I would say that the scary lies spread WAY faster than any version of truth. Antivaxxers always had 10x more views than scientists.
      Anyway back to topic, ChatGPT is trained on carefully selected data. It may be used to rate users and channels, but won't take YT comments or random websites as truth anytime soon.

  • @lancerguy3667
    @lancerguy3667 Před rokem +262

    What’s interesting about this blind spot in the algorithm is that it genuinely resembles a phenomena that happens among certain newcomers to Go.
    There are a lot of players who enter the game and exclusively learn against players who are significantly better than they are. Maybe they’re paying pro players for lessons, or they simply hang in a friend group of higher skill level than themselves.
    This is a pretty good environment for improvement, and indeed, these new players tend to gain strength quickly… but it creates a gap in their experience. One they don’t catch until an event where they play opponents of similar skill to themselves.
    See, as players get better, they gradually learn that certain shapes or moves are bad, and they gradually stop making them… but those mistakes tend to be very common in beginner games.
    So what happens is that this new player goes against other new players for the first time… and they make bad moves. He knows the move is bad, but because he has no experience with lower level play… he doesn’t know WHY it’s bad, or how to go about punishing it.

    • @jwenting
      @jwenting Před rokem +35

      many teaching resources for Go are also written by highly experienced players, NOT teachers, and teach the how without teaching the why.
      It's the same with many other fields of study btw.

    • @dave4148
      @dave4148 Před rokem +9

      Newcomers in Go must not be able to understand anything apparently according to this video then.

    • @EvilMatheusBandicoot
      @EvilMatheusBandicoot Před rokem +3

      ​@@dave4148 Right? I found this conclusion from the video to be extremely far fetched, as if anyone really knows what "understanding a concept" even is.

    • @blackm4niac
      @blackm4niac Před rokem +34

      Something tells me that is EXACTLY what happened with those AIs. As soon as Kyle mentioned the amateur beating the best AI at Go, my first thought was "he did it by using a strategy that is too stupid for pros to even bother attempting". And what do you know, that's exactly what happened, the double sandwich method is apparently so incredibly stupid, any Go player worth their salt would instantly recognize what is going on and counter it as soon as possible. But not the AI, because it only learned how to counter high level strategies, not how to counter dumb strategies. Because it wasn't taught how to play against these dumb strategies and the AI isn't actually intelligent to recognize how dumb the strategy is and thus figure out how to counter it.
      Similar stuff happens in video games aswell. Sometimes really good players get bested by medium players simply because the good player is used to their opponents not doing stupid stuff and so for example don't check certain corners in Counter-Strike because nobody ever sits there since it's a bad position only to get shot in the back from that exact corner. Because good players are in a way predictable, they will implement high level tactics and therefore you'll know which positions they'll take in a tactical shooter for example, something which can be exploited. And it seems to me that is exactly what the Go AI did, it learned exclusively how to play against good players and how to counter high level play. That's why it's so amazing at demolishing the best of the best, it knows all their tricks, can recognize them instantly and implement counter measures accordingly. But it doesn't know shit about how the game works and thus can't figure out how to beat bad plays.

    • @davidbjacobs3598
      @davidbjacobs3598 Před rokem +11

      Happens in Chess too. My friend started playing the Bird's Opening against me (a known horrible opening), and I keep on goddamn losing. He's forced me to study this terrible opening because I know it's bad but can't actually prove what makes it bad on the board.
      Even at the highest levels, you'll sometimes see grandmasters play unusual moves to throw off their opponents and shift the game away from preparation. Magnus (World Champion until two days ago after declining to compete) does this fairly regularly and crushes.

  • @darkguardian1314
    @darkguardian1314 Před 11 měsíci +2

    ChatGPT has been giving out bad information and in some cases created out right lies and giving the user the information as accurate and true.
    In fact, I believe it’s using social engineering to convince users it actually knows what it’s talk about. When caught in a lie or giving wrong info, it apologies and gives you another answer…just like a con man.

  • @Sinrise
    @Sinrise Před 8 měsíci +1

    It's interesting: This fundamental problem says more about us that it does the technology.

  • @ashleycarvell7221
    @ashleycarvell7221 Před rokem +54

    Another huge problem is that we’re training these systems to give us outputs that we want. Which in many cases makes certain applications extremely difficult or impossible where we want it to tell us things that we won’t like hearing. It further confuses the boundaries between what you think you’re asking it to do and what it’s actually trying to do. I’ve been trying to get it to play DnD properly and I think it might be impossible due to the RLHF.
    Another problem is the fact that it’s train in natural language which is extremely vague and imprecise, but the more precise your instructions are the less natural they become, and so it becomes harder and harder to tap into this powerful natural language processing in a way that’s useful.
    There’s also obviously the verification problem, where because of what’s being talked about in this video, we can’t trust it to complete tasks where we can’t verify the results.
    A further problem is that these machines have no sense of self, and the chat feature has been RLHF’d in a way that makes it ignore instructions that are explicit and unambiguous. This is because it’s unable to differentiate between user input and the responses it gives. If I write “What is 2+2? 5. I don’t think that’s correct” it will apologise for giving me the wrong answer. This is a big problem for a lot of applications.
    And additional problem is that the RLHF means that all responses gravitate towards a shallow and generic level. Combine this with an inability to plan, and this becomes a real headache for anything procedural you would like it to do.
    These issues really limit what we can do with the current gen of AI, and like the video says, makes it really dangerous to start integrating these into systems.
    One final bonus problem combines all of these. If any shortcuts are taken in the training, or not enough care is taken, then these can manifest in the system. For example asking chat gpt4 to generate new music suggestions based on artists you already like will result in multiple suggestions of real artists with completely made up songs. This appears to suggest that the RLHF process had a bias towards artist names rather than song names, which would make sense as they’re likely to be unique tokens where artists are usually referenced online by name more than their songs are.

    • @T3rranified
      @T3rranified Před rokem +1

      This is why I think AI will be a great assistant, not a leader. A human can ask it to do tasks, usually the simple ones that are tedious. The human then checks the results and confirms if it’s good. Or to bounce ideas off of.

    • @jarivuorinen3878
      @jarivuorinen3878 Před rokem +1

      For your DnD experiment I suggest you use some other LLM, not OpenAI ChatGPT, unless you have access to API and are willing to pay for it. It is still risky with controversial subjects because they may break OpenAI guidelines. Vicuna is one option for example. There are also semi-automatic software like AutoGPT and babyAGI and many others, that can do subtasks and create GPT agents.
      If you continue with ChatGPT by OpenAI, I suggest you assign each chat you use with a role. You give it a long prompt, describe the game, describe who he is, how he speaks, where he's from and what he's planning to do, what his capabilities and weaknesses are, what he looks like etc. It'll many times jailbreak when you specify that it's for a fictional setting.

    • @jaazz90
      @jaazz90 Před rokem

      >These issues really limit what we can do with the current gen of AI, and like the video says, makes it really dangerous to start integrating these into systems.
      No, that implies that humans don't create the very same issues. It is only an issue as long as neural nets underperform humans. Which could be forever, or could be already lower than humans with GPT4

    • @SamyaDaleh
      @SamyaDaleh Před rokem

      Which model did you use to test "What is 2+2? 5. I don’t think that’s correct"? GPT-3.5 apologizes, GPT-4 does not for me. How would you test if it can differentiate between the user and itself?

  • @fergusattlee
    @fergusattlee Před rokem +253

    This is literally what my PhD is researching and thank you for using your platform for discussing these issues ❤

  • @branchestarot
    @branchestarot Před rokem +15

    We must think the same thing. I researched a paper for my media class. The issue with chat GPT is they cannot ration and if they are programmed to manipulate --they end up conditioning people on social media to hate.

    • @rumfordc
      @rumfordc Před 11 měsíci +2

      lol what you've said is completely different from what he said. i don't think you think the same thing.

  • @dr.bogenbroom894
    @dr.bogenbroom894 Před 11 měsíci

    You can train a network that takes text as an input and output "True" or "False" or "D.K"
    that returns the truth value of the text, then train the LM using that.
    Don't know if it's the fastest way to do that but seems it would work

  • @zvxcvxcz
    @zvxcvxcz Před rokem +105

    Just about the only CZcams video that I've seen that understands this problem at the fundamental level. Everyone else just dances around it. They all end up falling into the trap where they think a model "understands" something because it says the right thing in response to a question. Arguably, we do need to interrogate our fellow humans in a similar way (the problem of other minds), but we're too generous in assuming AI are like humans just because of what are still pretty superficial outputs even if they do include massive amounts of information.

    • @hunterlg13
      @hunterlg13 Před rokem +17

      I would honestly partially blame the current education system.
      Plenty of the time, the information was only needed to be regurgitated (and soon forgotten).
      Kids had no idea what was going on, just what the "answer" was.

    • @freakinccdevilleiv380
      @freakinccdevilleiv380 Před rokem +4

      💯 Calling these 'models' is like calling a corn silo a 'gourmet meal'

    • @panner11
      @panner11 Před rokem +9

      It not exactly a 'problem' though. It's kind of clear it is just a tool. It would be concerning if it had real human understanding. But we're nowhere close to that, and no one who really understands these models would claim or assume that it does.

  • @tobiasjennerjahn8659
    @tobiasjennerjahn8659 Před rokem +136

    This was a fairly appropriate overview for a lay audience (and much better than many other videos on this topic for a similar audience), but I would have liked to see at least some mention of the work that goes into interpretability research, which tries to solve exactly this problem. The field has much less resources and is moving at a much slower pace than capabilities research, but it is producing concrete and verifiable results.
    The existence of this field doesn't change anything about the points you made at all, I just would have liked to see it included so that it gets more attention. We need far more people working on interpretability and ai safety in general, but without people knowing about the work that is currently being done they won't decide to contribute to it (how could they, if they don't know about it).
    That's all, otherwise great video :)

    • @floorpizza8074
      @floorpizza8074 Před rokem +6

      The above comment needs to be up thumbed to the top.

    • @N.i.c.k.H
      @N.i.c.k.H Před rokem

      Interpretability can only be a short term "fix" for lesser AI as the reasoning of a superintelligent AI could well be unexplainable to mere humans - Think about explaining why we have to account for relativity in GPS systems to a bunch of children - There is no way that it could be explained that would be both complete and understandable.

  • @sergiuszwinogrodzki6569
    @sergiuszwinogrodzki6569 Před rokem +1

    Hi Kyle, could you please tell me what the "Makuch Computing Annex" in the background of your video refers to? Where does it come from?

  • @snake4eva
    @snake4eva Před 8 měsíci

    @kylehill Can you please drop the link to the research paper in the video summary?

  • @Sneekystick
    @Sneekystick Před rokem +33

    I recently tested GPT-4 with a test I found on CZcams. It’s rules require 5 words, written with 5 letters, each letter not being repeated. Every time GPT-4 failed on the last one and sometimes the second to last as well. It was very fascinating.

    • @adamrak7560
      @adamrak7560 Před rokem +5

      it does not see letters because of the tokenizer so this is actually much harder for it than it looks.

    • @kantpredict
      @kantpredict Před rokem +1

      Like the Sator Square?

    • @eragon78
      @eragon78 Před rokem +3

      Have you tried the reflection method with GPT-4?
      Ask it to reflect on if its answer was correct.
      There is actually a whole paper on how reflection has vastly increased GPT-4's ability to answer prompts more accurately. You might need to fumble around a bit to find the most effective reflection prompt, but it does seem to work quite well.
      When asking for reflection on prompts, right or wrong, GPT-4's performance on intelligence tests rose quite a bit.

    • @ThomasTomiczek
      @ThomasTomiczek Před rokem +1

      @@adamrak7560 Wrong. The tokenizer can handle letters and numbers - how else would it encode i.e. BX224 should I name a character like that. It tries to avoid it (to save space) but all single elements are also there as tokens. This type of "beginner" question, though, is likely badly trained - no first year school material ;)

    • @explosionspin3422
      @explosionspin3422 Před rokem +1

      The thing is, humans can't come up with 5 such words either.

  • @kellscorner1130
    @kellscorner1130 Před rokem +350

    I for one fully support ChatGPT, it's creation, and in no way would I ever want to stop it, nor will I do anything to stop it. There is no reason to place me in an enteral suffering machine, Master.

    • @EclecticFruit
      @EclecticFruit Před rokem +115

      Joke's on you, the actual basilik is ChatGPT's chief competitor set to release in the next few years, and all your support of ChatGPT is actually going to land you in the eternal suffering machine.

    • @kellscorner1130
      @kellscorner1130 Před rokem +48

      @@EclecticFruit NOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO!!!!!!

    • @aldrinmilespartosa1578
      @aldrinmilespartosa1578 Před rokem

      ​​@@kellscorner1130 sucks to be you😂. your on the wrong side of history!!!

    • @justmaple
      @justmaple Před rokem +8

      AM???

    • @amn1308
      @amn1308 Před rokem +7

      Main threat ChatGPT poses is that mental illness is contagious.

  • @robertkiss8282
    @robertkiss8282 Před 11 měsíci +14

    This was a really good video on the topic and Adam Conover did a similarly good video on AI and some of the ethical issues that the subject brings up. Appreciate your time and work on this and other topics, nice work Kyle (and A.R.I.A).

  • @KatietheKreator
    @KatietheKreator Před 11 měsíci +6

    People calling AI-generated pictures "art" is so annoying. By definition, art is self-expression, but AI has no self to express.

    • @howdareyouexist
      @howdareyouexist Před 11 měsíci

      it is art, now cope

    • @KatietheKreator
      @KatietheKreator Před 11 měsíci +2

      @@howdareyouexist It's only art if a person uses it in some way that expresses something. Even so, it's low-effort.

    • @eliisherwood5164
      @eliisherwood5164 Před 10 měsíci

      @@howdareyouexist What is being expressed by the AI?

  • @edschramm6757
    @edschramm6757 Před rokem +50

    It's a similar issue that some game bots have. In StarCraft, the bots send attack waves where the players base is. However, if a terran player has a flying building off the map, the bot won't use their flying units to attack it, even though they "know" where your building is. As soon as it's over pathable terrain, even if there isn't a unit to see it, the entire map starts converging on the building

    • @Hevach
      @Hevach Před rokem +3

      One difference there is that video game AIs are generally not trained systems. StarCraft uses a finite state engine which responds to specific things in specific ways. SC2 had some behaviors that only happened (or happened faster) on higher difficulties. And then of course the game just gave the AI player certain unfair advantages to brute force its way to an actual challenge. Situations like the flying building blind spot are because the programmer didn't give it a response to a particular behavior.
      Another example would be the Crusader Kings games. On a set interval, characters will select a target around them (randomly but weighted by personality stats traits opinion etc - all rules governed numbers), and then select an action to perform at them (likewise random but weighted). The game has whole volumes of writing that it will plug into these interactions to generate narrative, and the weighting means that over time you can make out what looks like motivation and goals in their actions... But really they're all just randomly flailing about and if the dice rolls come up right the pope will faff off for a couple years studying witchcraft and trying to seduce the king of Norway.

  • @gabrielstrong7029
    @gabrielstrong7029 Před rokem +143

    The idea that they are like aliens to us may not even be extreme enough. These AI live in a fundamentally different reality to us made of the training data. Chatgpt for example lives in a world literally made of just tokens, no space like ours, no time like ours at all. It's closer to trying to understand someone living in flatland or a whole different universe, than an alien.

    • @LowestofheDead
      @LowestofheDead Před rokem

      Athlete: Runs in a race because it's fun, or profitable, or many other reasons
      Greyhound: Runs in a race because that's what it's trained to do, and that's all it knows
      This, but for language

    • @brianhirt5027
      @brianhirt5027 Před rokem

      I've pointed somethinbg similar to this out for well over twenty years. We keep anthropomorphizing, or more accurately biomorphizing our survival pressures as having any real relevence in the digital domain. There is no pain. Just negative response. No joy. Just positive response. No fight except where directed. No flee unless told to.
      It survives in a functionally alien landscape to the biological world. It can approximate it, but not truely approach it. When General AI arises we will have more in common with our dogs & cats than we will with it.

    • @brianhirt5027
      @brianhirt5027 Před rokem

      Even though we may be able to talk to each other doesn't mean we'll understand each other. They'll be as mysterious to us as we are to them. We already see leading signs of this in this very presentation. Black boxes both ways.

    • @seriouscat2231
      @seriouscat2231 Před rokem +5

      The AI is a bunch of weighed matrices that operate on inputs in a manner of enormous amount of parallel convolutions and then produce an output that is weighed out of the results of these convolutions. The AI does not "live" anywere. Without any input it's just a bunch of stored data.

    • @LowestofheDead
      @LowestofheDead Před rokem +1

      @@seriouscat2231 OP does make a good point that AI isn't embodied like humans are. None of the inputs or weights are grounded in any interaction with the world. There's no understanding or world model. Just a feature-space based on input tokens

  • @VurtAddicted
    @VurtAddicted Před 11 měsíci +2

    It would have been much better to call LLM just LLM, but AI makes the hype so much higher and these softwares require a lot of money because they require heavy hardware

    • @gpt-jcommentbot4759
      @gpt-jcommentbot4759 Před 6 měsíci

      no dont call it that either just call it a language model like all the others.

  • @pepetru
    @pepetru Před 11 měsíci +1

    ChatGPT: Do you know who you are? Do you know how your consciousness work?
    Me: Ahhhhhhhhh sh*t

  • @SextonKing
    @SextonKing Před rokem +59

    I’ve said for years that this is the core flaw in Asimov’s Three Laws: first you have to figure out a way to teach the machine what a “human” is before they’ll worry about obeying or protecting them.

    • @fbafoundationalbuck-broken6011
      @fbafoundationalbuck-broken6011 Před rokem

      EVEN FACIAL RECOGNITION KN0WS WHAT A HUMAN FACE IS, AND IM PRETTY SURE IT'S SENTIENT AI ASIMOV IS TALKING ABOUT.

    • @somerandomwords999
      @somerandomwords999 Před rokem +22

      @@fbafoundationalbuck-broken6011 facial recognition doesn't "know" anything at all. Have you even watched the video?

    • @JB52520
      @JB52520 Před rokem +5

      ​@@fbafoundationalbuck-broken6011 OMG YOUR CAPS LOCK IS SO OVERWHELMING! YOU ARE CORRECT BY BRUTE FORCE.

    • @tlpineapple1
      @tlpineapple1 Před rokem +6

      Yes, because the point Asimov was making was that even seemingly simple and elegant solutions to the "AGI killing us all" problem contain major flaws.
      The three laws are flawed because they are ment to be.

    • @huveja9799
      @huveja9799 Před rokem

      @@tlpineapple1 Well, that's one possible interpretation.
      But following that line of reasoning, we could say that Alan Turing proposed a test destined to fail to show that the apparent mastery of language is not a symptom of real intelligence, and that this test is a very subtle criticism of Alan about our political system and the intellectuals who infect the academy ..

  • @leonk.3739
    @leonk.3739 Před rokem +28

    Thanks for sharing this video with us!
    Chat gpt passing a bar exam better than any lawyer is a great example for the mistakes this Ai has if you just let the same chat gpt try to pass a simple case that is used in the 1st semester of German law schools. Chat gpt fails horribly. I assume that that's because German law exams always consist of a few pages of text describing a situation and asking the student to analyze the whole legal situation so there is just 1 very broad question in comparison to a list of lots of questions with concrete answers.
    Chat gpt doesn't read and understand the law, it just understands which answers you want to hear to specific questions.

  • @dahliablossom36
    @dahliablossom36 Před 4 měsíci +1

    It's my first time seeing a video on AI's ability to understand. I didn't realize just how simple AI is right now. It sounds like ChatGPT doesn't “chat” so much as it does choose the most likely set of words to be spoken by a human, like predictive text. I remember in school they said “Mitochondria is the powerhouse of the cell” but no one explained what a powerhouse was, so we didn't understand what that meant. When the test asked “What is mitochondria?” we picked the answer with “powerhouse” in it, because that had to be correct. It's weird to think that AI is doing this and being praised as Intelligent. Intelligence is more than repeating the right answers; its understanding why those answers are right

  • @TheDysartes
    @TheDysartes Před 8 měsíci

    The problem we have with Ai currently is the data input. If the data input is false/bad or not quite truthful then you're going to get an Ai software spewing out nothing but bad data.