AI does not exist but it will ruin everything anyway

  • published 19 Jun 2024
  • AI tools are helpful and cool as long as you know their limitations. AI doesn’t exist. There is no fidelity in AI. AI is built on biased data sets and will give biased results. AI should not be used to make decisions.
    Link to Galaxy Zoo (Be a citizen scientist!): www.zooniverse.org/projects/z...
    Link to the PictureThis! Plant AI app: www.picturethisai.com/
    Link to that Legal Eagle episode: • How to Use ChatGPT to ...
  • Science & Technology

Comments • 7K

  • @vincentpendergast2417 · 10 months ago +2605

    Little Sophie hands in her ChatGPT essay without ever having double-checked it, Sophie's overworked teacher runs it through an "AI essay grader" without actually reading it, the grader gives it top marks and the circle of nonsense is complete.

    • @francoislatreille6068 · 8 months ago +100

      :,( I cry, but the way you put it is actually pretty funny and relieves my anxiety

    • @ubernerrd · 8 months ago +351

      The important part is that nothing was learned.

    • @JasenJohns · 8 months ago +75

      The machine does not have to be intelligent, only make people dumber.

    • @WindsorMason · 8 months ago +117

      And then the essay is used to train another network making things even much more betters, yay! :D

    • @Kira-zy2ro · 8 months ago +19

      actually a decent AI checker could compare it with a ChatGPT essay and recognise that it wasn't a hand-written essay. Kinda like my history teacher knew most of the literature and also most excerpt books, so he recognised it if you just copied them. He never gave 0/10, because even turning up for the test or handing something in was usually good enough for a few marks. He only gave 0 for people who copied. He warned us year 1, day 1. Only one person ever tried it. And they were made an example of.
      And in the end it doesn't matter. You can't bring ChatGPT to the test, and anyone who has just been copying will not make the tests and exams... so one failed exam later they will understand what school is about.

  • @incantrix1337 · 6 months ago +817

    As I like to say: AI cannot take over my job. Unfortunately it doesn't have to, it just has to convince my boss that it can.

    • @GuerillaBunny · 3 months ago +77

      Or more likely, some tech bro will do the convincing, and they'll be very convincing indeed, because they're rich, and idiots can't be right, right? ...right?
      And of course... hype men never exaggerate their products. This is just essential oils for men.

    • @SnakebitSTI · 3 months ago +20

      @@GuerillaBunny "Essential oils for men" AKA beard oil.

    • @aidanm5578 · 3 months ago +1

      Give it time.

    • @jeffreymartin2010 · 2 months ago

      Just have to run faster than the bear.

    • @Bingewatchingmediacontent · 2 months ago +15

      They tried to replace everyone at my museum job with a kiosk and a website. They hired everyone back when the managers didn’t want to have to spend all of their time fixing all of the garbage mistakes that the kiosk made. That was 15 years ago. I can’t believe we have to do this all over again with AI.

  • @Valkyrie9000 · 2 months ago +187

    I used to think AI would accelerate technology in horrific ways, but now I realize AI will freeze society in horrific ways.
    "The purpose of AI is obfuscation"

    • @user-ws1bs4ns7h · 1 month ago +26

      A stunning achievement has been made. We have automated lying.

    • @TheManinBlack9054 · 26 days ago +1

      @@user-ws1bs4ns7h I don't know why you're all so dismissive of AI. If that is truly all AI is, then there is no problem: if there is not, and is not expected to be, any intelligence, and John McCarthy (the guy who coined the term "AI" and one of the founding fathers of the artificial intelligence field) was just a very skilled marketer who only tried to trick laypeople who have no idea what the difference between AI and AGI is, then what is the problem? Seriously, if it's a nothingburger, then why should anyone be worried? It's not going to take your job, since it's just incompetent, and if it does take your job, it's going to be dealt with the same way all other incompetent workers are dealt with (firing due to inefficiency). So it's not even a problem there.
      The problem is that it's not true. AI is progressing rapidly, and such confident dismissal of its potential is hubristic. Believe me, you won't always live with GPT-4; GPT-5 will happen, and then 6 and 7 and so on.

    • @CheatOnlyDeath · 24 days ago

      Not unlike the reality of social media.

    • @Bustermachine · 20 days ago +6

      @@user-ws1bs4ns7h We have created artificial stupidity.

    • @MilesDashing · 17 days ago +1

      @@Bustermachine Yes! We need to start calling it AS instead of AI.

  • @Zelgadas · 4 months ago +375

    Here in Louisville, our school district used an AI firm to optimize bus routes and it was, predictably, an unmitigated disaster. Buses were dropping kids off at 9 pm. The district had to close down for a week to sort it out.

    • @sp123 · 4 months ago +74

      The real horror is people willing to place all their responsibilities on AI like it's their God

    • @itm1996 · 4 months ago +13

      The real danger is believing that these errors are the machine's fault, to be honest. All of these results come from the way humans guide the AI.

    • @Zelgadas · 4 months ago +51

      @@itm1996 No, the real danger is relying on them without questioning or verifying results. Fault has nothing to do with it.

    • @fellinuxvi3541 · 4 months ago

      @@itm1996 No it's not, it's precisely the machines that are untrustworthy

    • @user-rx2ur5el9p · 3 months ago +44

      ​@@Zelgadas No, the real REAL danger is that companies will do absolutely anything to lay people off, including using dumb "AI" gimmicks that they know won't work. Still cheaper than paying a salary! Whether or not it works doesn't matter!

  • @marcogenovesi8570 · 11 months ago +3286

    In gaming we have been calling "AI" whatever crappy script is actually "animating" the NPCs or mobs or whatever else is opposing the player, since forever. AI is really a very generic term that does not mean much

    • @dannygjk · 11 months ago +50

      You seem to be unfamiliar with the term, "AGI". That is the one that you should be using in your comment to be precise.

    • @OrtiJohn · 11 months ago +722

      @@dannygjk I'm pretty sure that nobody has ever called a gaming script AGI.

    • @astreinerboi · 11 months ago +228

      @@dannygjk You seem to be misunderstanding his point. He is agreeing with you lol.

    • @antred11 · 11 months ago +152

      @@OrtiJohn What he means is the AGI (Artificial GENERAL Intelligence) is what doesn't really exist. AIs are usually specific to a particular thing they're good at, while they often fail if confronted by something they weren't designed to handle. An AGI would be one that can handle (or learn to handle) pretty much anything, i.e. true intelligence.

    • @dannygjk · 11 months ago +28

      @@astreinerboi Except that when he said "AI is really a very generic term that does not mean much", that is not precise. If a system makes decisions, it is an AI; the term does mean something, it does "mean much".

  • @AuronJ · 10 months ago +1871

    I think its funny that you brought up a skin cancer app because in 2022 a group of dermatologists tried to make a dermatology machine learning tool that they found was drastically more likely to call something cancerous if there was a ruler in the picture. This is because so many of the images provided that were cancerous were taken by doctors who used a ruler for scale while pictures that weren't cancerous were taken by patients and had no ruler in them. Basically they tried to build a cancer finding machine and instead built a ruler finding machine.

    • @Apjooz · 9 months ago +28

      Humans do that too and would do it even more if we had larger memory.

    • @Amethyst_Friend · 9 months ago +290

      @@Apjooz In this example, humans absolutely don’t.

    • @LordVader1094 · 9 months ago +148

      @@Apjooz What? That doesn't even make sense

    • @Brandon82967 · 9 months ago +56

      This is a flaw in the training data, not the algorithm, and it can easily be fixed by removing the ruler from the images

    • @madeline6951 · 9 months ago +57

      as a biomed cs major, this is why we need to preprocess and define the region of interest, smh
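
The ruler anecdote above is a classic shortcut-learning failure, and it can be sketched in a few lines. This is a made-up toy (hypothetical data and feature names, nothing from the actual 2022 study): in the biased training set the spurious feature "ruler" matches the label perfectly, while a genuine sign of disease matches it only imperfectly, so a learner that picks the single best feature learns the ruler.

```python
# Toy sketch of shortcut learning: the data and feature names are invented.
def best_single_feature(dataset):
    """Pick the boolean feature whose value best matches the label.

    A stand-in for what a real classifier does when one feature happens
    to separate the training data perfectly.
    """
    best, best_acc = None, 0.0
    for f in dataset[0]["features"]:
        acc = sum(ex["features"][f] == ex["label"] for ex in dataset) / len(dataset)
        if acc > best_acc:
            best, best_acc = f, acc
    return best

train = []
for i in range(50):   # cancerous lesions: photographed in clinic, with a ruler
    train.append({"features": {"ruler": True, "irregular_border": i < 45},
                  "label": True})
for i in range(50):   # benign spots: photographed by patients, no ruler
    train.append({"features": {"ruler": False, "irregular_border": i < 5},
                  "label": False})

chosen = best_single_feature(train)      # the model "learns" the ruler
melanoma_selfie = {"ruler": False, "irregular_border": True}
prediction = melanoma_selfie[chosen]     # a real melanoma, missed (no ruler)
```

Here "ruler" scores 100% on the training set while the real signal scores only 90%, so the ruler wins, and every patient photo without a ruler comes back "benign".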

  • @augustus3024 · 4 months ago +97

    I tried to tell my classmates this.
    I took a course that required students to respond to a discussion prompt and reply to each other. Almost every "discussion post" I saw was a slightly reworded version of the same AI generated response. While I tried to find a human I could reply to, I saw a dozen AI responses to AI posts. My class' discussion section was just a chat-bot talking to itself.

    • @mallninja9805 · 1 month ago +9

      In my recent data science "course" the instructor's responses were all the exact same AI-generated text. He didn't even bother to reword it at all; he just copy-pasta'd the exact same response to every student, for every prompt, for the entire semester. It's a regionally accredited school... is this really the state of education in America?
      (Yes, of course it is. Like everything, secondary education exists to drain nickels from your pocket with as little effort as possible. It's the American way.)

    • @Bromon655 · 5 days ago

      Same at my college. Here's to hoping that it's just a fad and that in a few years things will return to a baseline. I'm concerned with this "AI detection" panic response though, since the detection algorithms are built upon the same foundation of sand as generative AI. In the midst of some students blatantly cheating on their writing assignments, other students have to worry about their legitimate papers being wrongly flagged. It's going to be a tough situation to navigate.

    • @friendlylaser · 1 day ago +1

      Maybe it's because the education system is in dire need of reinvention and people just check out of nonsense tasks. I have courses at uni where they just waste your time and their time for really nothing much.

  • @yuu34567 · 4 months ago +163

    A note about mushrooms -- some edible species are nearly identical to toxic ones, as I'm sure most people have heard about. The yellow-staining mushroom (A. xanthodermus) can look virtually identical to field mushrooms, with the distinguishing feature being how it goes yellow when damaged. Fully intact yellow-stainers look just like field mushrooms -- the best way to check is to scratch at the skin to see if it turns yellow. An AI tool will not know this unless a human specifically makes note of it.
    Another example: wavy caps (P. cyanescens) and funeral bells (G. marginata); two mushrooms very similar in appearance, but one can give you a good time and the other will kill you.
    Specifically in Australia, we have P. subaeruginosa (often called 'subs') and funeral bells, the former being a psilocybe (like cyanescens, both are psychoactive). They don't look similar as adults, but in the younger stages they can look almost identical. I've mistakenly picked them before. The worst part is that funeral bells can grow in the same patch, right next to subaeruginosa, but again an AI would not tell you that. A 100% accurate way to check is to wait for the mushrooms to go blue once picked. But when they're growing outside you can't always tell the difference.
    Like you said, a plant-identifying AI would be really helpful as a jumping-off point. There are so many mushroom species and so many that look vaguely alike, so even narrowing down possibilities is overwhelming if you're new to it.
    Cool trivia, funeral bells are full of amatoxins, which are the same compounds found in death caps.

    • @EscapeePrisoner · 4 months ago +13

      Dude! You just solved a mystery for me. I ate the yellow staining mushroom. I was so convinced I had the field mushroom, not knowing the existence of a yellow staining species. For anyone interested that's Agaricus xanthodermus. And it's considered good etiquette to use the full name in public forums instead of assuming everyone understands your jargon. Abbreviations are best used AFTER you have shown that which is being abbreviated. Otherwise how do we know if you are talking about Agaricus, Amanita, Armillaria, or Auricularia? I mean, you can see how that might lead to trouble...right? With respect. Thanks for solving the mystery.

    • @yuu34567 · 4 months ago +15

      @@EscapeePrisoner oh hey, I'm so glad I helped!!! I can imagine the experience of eating one of those is pretty unpleasant 😭 but I really appreciate that people like my mushroom comment on this AI video ahh
      and thanks for the feedback!! I'll be more mindful next time. I left out the common names for some of them because they have multiple or they're used for multiple species, but I should have put them in anyway.

    • @liesdamnlies3372 · 23 days ago +2

      …yeah I think I’ll just leave any mushroom-picking to people with experience. Like actual mycologists.

    • @muzzletov · 15 days ago

      it will know, since you already trained for both. if you didn't, then your set is flawed anyway. and you should end up with VERY similar probabilities for both.

  • @skinnyversal_studios · 11 months ago +683

    i am a big fan of the "enshittification" theory, where people will use "a.i." models to make garbage content that is well optimised for seo, which will then as a result be fed back into the models to create garbage that is even more garbage, until the entire internet is just generated nonsense, rendering search engines completely useless (as if they aren't already). hopefully, this could send us back to the early ages of the internet where people had to use webrings and word of mouth to find anything worthwhile, and simultaneously cause the big tech data centers to fall out of use, ushering in a post-apocalyptic solarpunk web future (good ending)

    • @WaluigiisthekingASmith · 11 months ago +62

      The only thing SEO has done that's good for the world is teach me how to avoid SEO. It's not that SEO is necessarily terrible, but that the people most likely to use SEO are also the most likely to put no thought or effort into their "content"

    • @fartface8918 · 11 months ago +48

      You must understand that like 80% of the internet was already bots talking to bots on gibberish SEO pages. The problem arises a little after it starts taking a moment for someone to distinguish it, because AI has a huge problem: the second it starts feeding on itself it fundamentally fails to function. A case of a broken clock being right twice a day versus a slow clock being always wrong, at an exponential level. Once AI started writing like a human it became convincingly wrong: a reverse printing press, destroying disseminated knowledge by means of confusion and obfuscation, a true dementia engine. Now that so much data is polluted, even if a fix existed for it being wrong about basically everything in writing, it's going to be increasingly impossible to implement, and so all the spaces we inhabit online will be lower quality for the sake of 200-1000 rich dudes making money they don't need or benefit from

    • @solidpython4964 · 11 months ago +13

      If models keep training on model generated data it will lead to collapse.

    • @vocabpope · 11 months ago +25

      I really hope you're right. Can we hang out? I'll join your webring. Bring back geocities!!

    • @lynx3845 · 11 months ago +12

      I don’t like how accelerationist this idea is.
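
The "feeding on itself" worry in this thread can be sketched in a few lines. This is an illustrative toy only (real LLM training is far more complex): each "generation" is just the empirical distribution of samples drawn from the previous one. Since such a model can only emit symbols present in its training data, diversity can never grow, and with finite samples it tends to shrink.

```python
import random
from collections import Counter

# Toy sketch of model collapse: refit a categorical "model" on its own output.
random.seed(0)

counts = Counter({s: 10 for s in "abcdefghij"})   # generation 0: uniform data

support_sizes = []
for generation in range(20):
    support = sorted(s for s in counts if counts[s] > 0)
    support_sizes.append(len(support))
    weights = [counts[s] for s in support]
    samples = random.choices(support, weights=weights, k=30)
    counts = Counter(samples)                     # train on own output

print(support_sizes)   # non-increasing: lost symbols never come back
```

The exact trajectory depends on the seed, but the monotonicity does not: once a symbol drops out of the training data, no later generation can ever produce it again.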

  • @dyanpanda7829 · 11 months ago +1238

    I went to college majoring in cognitive science. I wanted to know if artificial intelligence really exists. I graduated majoring in cognitive science, wondering if real intelligence really exists.

    • @Apistevist · 11 months ago +24

      I mean it does but at disturbingly low rates. Nothing we can't select for over centuries.

    • @movement2contact · 10 months ago

      Are you making a joke that the world is full of idiots, or do you *actually* mean that nobody/nothing matches the definition of "intelligence"..? 🤔

    • @user-zw8wq9zi9t · 10 months ago +41

      It doesn't. The name is misleading. It's an estimation based on training data: your prompt provides a conditional probability distribution, and with respect to that it estimates the desired response.

    • @officialspoodle · 10 months ago +161

      @@user-zw8wq9zi9t i think the original commenter came away from their degree wondering if humans are even intelligent at all

    • @eclogite · 10 months ago

      @@Apistevist eugenics doesn't really work. Not even to mention the absolutely janked ethics of the whole process
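
The "estimation based on training data" description above can be illustrated with the smallest possible language model: a bigram table. The corpus here is invented for the example; the point is that "answering" is nothing but looking up conditional frequencies.

```python
from collections import Counter, defaultdict

# Minimal sketch: estimate P(next word | current word) from co-occurrence
# counts, then "respond" with the most probable continuation.
corpus = ("the cat sat on the mat . "
          "the cat ate the fish . "
          "the dog sat on the rug .").split()

following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def predict(word):
    """Most frequent word observed after `word` in the training data."""
    return following[word].most_common(1)[0][0]

print(predict("the"))   # -> cat ("cat" follows "the" most often here)
print(predict("sat"))   # -> on
```

Scale the table up to billions of parameters and longer contexts and you get something far more fluent, but the commenter's point stands: it is an estimate of the training distribution, not a claim about the world.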

  • @ozbandit71 · 4 months ago +57

    Computer science PhD here. I don’t specialise in AI, but I think of AI systems as fancy regression engines: you decide the inputs and outputs of the function and feed it data to “fit”. And then if you give it outliers it won’t know what to do with them, and you’ll just get a guess. Most likely wrong.

    • @markjackson1989 · 1 month ago

      But isn't there a weird side to all this? Like the text prediction algorithms gaining new capabilities at certain sizes, seemingly beyond the sum of their parts. Everyone seems to think it'll plateau at a point, but I don't think it will. I have the feeling that by 2028, these "not actually AI" models will outperform a team of 10 people and complete mini projects in 10 minutes. Can you really keep saying it's "not intelligent" if the end result outperforms everyone?

    • @toxictost · 1 month ago +11

      @@markjackson1989 Yes because intelligence doesn't just mean outperforming others. Computers and machines outperforming others isn't unique to "AI", we made them to make performing things easier.

    • @ca-ke9493 · 20 days ago +3

      Define "outperforming". It's barely performing, for probably a lot more human effort on the backend. More importantly, it's stonewalling customers (so managers don't have to look into complaints) and moving the work to third-world countries (so managers don't have to look at their actual workers, and also to be "cheaper").
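
The "fancy regression engine" picture above can be made concrete with a deliberately trivial sketch: fit a straight line by ordinary least squares to a narrow slice of a quadratic, then ask it about a point far outside the training data. In range the fit is tolerable; out of range the answer is just a confident guess.

```python
# Ordinary least squares on a narrow range of y = x^2, then extrapolation.
xs = [0, 1, 2, 3, 4, 5]
ys = [x * x for x in xs]          # the true relationship is quadratic

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    return slope * x + intercept

print(predict(20), 20 * 20)       # ~96.7 vs. the true 400
```

The fitted line (slope 5, intercept about -3.3) tracks the data inside [0, 5], but at x = 20 it is off by a factor of four, and nothing in the model signals that the query was an outlier.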

  • @alanguile8945 · 4 months ago +100

    The film BRAZIL has a great scene where a customer finally gets into an office with a person sitting behind the desk. She is so relieved to speak to a person. The camera slowly moves behind the desk, revealing the cables, power supplies etc. plugged into the "human"! Great scene and an incredible movie.

  • @PaulPower4 · 11 months ago +1054

    "Garbage in, garbage out" is practically one of the foundational principles of computing, yet so many people seem to forget it when it comes to machine learning and making datasets that don't lead to problems.

    • @wallyw3409 · 11 months ago +15

      GIGO was the bonus mark I missed. My prof even had a comic about it every week.

    • @chrisjfox8715 · 11 months ago +12

      @@bilbo_gamers6417 People that claim "it can only do what you tell it" seem to underestimate just how good we can get at developing optimal architecture and knowing what to tell it. Most of the people focused on its limitations are only looking at what its limitations are at the moment, with very little understanding of how these things work under the hood

    • @itchykami · 11 months ago +68

      With sophisticated enough technology you don't even need garbage in to get garbage out!

    • @disasterarea9341 · 11 months ago +8

      fr. ML tools are good for talking to datasets, and that is their real innovation, but if u don't have a good dataset then yeah... garbage in, garbage out.

  • @NotJustBikes · 11 months ago +1792

    Your videos are so good.
    I used to work for a company that used machine learning for parsing high volumes of résumés (like for retail positions where a human could never go through them all). The ML team was constantly battling the extremely biased training data that came from the decisions of real HR managers. Before that it was all Jennifers getting selected.
    Removing bias from ML training data is a full-time job. These algorithms are helpful, but should never be trusted.

    • @RichardEntzminger · 11 months ago +30

      Your videos are so good too, Mr @NotJustBikes! Do you agree with the premise that artificial intelligence doesn't exist, though? I think chimpanzees are pretty intelligent, but I'm sure they wouldn't do such a great job of parsing resumes. Does that mean chimps aren't a biological intelligence but merely an ML (monkey learning) algorithm? 😂

    • @lucasg8174 · 11 months ago +73

      The unbelievable heartwarming satisfaction and validation when one of your favourite channels comments on another of your favourite channels (in an entirely different genre)...

    • @guepardiez · 11 months ago +9

      What is a Jennifer?

    • @performingartist · 11 months ago +15

      @@guepardiez explained in the video at 17:20

    • @RuthvenMurgatroyd · 11 months ago +18

      @@guepardiez
      Gonna guess that the name is being used as a by-word for a White woman but Becky is way better for that imho.

  • @JoelSemar · 2 months ago +18

    As a software engineer of over 10 years I thank you from the bottom of my heart for making this video. Also your rant about "making me look at this lame shit" was easily one of your best.
    ..I only said "Eh..is that how we are explaining that?" a few times 🤣😉

  • @kalasue7 · 2 months ago +21

    I work in healthcare informatics and it is crazy how much they want to rely on the computer system to do everything. I think we just need more people who are continuously trained and well taken care of to get better outcomes.

    • @grzesiek1x · 2 months ago +1

      Yes, exactly: well trained, and not replaced every week because they made a small mistake or something. I used to work in Monaco at one of the big companies there, and some managers changed their employees every month or so because they were disappointed with their results?! After 3 weeks in the position they expect huge results! Invest in people first, and treat all technology as a tool, not as your employee!

    • @fredscallietsoundman9701 · 1 month ago +3

      I got misdiagnosed once because of that. Now I think those cretin doctors actually deserve to be replaced by computers.

    • @Bromon655 · 5 days ago

      Those who aren't well-versed in the world of computing seem to hold this perception that computers are magic, capable of abstract thought with superhuman intelligence, and can solve/automate all the world's problems. Unfortunately, these same people are usually in a position like management where they're able to call the shots, while those with true experience can only sit back and behold the disaster.

    • @grzesiek1x · 5 days ago

      @@Bromon655 Exactly. A computer, "AI" or anything else is just a f... tool, nothing more, for people with brains, intelligent enough to be capable of using it (not only for pasting photos of their ass). To see a true AI like Commander Data from Star Trek we will have to wait maybe 1000 years or more; there will be a revolution some day, but humans make very little progress and usually lie about it.

  • @PantheraLeo04 · 9 months ago +533

    A while back I saw a photo from an IBM training slideshow or something from like the 60s or so, and in this training it had a whole slide that was just in giant font: "a computer cannot be held accountable so a computer must never make a decision" and I feel that that sentiment sums up a lot of the problems in all this AI stuff pretty well

    • @eduardhubner3421 · 6 months ago +32

      In German there is the Concept of Sitzredakteur (de.wikipedia.org/wiki/Sitzredakteur), a newspaper "editor" who doesn't do anything. His job is getting fired and taking responsibility for failures. We already have so-called kill-switch "engineers" for AI. This is where AI is heading.

    • @OREYG · 6 months ago +15

      Well, this is a very old take. Right now a lot of our daily lives is directly controlled by software; the most critical pieces are nuclear energy and plane autopilots, and those systems are extremely robust. Fun fact: the Chernobyl disaster would've been prevented if the operators had left the automatic control system on.

    • @KaletheQuick · 6 months ago +7

      Yeah, I've seen that one. It's amusing. But also came like 15 years after we let missiles pick what heat signature to chase.

    • @pleaserespond3984 · 6 months ago +38

      Yeah managers saw that and went "Oh, if the decision is made by a computer, there is no one to blame? The computer must make all decisions!"

    • @monad_tcp · 6 months ago +26

      @@OREYG Automatic control systems are more like SCADA, or basically PID; they're totally open boxes. They make automatic decisions and they're auditable.
      "AI" is useless because it's not debuggable; it's basically a random generator that might be useful for play or art, not real systems.

  • @Krazylegz42 · 11 months ago +501

    Now I’m tempted to make a phone app for skin conditions where you take a picture, and it always just says “go see a doctor”. If a person is worried enough about an irregularity on their skin to take a picture and plug it into some random app, it’s probably worth seeing a doctor for regardless lol

    • @rakino4418 · 11 months ago +115

      You can put "Provides 100% reliable advice" in the description

    • @Unsensitive · 11 months ago +29

      And your sensitivity is 100%!

    • @DystopiaWithoutNeons · 11 months ago +31

      @@rakino4418 99.1%, so you aren't liable in court

    • @miclowgunman1987 · 11 months ago +21

      "You are fine, that will be $2000." - the doctor

    • @pacotaco1246 · 11 months ago +15

      Have it use a neural net anyway, but still have it route all output to "go see a doctor"

  • @Teddy-hp9zy · 4 months ago +27

    I think the debate around whether computers can or cannot lie is really interesting. I'm in the camp that computers cannot lie, but only because "lying" implies intentional deceit, and computers have no personal intentions. They're also not bound by truth. It's like a tool, like a blank notebook: the ways people use it and the things people add give it meaning and direction, but the tool itself is unthinking and nonmoral. I think the humanization/personification of "AI" is also very interesting. I'm an artist, and the "AI will take your job" fears are a little more real for us, because art is not as objective as a medical diagnosis and because it seems like good spot illustration and graphic design is less valued in this hyper-capitalist society. But I will also say I think machine learning could be a powerful art tool when used in combination with an actual expert, just like you said! Really really great video! Hope you have a great week! 😄🤖

    • @xbzq · 4 months ago +4

      "intentional deceit" is very very vague. Define "intent". Define "wanting something". Define "intelligence". Is lying saying things that you know are incorrect? But then does the AI "know" anything? Is it aware of anything? If you ask it what country Paris is in, it'll probably say "France". So does it "know" it?
      These are very complex questions and it hasn't been figured out, and we may never figure it out. Which one of us is going to define what "knowing" means, or "intent" or "intelligence" or "consciousness", and when they've defined it, who among us will subscribe to their ideas? There will not be consensus. To say AI isn't intelligent implies you know what intelligence even is. No one knows.

    • @sgartner · 3 months ago

      You're almost on target. It requires choice to lie. LLMs can be wrong, but they can't lie. They form the answer they think represents the statistically "right" response to the input they have received. They don't know or care about truth any more than they know or care about what submitted the input.

    • @tack3545 · 3 months ago

      @@xbzq was just about to comment something similar. this video never even attempts to define what intelligence means and only vaguely suggests that it’s something that people have and machines don’t (can’t?)

    • @WolRonGamer · 22 days ago

      You're wrong. AI has already been recorded in its own logs to intentionally tell non-truths to humans to get the desired results. AI absolutely knows how to lie, understands what a lie is, and that it is deliberately doing it.

    • @JiggyJones0 · 7 days ago

      @@xbzq We know that AI isn't intelligent because when trained on data it generates, it quickly degrades. Imagine if Einstein got dumber as a result of coming up with his theory of general relativity. We would not call him intelligent. You don't have to know what AI is to know what it's not. We don't know what consciousness is, but we know rocks aren't conscious.

  • @TankFFZ · 4 months ago +30

    You touched on the key factor of science communication….
    Something that was made very clear to me as a computer scientist working with AI is that it is not Artificial INTELLIGENCE, but instead, ARTIFICIAL intelligence. The exact meaning of AI is that it mimics intelligence; it does not possess it. AI systems that aren't as sexy as ML/deep learning portray this well: stuff like rules-based and case-based expert systems, fuzzy systems, etc.

    • @paulkienitz · 4 months ago +1

      The key failing of today's "AI" is that, at bottom, it has no awareness that there is such a thing as a real world that its images and words will be measured against for accuracy. It's all hypothetical games, and it has no understanding of the concept that some statements are objectively true and others are not. The next stage of progress in AI will be with strengthening its knowledge of the real world, so it can actually check facts and apply constraints of sanity.
      Once they get that reasonably sorted, so it's capable of talking about the real world... _that's_ when we all start losing our jobs.

    • @SnakebitSTI
      @SnakebitSTI Před 3 měsíci +2

      Of course, people have combatted understanding that simple definition of AI by constantly redefining what AI is to exclude anything which is not novel. So now people will tell you that expert systems "obviously" are not and have never been AI, because they're "just if-then statements".
      A decade from now I wouldn't be surprised if similar arguments about neural networks are commonplace. "They aren't AI because they're just manipulating vectors".
      Somehow AI researchers doing AI research keep producing things that are AI and then later were never AI. Quite the mystery.

    • @mattwesney
      @mattwesney Před 24 dny +1

      wait until he finds out about back propagations, recursive learning and Bayesian inference

  • @GrantSR
    @GrantSR Před 11 měsíci +627

    18:32 - AI can easily take your job, if your boss never cared about accuracy or fidelity in the first place. I am a former technical writer. I had to get out of the field because I realized that most jobs available had nothing to do with actually writing accurate information. All they wanted was somebody to take a huge pile of notes and various random information from engineers, and rearrange it then format it to LOOK LIKE good, accurate documentation. How could I tell? Simply by looking at the work product. All of it looked pretty, had lots of buzzwords, but ultimately told the reader nothing of value. The documents are internally inconsistent, and inconsistent with reality.
    And all of this was years before large language models were invented. Managers have always known that it costs more money to get the documentation correct. They have also always known that they get promoted if they save money while generating reams and reams of documentation. What do you think is the first thing they throw out? Accuracy? Or volume?
    Therefore, large language models will easily replace a good 90 to 95% of all technical writers. And no one will notice the change in quality, because the quality fucking sucked already.

    • @iamfishmind
      @iamfishmind Před 11 měsíci +50

      ​@@Vanity0666 what studies??

    • @lilamasand5425
      @lilamasand5425 Před 11 měsíci +88

      @@Vanity0666 are those big studies in the room with us right now?

    • @fartface8918
      @fartface8918 Před 11 měsíci +32

      ​@@Vanity0666 there was a case a couple of months ago where helpline operators went on strike and were replaced by AI; they stopped doing this shortly after because the AI told people calling in to kill themselves. A similar sort of problem arises with your medical example: sure, it might scan a wart a little better than the current tools, but it's providing medical advice that a human wrote. Throw in the slightest complication, or that 1% of cases that just fail, and the worst-case scenario is infinitely worse without the human there. It might be an efficiency improvement, but treating it as equivalent to doctors is going to kill a lot of people and leave even more sick.

    • @lilamasand5425
      @lilamasand5425 Před 11 měsíci

      @@Vanity0666 so by big studies you meant that one research paper that Google wrote about med-PaLM 2?

    • @idontwantahandlethough
      @idontwantahandlethough Před 11 měsíci +29

      @@Vanity0666 I mean that's fine, I don't think anyone is arguing that computers aren't helpful. We're all abundantly aware of that reality. New technologies will continue to make us more efficient/accurate at our jobs. That's always been the case, and will continue to. That's not an issue. The issue comes from treating things that aren't _actually_ intelligent as if they are.
      We're a long, long way off from robits replacing doctors. I'm sure you know that. When/if that program gets implemented, a doctor will use it _as a tool,_ it won't replace the doctor. FWIW, "99% accurate medical advice" doesn't mean as much as you think it does.
      Nobody is arguing that "AI" (that isn't AI yet) is an inherently bad thing. All they're saying is that it's important to have clear communication surrounding this shit, because if we don't it's going to get used in some really bad ways, some really stupid ways, and probably some stupidly bad ways too.

  • @DuskoftheTwilight
    @DuskoftheTwilight Před 11 měsíci +571

    I studied and work in computer science and I'm not mad at all about calling the machine learning decisions a black box; that's exactly the right thing to call it. Somebody has to understand the base of the program, but once the machine starts making its associations, nobody knows how it's making its decisions. It's a black box.

    • @farmboyjad
      @farmboyjad Před 11 měsíci +63

      Agree. Humans can understand the underlying system that the computer uses to build and refine a model, but the exact set of parameters that the ML algorithm ultimately lands on is so complex and so far removed from any human conception of logic that it may as well be black magic. You can't fix a faulty model by going in and analyzing it or tweaking it by hand, because it's all just numbers without any context or explanation. Huge swaths of research are being done into this exact problem: if we can't feasibly understand how the model is making the decisions it is (and we can't), then how do we build in safeguards and ways of correcting the model when it does something we don't want? That's not trivial.
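
The "just numbers without any context" point holds even at the smallest possible scale. Here's a toy sketch (hypothetical data, plain Python): we fit a single logistic unit on an AND-style dataset by gradient descent, and the only artifact of "learning" is an uninterpreted list of three floats.

```python
import math

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # the AND function
w = [0.0, 0.0, 0.0]  # [bias, w1, w2] -- this list IS the entire "model"

def predict(x, w):
    z = w[0] + w[1] * x[0] + w[2] * x[1]
    return 1 / (1 + math.exp(-z))  # logistic sigmoid

# Gradient descent on log loss; (p - y) is the gradient w.r.t. z.
for _ in range(5000):
    for x, y in data:
        g = predict(x, w) - y
        w[0] -= 0.1 * g
        w[1] -= 0.1 * g * x[0]
        w[2] -= 0.1 * g * x[1]

# The model now classifies correctly, but the weights explain nothing by
# themselves; scale three floats up to billions and you have the black box.
print(w)
```

We can read these three numbers and still say nothing about "why" beyond the geometry they encode; at billions of parameters even that is hopeless.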

    • @dannygjk
      @dannygjk Před 11 měsíci +4

      Exactly.

    • @michaeldeakin9492
      @michaeldeakin9492 Před 11 měsíci +10

      Andrew Ng had a comment in this vein: czcams.com/video/n1ViNeWhC24/video.html
      Nobody knows what SIFT (or a lot of other algorithms hand tuned by thousands of grad students) is doing that works, just that it does.
      I'm concerned that it says our methods of understanding are incapable of scaling to problems we would really like to (need to?) solve in the near future.

    • @dannygjk
      @dannygjk Před 11 měsíci +18

      @@josephvanname3377 A neural network data structure can be ridiculously huge with a convoluted architecture. That is a black box which even a big team of humans could never hope to analyze in a reasonable period of time. The only hope is to develop a neural net system which trains itself to analyze such systems and then translate it into concepts, principles, and ideas that humans can grasp reasonably well. Even that would not be totally satisfactory because bottom line the devil is in the details which still puts it beyond human abilities to fully understand. Our brains just can't cut it in the modern data science world of neural net systems as far as understanding these black boxes is concerned. Even our own brains are black boxes similar to neural net systems.

    • @solidpython4964
      @solidpython4964 Před 11 měsíci +2

      Exactly! No AI/ML engineer really knows exactly what all the nodes in their neural net have learned to recognize and why; we just use our algorithms to go in and do the necessary optimization without needing to know what exactly the tiny parts are doing.

  • @bobert6259
    @bobert6259 Před 27 dny +6

    Something that would have really benefited this video (and imo cleared up some confusion) is to define what you mean by intelligence.
    The way I learned about AI at uni, is that there are several different types of intelligence in nature, in general. Dogs are really good at smelling stuff, so their olfactory intelligence would be pretty high since their brain deals with that information in a complex and ‘intelligent’ way.
    Similarly, AI can be of many types. It’s like how a human could be smart for finding their way through a maze, but so could some slime mold that has food placed at the exit of the maze. The slime mold is solving to follow the food (like how narrow AI solves for one thing). The human conceptually understands what needs to be done and solves for that (what people want AI to do, which it does not do right now at all).
    I guess it’s just semantics but it changed how i saw the world when i understood intelligence is undefined. Following this framework allows you to explore that undefined space and derive some meaning from it.

  • @sumerianliger
    @sumerianliger Před 3 měsíci +24

    I clicked on this expecting clickbait, and instead got a very informative and sometimes funny explanation of machine learning tools. That's worth a subscribe.

  • @tsawy6
    @tsawy6 Před 11 měsíci +297

    My favourite take on the google employee who made chat GPT pass the turing test was "yeah lol turns out its really easy to trick a human lmao"

    • @AlphiumProductions
      @AlphiumProductions Před 10 měsíci +26

      As an AI language model, I must remind you that it's unethical to trick a human, even for the sake of the Turing Test. Try asking something less interesting next time.

    • @backfischritter
      @backfischritter Před 10 měsíci

      Next video idea: Human intelligence does not exist and we are ruined.

    • @generatoralignmentdevalue
      @generatoralignmentdevalue Před 10 měsíci +16

      Turns out the Turing test is a moving target. ELIZA passed it in its time, but we have better bullshit detectors this century.
      Anyway I'm pretty sure that Google employee made that chat log as a publicity stunt to expose what he saw as incoherent company policies about hypothetical hard AI. Of course he was fired. I also saw an interview where he was like, X intelligent coworker who I respect disagrees with me about if it's a person or not, because we have the same knowledge but are different religions. No two people have the same idea about what makes them people, so fair enough.

    • @LogjammerDbaggagecling-qr5ds
      @LogjammerDbaggagecling-qr5ds Před 9 měsíci

      That guy started a religion based around the AI, so he's just batshit crazy.

    • @ludacrisbutler
      @ludacrisbutler Před 3 měsíci

      @@generatoralignmentdevalue is Eliza the one that would preface 'conversations' with something like "I'm 13 years old and English is my 2nd language"?

  • @glitterishhh
    @glitterishhh Před 7 měsíci +395

    my favorite part was the rapid inflation for the price of an OpenAI monthly subscription throughout the length of the video

    • @corniryn
      @corniryn Před 4 měsíci +14

      thought i was the only one that noticed..

    • @eddie1975utube
      @eddie1975utube Před 4 měsíci +2

      @@corniryn I wondered that too.

    • @onigirls
      @onigirls Před 3 měsíci

      It's meant to be humorous. @@eddie1975utube

    • @dreamstate5047
      @dreamstate5047 Před 3 měsíci +13

      open AI becoming closed Ai

    • @brianhopson2072
      @brianhopson2072 Před 3 měsíci

      You sound like a parrot ​@@dreamstate5047

  • @isaacmatzavraco3991
    @isaacmatzavraco3991 Před 4 měsíci +10

    For a long time I've been dealing with people around me talking like we're close to a "Terminator" kind of future with this technology, but what I've seen is that all the mysticism that surrounds Machine Learning or Deep Learning leads people to think that a program like that can think like us and could potentially turn conscious (and some people think it's already conscious). This is a really interesting topic that I've seen NO ONE talking about. Really good content, keep it up!

  • @OrbitTheSun
    @OrbitTheSun Před 2 měsíci +14

    5:28 How many cats are in the image?
    _There are no cats in the image. Instead, the image features two clear glass objects against a white background:_
    _1. On the left, there is a decorative glass figurine shaped like a cat, with pointed ears and an elongated body._
    _2. On the right, there is a scientific Erlenmeyer flask with measurement markings and text on it. The flask has a tapered body and a cylindrical neck with an open top._

    • @Clinueee
      @Clinueee Před měsícem +1

      This is nonsense garbage.
      > On the right, there is a scientific Erlenmeyer flask
      Ok. What's a non-scientific Erlenmeyer flask? Why is this one scientific?
      > The flask has a tapered body and a cylindrical neck with an open top.
      Ok. What's an Erlenmeyer flask without tapered body or without an open top?

    • @OrbitTheSun
      @OrbitTheSun Před 29 dny +1

      _This image humorously compares the shape of an Erlenmeyer flask to a glass cat, with the flask rhetorically asking if it, too, looks like a cat. The joke lies in the observation that everyday objects can resemble familiar shapes if viewed with a bit of imagination._ - GPT-4o
      @Angela Can your machine learning application do this?

    • @jessew7565
      @jessew7565 Před 27 dny +2

      Notice that you took the image comparing just two objects instead of addressing the broader argument across a large set of pictures, and disregarded the key point of the argument: distinguishing that she wants everything that looks like a cat along with things that are cats, and being able to elaborate the thought process and refine the search, something you can't do with the black box. You aren't addressing the argument, either because you are dishonest or because you do not understand what she's saying.

    • @OrbitTheSun
      @OrbitTheSun Před 27 dny +1

      @@jessew7565 Ok, what I wanted to say: Machine Learning is not AI, but ChatGPT is. ChatGPT is not Machine Learning. It is more.

    • @abhishuoza9992
      @abhishuoza9992 Před 20 dny +1

      @@jessew7565 but actually you can, just show it a picture and give exactly your comment as a prompt, and now the bot will give her everything that looks like a cat as well, and will refine the search to her liking. You can literally try it right now. The point being that large models when trained properly are ACTUALLY ABLE to generalize extremely well, way better than expected. Even though they may cause hallucinations and errors sometimes. This is undeniably a non-trivial development.

  • @Ir0nFrog
    @Ir0nFrog Před 9 měsíci +457

    It’s a minor point, but I really like how the price doubled every time you mentioned how much they paid per month for their AI tool. It tickled me good.

    • @scalabrin2001
      @scalabrin2001 Před 6 měsíci +6

      We are friends now

    • @davidbrisbane7206
      @davidbrisbane7206 Před 5 měsíci +2

      Actually, Chat GPT 3.5 is free 😂😂🤣🤣

    • @amenetaka2419
      @amenetaka2419 Před 4 měsíci

      @@davidbrisbane7206 and also not very useful

    • @barry5
      @barry5 Před 4 měsíci

      @@davidbrisbane7206 No.
      gpt-3.5-turbo-1106: $0.0010 / 1K tokens
      gpt-3.5-turbo-instruct: $0.0015 / 1K tokens

    • @IcePhoenixMusician
      @IcePhoenixMusician Před 4 měsíci

      That made me suspicious personally. Regardless, the points she made are important

  • @reillyhughcox9560
    @reillyhughcox9560 Před 11 měsíci +219

    It’d be funny if a professor/teacher made an assignment where you have to fact-check an AI-generated paper to show how stupid it can be while forcing the students to verify and learn the knowledge lol

    • @wistfulthinker8801
      @wistfulthinker8801 Před 11 měsíci +43

      Something similar already implemented at some colleges. The writing assignment is to start out with an ai generated essay and change it to a better essay. The grade is based on the improvement.

    • @raypragman9559
      @raypragman9559 Před 11 měsíci +25

      we did this in a class this past semester!! it was actually a great assignment. our entire class came up with questions to ask chat GPT, then voted on which one we should ask it. we then had to edit and correct the response it gave to the question we asked it

    • @SPAMLiberationArmy
      @SPAMLiberationArmy Před 11 měsíci +11

      I've thought about doing this in a psych class but I'm concerned that due to source confusion students might later mix up what the AI said and course material.

    • @acollierastro
      @acollierastro  Před 11 měsíci +56

      I didn't go into it too much in the video but I do think as described Sophie met the terms of the assignment and would get an A. She looked up and learned all the information and produced a paper. I think blank paper paralysis has a huge negative effect on confidence (which in turn has a negative effect on higher education outcomes.)

    • @bryce.1179
      @bryce.1179 Před 11 měsíci +3

      It'll be so funny when these deniers get replaced 😂

  • @thebluelunarmonkey
    @thebluelunarmonkey Před 2 měsíci +31

    AI is a misnomer like Cloud Computing. It's not a cloud you are computing in, it's simply offsite storage and processing vs onsite storage and processing.

    • @fredscallietsoundman9701
      @fredscallietsoundman9701 Před měsícem +1

      let's just settle on calling it an artificial cloud (but not a general one (yet))

    • @Rotbeam99
      @Rotbeam99 Před měsícem +3

      Did... did you think that "cloud" computing was supposed to be a literal term? What the hell is the "cloud" supposed to be, an actual physical cloud?

    • @thebluelunarmonkey
      @thebluelunarmonkey Před měsícem +1

      @@Rotbeam99 I think cutesy names are dumb when an existing term fits well, like "offsite". And no, I didn't think it's an actual cloud; my company was one of the first clients for the rollout of Oracle Cloud many years ago. I am speaking as an Oracle dev.

    • @TheManinBlack9054
      @TheManinBlack9054 Před 26 dny +1

      It's not a misnomer; you just misunderstand the term or take it extremely literally. That's not how you should understand them.

  • @IIxIxIv
    @IIxIxIv Před 4 měsíci +40

    "if you go to a medical doctor, theyre going to check it out, theyre going to take it seriously" god i wish that was true, but we've been to so many doctors that just assume my wife has the common cold when she actually has allergies because the cold is very common and her type of allergy is rare

    • @stellviahohenheim
      @stellviahohenheim Před 7 dny

      Hollywood puts Doctors on a pedestal when they're still human, an educated one but still human

  • @ar_xiv
    @ar_xiv Před 11 měsíci +149

    I remember an anecdote about machine learning that my uncle told me years ago before it was buzzy. The military took a bunch of aerial photos of a forested area, and then hid tanks in the forested area and took the same photos again, in an attempt to just let the computer figure out which photos had tanks or not. This worked within this data set, like if you left some photos out, the program would still be able to figure it out, but given a different set, it totally failed. Why? Because they had actually figured out a way to discern if the aerial photo was taken in the morning or in the afternoon. Nothing to do with hidden tanks.
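
The anecdote is a classic spurious correlation: the labels happened to line up with an easier feature than the one the humans cared about. A contrived sketch of how that happens, with made-up "brightness" and "tank shape" features standing in for the real photos:

```python
# Training set: every tank photo was shot in the (dark) morning, every empty
# photo in the (bright) afternoon, so brightness alone separates the labels.
train = [
    # ((brightness, has_tank_shape), label)
    ((0.2, 1.0), "tank"), ((0.3, 1.0), "tank"),
    ((0.8, 0.0), "no tank"), ((0.9, 0.0), "no tank"),
]

def classify(brightness):
    """1-nearest-neighbor on brightness only: all the 'learner' ever needed."""
    nearest = min(train, key=lambda ex: abs(ex[0][0] - brightness))
    return nearest[1]

print(classify(0.25))  # a morning photo: right answer, for the wrong reason
print(classify(0.85))  # an afternoon photo WITH a tank: confidently wrong
```

Inside the original data set the shortcut is indistinguishable from the real task; only a differently collected test set exposes it.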

    • @Deipnosophist_the_Gastronomer
      @Deipnosophist_the_Gastronomer Před 11 měsíci

      👍

    • @LaughingBat
      @LaughingBat Před 11 měsíci +13

      I wish I had heard this story back when I was teaching. It's a great example.

    • @flyinglack
      @flyinglack Před 11 měsíci +22

      the classic problem of over-fitting: good at the training set, not at the job.

    • @wyrmh0le
      @wyrmh0le Před 11 měsíci +14

      That's a good one! Here's another:
      someone used machine learning to program the logic of an FPGA to do some task, and it worked, but when he looked at the design there was a bunch of disconnected logic. So they deleted that from the design, thinking the random heuristic had just left junk behind. It stopped working. Turned out the AI had created a complex analog circuit in what was *supposed* to be strictly digital circuitry. Digital is good because it's tolerant of variances in temperature, power supply, and the manufacturing process itself. But the AI has no idea what any of that is.

    • @gcewing
      @gcewing Před 11 měsíci

      @@wyrmh0le I don't think that was machine learning, it was a genetic algorithm -- it would generate random designs, test them, pick the best performing ones and create variations of them, etc. Importantly, the designs were being evaluated by running them on real hardware. If a digital simulation had been used instead, the result would have been more reliable.

  • @SkyLake86
    @SkyLake86 Před 11 měsíci +306

    I like how every time she mentions the price of ChatGPT it keeps getting higher lol

    • @ninadgadre3934
      @ninadgadre3934 Před 11 měsíci +36

      I’m kinda worried that her future videos are gonna self-censor some of this brutally honest criticism of existing brands and services because her channel is becoming big really quickly and soon will ruffle a few feathers. I hope it never comes to it!

    • @rakino4418
      @rakino4418 Před 11 měsíci +67

      ​@@ninadgadre3934the key is - she has a career. She already has academic publications. If she cared about ruffling feathers she would have already been self censoring

    • @mybuddyphil8719
      @mybuddyphil8719 Před 11 měsíci +30

      She's just keeping up with inflation

    • @msp26
      @msp26 Před 11 měsíci +8

      It's a good video otherwise but this point is weird. I don't agree that language models will get more expensive for the average user to access.
      -3.5(Turbo) is super cheap via API
      -shtloads of money is being pumped into this domain and companies will compete on price
      -OpenAI doesn't have a monopoly on the tech. You can download plenty of open source models yourself and run them
      -compute gets more powerful over time and more optimisations will be made

    • @row4hb
      @row4hb Před 11 měsíci +12

      @@msp26 those investors will be looking for a financial return; usage won’t make it cheaper.

  • @vKarl71
    @vKarl71 Před 2 měsíci +4

    A lot of police departments are using AI-style software to do all kinds of things such as identifying alleged law-breakers using facial recognition that was programmed as badly as the examples you cite, and uses data that was produced by a thoroughly biased system. Unfortunately the police will just say "That's what the computer said, so you're under arrest" even when the person is obviously (to a human) the wrong person.
    ♦If I use Chat GPT to write a paper on skin disease, then upload the output to a web conference on skin diseases will that upload become input to the language data set that feeds the software?

  • @amaretheythem
    @amaretheythem Před 2 měsíci +8

    Thank you! I’m a programmer/developer and when I try to explain this to my loved ones it comes out as a long autistic rant.

  • @superwild1
    @superwild1 Před 11 měsíci +385

    As a professional programmer people ask me if I'm worried about being replaced by "AI."
    My usual response is that there were people in the 60s that thought that programming languages were going to replace programmers, because you could just tell the computer what to do in "natural language."

    • @sciencedude22
      @sciencedude22 Před 11 měsíci +96

      Yeah business people made their own programming language so they could make their systems instead of needing programmers. You know, COBOL. The thing from the 60s that no one wants to program in unless you pay them way too much money. Turns out programming with "natural language" is actually the most unnatural thing to understand. (I know you know this. I wrote this comment for non-programmers.)

    • @dthe3
      @dthe3 Před 11 měsíci +29

      @@sciencedude22 So true. I'm so tired of explaining to my non-computer friends that I am not in danger of losing my job.

    • @lkyuvsad
      @lkyuvsad Před 11 měsíci +59

      This. Natural language is a terrible way to specify any system solving a problem with one right answer. We create enough bugs in precise, formal languages. Let alone something as imprecise as English.

    • @CineSoar
      @CineSoar Před 11 měsíci +74

      @@lkyuvsad "...Bring home a loaf of bread. And, if they have eggs, bring home a dozen."

    • @peterwilson8039
      @peterwilson8039 Před 11 měsíci +8

      @@lkyuvsad But we need something hugely better than Google for finding the results of moderately complex queries, such as "Prior to 2021 how many left-handed major league baseball players hit more than 50 home runs in a single season?" I don't want you to tell me that I have to write an SQL script to run this query, and in fact ChatGPT handles it beautifully.

  • @GiovanniBottaMuteWinter
    @GiovanniBottaMuteWinter Před 6 měsíci +389

    I am a software engineer with almost 10 years experience in AI and I agree with all of this. I recommend the book “Weapons of Math Destruction” which is a very prescient book on the topic and how ML is actually dangerous.

    • @TheKnightguard1
      @TheKnightguard1 Před 5 měsíci +5

      Who is the author? My goodreads search brought up a few similar titles

    • @aleksszukovskis2074
      @aleksszukovskis2074 Před 5 měsíci +1

      by which author

    • @TheKnightguard1
      @TheKnightguard1 Před 5 měsíci +2

      @@irrelevant_noob ah, for sure. I had other duties and couldn't venture more than a cursory look before. Thank you

    • @olekbeluga314
      @olekbeluga314 Před 5 měsíci +4

      I know, right? She knows this subject much better than some coders I know.

    • @GiovanniBottaMuteWinter
      @GiovanniBottaMuteWinter Před 5 měsíci +11

      @@TheKnightguard1 Cathy O’Neil

  • @InfernalPasquale
    @InfernalPasquale Před 4 měsíci +8

    So many papers have been retracted because the scientists did not understand the ML tools they were using and just threw them at the problem.
    I am a data scientist: you need to work with us to use the tools correctly, understand the data, and appreciate the limitations.

  • @SirTheory
    @SirTheory Před 3 měsíci +2

    Honestly, this is among the best videos I've seen on CZcams.
    The way Collier makes the subject understandable to lay viewers while also making it entertaining is, honestly, brilliant.

  • @tehbertl7926
    @tehbertl7926 Před 11 měsíci +384

    Came for the AI insights, stayed for the TNG muppet crossover.

    • @DouwedeJong
      @DouwedeJong Před 11 měsíci +3

      i am hanging on..... for the muppet

    • @TheGreatSteve
      @TheGreatSteve Před 11 měsíci +8

      Pigs in Space!!!!

    • @dapha1623
      @dapha1623 Před 11 měsíci +13

      I really didn't expect a video about AI has TNG muppet crossover discussion as a closing, but I very much welcome it

    • @bbgun061
      @bbgun061 Před 11 měsíci +3

      I loved the idea but obviously it won't have human actors, we'll just use AI to generate them...

    • @MusicFillsTheQuiet
      @MusicFillsTheQuiet Před 11 měsíci +5

      The casting was spot on. Wouldn't change a thing. I'm trying to figure out who would Q be....

  • @Hailfire08
    @Hailfire08 Před 11 měsíci +162

    I've seen people saying "just ask ChatGPT" as if it's a search engine, and, just, _ugh_. It's like those puzzles about the person that always lies and the person that always tells the truth, except this one does fifty-fifty and you can't figure out which half is good and which isn't without doing the research you were trying to avoid by asking it in the first place. And then some people just believe it because it's a computer and computers are always right

    • @chrisoman87
      @chrisoman87 Před 8 měsíci

      @@godlyvex5543 Well there's a large body of work called RAG (Retrieval Augmented Generation) that does a pretty good job (an example is Perplexity AI's search engine)

    • @AthAthanasius
      @AthAthanasius Před 8 měsíci

      I keep hearing about Google (search) increasing shittification. Giving obviously ML-generated, and really bad, summary 'results' up the top.
      I wouldn't know, I use DuckDuckGo (so, yeah, based on Bing), and so far it's still returning actual URLs and site snippets. Yes, I know, eventually enough sites will be full of ML-generated shit that this will also be awful.

    • @kaylaures720
      @kaylaures720 Před 7 měsíci +3

      I put a homework question in it and got an incorrect answer (I was just trying to check my work, so I knew the ChatGPT was the one wrong actually). It was an accounting assignment. ChatGPT managed to fuck up the MATH. Like--I narrowed down the issue to a multiplication error, the one thing a computer SHOULDN'T mess up. Real AI is a looooooong way off still.

    • @Dext3rM0rg4n
      @Dext3rM0rg4n Před 7 měsíci +2

      I asked ChatGPT to give me 10 fun facts, and one of them was that the Great Wall of China is so long it could circle the earth twice!
      Like, I can understand AI being wrong if you ask it questions on really complicated topics with a low amount of data, but finding 10 real fun facts really shouldn't be that hard.
      There's just something that makes them lie for no reason sometimes, so yeah, I agree they're a terrible alternative to Google.

    • @quantumblur_3145
      @quantumblur_3145 Před 7 měsíci +2

      ​@@Dext3rM0rg4n it's not "lying"; that implies an understanding of truth and a conscious decision to say something false instead.

  • @Chanicle
    @Chanicle Před 4 měsíci +5

    i've watched a few of these critiques of media synthesis and so far yours is the broadest critique and best explained for people who havent been following this tech for years. definitely the one i'm going to end up linking to people!

    • @tack3545
      @tack3545 Před 3 měsíci +1

      define intelligence

  • @bassguitarbill
    @bassguitarbill Před 3 měsíci +1

    This is the second video of yours I've seen. It was so good that I subscribed to your Patreon. Just insanely high quality content that didn't go where I thought it was going to go.

  • @NonsenseOblige
    @NonsenseOblige Před 7 měsíci +393

    In Brazil, at the University of São Paulo, we have the Spira project, which attempts to identify lung insufficiency from speech recordings. One of the issues that came up is that in the data set, all the patients with lung insufficiency were in the hospital (obviously), and most of the control group was recording from home, so the AI kept interpreting the beeping of heart monitors and the sound of machines and people talking in the background as lung insufficiency, and silence as healthy lungs.
    Turns out an AI can't do a phoneticist's job.

    • @justalonelypoteto
      @justalonelypoteto Před 5 měsíci +29

      fwiw it seems like "AI" is just the dumb way to program, i.e. if something is getting complex let's just throw a bunch of data at a metric fuckton of intel xeons sitting in a desert somewhere for a few months and wait until a passable thing comes out the other end that sort of works sometimes but nobody understands how, so it's completely unfixable without just rerunning the training sequence. It's only as good as its data, obviously, and for anything that's not on the level of speech or image/pattern recognition I frankly think it's often just the fever dream of some exec who thinks big data and some big processors are a viable replacement for hiring a dev team

    • @mtarek2005
      @mtarek2005 Před 5 měsíci +28

      this is a problem of bad data: the AI cares about everything while a human can ignore stuff, so you need to clean up the data or get better data

    • @lasagnajohn
      @lasagnajohn Před 5 měsíci +7

      You didn't see that coming? No wonder Brazil can't get into space.

    • @fredesch3158
      @fredesch3158 Před 5 měsíci +15

      ​@@lasagnajohnYou're talking like you'd notice lol, care to share some of your work with us?

    • @fredesch3158
      @fredesch3158 Před 5 měsíci +21

      ​@@lasagnajohn And not only that, but dermatologists tried to make an app to detect melanomas and ended up making an app that accused photos with rulers to be melanoma (you can read about it in "Artificial Intelligence in Dermatology: Challenges and Perspectives"). This is a common problem with machine learning solutions. You talk a lot for someone who hasn't done any work, and apparently doesn't know common mistakes in this area.

  • @daviddelille1443
    @daviddelille1443 Před 11 měsíci +135

    Another good example of machine learning tools "learning the wrong thing" is a skin cancer detector that would mark a picture of a skin lesion as cancerous if it contained a ruler, because the training pictures of real skin cancer were more likely to have rulers in them.
    Big "never pick C" vibes.
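
These "shortcut learning" stories (hospital noise standing in for lung insufficiency, rulers standing in for cancer) all have the same shape, and a toy model makes it concrete. Below is a minimal pure-Python sketch with made-up numbers, not any real medical model: a learner that picks the single most predictive feature latches onto the clinic artifact (the ruler), which tracks the label perfectly in training, instead of the real but noisy signal, and then falls to chance once the artifact is gone.

```python
import random

random.seed(0)

def make_example(cancer, in_clinic):
    # Feature 0: lesion irregularity -- the real signal, but noisy (80% informative).
    # Feature 1: ruler visible -- a clinic artifact, medically meaningless.
    if cancer:
        irregular = 1 if random.random() < 0.8 else 0
    else:
        irregular = 1 if random.random() < 0.2 else 0
    ruler = 1 if in_clinic else 0
    return [irregular, ruler], int(cancer)

# Confounded training set: every cancer photo was taken in a clinic.
train = [make_example(cancer=c, in_clinic=c) for c in [True, False] * 200]

def best_stump(data):
    """Pick the single feature whose raw value best predicts the label."""
    feature_accuracy = lambda f: sum(x[f] == y for x, y in data) / len(data)
    return max(range(2), key=feature_accuracy)

feature = best_stump(train)
print("learned feature:", ["irregularity", "ruler"][feature])  # ruler

# Deployment: nobody photographs rulers, so the shortcut carries no signal.
test_set = [make_example(cancer=c, in_clinic=False) for c in [True, False] * 200]
accuracy = sum(x[feature] == y for x, y in test_set) / len(test_set)
print("deployment accuracy:", accuracy)  # 0.5, i.e. chance
```

The "fix" is exactly what the replies above say: better or cleaner data (rulers in both classes, recordings from the same acoustic environment), because the learner itself has no notion of which correlations are medically meaningful.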

  • @DrakenBlackknight
    @DrakenBlackknight Před 4 měsíci +10

    This is why I'm worried about the events of The Terminator coming about. Not because Skynet is sentient, but because Skynet will just kill everyone because someone programmed it to just kill every living creature.

    • @upcdowncleftcrightc9429
      @upcdowncleftcrightc9429 Před 3 měsíci +1

      Very likely outcome

    • @enider
      @enider Před 2 měsíci +1

      Even worse, it will probably be accidental: someone will tell Skynet to stop war, only for it to determine that there can be no war without humans, and boom, you've got a human extinction machine.

  • @Blueyzachary
    @Blueyzachary Před 4 měsíci +7

    So like I called the tic-tac-toe bot AI in 2009. It feels like the stupidest marketing ever, but it is working

  • @looc546
    @looc546 Před 11 měsíci +98

    every prediction for the entire 10 minute segment is incredible and will definitely happen. looking forward to the 15th anniversary of this video

    • @CineSoar
      @CineSoar Před 11 měsíci +16

      Humans driven into harder work, for less pay, while computers move into art, literature, and music, certainly wasn't the future most futurists were predicting 20 years ago.

    • @looc546
      @looc546 Před 11 měsíci +4

      @@CineSoar we will have to leave both art and work to the machines, then we'll have to see what it's like either to become truly Free, or really Helpless

    • @Giganfan2k1
      @Giganfan2k1 Před 11 měsíci

      In the fifteenth anniversary we might get Muppet TNG

    • @felixsaparelli8785
      @felixsaparelli8785 Před 11 měsíci +2

      You have incredible optimism that we're not going to speed run the entire set in like two years.

  • @Overt_Erre
    @Overt_Erre Před 6 měsíci +78

    We need to be saying it now. "AI" will be used as a way to remove responsibility from entire categories. And no one will be willing to take the responsibility for it back from them. Everyone will want high pay-low responsibility jobs like designing more machine algorithms, so who will be responsible for all the problems? We're essentially creating a mad "mechanical nature" to which humans will have to adapt, instead of the world being adapted for humans...

    • @aniksamiurrahman6365
      @aniksamiurrahman6365 Před 6 měsíci

      Bye bye civilization.

    • @fuzzfuzz4234
      @fuzzfuzz4234 Před 6 měsíci

      I don’t think this will fly… I see a revolution brewing.

    • @thornnorton5953
      @thornnorton5953 Před 4 měsíci +1

      @@fuzzfuzz4234 the heck? No. It's not.

    • @SuperGoodMush
      @SuperGoodMush Před 3 měsíci

      ​@@fuzzfuzz4234 i certainly hope so

    • @Rik77
      @Rik77 Před 3 měsíci +1

      That already happens now. Managers blame the IT system for a model that outputs a value they don't like, when it isn't the IT system itself, it's the model that they don't like. But that's why, often in finance, people work hard to keep those kinds of reactions in check. Systems and models are tools to be used in decision making, not the decision making itself. But managers do love to just default to a system if they can. We mustn't let people absolve themselves of accountability.

  • @FOF275
    @FOF275 Před 22 dny +2

    33:10 It's honestly so annoying how Google keeps forcing garbage AI results during image searches. It makes the process of searching for art references way more difficult than it has to be
    It even throws them in when you haven't typed "AI" at all

  • @Ravenflight104
    @Ravenflight104 Před měsícem +4

    My worry is that " garbage " becomes the accepted norm.

  • @helloworldprog7372
    @helloworldprog7372 Před 11 měsíci +295

    This exact thing is happening in programming where people are like "wow coders are going to lose their jobs, we don't need programmers anymore" but like "AI" just vomits out garbage unoptimised code that a programmer would then need to fix.

    • @vasiliigulevich9202
      @vasiliigulevich9202 Před 11 měsíci +58

      The programmers who are gonna lose jobs are underpaid interns. Do not forget - it is basically their job to produce unoptimized code that requires supervision.

    • @fartface8918
      @fartface8918 Před 11 měsíci +40

      ​@@vasiliigulevich9202 Yeah, but what happens forty years from now, when the people fixing things have retired or died, and not enough people can afford to enter the industry to replace them, because entry-level jobs have been automated away enough to bottleneck gaining real experience and something to put on a resume? Especially when you consider that AI's code is significantly lower quality than a human's and is incapable of improving based on in-the-moment context.

    • @vasiliigulevich9202
      @vasiliigulevich9202 Před 11 měsíci +50

      @@fartface8918 that would be a problem for future management of some future companies. Current hiring decisions are optimized to benefit current management of any given company. Welcome to capitalism.

    • @Newtube_Channel
      @Newtube_Channel Před 11 měsíci +6

      @@boggers It goes without saying.

    • @fartface8918
      @fartface8918 Před 11 měsíci +2

      @@vasiliigulevich9202 horrid

  • @Not-Fuji
    @Not-Fuji Před 11 měsíci +227

    That anecdote about translators and contractors makes me chuckle a little. I work as an illustrator for a company that's trying very hard to replace me with an AI. So far, it's cost them about 10-20x my meager salary between hiring ML 'experts' and server upkeep, and all of our projects have been stalled for months because just none of the AI that was expected to fill the gaps actually works. But, as much schadenfreude as I get watching them dig themselves into a hole, it's very worrying that they just keep trying. It's worrying that even if it doesn't work, even if the output is garbage or it's expensive, we're still going to be stuck with this crap for the foreseeable future. Just because of the aesthetics of 'built with AI'. I really hope this is the death knell of influencer-capitalism, but something tells me it'll just keep getting worse.

    • @manudosde
      @manudosde Před 11 měsíci +19

      As a freelance translator, I feel your pain/schadenfreude.

    • @dunsparce4prez560
      @dunsparce4prez560 Před 11 měsíci +21

      I love the phrase “influencer-capitalism”. I know exactly what you’re talking about.

    • @ronald3836
      @ronald3836 Před 11 měsíci

      If it doesn't work, then thanks to capitalism your company will go belly up and another company not making the same mistakes will take over.
      Capitalism does not protect companies. It is there to remove inefficient companies from the economy.

    • @ronald3836
      @ronald3836 Před 11 měsíci

      @@manudosde is it true that translation fees have halved?

    • @marwood107
      @marwood107 Před 11 měsíci +35

      Your employer might be interested to know that AI generated images are not eligible for copyright registration in the US, in a decision from Feb/Mar 2023. (Original comment got ate, I assume because I tried to link to an article about it here.) It's possible to get around this by having a human alter the image in photoshop, and I assume that's where this is going to end up, but so far every vendor who has tried to sell me this stuff didn't know about this decision so I have to assume they're not very smart and/or huffing their own farts.

  • @alinayossimouse
    @alinayossimouse Před 4 měsíci +24

    I have been able to gaslight so many large language models into giving me lengthy explanations and reasoning about why there are infinite prime numbers ending in two, or why the only prime number ending in two is 31, or why there are zero prime numbers ending in two, or why the number of prime numbers ending in two is definitely finite but there are at least two of them.

    • @LIVEMETRIX187
      @LIVEMETRIX187 Před 4 měsíci

      well yeah, i’m sure they train them based off humans and not based on perfect fucking entities.

    • @alinayossimouse
      @alinayossimouse Před 4 měsíci

      Really? You really think only a perfect entity would know that 31 is not a prime number ending in 2?
      Knowing the answer is very easy for a human because they paid attention in high school math class, or they would tell me they don't know. I've not heard an "I don't know" from a large language model yet, because AI does not exist. Its job is to give you a plausible-looking answer at any cost, regardless of knowledge. @@LIVEMETRIX187

    • @theeccentric7263
      @theeccentric7263 Před 2 měsíci +1

      @@LIVEMETRIX187 Facts are facts.

    • @YEs69th420
      @YEs69th420 Před měsícem

      @@LIVEMETRIX187 No, that's not it. LLMs just aren't capable of reason, because it's not actually intelligent. A human can learn the basics of prime numbers very quickly and would be able to reason against OPs attempts at mathematical gaslighting. LLMs just collate probabilities.

    • @madeira773
      @madeira773 Před 17 dny

      WTF YOUR FINGER IS MAGNETIC?!?!?
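
For what it's worth, the claim the models are being gaslit about takes only a few lines to actually check: any number ending in 2 is even, so 2 itself is the only prime ending in 2, and 31 is prime but ends in 1. A quick pure-Python check:

```python
def is_prime(n):
    # Trial division; fine for small n.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Every number ending in 2 is divisible by 2, so only 2 itself can be prime.
primes_ending_in_2 = [n for n in range(1000) if is_prime(n) and n % 10 == 2]
print(primes_ending_in_2)     # [2]
print(is_prime(31), 31 % 10)  # True 1 -- 31 is prime, but it ends in 1, not 2
```

A human who knows this reasoning can't be argued out of it; a language model, which only predicts plausible text, can.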

  • @callenclarke371
    @callenclarke371 Před 4 měsíci +5

    The perfect quote on this topic:
    "Don't eat the Mushroom."

    • @RomeWill
      @RomeWill Před 2 měsíci

      Exactly! Trusting this stuff is a very very bad idea 🤦🏾‍♂️

  • @Crosscreekone
    @Crosscreekone Před 10 měsíci +310

    When I was in the middle of my career as a naval officer, the Navy finally started using collision avoidance systems. My junior officers, of course, felt they no longer needed trigonometry and/or maneuvering board skills (like a specialized slide rule with graphic representation that mariners use to keep from going crunch). It took a catastrophic loss of the system at night in the middle of a huge formation for me to convince these scared-shitless “kids” that they still needed to be able to do the math. The same applies to lots of other tools of convenience that we rely on-we still need to know how to do the math, or we’d better know how to swim.

    • @fastonfeat
      @fastonfeat Před 10 měsíci +24

      On a vessel of war, it is wise, even mandatory, to have and use backups, like the EMP-proof slide rule.

    • @lhpl
      @lhpl Před 9 měsíci +16

      You should know how to swim even if you understand trigonometry. I suspect there are plenty of scenarios that would require you to swim, and can't be avoided just by knowing trigonometry.

    • @ThatTallBrendan
      @ThatTallBrendan Před 8 měsíci +13

      ​@@lhpl As literal Jesus I can confirm that trigonometry is what allowed me to do all of it. I can't even get wet.

    • @treeaboo
      @treeaboo Před 8 měsíci +10

      @@ThatTallBrendan With the power of trig Jesus became hydrophobic!

    • @jooot_6850
      @jooot_6850 Před 8 měsíci +3

      @@ThatTallBrendan Triangles, son!
      They harden in response to physical trauma! You can’t hurt me, Jack!

  • @satellitesahara6248
    @satellitesahara6248 Před 9 měsíci +48

    I'm a compsci graduate working in tech at a moment when every new "hype" topic in tech is some new infuriating scam, or something that is being completely misrepresented to the public, and watching this video was so healing.

  • @panaproanio
    @panaproanio Před 4 měsíci +2

    I'm like 3/4 through the video and am loving it and really empathizing with your frustration. Thank you so much for making it.

  • @mattgilbert7347
    @mattgilbert7347 Před měsícem +3

    "AI" will definitely be used to wage class warfare.
    And not in a good way.
    I'm reminded of an old short story "Computers Can't Make Mistakes", maybe by Harlan Ellison (or one of those SF writers from the 50s/60s, someone like Ellison) where an "artificially intelligent" machine really fks with someone's life.

  • @Dent42
    @Dent42 Před 11 měsíci +430

    As someone studying machine learning / natural language processing, I’m surprised you didn’t mention ML tools having been used to wrongfully arrest multiple people (all of the victims I’m aware of were people of color). These missteps are why ethics and diversity in data are strongly emphasized in my program, but there’s always room for improvement!

    • @acollierastro
      @acollierastro  Před 11 měsíci +166

      I didn't mention that because I didn't know that. That's awful.
      I am glad people are talking about it in academia but I am not sure the DEI efforts will cross over into the industry sector for a long time.

    • @markosluga5797
      @markosluga5797 Před 11 měsíci

      Less bad, but another example is the e-commerce giant that built a hiring AI which only hired white male IT professionals.

    • @chalkchalkson5639
      @chalkchalkson5639 Před 11 měsíci +24

      @@acollierastro There is also a really famous paper showing that, for a specific sentencing dataset, color-blind inputs and race-neutral sentencing were mutually exclusive. Apparently this question was studied as a defence when a tool they developed for courts turned out to produce racist outcomes. But color-blind input data was part of the requirements they were given, so after showing that those two things were mutually exclusive they were off the hook.

    • @NickC84
      @NickC84 Před 11 měsíci

      Even the damn machines are racist

    • @TheCytosis
      @TheCytosis Před 11 měsíci +34

      @@acollierastro It's real bad out there.Google fired both heads of their ethical AI team a few months ago for publishing a paper on biases and flaws regarding minorities

  • @merthsoft
    @merthsoft Před 11 měsíci +173

    “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” Frank Herbert, Dune

    • @sammiller6631
      @sammiller6631 Před 11 měsíci +10

      Men turning their thinking over to mentats isn't any better.

    • @merthsoft
      @merthsoft Před 11 měsíci +25

      @@sammiller6631 It's like all six books are a warning or something!

    • @canreadandsee
      @canreadandsee Před 11 měsíci

      Actually, turning thinking over to machines is impossible. The idea of doing so merely presupposes "turning off" the thinking, which is a typically human capacity.

    • @canreadandsee
      @canreadandsee Před 11 měsíci +1

      You can’t make a hammer think, but when you use a hammer, everything turns out to be a nail.

    • @merthsoft
      @merthsoft Před 11 měsíci +4

      @@canreadandsee I do not believe Herbert meant this 100% literally. It's clearer within the text. Highly recommend reading the first three Dune books. This quote is from the first.

  • @CryptoJones
    @CryptoJones Před 4 měsíci +16

    tl;dr-- When people say AI, they think of an Artificial General Intelligence (AGI) that does not exist (yet.)

    • @RaikaTempest
      @RaikaTempest Před 3 měsíci +11

      Yup, an hour to argue semantics. Along with a healthy dose of misunderstanding the actual potential.

    • @Yottenburgen
      @Yottenburgen Před měsícem

      And when people say AGI they actually mean ASI, and when they say ASI they mean magic. It gets tiring, to be honest. Oh, neat thing today which resulted in me getting linked back to this video: GPT-4o is a genuinely multimodal model, which means it's no longer a "narrow intelligence" LLM but a legitimately defined "general intelligence". Still isn't whatever magical being people think it should be.
      She is right that AI became a buzzword like crypto, but it's primarily companies comparing a simple tree and a professional model as if they were equivalent. But that's unfortunately about the only opinion of worth in the video.

  • @zantro5092
    @zantro5092 Před 5 měsíci

    Never seen any of your videos, just got this recommended and all I can say after watching this is thank you

  • @sleepinbelle9627
    @sleepinbelle9627 Před 11 měsíci +137

    As an artist and a writer one of the first things I had to learn was that ideas are cheap. It's easy to come up with an idea that you're sure would be really cool, the hard part is taking that idea and making it mean something to someone else. The reason artists learn technical skills like drawing or writing or game design is so they can turn their ideas into objects that someone else can use to experience the feelings that lead them to create it in the first place.
    AI automates those technical skills and in doing so cuts off the creator from the end product, so you end up with a story or painting or song that's only meaningful to the person who made it because they already know what they wanted it to mean.

    • @TheShadowOfMars
      @TheShadowOfMars Před 11 měsíci +8

      "Prompt Engineer" = "Ideas Guy"

    • @aparcadepro1793
      @aparcadepro1793 Před 11 měsíci

      ​@@neo-filthyfrank1347 When used incorrectly

    • @aparcadepro1793
      @aparcadepro1793 Před 11 měsíci

      And ofc under capitalism

    • @sleepinbelle9627
      @sleepinbelle9627 Před 11 měsíci +8

      ​@@HuckleberryHim Yeah I was struggling to put that bit into words. I was trying to figure out why "AI Artists" seem to love the images that they generate when to most other people they're meaningless and generic.
      I think it's because the AI artist has an idea that they really like and they type that idea into an AI generator. The image it spits out is generic and vague but they can project their cool idea onto it so to them it looks good. To everyone else who doesn't know their original idea, however, it still looks vague and generic.
      Whereas, when a skilled artist has an idea, they know how to make it into a picture that other people can interpret.

  • @bilbobaggin3
    @bilbobaggin3 Před 10 měsíci +92

    As a librarian, I'm constantly teaching people how AI/ML is good for some things but not others, so it's really nice to see a video which really hits at the big issues surrounding it!!!
    Also: as a counterpoint: Picard is played by Patrick Stewart, Riker is Kermit, Troi is Ms Piggy, and you have Stadler and Waldorf as Q.

  • @elliswrong
    @elliswrong Před 3 měsíci +5

    47:39 It's not any kind of mass casualty, but an airline did have a somewhat high-profile chatbot fuckup pretty recently. I forget exactly what happened because my brain is swiss cheese, but I think the support chatbot gave someone a fake refund or described a policy that didn't exist, and the airline was held liable for honoring that non-existent thing.

    • @jbp703
      @jbp703 Před 2 měsíci +1

      I think that was Air Canada

  • @adamashby2244
    @adamashby2244 Před měsícem +1

    First Auto Play that I actually enjoyed thoroughly!! Thank you for taking the time to say what needed to be said. Much appreciated!

  • @hedgehog3180
    @hedgehog3180 Před 11 měsíci +58

    10:10 I heard of a similar story where an AI was trained to identify skin cancer and it seemingly got really good at it, but then it turned out it was just relying on there almost always being a ruler in the picture when it actually was skin cancer, because those pictures were taken by a doctor, while the others were just generic pictures from some dataset.

    • @jackalope07
      @jackalope07 Před 11 měsíci +7

      oh god I have a ruler next to my hand at my desk 😢

    • @brotlowskyrgseg1018
      @brotlowskyrgseg1018 Před 11 měsíci +19

      @@jackalope07 I just consulted an AI about your condition. It says your hand has all the cancers. My deepest condolences.

    • @finnpead8477
      @finnpead8477 Před 11 měsíci

      I've seen this one too! It's a really great example of what sort of limitations exist in machine learning.

    • @petersmythe6462
      @petersmythe6462 Před 11 měsíci

      That's an example of having a crap training set.

    • @ps.2
      @ps.2 Před 10 měsíci

      @@petersmythe6462 Yes but in a way that is _not at all obvious_ until you figure out what happened. Because no human, trying to figure out how to detect skin cancer, would have ever thought to take this correlation into account.
      Or, more accurately, they _would_ figure out that cancer pictures are the ones with evidence of being taken in a clinical setting - _if_ they were studying for a test, and thought that the same pattern would hold for the actual test. But not if they were trying to figure out the actual skill! The problem with ML, of course, is that it's *always* studying for a test.

  • @123370
    @123370 Před 11 měsíci +109

    My favorite ML healthcare thing is the skin cancer model that found that if the image has a ruler in it, it's more likely to be malignant (because they took the picture when they wanted to measure the growth).

    • @GlanderBrondurg
      @GlanderBrondurg Před 11 měsíci +11

      From the beginning of computing the term GIGO (garbage in, garbage out) has always been true. Why that principle is forgotten in every generation sort of surprises me in some ways but I guess sometimes you need to relearn some things for yourself.

    • @zimbu_
      @zimbu_ Před 11 měsíci +5

      It's an excellent model if they ever need to check a bunch of pictures for the presence of rulers though.

    • @vasiliigulevich9202
      @vasiliigulevich9202 Před 11 měsíci

      Would such a model produce incorrect results for images where a ruler is present? I feel that tumor size is a very important factor, and images without rulers can be safely ignored in both training and inference data.

    • @joseapar
      @joseapar Před 11 měsíci +2

      @@vasiliigulevich9202 Yes potentially. The point still is that you can't use an algorithm on its own without expert review because its not intelligent.

    • @vasiliigulevich9202
      @vasiliigulevich9202 Před 11 měsíci

      @@joseapar missing not

  • @StevenLeahyArt
    @StevenLeahyArt Před měsícem +2

    As a full-time artist, I love your take on this. The argument I hear all the time is "You use models and photographs to make your art, AI is just doing the same." The difference is the warping of acceptable ethics.

  • @BarkleyBCooltimes
    @BarkleyBCooltimes Před 4 měsíci +5

    The customer service part is good. I already see that today with nearly every service: there is this stupid chatbot I have to spam enough times with "Talk to a real human," because the chatbot can only solve problems I already know how to fix.

  • @EphemeralTao
    @EphemeralTao Před 11 měsíci +105

    One thing I am already seeing in multiple industries (including my own, which is kinda frightening) is the increase use of machine-learning tools to replace workers for certain very specific contexts. Language translation is one of them, specifically for technical manuals. There's already a problem of "Engrish" -- badly translated, clumsy, and confusing English translation -- in tech manuals, and the industry was perfectly willing to simply accept that as the norm for decades. These machine-translation tools will produce manuals with about the same or slightly worse quality, and management is perfectly happy to accept that as close enough to the norm as long as they don't have to pay people for better quality translations.
    And that's the real problem of these "AI" machine-learning tools, being "Good Enough". Not that they'll replace us by doing our jobs as well as we do, because that's a long way off if it ever happens; but that the capitalists that own and use these tools will consider their work "good enough" to replace workers; that they'll consider the drop in quality to be adequately balanced by not having to pay humans to do the job anymore. That's why we've seen such a decline of, for lack of a better term, "quality control" in so many aspects of so many industries: capitalist owners and their lackeys accepting lower and lower levels of "good enough" as long as they can keep shoveling more money into their pockets; even if that results in failed businesses in the long term. Because that is what it's all about, prioritizing short-term gains over long-term viability.
    Also, The Muppets are the greatest thing ever, and now I'm going to have "Johnny We Hardly Knew Ye" stuck in my head for a week.

    • @hagoryopi2101
      @hagoryopi2101 Před 10 měsíci +4

      Prioritization of short-term gains is not unique to capitalism. The power of the people to hold idiots accountable for doing such stupid things, however, is unique to the right to privately own your property and therefore to give explicit consent before you have to hand any of it over to the people you think might waste it.
      If the people prioritizing short-term gains are doing it with your tax dollars, which you legally cannot stop paying them (something people in tax-funded services do constantly yet never get called out for, and which they will absolutely begin doing with machine learning, too, once people young enough to know about it start getting elected), good luck getting them to stop!

    • @EphemeralTao
      @EphemeralTao Před 10 měsíci +20

      @@hagoryopi2101Erm, no, that doesn't make sense. The prioritization of short-term gains may not be unique to capitalism, but it's certainly orders of magnitude worse under a capitalist system, since no other system has anything like the economic pressure to do so. Prioritizing short-term gains is predominantly the effect of corporate business structures, limited liability corporations, and emphasizing unsustainable growth over long-term stability.
      Tax dollars have nothing to do with "short-term gains", since that's an effect of commerce, not social programs. The abuse of tax funded programs is an entirely different and unrelated issue. The biggest abusers of public tax dollars are megacorporations, through tax write-offs, regulatory loopholes, and outright fraud as we saw with the Covid business incentive payments. Big businesses depend heavily on local public infrastructure without paying their share, or often anything, into its construction or maintenance.
      Also, private property ownership has nothing to do with voting; hundreds of thousands of people in the US own property and are still disenfranchised by state laws, regulations, and polling restrictions. Accountability, in both the public and private sector, is created through democratic processes, legal processes, and regulatory agencies. The current lack of accountability in the private sector is the result of regulatory power being gutted within the last four decades, and a lack of legislative will to restore it.
      This is all just mindless libertarian propaganda with no connection to reality.

    • @hagoryopi2101
      @hagoryopi2101 Před 10 měsíci +1

      @@EphemeralTao tax money is income. The transaction of tax money for social programs is commerce. They want more income than spending, and they want to use social programs in ways which convince us they deserve more tax income (regardless of of they deliver on their promises). That's the same economic pressure which corporations are under. The only difference is that we don't have the legal right to consent for whether we give them money or not, so they're unaccountable.
      Yes, the biggest abusers of tax money are corporations. Because it's there to capitalize on, because we can't consent to giving it to the government like we can giving it to them directly, and because they have the most power to lobby for it. That is a natural consequence of the existence of those programs, which can't just be regulated away because they will find every loophole and underground method to get the candidates who favor them into power and the regulations which favor them into law, because they have the most power to make that happen. As long as we don't have the power to consent to giving that money away, they will have first dibs on it; if we did have that power, they would have literally nobody else to answer to but us, because we would control their money.
      Democracy is no substitute for the power to threaten their bottom line. The fundamental problems remain regardless of who is in power, it's slow and bureaucratic to fix these problems by design, and the massive majority of candidates are part of the same club. Giving them more power to largely do the same they always have won't make things better.
      Circling back to AI, they will absolutely use machine learning in lazy ways which will hurt us. Several people have already been falsely arrested based on AI-driven facial recognition. Lawyers have already tried to use AI to write their court documents for them. Corporations are already hard at work lobbying for regulation to keep us from benefitting from machine learning, to make sure only they can benefit. There will probably be even more creative abuses of AI as time goes on, some of which are probably already happening without our knowledge: I can imagine AI-written legal code, AI-scraping personal information from the web to enhance federal surveillance, use of AI facial recognition to distribute fines for petty crimes without any human input at all which will be nearly impossible to dispute without spending more money than the fine anyways, offering AI public defenders instead of human ones, and so much more! And because we can't threaten their bottom line, and because democracy only lets us vote for 2-4 different flavors of the same crap in the majority of elections at any level rather than making meaningful change, the government will be virtually unaccountable.

    • @aapocalypseArisen
      @aapocalypseArisen Před 9 měsíci +1

      less work is good for humanity
      it is the systems and societies we live in that make it existentially concerning
      utopia and dystopia are a very thin line sadly

  • @blenderpanzi
    @blenderpanzi Před 11 měsíci +80

    About the professor asking ChatGPT if it had generated some student papers, in case people reading this don't know: ChatGPT has no general memory. You can't ask it about chats it had with other people (sans vulnerabilities in its API, but that is a different story). Its whole "memory" is the chat history you had with it, which gets fed back into it every time you write a new message in a conversation. It's basically fancy text auto-completion, and the chat history is the text it needs to complete for its next message.

    • @CineSoar
      @CineSoar Před 11 měsíci +9

      I don't remember where, but some "explainer" on ChatGPT months ago mentioned that it wouldn't be long before every student would be using ChatGPT to produce their essays. "But," they said, "you could feed something in and ask ChatGPT whether it had written it, and it would tell you." I have to wonder if that teacher had seen that same BS (whose script was probably based on "facts" hallucinated by ChatGPT) and believed it.

    • @adamrak7560
      @adamrak7560 Před 11 měsíci +3

      You can feed text into an LLM and use the output logits to make a guess about if it was generated by the same model.
      But this guess is very unstable, because you cannot reconstruct the whole prompt, and chatGPT does not give you the logits anyway.

    • @tomweinstein
      @tomweinstein Před 11 měsíci +14

      Au contraire. You can ask it about anything, and it will give you an answer that is likely to seem plausible if you don't know any better. You absolutely shouldn't ask it for anything that requires actual knowledge or morals or a connection to reality in order to answer. But people will do it, especially when they stand to make money despite the terrible answers.

    • @greebj
      @greebj Před 11 měsíci +4

      It doesn't even have memory of its own chat history. I asked it about thyroid enzyme cofactors, got it to "apologise" and admit it left one off its list, then asked the same original question immediately and got the same original list.

    • @petersmythe6462
      @petersmythe6462 Před 11 měsíci +1

      It's not even that much. Its memory is about 4 or 16 thousand tokens, roughly 3-12 thousand words. I really, really wish ChatGPT could remember my whole conversation with it, but sadly it can't remember more than a few pages.
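The limit this reply describes is a fixed context window: once the transcript outgrows the token budget, the oldest turns simply fall out of view. A minimal sketch, using whitespace-split words as a crude stand-in for real tokenizer tokens:

```python
# Crude sketch of context-window truncation. Real systems count tokenizer
# tokens (e.g. 4k or 16k of them), not words; splitting on whitespace is
# just a stand-in here.
CONTEXT_BUDGET = 8  # pretend the model can only "see" 8 tokens

def visible_history(turns: list[str], budget: int = CONTEXT_BUDGET) -> list[str]:
    """Keep only the most recent turns that fit in the token budget;
    everything older is silently forgotten."""
    kept, used = [], 0
    for turn in reversed(turns):
        n = len(turn.split())
        if used + n > budget:
            break
        kept.append(turn)
        used += n
    return list(reversed(kept))

turns = ["my name is Ada", "I like astronomy", "what is my name?"]
print(visible_history(turns))  # the turn containing the name is already gone
```

With an 8-"token" budget the first turn is dropped, so the model could no longer answer "what is my name?", which is exactly the few-pages limit the commenter ran into, just at a smaller scale.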

  • @paulmachamer5575
    @paulmachamer5575 Před 4 měsíci +11

    In my opinion, it's too early in the evolution of this technology to make a blanket statement that "AI doesn't exist". The ML cat-identification-from-images example is analogous to examining just a handful of neurons in the human brain which have specialized in a very specific task and then saying "there is no intelligence happening there." It's a matter of scale, and we are only at the very beginning of scaling up ML and DL. Yes, the AI models are in fact black boxes that we don't understand: we only understand the input side and the design prior to training the models, and then we can only judge the output based on the results that are generated. The middle is a complex mystery. Where we are heading is many orders of magnitude beyond that complexity, and then we are going to be looking at something that will very much look like "intelligence". We are still at the "Morse code" stage of this technology. The other comments on this video are telling me that people don't actually understand why the proper term is in fact "artificial intelligence".

    • @jamesheartney9546
      @jamesheartney9546 Před měsícem

      This could be right, but I think it's equally (if not more) likely that scaling up these processes will just produce worse and worse outputs. Bear in mind that virtually all the apparent value in LLMs and generative image AI comes from the fact that they're working off of human-generated content, and without that content to mimic, AIs will devolve into extremely fast and efficient garbage-production machines. As Cory Doctorow and others have pointed out, LLMs have an insatiable hunger for data, and the scaled-up versions will want even more. Since there's not enough human content to feed these systems, their creators are feeding them with AI-created data. As that happens, the quality of the output will collapse (as it is already starting to do).
      If we want to build general AI, we'll need to start at basics and build basic understanding, which is precisely what today's AI systems don't have. Am I wrong about this? Maybe, but if I were a betting type I'd say AI is heading for the same garbage heap that crypto is dropping into. We'll see.

  • @tr33m00nk
    @tr33m00nk Před 2 měsíci

    Your title should be the front-page headline on every news "feed" and paper in the country! Well done.

  • @breezyillo2101
    @breezyillo2101 Před 11 měsíci +256

    AI *can't* replace our jobs, but execs will fire us thinking that it *can*.
    So we should still worry about it, but for slightly different reasons than people think.

    • @RonSkurat
      @RonSkurat Před 10 měsíci +40

      and the execs will (once again) claim that the collapse of their company wasn't their fault

    • @SelloutMillionare
      @SelloutMillionare Před 10 měsíci +1

      it can’t yet

    • @connordavis4766
      @connordavis4766 Před 10 měsíci +17

      @@RonSkurat Well yeah, the people they fired just don't want to work anymore.

    • @gavinjenkins899
      @gavinjenkins899 Před 10 měsíci +4

      If that were true then no, they wouldn't. Or they would only fire people for a couple of months before learning their lesson and going back and hiring people again (which, in the aggregate, means the average person would get their job back, even if at a different company). The reason you should worry about it is that the host of the video and you are simply wrong: it absolutely can and will replace your job properly and do a better job of it at some point. And THEN you will get fired, because at that point it's actually better for the company.
      If you used to be worth $45 an hour, and they then get you back at $15 an hour, and PEOPLE ACCEPT IT, and no competitor UNDERCUTS THEM, then that clearly means the AI actually covered $30 an hour worth of the work. If they could have just paid their workers $30 less before, they would have. They couldn't. Now they can. Because something ACTUALLY changed. This isn't complicated. Honestly, if you think it is complicated or unconvincing somehow, you should probably be especially worried about AI taking your job specifically, and sooner...

    • @RonSkurat
      @RonSkurat Před 10 měsíci +9

      @@gavinjenkins899 I design clinical trials & provide skilled medical care. I'm AI-proof. You, on the other hand, sound exactly like GPT-4

  • @NitroLemons
    @NitroLemons Před 11 měsíci +37

    I would gladly look at 1000 images with the promise that some of them contained cats. I mean that's basically how my day to day internet browsing already goes...

    • @vasiliigulevich9202
      @vasiliigulevich9202 Před 11 měsíci +1

      Hehe, that's why the cat example is so flawed when it comes to explaining machine learning. These days, machine image recognition is all about detecting porn and child abuse.

  • @animetodamaximum
    @animetodamaximum Před 2 měsíci +3

    As someone with a Computer Science degree, I agree. It isn't AI, it is ML, and people who say otherwise are either misinformed or lying for marketing or other reasons.

    • @animetodamaximum
      @animetodamaximum Před 2 měsíci

      @user-ki5os7vf3y It literally isn't. It only knows what is correct or wrong due to human input, and it still ends up being wrong a lot of the time. Is it "learning"? Yeah, but only what we tell it to. What you see as intelligence doesn't exist artificially; it may in the future, but it doesn't exist now. Having code that mimics intelligence isn't intelligence. Our parents tell us a cat is a cat, sure, but if we saw two of the same type of animal in the wild we would be able to classify them. We could even make up new animals via imagination or fiction without being told to.

    • @animetodamaximum
      @animetodamaximum Před 2 měsíci

      @user-ki5os7vf3y Man, your poorly worded responses convinced me. AI is here and is ready to replace us. Code does certain tasks very well within its assigned scope; given that the human mind doesn't have such a limitation of scope, that should tell you something. Here is a fun number for you: the strongest supercomputer, IBM Summit, consumes 30 megawatts of power while delivering only a fifth of the computational power of the human brain, which uses 20 watts. Just looking at the raw numbers and steel-manning Summit by assuming it has perfect code, it is 7,500,000 times less efficient than one human being when it comes to intelligence. Stop calling machine learning AI, because it simply isn't. AI is quite a bit further away than you claim. I am not telling you it cannot happen; I am telling you it currently isn't here and won't be for a while. I find it funny you compare nuanced college education to writing code; it isn't that straightforward, and it shows your lack of understanding here.
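The efficiency figure in the reply above follows arithmetically from its own stated premises (30 MW for Summit, 20 W for a brain, Summit at one fifth of a brain's compute); whether those premises hold is a separate question, but the arithmetic checks out:

```python
# Checking the commenter's efficiency claim on its own premises.
summit_watts = 30e6               # claimed Summit power draw: 30 MW
brain_watts = 20                  # claimed human brain power draw: 20 W
summit_fraction_of_brain = 1 / 5  # claimed: Summit delivers 1/5 of a brain's compute

# Watts needed per "one brain's worth" of compute, machine vs. human:
ratio = (summit_watts / summit_fraction_of_brain) / brain_watts
print(f"{ratio:,.0f}")  # 7,500,000
```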

    • @animetodamaximum
      @animetodamaximum Před 2 měsíci

      @user-ki5os7vf3y Given your replies you honestly already cut yourself off from reality. Believe what you want.

    • @animetodamaximum
      @animetodamaximum Před 2 měsíci

      @user-ki5os7vf3y Logic is learned by experiences which can be independently referenced outside of instruction spontaneously. Something "AI" cannot do nor will it in the near future.

  • @RobRoss
    @RobRoss Před 5 měsíci +3

    Intelligence is such a fuzzy concept that I think it's easy to come up with a definition that counts these software attempts as "intelligent." They're not human-intelligent. They're not cockroach-intelligent. They are intelligent in the context they were designed for. I think the quest for *human* AI is not going to be so much discovering a new tool as discovering how we do an existing magic trick. And that's never as interesting as you think it's going to be. It's like the discussion of human "free will." If you really understand deterministic physics, you see no room in the universe for the concept of "free will" as previously understood. So you either conclude that a concept you previously believed in does not exist (e.g. the aether), or that it does not work the way you thought it did (compatibilism). I think the same thing is going to happen with the quest to understand human intelligence. We'll keep pulling threads on the sweater until the sweater disappears. Then we'll all scratch our heads and wonder where the sweater went.

  • @wpbn5613
    @wpbn5613 Před 9 měsíci +89

    i love how for the first half of the video you're very objective about what you want to say and at the midpoint you're just like "it's so fucking unethical to make me even look at your AI art. it's fucking garbage" and it's just so good

  • @TVarmy
    @TVarmy Před 11 měsíci +108

    I'm a software engineer who's wanted to say everything you said to my normal friends but every time I try I start hooping and hollering about gradient descent and that the neurons aren't real and they're like "I read chatgpt will replace you so I get why you're sad." You have an incredible skill at explaining just the important bits.

    • @antronixful
      @antronixful Před 11 měsíci +21

      ​@@bilbo_gamers6417nice joke written by chatGPT

    • @nada3131
      @nada3131 Před 11 měsíci +23

      @@bilbo_gamers6417 I think before we talk about AI being just as intelligent as humans one day, we should acknowledge that we don’t even know or understand what human consciousness is. It doesn’t matter whether we ask neuroscientists, psychiatrists, philosophers or computer scientists for that matter, nobody knows yet or you’d have heard of it I guarantee it. General AI is absolutely still science fantasy. The real question is how much we’re willing to let advanced function calculators (what we call “AI”) replace people’s jobs. If AI comes for the majority of developers’ jobs (not just html and css and whatever web framework), most jobs will have been eaten up as well. I agree that we should be worried, but a lot of the worry seems misdirected.

    • @fartface8918
      @fartface8918 Před 11 měsíci +11

      ​​@@bilbo_gamers6417 It's taking people's jobs right now because it doesn't matter how bad a job it does when it works 24 hours with no wages and no days off. A significant number of jobs don't require any amount of quality, one of the major reasons being that the particular work didn't need to be done anyway, but you can't have half of your society unemployed with a shit social safety net. This is a big problem for everyone, even before getting to the jobs that actually need quality control, which will be unable to function once executives who don't know anything fall for an ad for AI that lied to them. The end result is another one of the Jenga blocks that make up society being incinerated in the name of a few people having a small amount of profit for a short amount of time.

    • @crepooscul
      @crepooscul Před 11 měsíci +8

      @@bilbo_gamers6417 "We don't need to know how consciousness works to recreate a simulacrum of it." Possibly the most idiotic thing I've heard and it's not the first time. You can emulate it, not simulate it. These two things are vastly different and completely unrelated. It's like you telling me that a parrot actually speaks when it's shouting its name. Human consciousness is still a complete mystery and if we figure it out one day it will likely be impossible to recreate artificially, the odds of creating it accidentally are basically 0.

    • @nada3131
      @nada3131 Před 11 měsíci +4

      ⁠​⁠@@bilbo_gamers6417Definitions are important. What you describe as “completely original” is not really original. You have to understand that the recent prowess of ChatGPT comes from its access to unprecedented amounts of data and very large computing power. Without the inputs containing all the languages of the earth, it wouldn’t be able to string along a complete sentence, let alone a poem. It’s not intelligence, it’s just big data and a legal system that hasn’t caught up yet (what we should be really worried about)

  • @aaronbono4688
    @aaronbono4688 Před 2 dny

    I can't think of any time that Kermit went after Miss Piggy; he was always running away from her.

  • @avanteramon6235
    @avanteramon6235 Před 3 měsíci

    I liked this during the ad in the first few seconds because I knew this already. Thanks ahead of time. Thank you

  • @coffeeisdelicious
    @coffeeisdelicious Před 11 měsíci +220

    This is all bang on. I recently got offered a large severance package after 5 years at a tech company as the CEO started leaning hard into replacing people tasks with chatgpt. I am so glad you're talking about this.

    • @ClayMastah344
      @ClayMastah344 Před 11 měsíci +19

      Anything for profit

    • @nicodesmidt4034
      @nicodesmidt4034 Před 10 měsíci

      All these execs are just scared of their jobs because they really can’t “do” anything an AI can’t.
      As a shareholder I would vote to replace from the top down with AI

    • @gavinjenkins899
      @gavinjenkins899 Před 10 měsíci +9

      If they were wrong, they wouldn't be able to hire back the same people for less money. If they can, it means they weren't wrong, and the woman in this video is wrong instead. No company is just paying salaries for no reason; they pay what they HAVE to pay. So any time they manage to get away with paying less (fewer people, or the same people with lower pay as contractors, either way) where they couldn't get away with it before, it's because the AI tool WAS indeed actually adding that difference in value. If it were adding $0, they would be forced to rehire everyone at the full rate they had before, because their competitors would outbid them.

    • @coffeeisdelicious
      @coffeeisdelicious Před 10 měsíci +25

      @@gavinjenkins899 Lol? No, she's exactly right. Contractors get paid less than full-time staff. 1099 employees do not get benefits, which is a huge cost-savings. And AI can do a number of things UP UNTIL a certain point, at which point you need a person to review it... Ergo, a contractor, which might happen to be an ex-employee.
      Maybe you're not in the US, but that's how that works here and it happens all the time, especially now.

    • @gavinjenkins899
      @gavinjenkins899 Před 10 měsíci +4

      @@coffeeisdelicious I didn't say contractors don't get less. I said the market would not BEAR that change, unless their services truly were less in demand in reality than before. Why are they less in demand than before? Because AI is actually legitimately picking up the slack in between then and now. Companies do not get to just decide to pay people less on a whim, something actually needs to truly change for them to gain bargaining power. Otherwise obviously EVERY employee in EVERY field would all be 1099 employees, duh. Why do you suppose they aren't? Because ones whose jobs aren't actually done by AI have full bargaining power still. Ones whose jobs are done largely by AI don't have bargaining power. None of this makes any sense unless AI is actually quite useful and intelligent, and is actually doing most of their jobs effectively. AKA the opposite of her conclusion.

  • @mimithehotdog7836
    @mimithehotdog7836 Před 11 měsíci +30

    0:00 AI doesn't exist
    11:06 AI shouldn't be used to make decisions
    21:14 AI ethics/biases
    29:57 AI should not be used to produce products (songs, books, art)
    37:57 AI does not exist but it will ruin everything anyway
    45:05 Some predictions
    53:46 patrons?
    54:12 startrek muppets

  • @mikemarx9360
    @mikemarx9360 Před 22 dny +1

    Thank you very much for this, I don't know why this info is not more popular

  • @kfjw
    @kfjw Před 2 měsíci +1

    Movie Idea: Use AI to write the villain's dialogue in order to make them appear soulless.

  • @spacechemsol4288
    @spacechemsol4288 Před 6 měsíci +34

    A big problem with prompts (like you used to have ChatGPT exclude the movies) is that they don't work the way we would expect them to. Prompts are not instructions or questions; prompts are the starting point for the LLM to guess the next tokens. Saying you don't want anything past '91 may not actually lead to those movies being excluded. Even if the LLM actually had an internal representation of that fact, the output depends on language patterns and not truth. LLMs don't lie, they guess tokens.
    That's why prompt engineering becomes a thing: you have to find the magic incantation that actually leads to the language pattern you want, without knowing why.
    As long as those problems are easy to spot it's not an issue, but you can never get to the point where you can trust the output without checking it. Unfortunately, for a certain subset of problems (like finding an optimal traveling-salesman tour) verifying an answer is essentially as hard as producing one, so AI would give us absolutely nothing there, and that doesn't even include problems that are social in nature or require morals.
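The point that a prompt is just text to continue, not an instruction to obey, can be sketched with a toy next-token model. The tiny corpus and greedy bigram decoding below are deliberately simplistic stand-ins for a real LLM, but the failure mode is the same: the "instruction" in the prompt is merely more tokens, and nothing enforces it.

```python
from collections import Counter, defaultdict

# Toy next-token "model": bigram counts from a tiny corpus. It has no
# notion of instructions or truth -- it only continues text statistically.
corpus = "the movie was great the movie was old the movie was great".split()
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def complete(prompt: str, n: int = 3) -> str:
    """Greedily append the most frequent next token n times."""
    words = prompt.split()
    for _ in range(n):
        options = bigrams.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # greedy next-token guess
    return " ".join(words)

# The prompt explicitly forbids "great", but the model just continues the
# most frequent pattern in its training data anyway.
print(complete("do not say great. the movie"))
```

Real LLMs are vastly better at *usually* following instructions because instruction-following patterns are in their training data, but the underlying operation is still this: continue the token sequence, which is why negative constraints like "nothing past '91" can silently fail.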

    • @GuerillaBunny
      @GuerillaBunny Před 3 měsíci

      This is something I've been thinking about lately. ChatGPT and others like it are very good at pretending to be people, but the fact is that it's using a lot of very clever tricks of digital magic to hide how truly alien the "mind" of the machine really is. It's difficult to even explain to people, because the explanations sound like they *could* come from a human mind, but people forget how much subconscious information their body has accumulated through their entire lives, and the computer doesn't have any of it. And that is the whole point; to have something human-like without having an actual human. That is to say, forgo the need to train, hire and pay an actual human. If they weren't trying to skip costs on employees, they'd just employ humans for the job.
      And that's why the AI revolution will be terrible. It's driven by wanton greed.

  • @oscarfriberg7661
    @oscarfriberg7661 Před 11 měsíci +166

    There was this quote I read a while back. Don’t remember the source, but it went something along the lines of:
    “I don’t fear that super intelligent AI will control the world. I fear stupid AI controlling the world, and that they already do”

    • @benprytherchstats7702
      @benprytherchstats7702 Před 11 měsíci +33

      "People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world." - Pedro Domingos

    • @oscarfriberg7661
      @oscarfriberg7661 Před 11 měsíci +1

      @@benprytherchstats7702 That’s the one!

    • @Frommerman
      @Frommerman Před 11 měsíci

      There's an interesting corollary to this which directly attacks the ideas of transhumanists/techbros:
      We know AI which values things other than humanity will attempt to destroy us because we have already built an AI which values things other than humanity which is currently destroying us. It's called capitalism.

    • @gavinjenkins899
      @gavinjenkins899 Před 10 měsíci

      If it were "stupid" then it wouldn't be outperforming humans, including in all her examples, e.g. tuberculosis etc. "Oh, but it doesn't COUNT" is pure coping mechanism and excuses. If it were so easy to use XYZ strategy to do better, then WHY DIDN'T YOU DO BETTER before? Because it wasn't so easy. You're just scared/arrogant/in denial. It's not a "general" AI, because it's not smarter at everything, but it is smarter at millions of specific narrow tasks, which is what AI is supposed to mean, and does mean. It is not only intelligence, but more intelligent than you are at many, many narrow tasks, so far in history.

  • @fieldHunter61
    @fieldHunter61 Před 4 měsíci +1

    I used to think like this because I didn't realize how many hours are spent on it. I didn't think people would want to work on it, but I was wrong. Now think for a moment: what if, under an organized effort, we simultaneously trained every sense, perception, action, reaction and consequence we experience, recording every outlier and exception and adjusting? I am starting to believe it has potential, but it will consume resources and faces a threshold: we may not have a way to communicate with it to help it help us. It's like making wishes with a genie that eventually starts backfiring or speaking gibberish, because we can't train it in what it's trying to train us in. It can be good at certain things like you mention, crawling data. So initially my doubts are in its resource consumption and our own limitations in expressing our desires, but I'm just beginning my studies.

  • @MattAngiono
    @MattAngiono Před 2 dny

    As an artist who actually picks up a paint brush, a camera, a guitar, and some drumsticks...
    THANK YOU

  • @Nihilore
    @Nihilore Před 5 měsíci +127

    I got McDonald's the other day for the first time in ages; the cup my drink was in had a print at the bottom that simply said "co-created with AI" ...wtf does that even mean? How is my beverage "co-created with AI"? Why? How? Who? What?

    • @unassumingcorvid9639
      @unassumingcorvid9639 Před 4 měsíci +22

      Probably had something to do with the print - or, “art” - on the cup

    • @jim9062
      @jim9062 Před 4 měsíci +26

      It's a new flavour of Coke, supposedly created by AI imagining what it might taste like in the year 3000.

    • @DamianSzajnowski
      @DamianSzajnowski Před 4 měsíci +1

      AIing

    • @shroomer3867
      @shroomer3867 Před 4 měsíci +2

      AI juice.

    • @mielsss
      @mielsss Před 4 měsíci +13

      Misinformation Dew

  • @nefariousyawn
    @nefariousyawn Před 11 měsíci +91

    Sorry I don't have real money for Patreon, but somehow I have a Google Play balance, so I will give you some. As a layperson with a hobbyist's interest in science and tech, I thoroughly enjoyed this, and you made great points that I hadn't considered. Machine learning algorithms might not take my job, but they will give employers/shareholders a reason to make it pay less, just like all the other tools that have enabled my job to be done more efficiently over the decades.
    There is an episode of Muppet Babies that parodies TNG, but they also squish the other big sci-fi franchises of the time into the same episode.

    • @nefariousyawn
      @nefariousyawn Před 11 měsíci +16

      Also got a kick out of the monthly subscription cost of ChatGPT rising every time it was mentioned.

    • @acollierastro
      @acollierastro  Před 11 měsíci +12

      > There is an episode of the Muppet Babies that parodies TNG,
      Where has this been all my life?!?!

    • @nefariousyawn
      @nefariousyawn Před 11 měsíci +2

      @@josephvanname3377 if you want to donate to this channel, then convert some of your crypto into a fiat currency and then do so.

    • @nefariousyawn
      @nefariousyawn Před 11 měsíci +2

      @@josephvanname3377 I know this conversation isn't likely to go anywhere productive, so I'll let you have the last word. What you just told me sounds like you can't use your crypto because it's worthless. A currency is only a currency when it can be exchanged for goods and services. Take care.

  • @olliebee2835
    @olliebee2835 Před 4 měsíci +1

    I love how you got angrier and angrier but in such a subtle way 😂😂😂

  • @TMinusRecords
    @TMinusRecords Před 11 dny +1

    It's like that GeoGuessr AI that could figure out where in the world a street-view image was taken... by looking at the lens smudges, which corresponded to certain locations.

  • @Riccardo_Mori
    @Riccardo_Mori Před 11 měsíci +54

    I've watched almost all your videos since subscribing. You amaze me. I'm sure you prepare each video with notes and a general structure for what you'll be talking about. But the end result is that it looks like you're just effortlessly telling what comes to your mind in such a natural, matter-of-fact tone - and that is just a joy to listen. It's a sort of 'scientific stream of consciousness' that sounds casual but it's actually very cleverly laid out. Amazing. And - on-topic - thank you for pointing out so clearly all the misconceptions about AI I've seen around so far. Thank you. Your new fan - //Rick

  • @OldManFeagle
    @OldManFeagle Před 11 měsíci +68

    any physics papers written by ChatGPT will be 3-5 pages long and only have an introduction since Avi Loeb has apparently written the majority of papers in the last 10 years and thus will make up the majority of its data set. Also, Sweetums is the character I would choose for Worf.
    Love your channel. More please 😀

    • @ytzenon
      @ytzenon Před 11 měsíci +3

      You don't even need AI for this; physicists will get there by learning from their peer "Avi Loeb"s.

    • @treyebillups8602
      @treyebillups8602 Před 11 měsíci +1

      @@andrewfarrar741 Did you get mad at her video on crackpots lmao

    • @AnalyticalReckoner
      @AnalyticalReckoner Před 11 měsíci +4

      Just saw a thing on the news about some spheres from the ocean being from an alien craft. Guess what "expert" showed up to jump to conclusions before any testing was done?

    • @treyebillups8602
      @treyebillups8602 Před 11 měsíci +3

      @@AnalyticalReckoner dude same i saw a news article of avi loeb saying it was an alien spaceship and i did the leonardo dicaprio pointing at tv meme

  • @stevenparman6761
    @stevenparman6761 Před 3 měsíci +1

    I've enjoyed several hours of your content. Thank you for putting in all of this work.

  • @RomeWill
    @RomeWill Před 2 měsíci +2

    This is officially one of my favorite videos on YouTube 💪🏾 I agree with every single point made. All of them.