A.I. and Stochastic Parrots | FACTUALLY with Emily Bender and Timnit Gebru

  • Published 25 Apr 2023
  • SUBSCRIBE TO FACTUALLY: link.chtbl.com/nD2_iAuV
    SUPPORT THE SHOW ON PATREON: / adamconover
    So-called “artificial intelligence” is one of the most divisive topics of the year, with even those who understand it in total disagreement about its potential impacts. This week, A.I. researchers and authors of the famous paper “On the Dangers of Stochastic Parrots,” Emily Bender and Timnit Gebru, join Adam to discuss what everyone gets wrong about A.I.

Comments • 1.4K

  • @fran3835
    @fran3835 a year ago +258

    When I was in college I did an internship at an AI company. They asked the interns to each make a small open-source project that could help the community. I proposed making a videogame (a small game-jam-type thing that visually represented how AI works); everyone looked at me like I was stupid and told me I had no idea how much effort it takes to make a videogame. The other guy proposed making an AI psychologist, and everyone thought it was a great idea... By the time we finished, you could tell it you were about to kill yourself, and sometimes the thing would answer "good luck with that, goodbye" and close the connection. (They removed the psychologist from the site and left it as a regular chatbot.)

    • @Theballonist
      @Theballonist a year ago +46

      Perfect summary, no notes.

    • @sabrinagolonka9665
      @sabrinagolonka9665 a year ago +100

      Absolutely love the conceit that producing an effective psychologist is easier than programming a game

    • @MaryamMaqdisi
      @MaryamMaqdisi a year ago +1

      Rofl

    • @estycki
      @estycki a year ago +18

      I know I shouldn’t laugh but the bot probably figured if the person was dead then the conversation is over 😆

    • @Neddoest
      @Neddoest a year ago +4

      We’re doomed

  • @JayconianArts
    @JayconianArts a year ago +382

    I wanna say, as an artist, hearing professional researchers and entertainers explicitly say the same things artists have been saying is relieving. With image generators being one of the first big booms of this current wave, artists have been raising the alarm for almost a year now about the negative impacts this is going to have and how it's exploiting us. It feels like we've been on our own for most of this fight, so seeing that there are others on our side is comforting.

    • @paulmarko
      @paulmarko a year ago +13

      Did you see that the US Copyright Office won't copyright AI-generated work because there was no human authorship? The trajectory is already moving in a positive direction. Also, pro concept artists already use a myriad of theft-like tools: photo bashing, Daz 3D, inspiration without consent, etc. I'm an artist and I think artists' worries are misplaced. People aren't going to be replaced; they're going to be able to spend their artistic time just doing the really fun parts of art, juicing the work and pushing it creatively. (At least until AGI comes for every job all at once.)

    • @gwen9939
      @gwen9939 a year ago +24

      @@paulmarko The whole scare about AI replacing art has only been a thing because of the extremely low bar most people have for what constitutes good art. This was a serious issue in the world of music commissioning long before AI, where it was impossible to get started on paid freelance commissions because someone was always offering the same as you for incredibly cheap. It generally sucked, but the game devs, film directors, marketing agents, etc. were incapable of telling the difference between a professional and a hobbyist. Same goes for sites like Audiojungle: the technical quality there is at least very high, but it's also completely soulless, inoffensive, market-tested elevator music that sounds like you've heard it for the 500th time on your first listen.
      And it's every level of every industry. The whole Mick Gordon affair, where he got screwed out of a contract, happened because the lead on the project just kicked him to the curb and punted the rest of the project to their own in-house sound guy, figuring that would be just as good as anything Mick Gordon could make, which is why it sounded like garbage.

    • @JayconianArts
      @JayconianArts a year ago +50

      @@paulmarko People's jobs are already being replaced. The nature of these machines isn't to help artists, it's to remove them. Illustrators who have made book covers for years are finding that companies they've worked with now use image generators. There was a Netflix film that used AI for the backgrounds of an animation because of a 'labor shortage', meaning that artists were wanting better pay and unionizing, but the companies would rather simply not pay artists at all.
      Also, calling photo bashing and inspiration theft-like, and on the same level as image generators trained on billions of stolen images, is simply absurd. If an artist is inspired by something, they're still putting their own spin, skill, and creativity behind it. To say that me being inspired by great artists, studying their works, techniques, and ideas, is comparable at all to someone typing words into an algorithm and getting a result minutes later is insulting. Machines can have no inspiration, no direction, no life or thought behind what they're making.

    • @paulmarko
      @paulmarko a year ago +4

      @@JayconianArts
      They can't own an AI book cover though. I'm not sure what kind of book writer doesn't care that they don't and can't own their cover art, except maybe really crappy ones. Sure, there will be an adjustment period before the entire market is flooded with AI art, but it can't replace real artists, because companies need to be able to own the asset, and the low skill involved means people will gradually stop interpreting an AI-generated cover as a signal of quality. We'll see, of course, but I'm very optimistic that it'll just become an artist's tool that will help people make new and amazing works much faster.

    • @paulmarko
      @paulmarko a year ago +1

      @@JayconianArts
      Also, re: photo bashing. I've definitely seen some artists do some iffy stuff. Like, one was painting a desert and it wasn't coming together, so right at the end he basically dropped the desert photo on top and smudged it in a bit. Similar with character-design photo bashing: I've definitely seen a fairly large amount of contribution from photos basically just grabbed from Google Images.

  • @cphcph12
    @cphcph12 a year ago +189

    I'm a 53-year-old programmer who started playing with computers when I was 12, in the early 80s. Back then they expected AI to be just around the corner. 40 years later, AI is still "almost finished" and "so close". The more things change, the more they stay the same.

    • @Fabelaz
      @Fabelaz 11 months ago +7

      You know, the fact that these things can write code for a problem you just came up with is pretty impressive, even if there can be mistakes (which can be fixed through more requests). Also, the rate of improvement of things like Stable Diffusion points towards a significant decrease in the number of commissions artists are going to receive, especially in corporate environments.
      Is it anywhere close to sentience? Hopefully not. Are these things gonna leave a lot of people without jobs? Likely, if no policies are implemented really soon.

    • @Ruinwyn
      @Ruinwyn 11 months ago +14

      The biggest problem in programming is still exactly what it has always been: the people who want the program don't know what they want. They can't define what they need, and they keep changing their minds and their priorities. They also have unique problems; the common, general problems have been solved and are available off the shelf with one click. Every now and then, new languages crop up that "make programming more understandable", and after a while they get more complicated, because the simplified language couldn't solve more complex problems.

    • @GioGio14412
      @GioGio14412 11 months ago +2

      It's not around the corner anymore, it's here.

    • @brianref36
      @brianref36 11 months ago +10

      @@GioGio14412 No, it's not. We have nothing even close to an AI that could replace a thinking person.

    • @slawomirczekaj6667
      @slawomirczekaj6667 10 months ago

      Like with breeder nuclear reactors. In addition, all the people capable of real breakthroughs are eliminated from the industry or from science.

  • @3LLT33
    @3LLT33 a year ago +37

    The instant she says “the octopus taps into that cable” and the cat reaches out from under the blinds… perfect timing!

    • @Ecesu
      @Ecesu 8 months ago

      Yes! Putting a timestamp so people can see it 😅 59:09

  • @joshuachesney7552
    @joshuachesney7552 a year ago +247

    Just today, an automation product we use was promoting its new AI integration, saying the old way was slow and bad because we had to spend time researching things. The new way is awesome because the AI just finds the answer and tells it to you.
    The question was how to prevent a computer from upgrading to Windows 11, and the AI's answer was to permanently disable Windows from ever getting any updates of any kind. (For those who don't know, this is considered by industry professionals to be, as we say in the biz, "fucking stupid".)
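For context, the conventional fix (rather than disabling updates wholesale) is to pin the feature-update target release via Microsoft's Windows Update for Business policy settings. A sketch of that policy as a registry fragment, assuming the machine runs Windows 10 and "22H2" stands in for whatever release is actually installed:

```
Windows Registry Editor Version 5.00

; Stay on Windows 10 and pin the feature-update target release,
; while still receiving regular quality/security updates.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"ProductVersion"="Windows 10"
"TargetReleaseVersion"=dword:00000001
"TargetReleaseVersionInfo"="22H2"
```

This keeps security patches flowing, which is exactly what the AI's "disable all updates" answer throws away.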

    • @tttm99
      @tttm99 a year ago

      Testify! It's a non sequitur, isn't it? But you can't sell people - can't even give away - the seemingly obvious truth: that relying implicitly and unconditionally on something you don't understand and don't control is a bad idea. It might happen when you can't help it, but it ain't a good thing to go shopping for. 🤣 On the other hand, I guess I'd have to concede the AI might indeed be higher intelligence if it instructed you to install Linux, or to just put your machine in a bin and go on a well-deserved holiday 🤣. We can dream.
      But sadly, sometimes contextual answers actually need to be practical and sensible, and those won't come from any AI until it is *vastly* more intelligent and far more connected to the real world. Hopefully long before then we realise building *that* would be a very bad idea. And the fatalist-inevitability crowd who argue against this might want to ponder why we *still*, after all these years, haven't nuked ourselves into non-existence.

    • @franklyanogre00000
      @franklyanogre00000 a year ago +11

      It's not wrong though. 😂

    • @louisvictor3473
      @louisvictor3473 a year ago +47

      @@franklyanogre00000 The AI took "technically correct is the best type of correct" at face value and made it its motto.

    • @SharienGaming
      @SharienGaming a year ago +31

      And that perfectly illustrates the difference between finding "an answer" and finding "the (correct) answer".
      The idea that a chatbot can do research like that is laughable, and anyone doing serious software development or systems maintenance will tell you so. Automation tools are nice because they free up our time for the actual hard work, the research and analysis, but they don't replace that hard work.

    • @PeterKoperdan
      @PeterKoperdan a year ago

      What was the AI's next solution?

  • @kibiz0r
    @kibiz0r a year ago +226

    The eugenics connection is bone-chilling. People don't realize how popular eugenics was, across the whole world. It wasn't some fringe Nazi-specific thing. People really thought we were on the verge of creating a new superior species by applying genetic engineering principles to ourselves. We're in the same situation again, but businesses are enacting it unilaterally -- no government coordination required -- and public opinion seems (un?)surprisingly amenable to it.

    • @UnchainedEruption
      @UnchainedEruption 11 months ago +35

      We still practice some aspects of the eugenics movement, but obviously we don't call it that anymore. Prospective parents receive information about what risks their child might have if they go through with the pregnancy, and some may decide to abort the fetus if the life will be too hard on both the family and the child. We have organizations concerned with the accelerating growth of the global population, urging people to have fewer kids to prevent overpopulation down the line. What made eugenics insidious was that somebody else, an authoritarian regime, would dictate who had the right to live and reproduce and pass on their genes. Those decisions were not voluntary. However, if people want to have some small effect on the future of the species by voluntarily choosing whether or not to have kids, I don't think that's evil. It only becomes evil when you decide for somebody else what value their life has.

    • @Ben-rz9cf
      @Ben-rz9cf 11 months ago +6

      We're not just creating dangerous technology. We're creating dangerous people, and that's what we should be more worried about.

    • @yudeok413
      @yudeok413 11 months ago

      The thing about eugenics is that its proponents are obviously on top of the pyramid. All you need is a few billionaires who already think that they themselves are the pinnacle of humanity (Thiel and his minions like Musk) to get the ball rolling.

    • @Frommerman
      @Frommerman 11 months ago +8

      Also, consider the similarities in effect between eugenics and the modern field of economics. Both make broadly unfalsifiable claims which could not be adequately tested even if the people studying them wanted to. Both serve the purpose of continuing to enrich and empower the already powerful. Both are used to justify the continuing horrific conditions in the colonized world by calling them the result of natural laws rather than human malignity. And both are regularly used to justify outright mass murder. In the case of economics it may be hard to see how that is the case... until you know the estimated yearly cost of completely eliminating hunger.
      $128 billion. Total. For the cost of liquidating less than a third of Jeff Bezos' absurd dragon hoard, nobody anywhere in the world would starve to death for an entire year. Economists, in their infinite malice, justify a single man's daily decision not to prevent any human anywhere from being hungry. And the truly damning part is that it wouldn't cost that much the next year. Once you removed the threat of starvation from every community everywhere, they would be able to focus on building up the resources they need to feed themselves the next year. It's difficult to estimate, but the whole program of ending hunger globally, permanently, could well cost the wealth of one single person.
      Economists tell us this is unrealistic. Much like eugenicists told us it was unrealistic for white people to live peacefully with the rest of humanity. These aren't different arguments, or even different disciplines. Economists are just eugenicists using bad math instead of bad genetics to justify their arguments. If any of us survive the next century, I expect the histories we write will put Milton Friedman in the same category of evil as Adolf Hitler.

    • @ckorp666
      @ckorp666 11 months ago +5

      (Not sure if this was mentioned in the episode, but) that was an original specialty of Stanford, too. We shouldn't be surprised that they're continuing the legacy, now that a decade of low interest rates has allowed the vapid rich children of Palo Alto skull-measurers to become the sci-fi villains they've always wanted to be.

  • @stax6092
    @stax6092 a year ago +421

    It's actually kind of incredible how much corporations get away with, considering that they have the money to just straight up do a better job. More regulation is always good when it comes to corporations.

    • @tttm99
      @tttm99 a year ago +37

      Starting with stopping competition-crushing mergers!

    • @1234kalmar
      @1234kalmar a year ago +4

      Collectivisation. The best regulation for private companies.

    • @Mr2greys
      @Mr2greys a year ago

      @@tttm99 I agree, except when other countries allow it, the merged giants just stomp out local competition, and the only response to that is protectionism. The horse is already out of the barn; it's pretty much too late.

    • @andrewmcmasterson6696
      @andrewmcmasterson6696 a year ago +13

      It's the MBAification of corporate excellence: whenever you can, substitute the appearance of excellence for the real thing.

    • @kyleyoung2464
      @kyleyoung2464 a year ago +9

      This comment goes hard. Proof that what's best for profit does not equal what's best for us.

  • @robertogreen
    @robertogreen a year ago +143

    One thing you didn't focus on here is that the bias of GPT (and of the octopus in Emily's paper) is to ALWAYS ANSWER QUESTIONS. Like... if ChatGPT could just not answer you at all, not even "I don't know", then it would be something very different. But its bias towards answering is the heart of the problem.

    • @Ayelis
      @Ayelis a year ago +5

      But then it wouldn't be useful as a question answerer, which it might as well be. Without input, it would literally be a random sentence generator. So they trained it to answer questions incorrectly. Which is, kinda, better.

    • @MarcusTheDorkus
      @MarcusTheDorkus a year ago +52

      Of course it can't really even tell you "I don't know" because knowing is not something it does at all.

    • @robertogreen
      @robertogreen a year ago +12

      @@MarcusTheDorkus this is the way

    • @scrub3359
      @scrub3359 a year ago +4

      ​@@MarcusTheDorkus Chat GPT can easily do that. It knows what it knows at all times. It knows this because it knows what it doesn't know. By subtracting what it knows from what it doesn't know, or what isn't known from what is (whichever is greater), it obtains a difference.

    • @Brigtzen
      @Brigtzen a year ago

      @@scrub3359 No? It can't know things, because all it does is parrot words. It _cannot_ know the difference, because it doesn't know what it doesn't know, because it doesn't think at all.

  • @davidwolf6279
    @davidwolf6279 11 months ago +17

    The irresponsible claims of programmed 'thinking' and intelligence date back to McDermott's 1978 paper, "Artificial Intelligence Meets Natural Stupidity".

  • @sleepingkirby
    @sleepingkirby a year ago +170

    14:51 "OpenAI is not at all open about how these things are trained... according to OpenAI, this is somehow for safety, which doesn't make sense at all."
    Yes! Thank you! As anyone in the industry will tell you, security through obscurity is BS.
    @Adam Conover
    Thank you for getting real experts on this. People who not only know the context of the topic, but also know how it actually works and how it's built.

    • @moxiebombshell
      @moxiebombshell a year ago +9

      🎯🎯🎯 All of the yes.

    • @alexgian9313
      @alexgian9313 a year ago +15

      @sleepingkirby - Of course obscurity is necessary for security :D
      *Their* security, before the lawyers have a field day sorting through how much IP theft was involved.

    • @skywatcher2025
      @skywatcher2025 a year ago +3

      I agree that the security argument isn't great, but it's not entirely a lie, either.
      It's called an information hazard. Things that qualify are things like "how to build a nuclear bomb", "how to make chemical/biological weapons", etc.
      EDIT:
      I'd like to note that I'm not saying I support only a few companies knowing how to "build the weapon", per se. I'm speaking solely to the fact that security (however limited in scope) is one of the very few (reasonably) good reasons not to be very open about the process.
      Also, I'm well aware that some datasets are borderline, if not completely, illegally sourced. I do not support that in any capacity, and I realize that not showing how the systems are trained could enable such immoral usage. I do not claim to know a solution to this very important issue.

    • @sleepingkirby
      @sleepingkirby a year ago

      @@skywatcher2025 are you referring to the "security through obscurity" aspect or something else? Because this comment seems like a tangent to me.

    • @alexgian9313
      @alexgian9313 a year ago

      @@skywatcher2025 Oh, come on....
      Because if they explained how they did it.... why, then just ANYONE could buy millions of dollars of computer equipment, consuming more electricity than a large county, and then rip off millions of poor people to classify all the trillions of GB of data they'd scraped off the internet without permission, and create a hype bomb that this was "dangerous AI", that we needed to be protected from by covering it in total obscurity.
      I mean, WON'T ANYONE THINK OF THE CHILDREN???

  • @jt4351
    @jt4351 11 months ago +30

    Fun fact: it is still very buggy even for writing code. Depending on your prompt, it may assume you know what you're doing and suggest some amalgamation of what you asked for.
    In programming there are these things called methods and properties. Think of them as English words that tell the computer to do something: common tasks you don't have to spell out step by step, because they're built-in tools of the programming language. However, if you ask in a specific way, it will suggest your own wording as a property of the language, even though no such property exists. You can tell it that it's wrong, and for the most part it just repeats the same output. Unless I specifically ask it to use a different method, it keeps regurgitating the same thing while "apologizing".
    In plain English, it's something akin to this: say you want a recipe for crepes, and you typed in some gibberish like "I want crepes that are smoverfied". The model finds a crepe recipe and adds "once cool, be sure to smoverfy your crepes", with no idea what that means. lol This is a made-up example that may not reproduce, but I've had many cases where the code it gave me just throws an error when run, because something doesn't exist and it simply morphed my prompt into an API. It's a great tool to get started, but it mixes and matches, and is often wrong.
    It is just as artificially intelligent as it is artificially dumb. No wonder AI's mistakes are called hallucinations...
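The failure mode described above can be sketched in a few lines of Python. The method name `smoverfy` is the comment's own invented example, not a real API; the point is that a plausible-looking hallucinated call parses fine but fails the moment it runs:

```python
# Hypothetical sketch: an LLM-suggested call that treats the user's own
# wording ("smoverfy") as a real method. The line is valid syntax, but the
# attribute lookup fails at runtime because str has no such method.
text = "crepes"

try:
    text.smoverfy()  # invented name, echoing the comment's example
    outcome = "ran fine"
except AttributeError:
    outcome = "AttributeError: no such method"

print(outcome)
```

Real cases are usually subtler, such as a plausible parameter name on a genuine function, but they fail in exactly this way.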

    • @Atmost11
      @Atmost11 11 months ago

      I imagine part of your job was to help cover for the fact that, while it has a role in business, it can't perform as hyped in terms of actual unsupervised decision-making?
      Including protecting your own team from evidence that it doesn't work, I bet.

  • @hail_seitan_
    @hail_seitan_ a year ago +147

    "I am dumbfounded by the number of people I thought were more reasonable than this..."
    Never underestimate human stupidity. If there's something you think people would never do, they've probably done it and more

    • @schm00b0
      @schm00b0 a year ago +7

      It's never stupidity. It's always greed and fear.

    • @ethanhayes
      @ethanhayes a year ago +7

      Not disagreeing with your point, but I think her point was that specific people she knew turned out to be less reasonable than she thought, which is a bit different from general "human stupidity".

    • @asdffdsafdsa123
      @asdffdsafdsa123 a year ago

      God, people like you make me sick. You're not smart. Neither of them addressed the emergent capabilities expressed in GPT-4, which is the SOLE REASON people think LLMs may eventually achieve AGI. Plus, their entire gotcha about the "Sparks of AGI" paper was that they had a problem with IQ tests???? Something that's widely used to this day to measure human intelligence??

    • @Ilamarea
      @Ilamarea a year ago +1

      This comment section is a perfect example of it.
      The AI we have is a research preview of a pre-alpha, an AI embryo. It's literally the first version that worked, and it got vision after a couple of months. At the current rate, we have years until the collapse of the capitalist system, which will spark wars for resources. And the inevitable end result, regardless of how it goes, is our extinction: once AI makes decisions, once it learns by itself, we will have lost our agency. We won't control our fate and will be unable to react to threats, the most potent of which will come from the AI, in forms we wouldn't expect - like perfect robotic lovers we can't breed with.

    • @schm00b0
      @schm00b0 a year ago +8

      @@Ilamarea Dude, just keep working at that novel! ;D

  • @polij08
    @polij08 a year ago +151

    Just yesterday, my law firm held a legal writing seminar for us associates. At the end, the presenter made a brief note about using AI for legal writing. In a word: DON'T. He had ChatGPT (or whatever bot) generate a legal memo. First, it was stylistically poor. Second, the bot failed to know that the law at the center of the memo had recently changed, so the memo was legally inaccurate. AI text may be able to generally get the style of writing legal briefs, but until it can accurately confirm the research that supports the writing, it is useless at best, very dangerous at worst. My job is safe, for now.

    • @jaishu123
      @jaishu123 a year ago +7

      GPT-3.5 is not connected to the internet; GPT-4 can be, via plugins.

    • @robinfiler8707
      @robinfiler8707 a year ago +3

      it can already confirm it via plugins, though most people don't have access yet

    • @deltoidable
      @deltoidable a year ago +6

      It won't be long until it can. Look at the GPT-4 plugins that allow you to feed it current data to analyze: you'll be able to upload digital copies of the laws in your state, or give it access to a powerful calculator like Wolfram Alpha, current stock market data, or just the internet generally, letting it use tools when it doesn't know the answer itself.
      Currently ChatGPT isn't actively seeing data; it was trained on data from 2021 or earlier, and it's remembering from the data it was trained on. When you give ChatGPT access to data or tools to ground its answers in real information, that problem goes away.

    • @skyblueo
      @skyblueo a year ago +1

      Thanks for sharing that. Is your firm creating policies that forbid the use of this tool? How could these policies be enforced?

    • @achristiananarchist2509
      @achristiananarchist2509 a year ago +21

      One of the main uses I've found for it as a programmer is pretty funny and related to this. I use ChatGPT for two things: 1) generating boilerplate (which it's actually pretty bad at, but sometimes it takes less time to correct its mistakes than to write it myself) and 2) something we call "rubber ducking".
      Rubber ducking is when you corner a co-worker and talk at them about your problem until you brute-force a solution yourself, often with little to no input from said co-worker. It's called "rubber ducking" because you could have saved someone else some time by talking to a rubber duck on your desk instead of a person. ChatGPT is *extremely* useful for this precisely because it is 1) very dumb and 2) has no idea how dumb it is. If I'm stuck on something, I can ask ChatGPT about it, and it will feed me a stupid answer that I've either already thought of or that very obviously wouldn't work. In the process of wrestling with the AI, I'm forced to think about the problem, and I often get my "Eureka" moment as a result. A rubber duck just sits there; ChatGPT feeds me wrong answers that make me think about my issue while assessing why they are wrong. Big improvement over the duck.
      So it's great as a high-tech rubber duck. If there are any other applications where being naive, often wrong, and unable to self-correct is actually a feature rather than a bug, they should start pivoting into those markets.

  • @sunnohh
    @sunnohh a year ago +107

    I work with AI, and my entire job is fighting against people who think it somehow works.

    • @FuegoJaguar
      @FuegoJaguar 11 months ago +21

      Soon, 100% of what I do as a director in tech will be telling people not to put AI in stuff.

    • @RKingis
      @RKingis 11 months ago +5

      If only people realized how simplistic it really is.

    • @TheManinBlack9054
      @TheManinBlack9054 9 months ago +3

      @@RKingis Who are these people who know NOTHING about AI and then say it's really simple to understand how it works? It's not simple, it's hard; it's basically a kind of magic, because we have no idea what is actually happening inside. Interpretability is a LOOOOOOONG way away.

    • @carl7534
      @carl7534 9 months ago

      @@TheManinBlack9054 How do you think it is hard to understand what a chat "AI" does?

    • @Maartenkruger324
      @Maartenkruger324 8 months ago

      @@TheManinBlack9054 "We" will never know, because everything GPT says is non-referenceable. They, the GPT programmers, do know how it got to its answer: through statistical calculations. At best it can be worked back to a huge load of data with no direct answer. The bot has no physical reference system - mostly scripted sentences with no clue about the meaning of any of the words, separately or together. ChatGPT does not know what a chair is.

  • @estycki
    @estycki a year ago +101

    What I don’t understand is all these people who keep saying “well it’s still in its infancy! 👶 And let’s replace our doctors, lawyers, programmers with hard working babies today!” 😂

    • @2265Hello
      @2265Hello 11 months ago +8

      A weird mix of instant gratification and the need to save money, a side effect of the basic-survival mindset in America.

    • @Praisethesunson
      @Praisethesunson 10 months ago +5

      @@2265Hello So capitalism.

    • @2265Hello
      @2265Hello 10 months ago +2

      @@Praisethesunson basically

    • @ShadowsinChina
      @ShadowsinChina 10 months ago

      It's the racism.

    • @parthasarathipanda4571
      @parthasarathipanda4571 9 months ago +2

      I mean... these are pro-child labour people after all 😝

  • @XPISigmaArt
    @XPISigmaArt a year ago +74

    As a digital artist (and human living in society) I really appreciate this discussion, and hope this side gets more traction to combat the AI hype. Thank you!

    • @andrewlloydpeterson
      @andrewlloydpeterson 9 months ago +2

      This is funny, because like 2-3 years ago (and even now) digital artists were gatekept as hell, and now they suffer from AI haters because digital art is easily mistaken for AI art.

    • @TheManinBlack9054
      @TheManinBlack9054 9 months ago

      "(and human living in society)"
      Why would you add that? Did you think we thought you were Mowgli or something? Or do you think there are people out there who are not human, either in a sci-fi way or a Nazi way?

    • @andrewlloydpeterson
      @andrewlloydpeterson 9 months ago +1

      @@TheManinBlack9054 Anti-AI folks are too lazy, so they asked an AI to write an anti-AI post; that's why it said such a weird phrase.

  • @GregPrice-ep2dk
    @GregPrice-ep2dk a year ago +402

    The larger issue is techbros like Elon Musk who think they're real-life Tony Starks. Their track record of actually *accomplishing* anything proves otherwise.

    • @CarbonMalite
      @CarbonMalite a year ago +79

      If Elon was tasked with inventing a reality-busting mech suit he would invent the 8 day work week instead

    • @mshepard2264
      @mshepard2264 a year ago

      SpaceX put as much mass in orbit as pretty much every other company on Earth put together. Also, without Tesla, electric cars would still be getting mothballed every 5 years. So feel free to hate Elon, but he isn't a dumb guy. He is terrible at public speaking. He is bad with people. He is also super weird. But not like your average Silicon Valley tech bro.

    • @GirlfightClub
      @GirlfightClub 1 year ago

      100%. Also, AI or big tech execs imposing their own morality on all of us through censorship that doesn't reflect real-life laws and community standards.

    • @stevechance150
      @stevechance150 1 year ago

      I used to be an Elon fanboi, but not so much now. However, 1. NOBODY was manufacturing electric cars until Tesla did it. 2. NOBODY else goes to orbit and lands a rocket back on the pad.

    • @O1OO1O1
      @O1OO1O1 1 year ago

      No, con men aren't the problem. It's people who fall for them. And continue to fall for them for decades. And journalism and journalists are also at fault. And the government is at fault for continuing to fund him. And the people for voting in such stupid representatives. And his employees for putting up with this crap instead of striking and leaking all of the dodgy s*** he's been up to. People, good people, could take down Elon very easily. And then he can sell used cars like he should be.
      "I tried to think about what would be most important for humanity..."
      "Dude, shut up. I just want to buy a car"

  • @BMcRaeC
    @BMcRaeC 1 year ago +6

    59:13 when Emily's cat decides to enter the conversation… I burst out laughing in the library.

  • @heiispoon3017
    @heiispoon3017 1 year ago +89

    Adam thank you so much for providing Emily and Timnit the opportunity for this conversation!!

  • @terriblefrosting
    @terriblefrosting 1 year ago +10

    I _really_ love listening to people who really really know their stuff, do serious thinking about more than just "right now", and genuinely think about the real benefit to all of new things.

    • @oimrqs1691
      @oimrqs1691 1 year ago

      Do you think people working on OpenAI don't think about stuff?

    • @LafemmebearMusic
      @LafemmebearMusic 1 year ago +1

      @User Name do you think their point was that the other side is stupid? That's what you took from this?
      For me, I heard them say: hey, I don't have to agree with everything you want, but there are serious concerns about the marketing of AI versus the reality, and we need more transparency so we can actually know where we stand with the tech and how it can help others. Also, they are deeply concerned about the eugenics angle it seems to be taking.
      Can I ask, truthfully with 0 malice, real question: how did you take away from this that they think the other side is stupid? I definitely do think they find what they're doing dangerous and ridiculous, but stupid? I dunno, can you elaborate?

  • @samanthakerger3273
    @samanthakerger3273 1 year ago +14

    I love how much smarter I feel for having listened to this when it's a podcast that includes the sentence, "Is the AI circumcised?" Which is one of the funniest and darkest sentences in the podcast.

  • @sleepingkirby
    @sleepingkirby 1 year ago +33

    I do want to mention that people who monitor bot accounts have recently seen a large uptick in said bots posting things that talk up AI (basically spam) in places like user reviews, TikTok, and comments on random things.
    Also, there has been a report out recently that said AI-generated code is often unsafe, and it won't point that out unless you ask it to.
    But yes, it has become a marketing term.

    • @MCArt25
      @MCArt25 1 year ago +1

      to be fair, "AI" has always been a marketing term. At no point has anybody ever managed to make intelligent software; it just sounds cool and sci-fi and people will always fall for that.

    • @sleepingkirby
      @sleepingkirby 1 year ago +2

      @@MCArt25 Well... no. Artificial intelligence goes back to science fiction first, before it was even close to being a thing in reality. Like Isaac Asimov. There's a work from 1920 that talked about intelligence in robotic beings. It wasn't always a marketing term, but it has become one.

  • @UK_Canuck
    @UK_Canuck 1 year ago +55

    Thanks to you and your guests, Emily and Timnit. This was a fascinating conversation that filled in so much detail for me. I had a vague sense of disquiet about the hype, the possibly plagiaristic nature of the output, and the accuracy of the data sets used for training. Emily and Timnit have provided some solid background to give a more defined shape to my concerns.
    I found particularly interesting the information that the groups driving the AI/AGI project had such clear links to the philosophy behind eugenics. Disturbing.

    • @robertoamarillas
      @robertoamarillas 11 months ago

      I honestly believe Adam fails to understand the real potential and potential harm that artificial intelligence represents.
      Human intelligence is not as unique as he wants to make it out to be; the reality is that human creativity is nothing more than the ability to blend concepts and ideas, and in that, LLMs are incredibly powerful and we are only scratching the surface.
      I think your whole project of skepticism and uncovering the truth behind everyday deception is very valid and necessary, BUT I think you are really losing sight of the kind of paradigm that LLMs represent. I really think you underestimate the potential existential risks, and it is annoying and irresponsible for you to indirectly attack the voices that have been raised to warn about it.
      You treat people like Eliezer Yudkowsky and the like as doomsayers who are motivated by some kind of financial gain.

  • @futureshocked
    @futureshocked 11 months ago +48

    The reason they're pushing AI is because SILICON VALLEY IS OUT OF IDEASSSSSSS. If you look at what they've been doing for the past 15 years and you're brutally honest about it--we've wasted an entire generation of brilliant young programmers to make mobile apps. We've wasted a generation of brilliant product designers to make the Juicero. Bitcoin. Subscription apps. Tech has been in absolute clown-territory for a long time and no one wants to admit it.

    • @personzorz
      @personzorz 11 months ago +1

      Because there's nothing left to do in that sphere

    • @silkwesir1444
      @silkwesir1444 11 months ago

      Boooo!!! Resistance is futile! 😈

    • @futureshocked
      @futureshocked 11 months ago +5

      @@personzorz There really isn't. And it's wild watching companies that should know better just throw money at shit like this. It's tiresome, these billions going into Clippy 2.0 could really be used for, ya know, jobs.

    • @Praisethesunson
      @Praisethesunson 10 months ago +1

      Exactly right. But they need to maintain their access to vast capital markets, so they lie out their ass about the capability of a stupid computer program.

    • @coreyander286
      @coreyander286 8 months ago +1

      How about protein folding programs? Isn't that a recent Silicon Valley success with concrete benefits for public health?

  • @MusaMecanica
    @MusaMecanica 1 year ago +12

    I loved this show and these ladies should have their own! They are funny, smart, entertaining, and put all of this news in perspective. Keep on fighting the good fight.

  • @tychoordo3753
    @tychoordo3753 11 months ago +5

    The reason they are calling for regulations is simple: it's the same tactic corporations have used since forever. Basically you ask government for regulations that are at most a minor nuisance for your business, but make it impossible for newcomers to get started because of the overhead the regulations create, so you get to stay on top without having to fairly compete. Same reason why guilds used to be a thing in the middle ages.

  • @boca1psycho
    @boca1psycho 1 year ago +11

    This conversation is a great public service. Thank you

  • @gadgetgirl02
    @gadgetgirl02 1 year ago +6

    "End of work! Everything automated!" sounds great until you remember a) no-one said anything about changing how the economy works, so people still need means to have incomes and b) if automated everything was so great, people would have stopped paying a premium for handcrafted stuff by now.

  • @aden_ng
    @aden_ng 1 year ago +50

    After making my own video about AI art generators and replicating the process by which Stable Diffusion generates its copies, proving that they are indeed stolen artworks, I ended up in this really weird spot in online conversation where, despite not liking them or using them, I've become kind of one of the few people who actually knows how AI generates its art.
    And the thing I noticed is that arguments for AI talk overwhelmingly about the monetary aspect, with very little understanding of the technology and the morality behind it.

    • @mekingtiger9095
      @mekingtiger9095 1 year ago +20

      Hahahahahaha, yeah, this is the saddest part.... A lot of pro-AI arguments are focused solely on the monetary aspect and nothing else. Really shows you how much they disregard the social consequences of this tech.

    • @chielvoswijk9482
      @chielvoswijk9482 1 year ago +14

      @@mekingtiger9095 The magic word that makes me fall asleep in such conversations is "democratizing," which I've come to understand is just code for wanting stuff without having to put in effort or pay for it.
      E.g., when they say democratizing art, it mostly means they just don't want to pay an artist for some 'intimate material'. If you catch my drift...

    • @MarcusTheDorkus
      @MarcusTheDorkus 1 year ago +3

      @@chielvoswijk9482 Sounds like the more accurate word would be "communizing"

    • @MrFram
      @MrFram 1 year ago

      I watched OP's video, he knows no math and the video was pure misinfo. To anyone reading this, please consider picking up a math textbook rather than listening to these idiots failing to grasp basic multivariate calculus.

    • @choptop81
      @choptop81 1 year ago +7

      @@MarcusTheDorkus Not really. It's corporations seizing the means of production from workers (artists here). It's the opposite of communizing

  • @lunarlady4255
    @lunarlady4255 1 year ago +91

    The only thing that can stop a bad guy with AI is a good guy with AI. So give us your money and your data and don't ask any questions if you want to live...

    • @aaronbono4688
      @aaronbono4688 1 year ago +10

      That is pretty much the theme of Terminator 2 isn't it?

    • @kenlieck7756
      @kenlieck7756 1 year ago +5

      @@aaronbono4688 Wasn't that written by humans, though?

    • @aaronbono4688
      @aaronbono4688 1 year ago +7

      @@kenlieck7756 yes. These AIs just take the information they find and regurgitate it in new ways, and since that information contains things like the Terminator movies, you would definitely expect them to mimic that. But to the point of the original message, this is about what these companies are telling the public about the AIs that they are creating.

    • @kenlieck7756
      @kenlieck7756 1 year ago +1

      @@aaronbono4688 Ooh, you just made me realize the ultimate flaw in the current AI -- that they are just as likely to crib from, say, the most recent Indiana Jones movie as they are to do so from the first...

    • @redheadredneck
      @redheadredneck 1 year ago +2

      Quick, insert BS chips into your head so we can defeat an ambiguous, unscientific terminator

  • @johnbarker5009
    @johnbarker5009 11 months ago +44

    THANK YOU for drawing attention to long-termism and the connection to Eugenics. This is insane, terrifying, and mind-numbingly stupid all at once.

  • @DerDoMeN
    @DerDoMeN 1 year ago +43

    It's always a shocker listening to people who actually don't glorify these search algorithms... I find it even more shocking to listen to somebody from the field who's not trying to show AI as anything more than what it is.
    Really nice to hear that there are some sane people in the AI field, in which I lost interest years ago (due to the obvious lack of reason in the land of proponents).

  • @nzlemming
    @nzlemming 1 year ago +103

    I love these women! When I saw the pause letter, I immediately thought that it was commercial in nature and discounted it. As a rule of thumb, anything Thiel and Musk agree on is bound to be a grift.

    • @Sarcasticron
      @Sarcasticron 1 year ago +10

      Yes, when they said why can't the "AI ethics" people and the "AI safety" people agree, I thought immediately "It’s because the AI safety people are grifters."

    • @Neddoest
      @Neddoest 1 year ago +4

      It’s a good rule of thumb…

    • @fark69
      @fark69 7 months ago

      I'm kind of shocked at how well Gebru, particularly, has laundered her reputation. A few years back, Gebru accused Google of pushing her out because she was an AI ethicist, and then it was revealed she had actually given them an ultimatum to either do X (X being let her publish a paper they said needed more work to be up to snuff) or she would walk, and they chose to let her walk. At that time (it was 2-3 years ago, I believe), her reputation was in the gutter. The trust was so broken, because if she would misrepresent that, what else would she misrepresent to further herself and her research?
      Now to see her being treated as an AI ethics expert is wild, especially given her own ethical lapse.
      Bender has a better track record.

  • @warmachine5835
    @warmachine5835 1 year ago +5

    53:00 same. There's a certain delight you can see on a person's face when they're in their area of expertise and are in a prime position to just utterly debunk some common, pernicious myth that has been repeated so much it has become personal for that person.

  • @OsirisMalkovich
    @OsirisMalkovich 1 year ago +15

    I have a very easy system. I keep a card in my pocket that reads "do the opposite of whatever Elon Musk says." It has never failed.

    • @peter9477
      @peter9477 1 year ago

      So being poor has worked out well for you, has it? ;-)

    • @SharienGaming
      @SharienGaming 1 year ago

      @@peter9477 how's that boot taste? and getting ready for the next crypto crash?

    • @gwen9939
      @gwen9939 1 year ago +5

      @@peter9477 And did you become a billionaire by sucking up to Elon on the internet? Has senpai noticed you yet? Didn't think so.

    • @peter9477
      @peter9477 1 year ago

      @@gwen9939 I'm not a billionaire, and I dislike Musk. Not sure what senpai means, but whatever you're trying to say here, you failed to get the idea across.

    • @dperricone81
      @dperricone81 1 year ago +2

      @@peter9477 I got it. Maybe don’t simp for snake oil salesmen?

  • @IngramSnake
    @IngramSnake 1 year ago +34

    Timnit Gebru is the real deal. As a postgrad A.I. student, we constantly refer to what she and her team have put together to evaluate our models and our approach to datasets. 🎉

    • @fark69
      @fark69 7 months ago

      Is this true? Does she have a good reputation as an AI ethicist in academia? I remember her public kerfuffle with Google a few years back basically tanked any reputation she had because she was caught lying about Google's "pushing her out" of her job as an AI ethicist there. And public lying tends to not look great on an ethics researcher...

    • @Stevarious
      @Stevarious 6 months ago

      @@fark69 Weird, I've seen a few claims that Timnit Gebru lied about something about that situation, but those claims never seem to include evidence. Meanwhile, this comment section is loaded with people who work in AI and have a deep respect for her.

  • @LizRita
    @LizRita 1 year ago +7

    These two were great to watch together in an interview! It's really sobering to have folks tear down claims that have been so normalized about AI. And suggest actual regulations that make sense.

  • @SkiRedMtn
    @SkiRedMtn 1 year ago +4

    Also, pertaining to legal and policy documents: if you leave out a comma or put one in the wrong place, it's possible to change the meaning of a sentence. Have that happen once on page 9 of a legal document, and ChatGPT might have just lost you your case because you decided you didn't need a person.

  • @ellengill360
    @ellengill360 11 months ago +4

    This is extremely important information. I hope your guests consider writing a version of the Stochastic Parrots article for non-scientists in plain language, maybe highlighting some of the less mathematical points. I'm going through the original article but find it hard to recommend to people who won't want to spend the time or will give up.

  • @faux-nefarious
    @faux-nefarious 1 year ago +19

    53:15 reading the footnotes definitely is spicy in this case! The paper sounds solid in citing a group of psychologists writing an editorial about intelligence; turns out the editorial was hella racist! Did Microsoft not know?? Did they just assume no one would notice?

  • @lady_draguliana784
    @lady_draguliana784 1 year ago +4

    I recommend this vid to SO MANY now...

    • @heiispoon3017
      @heiispoon3017 1 year ago +2

      Please don't stop; more than ever, we need more people informed about how these LLMs "work"

  • @Furiends
    @Furiends 1 year ago +4

    The core takeaway everyone should have whenever they think about AI and LLMs is that language is cooperative. This is why advertising works on people who know advertising is trying to manipulate them. LLMs aren't going to make an AGI, but they can make something that makes us think it's an AGI. Because YOU are doing the imaginative work to convince yourself of that. The LLM just triggered what you presumed to be cooperation with the story you're building in your mind.

  • @5minuterevolutionary493
    @5minuterevolutionary493 1 year ago +25

    Last comment: so important for humanists (in the sense of non-religionists) to discern between an anti-science posture on the one hand, and a reasoned critique, based in history and evidence, of power dynamics impinging on the practice and priorities of science. There is a reflexive and lazy support for "science," which is not really a thing in a vacuum, but a product of human relations and material circumstance.

    • @mekingtiger9095
      @mekingtiger9095 1 year ago +12

      Biggest problem I see surrounding techbros is that they imagine that a magnificent utopia they saw in some "time travel to the future" episode of a children's cartoon, or those utopian depictions of the "future" from the 1950s and 1960s, is magically gonna pop up with tech advancement for the sake of tech advancement, because they seemingly have a literal child's understanding of how human relations and power dynamics work in the real world.
      Sorry, *dystopian* sci-fi is far closer to what would actually come out of it than their visions of "progress".

    • @gwen9939
      @gwen9939 1 year ago +2

      If I'm understanding your point correctly, there's a lot of tech fetishism on one side and anti-oversight sentiments, which generally takes the public appearance of being "anti-science"/"anti-expert". Both of these sides are noise that we need to cut through, and both are simultaneously being manipulated by people in power to help them stay in power. Building up hype from the tech fetishists helps them boost their profits and allows them to keep an iron grip on the tech and financial world, or at least get their slice of the pie, whereas on the other side it's usually politicians creating moral panics around scientific discoveries that are well-understood.
      The answer to both is scientific literacy, but if you've ever talked to a self-appointed believer in science reciting medical conclusions from pop-science articles, you understand how little scrutiny these people bring to any scientific subject, and they are the more literate of the two.
      Things we cannot ignore is both that AI as an emergent technology is currently being built within the framework of our existing capitalist dystopia where wealth inequality is increasing faster and faster, so if it turns out to be a powerful technology it could land in the hands of the few who've already decided that they and their offspring are the ones who should inherit the earth, adopting eugenics-like philosophies.
      The 2nd is that regardless of what is currently happening with AI and the companies developing them and how that follows the same trend as other tech trends meant to make fast profits, AGI as an emergent technology that we're extremely likely seeing the earliest steps towards now, on a purely theoretical basis could be extremely dangerous. I know that it sounds ridiculous, but just as no one believed we could fly until suddenly we could, and no one believed we could split an atom until suddenly we could, most of us won't believe that very powerful AGI will exist until suddenly it does. There are well-understood theoretically moral, philosophical, and mathematical problems that we have not yet solved, and are crucial that we solve before such an AGI exists.
      For all these issues the answer would be as much unity globally as possible and as little power in the hands of few very powerful people and companies as possible, with full transparency of what's happening in the research, but that's the same playbook we'd need for climate change and look how that's going.

  • @r31n0ut
    @r31n0ut 11 months ago +3

    As a junior programmer I do use AI, but really only as a sort of advanced Google: just ask ChatGPT, "hey, how do I make a popup in HTML and have it display some text from this form I just made?" You can really only use it for small chunks of code, because a) it gets shit wrong half the time and b) if you use it for larger pieces of code... you won't understand the code you just wrote, and if it works it won't do what you think it does.

  • @CanuckMonkey13
    @CanuckMonkey13 1 year ago +4

    This was such a fascinating, educational, and valuable discussion. Thanks so much to everyone involved!
    I've been watching more of Adam's work recently, and I find myself wondering, "why did I only recently discover him?" Thinking today I realized that it's probably because I haven't had a connected TV for at least a decade now, and I don't want to pirate content, so when he was mainly on TV I was completely cut off. Adam getting bumped from TV by evil corporate interests has benefitted me greatly, it seems!

  • @sowercookie
    @sowercookie 1 year ago +31

    It's eternally disheartening to me how widespread eugenics ideas are: in schools, in the media, in pop culture, in casual conversation... The ai bros being another drop in the bucket, insanity!

    • @Praisethesunson
      @Praisethesunson 10 months ago

      Eugenics is a staple tool of capitalist oppression.
      It gives the already wealthy a paper thin veneer of science to justify their position in the hierarchy.
      It gives the poors another knife to hold at each other's throats while the rich keep sucking the life out of the planet.

  • @WraithAllen
    @WraithAllen 10 months ago +3

    The mere fact you can ask ChatGPT to "write in the style of" any living writer (or a writer from the past 50 years) and it puts something out that's similar to that author's work pretty much demonstrates it used copyrighted work in its training data...

  • @LandoCalrissiano
    @LandoCalrissiano 9 months ago +5

    The problem with the current level of AI is that it's good enough to fool the uninformed, so it's great for information warfare, propaganda, and spam. I work in the field and even I get fooled sometimes.
    It's great tech and can augment human abilities, but few people seem to want to pursue that.

  • @shadow_of_the_spirit
    @shadow_of_the_spirit 1 year ago +6

    I was so glad to hear them bring up the importance of being open with this tech. So many people I hear talk about these models and why they're bad never talk about making sure we can know what the system is doing. Instead, they all complain about the ones that are open about how they function, provide downloads of the models, and are often open about the training data as well. I think if we keep the tech open, it will be a lot harder for people to be hurt, and it makes the people building these systems accountable. But if we let them hide what they are doing and how they are doing it, then it's not a matter of if but when people get hurt.

    • @MaryamMaqdisi
      @MaryamMaqdisi 1 year ago +2

      Agreed

    • @RobertDrane
      @RobertDrane 11 months ago

      Amsterdam (or some Dutch city) released the source for the "AI" they were using for fraud detection in social benefits in the past couple of months. Strong sunshine laws over there, I guess. Critics and researchers had only been able to speculate about the implicit bias problems up until then, as governments try to keep it private. I cannot overstate how stupid the system is. A podcast called "This Machine Kills" had an episode on it, but it got very little mainstream coverage. I think the episode was titled "The Racism Machine".

  • @shape_mismatch
    @shape_mismatch 11 months ago +23

    This is Pop Sci done right. Kudos for inviting the right kind of people.

  • @funtechu
    @funtechu 1 year ago +5

    16:40 In the vein of ChatGPT-produced results looking correct to those who are not familiar with the topic, I would disagree with the assumption that ChatGPT-produced code is good. I've fed a large variety of simple programming prompts to ChatGPT, and the results produced were terrible. It was a great mimic of what code that did what was requested would look like, but it was not usable code, and some of the stuff produced (particularly when asking about writing secure code) was downright dangerous.

    • @vaiyt
      @vaiyt 11 months ago +1

      Often when it is correct, it's just copying an existing answer from Stack Overflow or whatnot.

  • @Toberumono
    @Toberumono 11 months ago +9

    Also, and I cannot believe how rarely this seems to get mentioned, these bots *suck* at programming.
    And it’s not because there’s any synthesis of new code going on - the implementation seems to actually be, “grab the first answer on stackoverflow”. My source for that is just… looking at stackoverflow because I got suspicious after the “synthesized code” was answering somebody else’s question. If it can’t find the answer on stackoverflow, it starts copying forum posts from other places, btw. You can see that because it starts giving answers that are either identical or identically wrong.

    • @Erik-vf9yn
      @Erik-vf9yn 10 months ago +1

      If you read some of what those AI bros write, you'd think the coding capabilities are the second coming of Christ, lmao. Figures. I mean, we do have the Copilot lawsuit at least.

  • @neintales1224
    @neintales1224 10 months ago +5

    As someone who's written and enjoyed reading fanfic, I would like to push back on your lines about AI writing decent fic, even though they were said jokingly. AI-generated fic, and people deciding to 'end' fic that other people wrote but are slow to finish, are the source of a lot of irritation in the community.
    Also, it could be scraping *transcriptions* of your episodes or shows that people put together for the disabled community or ESL folks. I see a lot of transcriptions of visual posts and sometimes full film clips in some places I lurk, put together and posted by well-meaning people, and I'm sure they've been scraped.

    • @Erik-vf9yn
      @Erik-vf9yn 10 months ago

      I've also heard that they could be using speech to text for videos (and probably therefore series and movies on, for example, pirate sites and youtube) to get information to train on. How much of that is true, idk, but I wouldn't be surprised.

  • @quietwulf
    @quietwulf 1 year ago +4

    We’re chasing guilt free slavery. We want something that can think and problem solve like a human, but be completely obedient.
    They can see the dollars on the table if they can just crack it.

  • @batsteve1942
    @batsteve1942 1 year ago +6

    Just finished listening to this podcast on Spotify, and it was refreshing to hear a more critical view on all the AI mania the media seems to love exaggerating right now. Emily and Timnit were both great guests and very informative.

  • @ssa6227
    @ssa6227 1 year ago +79

    Thanks Adam.
    Good to know there are still some serious, not-sold-out researchers and academics who are working for the good of humanity and who call out the BS as it is.
    I was skeptical of all the hype, and lo, it was BS.
    I hope this video reaches as many people as possible so people don't fear their BS

    • @DipayanPyne94
      @DipayanPyne94 1 year ago

      AI is just a drop in the ocean of Neoliberal propaganda.

    • @cgaltruist2938
      @cgaltruist2938 1 year ago +1

      Thanks, Adam, for helping people keep their sanity.

    • @apophenic_
      @apophenic_ 11 months ago

      ... what does it mean to be "bs" to you? Adam doesn't understand the tech. Neither do you. What bs are you on about kiddo?

    • @fark69
      @fark69 7 months ago +1

      Gebru worked for Google's AI program for years and would have still been working there now if they hadn't called her bluff when she sent an email saying "Approve my paper or I walk". She's not exactly "not sold out"...

  • @sclair2854
    @sclair2854 11 months ago +1

    Adam, big thanks for this! Really glad you took the time to talk to experts on this!

  • @drew13191111
    @drew13191111 1 year ago +4

    Excellent video! Thank you Adam and guests.

  • @shmehfleh3115
    @shmehfleh3115 1 year ago +7

    If you were expecting either Woz or Musk to be remotely reasonable, let me remind you what lots of money does to the brains of rich people.

  • @futurecaredesign
    @futurecaredesign 1 year ago +4

    Loyalty would be the most horrible thing to build into an AI or AGI system, because loyalty can be abused in horrible ways. It's how we get men (and women, but mostly men) to go to war with people they have no personal problems with.
    No, if you are going to add something... add accountability. Or self-critique.

  • @user-uf5gp4fu3n
    @user-uf5gp4fu3n 7 months ago +1

    How do these companies sleep at night? They should be held accountable

  • @ianwarney
    @ianwarney 1 year ago +2

    1:07:26 Key word here is "consultation".
    I love the analogy of "information pollution" / "polluting the info sphere with noise and gibberish" -> Confusion of the masses is a (financial and power-seizing) opportunity for the elites.

  • @Talentedtadpole
    @Talentedtadpole 1 year ago +3

    This is important, the best thing you've ever done. Please keep going.
    So much respect for these brave and knowledgeable women.

  • @fafofafin
    @fafofafin 11 months ago +3

    Amazing video. So good to have these two experts explaining to laypeople like me what this whole thing is really about. And also, YIKES!

  • @RoundHouseDictator
    @RoundHouseDictator 1 year ago +12

    AI-generated text could produce even more personalized misinformation for social media

  • @vafuthnicht7293
    @vafuthnicht7293 1 year ago +4

    I'm a layman in regards to AI and machine learning, but I've been trying to tell my friends who are jumping on the "Skynet is coming" panic train that while there are concerns with its development, it's still a computer: it's still subject to GIGO, and the questions of who's in control and what model is being used are of far greater concern.
    It's validating to see experts having that discussion and also giving me other things to think about.
    Thank you all for doing this, I appreciate the poise and rationality!

  • @joshuadarling7439
    @joshuadarling7439 1 year ago +3

    Another great episode with excellent guests. Keep spitting the truth and learning ❤

  • @emmythemac
    @emmythemac 1 year ago +11

    I have not dipped my toe into Adam Ruins Everything fanfic, but if your AI-generated script has you making out with Reza Aslan then you've got your answer about where they get their training data

  • @Markleford
    @Markleford 1 year ago +2

    Fantastic guests and conversation!

  • @sleepingkirby
    @sleepingkirby 1 year ago +2

    @Adam Conover
    Yeah, that was really good. Once again, thank you very much for getting guests that know technical side as well as the in context of the topic.

  • @connorskudlarek8598
    @connorskudlarek8598 1 year ago +5

    I think the problem with AI is that the public doesn't know anything about it.
    The YouTube algorithm that recommended this video to me is AI. Google Maps suggesting various places when I type in "fast food" and determining, based on time of day, the best route to get there fast, well that's AI. My Fitbit has AI in it to determine when I am asleep and awake.
    AI is not dangerous. Dangerous use of AI is dangerous. The public can't tell the difference though.

  • @louisvictor3473
    @louisvictor3473 1 year ago +9

    Around 1:01:00 this is one of my main issues with the whole "let's build an A(G)I to solve our problems" idea. Suppose we could. Congratulations, for all intents and purposes it is indistinguishable from human-level sentience (even more so than animals)... so what now? Do we potentially enslave this sentience to do our bidding? But if we chose not to do that for moral reasons, what did we create it for then? So it really feels like it is either an inherently immoral pursuit which will just really end in Terminator territory (i.e. complicated species self-past-tensing via hubris overdose), or purposeless and pointless. Meaning, if we were asking "why", we can just ignore option B; it is option A, from short-sighted people full of gas telling themselves and every fool who will listen that they're the real visionaries. Seems like the techbro pipedream solution to the "problem" of not being able to own slaves legally anymore, fux that and fux those guys.
    Meanwhile, a much more intelligent use of time and research resources seems to be the pursuit not of a superintelligence that solves all our problems for us so we don't have to think anymore (but then who is to say the superintelligence's solution is in fact good, and the alleged superintelligence in fact intelligent), but instead to put the thinking cap on and think up solutions to problems ourselves, and build the tools, including regular-ass AI (not the sci-fi/AGI pipe dream), to help find those solutions and execute them.

    • @SharienGaming
      @SharienGaming 1 year ago +1

      i would argue that the main purpose of creating an AGI would be to further our understanding of intelligence and then to see if we could create something like our own
      i dont know if it would solve any problems... it might - but honestly... the main point of science like that is to further our knowledge and understanding and then going on from there
      mind you, thats not what those grifters are after and they arent actually interested in AGI... they just want to drum up hype to get money... thats their end goal... money and power... longtermists are just rich right wing grifters masquerading as people who care to divert support from actual climate activists and research

    • @louisvictor3473
      @louisvictor3473 1 year ago

      @@SharienGaming Then you're arguing you don't get the concept of an AGI. An AGI is already an intelligence like our own. Not identical (that we know how to do; we call them children), but alike. It is a circular argument, dev A to understand A to dev A; it is still purposeless.

    • @SharienGaming
      @SharienGaming 1 year ago +1

      @@louisvictor3473 oh so procreation is purposefully building a child bit by bit, understanding how everything works?
      your argument is that pressing play on a VCR is the same as creating the VCR, tape and the video on the tape
      there is a massive difference between using an existing machine that does the job and building your own that is supposed to do the same job
      and the latter teaches you a LOT about how the former works through the successes and failures along the way

    • @louisvictor3473
      @louisvictor3473 1 year ago +1

      @@SharienGaming Are you just arguing in bad faith and intentionally distorting what I said, or are you just really bad at reading while really wanting to argue about something you clearly are "passionate" about first and knowledgeable about dead last? Both options are terrible, but at least one is just dishonest, not voluntarily stupid.

    • @SharienGaming
      @SharienGaming 1 year ago +2

      @@louisvictor3473
      "An is already an intelligence like our own. Not identical (that we know how to do, we call them children)"
      that is what i was referring to - the way i read it you claim we know how to make an intelligence like our own, because we know how to make children
      and that is patently wrong
      and furthermore - science is self purposing... the point of it is to advance knowledge... it is literally in the name... so of course a lot of what we do in research is to basically see if we can do it and how it actually works...
      mind you - and i pointed this out in my first reply... none of this is part of the motivation of longtermists... because they arent interested in advancing knowledge - they are interested in diverting attention, resources and support from activists who are actually trying to solve our current climate crisis... which genuinely is not going to be solved by tech...we already know the solution for it... but longtermists are rich grifters deeply rooted in capitalism... and capitalists are the root cause for the majority of the problems that cause and profit from disasters...and of course as a result their interests lie in preventing the substantial systemic changes that are needed
      bit of a long aside there... but to get back to my original replies motivation:
      i am just providing a reason for why actual researchers might want to figure out how to make one... which boils down to "because it is interesting"

  • @dgholstein
    @dgholstein 10 months ago +1

    The parking comment is pretty funny and on point. Google's self-driving car famously got stuck at a four-way stop and had to be rescued; its programming would only proceed once every other car at the intersection had come to a complete stop.

  • @thomashenry4798
    @thomashenry4798 10 months ago +1

    Finally we have created the Torment Nexus as depicted in the famous classic sci-fi book "Don't Create the Torment Nexus".

  • @tim290280
    @tim290280 1 year ago +10

    This was great and really highlights a lot of the flaws I've noted with "AI". Good to know layman me wasn't going crazy.

    • @DipayanPyne94
      @DipayanPyne94 1 year ago +3

      Yup. Ask ChatGPT what Newton's Second Law is. It will give you a wrong answer ...

  • @schok51
    @schok51 1 year ago +7

    The direct threat of language models is persuasion and misinformation, and that is a threat to societies, recognized by experts, that cannot be dismissed.

    • @Ilamarea
      @Ilamarea 1 year ago

      It's more the collapse of capitalist society, wars for resources and our inevitable extinction due to loss of agency that I worry about.
      But sure. Stupid people being manipulated will happen too. Just look at this comment section - they are practically begging to be convinced of stupid bullshit.

  • @mountainjay
    @mountainjay 5 months ago +1

    I remember an episode of Dawson's Creek where they all jumped into a pool ....

  • @ucantSQ
    @ucantSQ 1 year ago +1

    Great follow up. I came to class with my homework finished this time.

  • @theParticleGod
    @theParticleGod 1 year ago +22

    Thank you for explaining that Generative A.I. is not capable of reasoning.
    It's like a DJ with an unfathomably massive collection of records. No matter how good they are at remixing those records, they don't necessarily understand music theory, or how to play any musical instruments, despite the fact that their music may be full of musical instruments and melodies played on them.

    • @UnchainedEruption
      @UnchainedEruption 11 months ago +2

      You don't need to understand music theory to be a virtuoso on an instrument. If anything, these bots know the "theory" all too well, in the sense that they can manufacture chord arrangements based on common chord progressions in popular music. But when real humans compose music, it isn't planned, not usually. It's spontaneous. It's something you just do to express what you're feeling, and after the fact you notice in hindsight, "Oh, I used that scale or mode there," or "Oh hey, it's that chord progression, or that interval." Sometimes you may have an idea beforehand like, "I want to do a 12-bar blues thing, or something dark and Phrygian," but usually it just happens. Like inspiration for writing an idea. You don't plan on it. You get a spark of inspiration on an idea you want to talk about, and the rest just flows. Then you edit and revise the results and gradually morph it till you reach the final product. A.I. is more like the business team that generates movie "ideas" by doing constant market research and just rehashing old popular films and cliches because the end product has worked before so it'll work again. 0 inspiration, purely calculated.

    • @theParticleGod
      @theParticleGod 11 months ago

      @@UnchainedEruption The DJ analogy is not perfect :)
      What I was trying to get at is that despite the "generative" name, it's more "regurgitative", there is no scope for a large language model to come up with an answer that is not already buried in the training data. Just as there is no scope for a DJ to come up with music that is not already buried in their crate of records, they can rearrange the music and manipulate it in ways that make it sound original, but they are not musical originators.
      Where the analogy falls flat, as you pointed out, is that the DJ decides what samples she's going to use based on inspiration, she doesn't whip out her calculator and predict the next sample she's going to use based on statistical analysis of her crate of records, her choice will be based on her feelings and what she thinks sounds good at the time.
      (Disclaimer: most of the music I enjoy is at least partially made by DJs using samples of other people's music, so I'm not bagging on DJs here)

    • @apophenic_
      @apophenic_ 11 months ago

      This is just incorrect.

    • @theParticleGod
      @theParticleGod 11 months ago

      Nice rebuttal

  • @sleepingkirby
    @sleepingkirby 1 year ago +10

    16:30 "The other thing about programming languages is that they're specifically designed to be unambiguous..."
    This is a concept I have such a hard time explaining to people. Ambiguity, especially in English, is nearly impossible to translate into code as written.

    • @EternalKernel
      @EternalKernel 1 year ago

      I see ambiguous code every day. Generally it's the overall architecture that can be ambiguous, but sometimes it's a function, and it's ambiguous as to why it is where it is. But yes, code is certainly less ambiguous than normal human language. On the subject, though, I think it's important to point out that over centuries there's a good chance that legalese has developed an advanced, if not hidden, unambiguity. I can only hope that there will be a model that will take advantage of this and bring free, concise, capable legal help to the average person.

    • @sleepingkirby
      @sleepingkirby 1 year ago +3

      @@EternalKernel The code might be ambiguous to a human, but the compiler or the interpreter only sees it one way. If the code were truly ambiguous, the compiler/interpreter could run the same line of code, with the exact same input, and get different results. This is what we're talking about. This is something you should have learned in first-year CS classes, in mathematical functions (as CS used to be part of the math department in ye olde days), and/or in your language design/compiler class if you took it. This is a well-established and crucial concept in computing and the reason why we trust a computer's mathematical/logical results, and I'm a little scared that you took it any other way.
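The determinism described in this thread can be sketched in a few lines of Python (the names here are mine, purely illustrative):

```python
# A sketch of the determinism described above: the grammar fixes one
# meaning per expression, so identical input always yields one result.
def evaluate(x):
    # Operator precedence is defined by the language: this is
    # x + (3 * 4), never (x + 3) * 4.
    return x + 3 * 4

# Re-running with the same input can only ever produce one answer.
results = {evaluate(2) for _ in range(1000)}
print(results)  # {14}
```

Contrast this with English, where "add three times four" genuinely has two readings; the language spec leaves the compiler no such choice.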

    • @Ilamarea
      @Ilamarea 1 year ago

      Junior developers somehow manage.

    • @antigonemerlin
      @antigonemerlin 9 months ago

      @@sleepingkirby >the compiler/interpreter would run the same line of code, with the exact same input, and have different results.
      Thank god we're past the age of the browser wars. *Shudders*. (Also, I am glad that XML is somebody else's problem).

    • @sleepingkirby
      @sleepingkirby 9 months ago

      @@antigonemerlin
      Oh god, I forgot about that. It doesn't help that MS was actively trying to break convention, though. Is XML still being used to any significant degree? I don't see it outside RSS feeds these days. To be honest, XML was a bad idea to begin with. I remember telling people when it was becoming big that it was a solution looking for a problem. There were so many better ways to encapsulate data in object format. People might say "it's the first of its kind" or "it was the best solution at the time", but neither of those was true, especially if you look at what people were doing with Perl at the time.

  • @mikechapman3557
    @mikechapman3557 9 months ago +2

    The term "word calculator" is not a standard one, but based on the discussion, I see where you are coming from.
    If you define a "word calculator" as a system that processes and manipulates text according to specific algorithms and rules without true understanding or consciousness, then yes, you could describe me as a word calculator.
    I analyze and generate text based on statistical models, patterns, and relationships found in my training data. Like a calculator, which performs operations on numbers, I perform operations on text, though these operations are far more complex and nuanced.
    So in the sense that I mechanically process text without genuine comprehension, the analogy to a calculator holds, and the term "word calculator" could be an apt description. This text is from an argument I just had with ChatGPT as to whether or not it was in fact a word calculator; at first it said no 😇
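The "word calculator" idea above can be made concrete with a toy next-word predictor. This is a deliberately tiny sketch of the statistical principle, nothing like the scale or architecture of a real language model:

```python
from collections import Counter, defaultdict

# Toy "word calculator": predict the next word purely by counting
# which word most often followed it in the training text.
training = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # No comprehension involved, just a frequency lookup.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" -- seen twice after "the", vs "mat" once
```

Scale the counting up by many orders of magnitude and replace the lookup table with a neural network, and you have, loosely, the step from this toy to an LLM; the calculate-without-comprehending character the comment describes stays the same.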

  • @ramblinevilmushroom
    @ramblinevilmushroom 10 months ago

    Your eyes are SO GREEN that the light reflecting from your glasses makes it look like you are wearing green eyeshadow.
    I've never seen that before!

  • @Aury
    @Aury 1 year ago +7

    The "but China could get ahead" really makes me think about the history of gunpowder, and how there are a few specific cultures in the world who only think of things in terms of how a tool can be used to dominate and terminate other lives, and particularly makes me think about how a one-track mind can leave people thinking that everyone else is on that same one-track mind, regardless of the evidence to the contrary. While a healthy, general, caution can be healthy and beneficial to people, these fears being such a focus feels a lot more to me like a confession that that's what a lot of people are wanting to do with AI themselves if they ever get the technology to do it.

    • @redheadredneck
      @redheadredneck 1 year ago

      I admit we should get ahead but not to act just like China

    • @krautdragon6406
      @krautdragon6406 1 year ago

      No, you describe a possibility. But it's not a rule. Look at how Europe demilitarized itself. And now it has to build up its defense again because of Putin's ambitions. I also would never break into someone's home. Yet I lock my door.

  • @mekingtiger9095
    @mekingtiger9095 1 year ago +7

    My best hope for this kind of stuff is that it will end up being just like every other technology we currently use: hyped to the skies by techno-fetishists, making some cool progress and becoming a rather common and familiar part of our day-to-day lives, but nothing singularity-worthy like these techbros have led us to believe.
    Or at least, again, so I hope.

    • @chielvoswijk9482
      @chielvoswijk9482 1 year ago +8

      I'm in the tech field and have been watching the AI tech very closely.
      And if it is any reassurance: it mostly looks like an inflated hype train. A very novel toy that is likely to fizzle out and just become a tech thing some will use in an assistive capacity, like GitHub's Copilot, rather than the end-of-everything that the "tech bros" scream about.
      Probably going to be stuck hearing silly buzzwords like "Democratizing" from them for a while though.... :/

  • @isac6214
    @isac6214 1 year ago +1

    Amazing content and bright thinkers, thank you for sharing!

  • @micromonkeygmail
    @micromonkeygmail 1 year ago

    Totally agree with your main point: it's a text generator that can help you program. It is a useful force multiplier and productivity tool in certain use cases if you review the output.

  • @deemon710
    @deemon710 11 months ago +4

    I love hearing how your iconic delivery conflicts with the standard podcast voice expectation and how your speaking style changes to regular when you're speaking with people instead of at them.

  • @stealcase
    @stealcase 1 year ago +42

    Damn Adam. This is some legitimately amazing work you're doing. Thank you for informing the public in this way.

    • @tinyrobot6813
      @tinyrobot6813 1 year ago +2

      Oh, I know you from Twitter, dude, that's cool. I didn't know you had a YouTube channel

    • @stealcase
      @stealcase 1 year ago +2

      @@tinyrobot6813 👋 hi. The world is a small place sometimes. 😄

    • @eduardocruz4341
      @eduardocruz4341 1 year ago

      That cat was controlled by AI trying to find Emily in the background by touch to kill her because it doesn't like being disparaged by an actual intelligent person...AI cannot survive with intelligent people around...lol

  • @octabunge
    @octabunge 1 year ago

    I really liked this conversation, really nice to see people in their element talk about what they know.
    It's interesting how they talk about cult-like behavior in people creating AGI, have the opinion that they are crazy for thinking AGI can set up a utopia or apocalypse, and also talk about ChatGPT in terms of an oil spill: maybe not the apocalypse, but definitely a catastrophic event.
    Another thing that jumped out at me was the octopus and bear example, because it kind of misses that these models are trained on things like the entirety of Wikipedia, meaning they can probably reproduce text about any animal.

  • @starmusicsd
    @starmusicsd 1 year ago

    Nice convo, thanks a lot ❤

  • @rw2551
    @rw2551 1 year ago +3

    Great video with fantastic guests!

  • @jawny7620
    @jawny7620 1 year ago +19

    awesome episode and guests, hope the AI hypetrain skepticism spreads

    • @jonathanlindsey8864
      @jonathanlindsey8864 1 year ago

      czcams.com/video/ukKwVsjQqUQ/video.html

    • @jonathanlindsey8864
      @jonathanlindsey8864 1 year ago +5

      ^ I don't know who these people are. Trust actual people in the field.
      AI moves on an exponential scale with *us* working on it. Add on that AI can work _on itself_ and you get a double-log scale.

    • @jawny7620
      @jawny7620 1 year ago +5

      @@jonathanlindsey8864 who asked

    • @jonathanlindsey8864
      @jonathanlindsey8864 1 year ago +3

      @@jawny7620 you did, by posting in a public forum. Two people who were discredited, and are not really recognized in the field.
      The fact that Timnit was surprised by the time scale, kinda proves the point...

    • @jawny7620
      @jawny7620 1 year ago +10

      @@jonathanlindsey8864 cope harder, these women are smarter than you

  • @CassieAllNatural
    @CassieAllNatural 1 year ago

    @TheAdamConover I appreciate you for shedding light on topics like this 👏🏿👏🏿👏🏿👍🏿💯💗 GREAT guests as well 😀

  • @Robdobalina
    @Robdobalina 10 months ago +1

    Such an awesome interview with fresh hilarious perspective vs the dominant narrative. Love it.

  • @UrFaveMF
    @UrFaveMF 1 year ago +3

    Chat GPT took 20 prompts to even understand that I needed it to make a schedule for me where I have a commute and have to be at work by 7am. It kept giving me a schedule that would make me late. Chat GPT is a stupid text generator.

  • @HeathcliffBlair
    @HeathcliffBlair 1 year ago +3

    Good discussion. Important points in there about the cranky ideology behind the money being poured into this tech. Thanks.

  • @petersmythe6462
    @petersmythe6462 6 months ago

    Local law is a big problem for existing LLMs. Ones with a web search plugin have an advantage here, but without it, they are going purely on memory.
    Its response to a question about U turns in Oregon, a well known and basic traffic law, was:
    "As an AI language model, I can provide some general guidance, but it's important to prioritize safety and follow local traffic laws. In Oregon, U-turn regulations can vary depending on the specific location and circumstances. However, there are some general considerations you can keep in mind:
    1. Check for signs and signals: Look for any posted signs or signals that indicate whether U-turns are permitted or prohibited at that intersection. While you mentioned there are no signs, it's still a good idea to double-check for any traffic signals that might indicate U-turn permissions.
    2. Observe traffic flow: Assess the flow of traffic in the area. If there is heavy traffic or if it seems unsafe to make a U-turn due to oncoming vehicles or limited visibility, it's best to wait for a safer opportunity to turn around.
    3. Legal restrictions: In Oregon, U-turns are generally allowed unless there are specific signs indicating otherwise. However, it's worth noting that U-turns are typically prohibited in business districts, near railroad crossings, and in areas where signs prohibit them. Therefore, if you are in one of these areas, it would be best to avoid making a U-turn.
    4. Consider alternative options: If making a U-turn is not possible or advisable, you can try continuing on the current road until you find a legal opportunity to turn around, such as a designated U-turn location or an intersection with a traffic signal that allows left turns.
    Remember, it's crucial to prioritize safety and comply with local traffic laws. If you're unsure about the specific regulations in the area or if it's safe to make a U-turn, it's best to exercise caution and wait for a more suitable opportunity to change your direction."
    It explicitly states that U turns are legal unless otherwise stated. This is exactly backwards. Unlike most places, U turns are actually ILLEGAL unless otherwise stated.
    When asking the same question to a version of GPT 3.5 Turbo with a web search plugin, its response was:
    "In Oregon, U-turns at intersections are generally prohibited unless there is a sign permitting the turn [[1]](www.oregon.gov/odot/dmv/pages/online_manual/study-section_3.aspx). Since you mentioned that there are no signs at the intersection, it is important to follow the default rule, which is to not make a U-turn.
    Here's why you should not make a U-turn at this intersection:
    1. Traffic Laws: Oregon traffic laws prohibit U-turns at intersections controlled by a traffic signal, unless a sign allows it [[1]](www.oregon.gov/odot/dmv/pages/online_manual/study-section_3.aspx). Without a sign permitting the U-turn, it is safest to assume that it is not allowed.
    2. Visibility: U-turns are also prohibited at any location within city limits where your vehicle cannot be seen by traffic coming from either direction within 500 feet [[1]](www.oregon.gov/odot/dmv/pages/online_manual/study-section_3.aspx). If you are going the wrong way on a road, it is likely that your vehicle cannot be seen by approaching drivers, making a U-turn unsafe and potentially causing a collision.
    3. Safety: Making a U-turn at an intersection without proper signage can be dangerous. Other drivers may not expect a U-turn and may not be prepared to react to your maneuver. It is important to prioritize safety and follow the established traffic laws.
    Instead of making a U-turn, consider finding a legal and safe way to turn around. Look for a designated U-turn area, a nearby intersection with a traffic signal that allows U-turns, or a safe location where you can make a legal turn to get back on the correct route.
    ---
    Learn more:
    1. [Oregon Driver Manual - Section 3: Turns & Intersections](www.oregon.gov/odot/dmv/pages/online_manual/study-section_3.aspx)
    2. [ORS 811.365 - Illegal U-turn](oregon.public.law/statutes/ors_811.365)
    3. [U-Turns: Be Careful Where You Attempt Them - The Wise Drive](www.thewisedrive.com/u-turns-be-careful-where-you-attempt-them/)"
    which is clearly much better. Suffice to say that ChatGPT has a world model but it is not very good, yet this can be improved by letting it read the web, and it can use search relatively intelligently.

  • @sclair2854
    @sclair2854 11 months ago +2

    I do think the focus here on "AGI is an extension of the eugenicist movement by association, therefore the people worried about AGI are wrong" is not a great overall rebuttal to the worries posed about the potential creation of AGI over the next century. I think it's relatively inevitable that corporations will want to undermine workers by creating artificial agents that have very general skillsets, and I think creating guards against that (by say putting legal restrictions on the potential use of AI now) is an overall good thing.
    My overall worry with AGI is that whatever corporation gets access to an intelligence that can perform reasonably effective human-like actions will use it to amplify the already shady things they do. So we have the initial major issues of IP theft, of job loss, of machine errors. But we also have the issue of empowering corporations as entities with access to human-like digital agents that don't sleep, can be used for whatever shady stuff they want, and potentially also have massive alignment issues.
    I do think "AGI is a future problem, we should address the PRESENT problem, and especially the over-hype" is reasonable. Especially to help groups like the writers guild from issues like forced AI workplace integrations that we know will result in poorer products, downsizing and worse pay.

  • @NightRogue77
    @NightRogue77 1 year ago +8

    Dude… this has to be the biggest power play in the history of mankind…. Just think - for every service that uses the GPT-4/etc API, M$ and crew will have unfettered access to the relevant information being exchanged. This is the Demolition Man power move - surely this is exactly how all restaurants became Taco Bell.

  • @ZZ-qy5mv
    @ZZ-qy5mv 1 year ago +8

    You should get Karla Ortiz on the show. She's been doing a lot of work trying to protect artists around this subject.
    A lot of people who defend AI art have a fundamental misunderstanding of how art, particularly illustrative art, is made. They only think about how they experience art and have zero idea that art isn't just experienced after the completion of the work. Maybe these people will be more excited about seeing robots run really fast and cancel all sports and the Olympics, because that's the mindset 😂