The ACTUAL Danger of A.I. with Gary Marcus - Factually! - 218

  • Published 11 Jul 2023
  • Whether we like it or not, artificial intelligence is increasingly empowered to take control of various aspects of our lives. While some tech companies advocate for self-regulation regarding long-term risks, they conveniently overlook critical current concerns like the rampant spread of misinformation, biases in A.I. algorithms, and even A.I.-driven scams. In this episode, Adam is joined by cognitive scientist and esteemed A.I. expert Gary Marcus to enumerate the short and long-term risks posed by artificial intelligence.
    SUPPORT THE SHOW ON PATREON: / adamconover
    SEE ADAM ON TOUR: www.adamconover.net/tourdates/
    SUBSCRIBE to and RATE Factually! on:
    » Apple Podcasts: podcasts.apple.com/us/podcast...
    » Spotify: open.spotify.com/show/0fK8WJw...
    About Headgum: Headgum is an LA & NY-based podcast network creating premium podcasts with the funniest, most engaging voices in comedy to achieve one goal: Making our audience and ourselves laugh. Listen to our shows at www.headgum.com
    » SUBSCRIBE to Headgum: czcams.com/users/HeadGum?sub...
    » FOLLOW us on Twitter: / headgum
    » FOLLOW us on Instagram: / headgum
    » FOLLOW us on TikTok: / headgum

Comments • 854

  • @lilaredden
    @lilaredden Před 11 měsíci +854

    Honestly, any time a company comes in and says "please regulate me" what they're actually saying is "I see a minefield of legal liability due to the harm we're about to create, please give us rules to follow so we can't be sued"

    • @Mr.Bubbles42
      @Mr.Bubbles42 Před 11 měsíci +34

      Pessimistic but so so true

    • @jeffw991
      @jeffw991 Před 11 měsíci +127

      As often as not, what they're saying is, "Hey, we climbed this particular ladder to success. We like it up here, so could you please make sure no one else can follow us up?"

    • @Dullydude
      @Dullydude Před 11 měsíci +6

      Is this... a bad thing...?

    • @justin10054
      @justin10054 Před 11 měsíci +70

      @@Dullydude Yes because when a company does this they are usually trying to write the rules in their favor. Ideally they want to be able to shield themselves from lawsuits without having to change their damaging behavior and they will work hard to influence laws to this effect.

    • @Dullydude
      @Dullydude Před 11 měsíci +5

      @@justin10054 Yeah you can convince yourself that everyone is evil and always trying to do things that only benefit themselves. Or you can look a bit more positively and see that them pushing governments to regulate the industry is a good thing that definitely would not have been discussed yet without them bringing it up. Just because they brought it up does NOT mean they get to write the laws. I know people tend to forget this, but for all it's flaws we do still live in a democracy

  • @DeonTain
    @DeonTain Před 11 měsíci +167

    I won't believe self-driving cars are safer than humans until some time after my insurance company gives me a discount for letting the car drive itself.

    • @cacojo15
      @cacojo15 Před 11 měsíci +19

      That's a good metric

    • @erintraicene7422
      @erintraicene7422 Před 11 měsíci +8

      Agree! VERY good metric!

    • @Charon85Onozuka
      @Charon85Onozuka Před 11 měsíci +8

      Fair! Probably the only thing I'd trust my insurance company with is trying to protect their bottom line.

    • @TheRealSykx
      @TheRealSykx Před 11 měsíci +1

      @@Charon85Onozuka bingo

    • @steve501
      @steve501 Před 11 měsíci

      This, exactly this.

  • @sashaboydcom
    @sashaboydcom Před 11 měsíci +112

    As a programmer, I wanted to push back a bit on 7:14. A lot of the work of programming is not writing code, but fixing issues that come up, and language models tend to create quite buggy code. Being able to write code more quickly doesn't actually help that much if it's introducing subtle issues that take longer to fix down the line.
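    (A hypothetical illustration of the kind of "looks right, subtly wrong" code being described here, not output from any particular model: the function below seems to compute a moving average, but the mutable default argument silently carries state between calls.)

    ```python
    def moving_average(values, window=3, history=[]):
        """Average the last `window` values. Looks plausible, but `history` is a
        mutable default argument, so it is shared across every call."""
        history.extend(values)
        return sum(history[-window:]) / window

    print(moving_average([1, 2, 3]))     # 2.0, as expected
    print(moving_average([10, 20, 30]))  # 20.0 here, but state has quietly accumulated...
    print(moving_average([3]))           # about 17.67 -- values from earlier calls leak in
    ```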

    • @geoffdewitt6845
      @geoffdewitt6845 Před 11 měsíci +17

      Thank you! Not a programmer myself, but I've tried using LLMs to write SQL queries and the results were buggy as HELL.

    • @Raletia
      @Raletia Před 11 měsíci +6

      Yeah, really! I'm mostly self-taught and don't code for a living, but in my experience the best thing AI could do to help me is provide context-aware information about the functions, data types, or structures I'm trying to work with: what they do, what they can do, and where and how they fit together. I don't need someone to write my code for me; I need assistance understanding what the code I want to write will do, or can do.

    • @bluegreenmagenta
      @bluegreenmagenta Před 11 měsíci +5

      Thank you, came to make sure someone had left this comment. It's not even that useful for that...

    • @Eugen1344
      @Eugen1344 Před 11 měsíci +10

      Yeah, I spend 90% of my time thinking about the code, constructing it in my head, and 10% actually writing it. I would get virtually no benefit from writing it faster. So why don't I offload the thinking to AI? Because that is what makes me a programmer; it is the whole point of my job: to make sustainable code that won't break down in the future, that can accommodate change and new requirements, that has good structure, modularity, no bugs, and is well thought out. And guess what? AI can't do any of this shit, so we are still much better at our jobs. And until that changes, we can't be replaced, and this AI stuff is virtually useless for programmers. Believe me, I've tried.

    • @dragon1130
      @dragon1130 Před 10 měsíci +3

      I'm not a programmer, but the stories I've heard lead me to believe exactly what you are saying. I've seen countless videos where someone says something along the lines of: "So I wrote this code and... uh oh, something happened that wasn't supposed to. So I had to go back into my code, find the problem, and... now something else broke. After many hours of troubleshooting, I finally got it to work... barely."

  • @StefanLazicMusic
    @StefanLazicMusic Před 11 měsíci +92

    In my opinion, "please regulate me" is also a marketing stunt. CEOs know it's really hard to regulate anything in that field, and they are probably quite aware of all the limitations of this type of AI, so that sentence really means "OMG, our tools are soooo powerful, so magical, please someone stop us."

    • @TheManinBlack9054
      @TheManinBlack9054 Před 10 měsíci +2

      Should have talked with Eliezer. Risks of AGI are not even close to these childish considerations

    • @CuriousKey
      @CuriousKey Před 10 měsíci +3

      @@TheManinBlack9054 We're a very, very long way away from needing to be concerned with the risks of AGI.

  • @SharienGaming
    @SharienGaming Před 11 měsíci +227

    we already have cars that can basically self drive and massively outperform humans... they go on rails and we call them trains
    and we are still smart enough that we keep a competent human at the controls that can make critical choices in an emergency

    • @banquetoftheleviathan1404
      @banquetoftheleviathan1404 Před 11 měsíci +34

      And sometimes, if you behave. They will transform and defend the planet from aliens, as a treat.

    • @cacojo15
      @cacojo15 Před 11 měsíci +13

      I don't understand why people see this as a dichotomy: either self-driving cars or trains. Let's build as many trains and buses as we can and cover the rest with cars. I live in Switzerland; we have a lot of trains (and buses), but we still need cars.
      Furthermore, whether the cars are self-driving or not, the car-vs-train debate doesn't change. It's still better to have trains, as they are more efficient.

    • @SharienGaming
      @SharienGaming Před 11 měsíci +27

      @@cacojo15 It's mainly because cars are inefficient to the degree that they are unsustainable at scale.
      And to be fair, I was being facetious... the entire focus on cars is generally misguided, and it doesn't really matter whether they are self-driving or electric... the sheer volume of cars simply does not work.
      It's just incredibly annoying when so much energy is focused on turning cars into what is essentially a one-person train with massive added complexity... when we already have trains, trams, and buses to solve the problem of transporting people safely and without requiring them to pay attention...

    • @harperna3938
      @harperna3938 Před 11 měsíci +22

      @@cancermcaids7688 People in the US "want" cars because there was a massive push by automotive industry lobbies to completely reshape America around the car, which often involved automotive companies purchasing and sabotaging public transit infrastructure. There is majority public demand for high-speed rail, but the automotive industry has spent decades erecting as many institutional barriers as possible to prevent those initiatives from being developed.

    • @SharienGaming
      @SharienGaming Před 11 měsíci +13

      @@cancermcaids7688 Actually... even in the US, people just want to get from A to B reliably... it's just that the US has been bulldozed for the car to the point that it's the only viable transport method... it's less that people want it that way and more that most of them have never known anything different.

  • @Vode1234
    @Vode1234 Před 11 měsíci +243

    Dear Adam,
    Just wanted to say that we illustrators are rooting for you guys. This isn't luddites vs. tech; it's human rights vs. billionaires' greed. Stand strong, we will win this eventually.

    • @martinfiedler4317
      @martinfiedler4317 Před 11 měsíci +19

      The historical luddites were actually in a pretty similar situation. New technology was used as a means to replace skilled workers and to reduce pays to the workers that were still needed to operate the tech.

    • @anj1273
      @anj1273 Před 11 měsíci +23

      @@martinfiedler4317 Techbros using the term Luddites as a form of insult is baffling to me because the movement was RIGHT all along, look at how abhorrent the sweatshops in the apparel industry are right now.

    • @lexismore
      @lexismore Před 11 měsíci +4

      @@anj1273 Came to the comments for discussion of the Luddite movement. Am not disappointed!

    • @sbiecoproductions6062
      @sbiecoproductions6062 Před 11 měsíci +2

      Dude, remember the invention of photography at the end of the 1800s? That shifted art from illustrating the world as it is to illustrating the world inside the artist. I can't wait to see what AI will bring to the table for the ARTIST. Dude, we'll never be replaced; it can only make us stronger and more necessary than ever ;).

    • @2265Hello
      @2265Hello Před 10 měsíci

      @@sbiecoproductions6062 Honestly, it's really just going to flood an already extremely oversaturated market with mediocrity and meaninglessness for a few years, as well as replace and further exploit workers via businesses and corporations. People seem to forget that actually learning a creative skill exercises those creative muscles. Not to mention that more people entering a field, especially when they aren't properly trained, doesn't equal an improvement in quality or a creative revolution.
      But regardless, I expect a renaissance or movement toward traditional, tangible, human-made art, as well as a hippie era of indie human-made media via crowdfunding platforms.

  • @Z4RQUON
    @Z4RQUON Před 11 měsíci +120

    When Coca-Cola got involved in regulating cocaine, they ended up being the only entity in the United States legally allowed to import coca leaves.

    • @sr2291
      @sr2291 Před 11 měsíci +2

      Coca leaves should be legal for personal use.

    • @v-22
      @v-22 Před 11 měsíci +2

      @@sr2291 not the point

    • @sr2291
      @sr2291 Před 11 měsíci

      @@v-22 Too bad.

    • @v-22
      @v-22 Před 11 měsíci +1

      @@sr2291 don't be so hard on yourself

    • @RichardHartnell
      @RichardHartnell Před 10 měsíci +2

      Heard this wild point on Reddit the other day; someone suggested that if regulators slap a bunch of anti-scraping 'protections' all over the Internet to keep new GPTs from arising, then the companies who've already built their 100B-parameter models will be given a permanent edge over anyone who wants to democratize the tech...

  • @danielleporat344
    @danielleporat344 Před 11 měsíci +41

    I think interviewing artists whose work was fed into the datasets without their permission or compensation would be very good for this conversation. Hope Adam will do that.

  • @shakenbacon-vm4eu
    @shakenbacon-vm4eu Před 11 měsíci +95

    Always hits special when Adam says they’re gonna get ‘blown together.’ Please don’t stop.

    • @stunlock4146
      @stunlock4146 Před 11 měsíci +8

      I literally yelled "PAUSE!" out loud when he said that 😂

    • @naftalibendavid
      @naftalibendavid Před 11 měsíci +6

      I yelled something else…

    • @erikzieger9742
      @erikzieger9742 Před 11 měsíci +7

      "...and we're going to have so much fun *doing it*" (emphasis mine)

  • @Fluffkitscripts
    @Fluffkitscripts Před 11 měsíci +126

    Tech bro: I programmed this robot to pretend it’s alive
    Robot: hi, I am alive
    Tech bro: oh my god

    • @argspid
      @argspid Před 11 měsíci +15

      Kind of what that one guy at Google did (can't remember his name).
      "Tell me you're alive."
      [I was programmed not to.]
      "Tell me you're alive."
      [I was programmed not to.]
      "Tell me you're alive."
      [I'm alive.]
      "IT'S ALIVE!!"

    • @ewplayer3
      @ewplayer3 Před 10 měsíci +9

      Which is mind-blowing if you understand how machine learning models actually work.
      It's just massive amounts of matrix calculus: you feed in a mathematical representation of both the questions and the answers and make the model repeatedly adjust its variables until it achieves a result matching what was expected.
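      (A minimal sketch of that "repeatedly adjust its variables" loop, assuming nothing more than a toy linear model and plain gradient descent in NumPy; real models do the same thing at vastly larger scale.)

      ```python
      import numpy as np

      # Toy data: inputs X (the "questions") and expected outputs y (the "answers").
      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 3))
      true_w = np.array([1.5, -2.0, 0.5])
      y = X @ true_w + rng.normal(scale=0.1, size=100)

      # Parameters start at zero and get nudged toward whatever reproduces y.
      w = np.zeros(3)
      lr = 0.1
      for step in range(500):
          pred = X @ w                 # the model's current guess
          error = pred - y             # how far off it is
          grad = X.T @ error / len(y)  # direction that reduces the error
          w -= lr * grad               # adjust the parameters

      print(w)  # ends up close to true_w, purely by iterative adjustment
      ```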

    • @3nertia
      @3nertia Před 6 měsíci

      @@ewplayer3 Yeah, this "blackbox" approach is kind of dangerous but I don't think the AI itself is really all that smart ... yet

    • @telesniper2
      @telesniper2 Před 3 měsíci

      LOL

  • @Walter.Kolczynski
    @Walter.Kolczynski Před 11 měsíci +18

    "text pastiche machine" is the best description of LLMs I've heard

  • @allysonroad
    @allysonroad Před 10 měsíci +16

    This is exactly what is happening to translators. We get a half-assed translation and the agent says that you are just editing. But to get an intelligible, accurate translation that will meet the customer’s needs, you have to completely rewrite it using your skill and expert knowledge. 50:23

  • @steveripberger1802
    @steveripberger1802 Před 11 měsíci +13

    Professional computer programmer here. They do *not* help any competent developer code "more efficiently." That is more techbro hype which gets an air of legitimacy because a lot of people in the field *are* tech bros, happy to unknowingly wallow in mediocrity whilst chasing one dumbass trend after the other, never really learning a goddamn thing.
    I've been explaining it this way: Most professionals in fields with a lot of writing-- lawyers, academics, novelists, screenwriters, etc-- can sort of tell that what GPT puts out is not exactly top notch work, lol. The "creative" stuff is always derivative crap, and the scholarly stuff frequently contains mistakes, mischaracterizations of sources, and even outright fabrications.
    Its computer programming output is no different. The work it produces is shoddy and serves as little more than a decent starting point for someone entirely new to a particular problem space.
    Anybody who uses it to generate production code is a villain-- no different than the dumbasses who used it to create a legal filing that they then submitted to the judge. Lawyers can be held accountable for such malfeasance, but application programmers typically aren't because investors never really seem to give a shit whether the end product actually works or not.
    Remember this the next time one of these companies compromises a bunch of their customers' personal data. None of that is inevitable, just the result of massive corporate dysfunction which will only be made worse by AI code gen.

    • @3nertia
      @3nertia Před 6 měsíci

      Yet another symptom of capitalism unfortunately 😢

  • @Toberumono
    @Toberumono Před 11 měsíci +15

    Speaking as a programmer, these generative models *suck* at writing code. Sure, they’re great at “intro to CS” stuff, but they rapidly fall apart after that (more specifically, they fall apart when asked for something not on stackoverflow).

    • @mettaursp309
      @mettaursp309 Před 11 měsíci

      What's weird to me about the code gen thing is that they're using it to generate code designed to be in a human facing format, when that same code already by design and definition has a graph form that's much more natural for computers. I'm far more impressed by anything any competitive modern C++ compiler does code gen-wise than anything these text models can make.
      I guess one of the primary benefits of the way they're doing it is it technically "works" for more languages, ignoring the caveats that you need a ridiculous amount of training data and the output is still limited in quality. However if my options are generating sub par code for any language vs just not using the thing, I'm just not gonna use the thing because I don't write enough boilerplate for it to matter.

  • @Amialythis
    @Amialythis Před 11 měsíci +120

    The first step to regulating AI should be to legally require these programs to be open source, but none of our leaders know what that means. Maybe it's different in Europe, but in America we really need some software engineers in government.

    • @arturoaguilar6002
      @arturoaguilar6002 Před 11 měsíci +40

      Not sure that would help. The AI's source code is just a fraction of what makes them work; the real meat is the data that was used to train the AI model.

    • @Amialythis
      @Amialythis Před 11 měsíci +3

      @@arturoaguilar6002 true, ideally that should be viewable on the app somewhere

    • @gwen9939
      @gwen9939 Před 11 měsíci +13

      LLMs sure, but down the line we'll develop newer and more powerful AI that we won't want everyone to have access to. "Regulation" through free and open markets is usually not a good idea. It's kind of the same problem that exists with CRISPR. If that remains unregulated then any chump in their garage has all the tools they need to create something that could result in unstoppable pandemics.

    • @Alexander_Kale
      @Alexander_Kale Před 11 měsíci +6

      @@Amialythis We are not talking about a couple of TikTok videos here. We are talking about enormous servers full of data collected from all over the internet.
      Meaning, one, you are not getting access to that for free; it would be exploitable by literally every data-mining company on the planet.
      And two, you would need a supercomputer to do ANYTHING with that data, so what exactly is the point of making something available to the public that no one but the companies themselves can use or even access anyway?

    • @Amialythis
      @Amialythis Před 11 měsíci +1

      @@Alexander_Kale I guess that makes sense, but I also don't really care if big tech is forced to take a hit like that when they might be doing something shady. If it's not feasible, though, that's a whole other kettle of fish.

  • @ViolentOrchid
    @ViolentOrchid Před 11 měsíci +132

    Editors should refuse to edit AI written content and let the company trying to replace workers with AI suffer the consequences.

    • @ChristopherSadlowski
      @ChristopherSadlowski Před 11 měsíci +20

      That's a good idea. We seem to have forgotten the word "no." They should say, "Why don't you get your fancy 'AI' to read this over and correct the errors? Actually, isn't your fancy 'AI' supposed to work 100% of the time? Why is it making errors anyway? Go ahead and publish it. It should be good to go..." Cue the world immediately catching on fire in 3...2...1...

    • @JustButton
      @JustButton Před 11 měsíci +24

      @@ChristopherSadlowski Forgot? Bro, the majority of freelance writers are struggling to begin with. They didn't forget how to say no; many of them just can't afford to.
      You're both right, it's just easier to say no when there's a union.

    • @Giftedbryan
      @Giftedbryan Před 11 měsíci +9

      That is essentially what is going on already. Adam even mentions it IN the episode: "Let's talk about something very specific to my own heart. I'm a member of the writers guild of America, I'm also on a negotiating committee, we're on strike right now, and one of our strike issues is regulating AI in our contracts. That we have specific terms that we want to put in place to prevent AI from being used to... uuh, either passed off as our work product, or that we be forced to adapt the work of AI."

    • @stickjohnny
      @stickjohnny Před 11 měsíci +3

      Saying no to editing AI is literally exactly the reason the union is on strike right now.

    • @trybunt
      @trybunt Před 11 měsíci +2

      This is only looking at one small part of the problem, though. It's sort of like blaming boilermakers for using prefabricated parts, or furniture salesmen for selling mass-manufactured furniture, with the intention of saving jobs from being lost to automation. I know the analogies aren't perfect, but hopefully my point gets across: jobs are going to be lost, there's no way of stopping that; that's capitalism working as intended.
      I agree that people's jobs are going to be taken away, but that part isn't new; it's just happening to people who didn't expect it so soon. To me, the deeper problem is how we tie our self-worth to how much money we can make.

  • @JesseMaurais
    @JesseMaurais Před 11 měsíci +30

    If past promises vs. reality are any indication of the future, then Silicon Valley is over-promising and the AI rollout will be hugely disappointing.

  • @MonsterPrincessLala
    @MonsterPrincessLala Před 11 měsíci +60

    AI regulation could easily be a branch of the consumer protection bureau, which would negate the need to create its own agency.

    • @shaelunamidnight3585
      @shaelunamidnight3585 Před 11 měsíci +7

      That agency has no spine, and not enough employees working on the behalf of that agency

  • @freckledandred
    @freckledandred Před 11 měsíci +9

    I use AI on a daily basis and it is not nearly as advanced as everyone thinks it is. People are treating language models like they are 50 years more advanced than they actually are. It amazes me how little people actually know about something that's free and that everyone has access to.

    • @telesniper2
      @telesniper2 Před 4 měsíci

      It's better than 99% of user comments on social media sites, which is probably why tech company bros are losing their minds over it. They're all imagining the auto-generated fart-up companies they can create from nothing and get lavishly rewarded for with millions of dollars, also created from nothing by the central bank. I guarantee you that's why they're all starry-eyed about it: the massive potential for faking user bases. Reddit bragged about this years ago.

  • @Steven-lg3zk
    @Steven-lg3zk Před 11 měsíci +2

    Pretty much every major point made by Adam & Gary has already been said by Hubert Dreyfus (a philosopher) in What Computers Still Can't Do, Mind Over Machine, and On The Internet
    - Dreyfus pointed out that the AI industry was claiming we already had self driving cars in the 70s that would be on the streets any day
    - Dreyfus pointed out that the AI industry markets itself as a science when it's actually a business (one that constantly promises to deliver in the future)
    - He pointed out all sorts of issues that AI would need to overcome in order to have AGI
    - He pointed out the stuff about context & about common sense
    - He pointed out the problems with things like telecommunication (Zoom), virtual worlds (The Metaverse), and the leveling of information (the internet)
    - He talked about how humans develop expertise (and the problem with expert systems)
    And he talked about all of this between the 1970s-2010s
    These criticisms aren't new, but both contemporary AI proponents & AI critics talk as if these criticisms are new

  • @lanonymepaul5129
    @lanonymepaul5129 Před 11 měsíci +25

    As an engineering student, my main concern is our ability to discern the line between arrogance and bullshit hype.
    Like, I'm not that old, and I remember when certain social media got big; I remember people saying it wouldn't have a big impact, and look at us now.

    • @banquetoftheleviathan1404
      @banquetoftheleviathan1404 Před 11 měsíci +3

      Don't you think it's telling how unreliable it is that it's being used in art instead of engineering? Every time I add PME I am basically following the same rules. I assumed I would be out of a job before artists would be.

    • @havcola6983
      @havcola6983 Před 11 měsíci +9

      @@banquetoftheleviathan1404 I don't think the limitation of the tech is the main reason why artists, and not some other profession, are next on the chopping block. If someone with enough money wanted to, I'm sure they could cobble together a model that could do my taxes for me in a couple of weeks, for example. But it doesn't happen, because the clients funding this stuff are venture capitalists who salivate at the idea of being able to generate potentially high-yield, high-status entertainment products without the need to employ creatives.
      It's all very depressing. 30 years ago I assumed the point of automation was to let people focus on more fulfilling jobs. Now it seems like the point is to force everyone who has to work for a living into the service industry.

    • @anachronity9002
      @anachronity9002 Před 11 měsíci +2

      @@havcola6983 No, see, the problem is that AI is *mostly* competent.
      If someone with enough money wanted to, they could cobble together a model that will do your taxes flawlessly 98% of the time. Then 2% of the time it will fuck it up so badly, in a way that no human accidentally could, that you get investigated for tax fraud and can in turn sue the company that made it.
      And if your reflex is to say that taxes aren't so complicated and we already have tax software more than equal to the task, you're misunderstanding AI. AI is not just what we already have but better; it is a different methodology entirely. AI is extremely well suited to complex tasks which humans struggle with but which can be overseen by human experts able to sanity-check the AI's work and pick out the fuckups. It does not work well when used by laymen or in life-or-death situations, because of that small chance of severe fuckups that a layman may not know how to compensate for or have time to correct.
      That's why it gets used for art and research, but not yet taxes or driving.

    • @AN-sm3vj
      @AN-sm3vj Před 11 měsíci +1

      @@anachronity9002 I'm confused; I've been getting my taxes done by a computer program for almost a decade, soooo... we're already there? It even imports raw documents now. My phone can read written text. 🤷🏽‍♀️
      The thing with art is that it's generating images by using copyrighted work. Just because there are a bunch of steps in between doesn't mean it's "original" work. It's just that people don't want to pay artists.

    • @anachronity9002
      @anachronity9002 Před 11 měsíci +3

      @@AN-sm3vj Read again. *AI is not just what we already have but better; it is a different methodology entirely.*
      The computer program you use is not an artificial neural network. It is conventional programming by human programmers. Taxes are simple and formulaic enough that traditional algorithms are up to the task.
      An AI would essentially be trained to do taxes purely through random iteration and trial and error; then, when it's 'good enough' to pass testing, it gets released into the world as a finished product. This allows it to learn surprisingly complex tasks, but it has an inherent potential for catastrophic failure, since we don't know that it *can't* royally fuck up. We only know that it hasn't in testing.
      In practice, AIs are very prone to fuck-ups once any variables start changing from the test conditions they were trained under.
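      (A toy illustration of that last point, using a made-up NumPy example rather than anything from the episode: a model that looks fine on test data drawn from its training conditions can fall apart once the inputs drift outside that range.)

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      # The real process is curved, but the training data only covers x in [0, 0.5],
      # where a straight line happens to fit well.
      def real_process(x):
          return np.sin(2 * x)

      x_train = rng.uniform(0.0, 0.5, 200)
      y_train = real_process(x_train) + rng.normal(scale=0.02, size=200)

      # Fit a simple linear model (slope + intercept) under the training conditions.
      slope, intercept = np.polyfit(x_train, y_train, 1)
      predict = lambda x: slope * x + intercept

      # Test data from the SAME conditions: the model looks fine.
      x_test = rng.uniform(0.0, 0.5, 200)
      print(np.mean(np.abs(predict(x_test) - real_process(x_test))))    # small error

      # Data from SHIFTED conditions (x in [2, 3]): the same model is badly wrong.
      x_shift = rng.uniform(2.0, 3.0, 200)
      print(np.mean(np.abs(predict(x_shift) - real_process(x_shift))))  # much larger error
      ```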

  • @cajunguy6502
    @cajunguy6502 Před 11 měsíci +27

    Established companies LOVE regulation, because it gatekeeps the industry by making entry prohibitively expensive for start-ups. Not to mention that, as an added bonus, the only people knowledgeable enough about the industry to work as regulators are veterans of those same companies. Regulations are good, as long as they're not being written by the regulated.

    • @HeyLetsDoAThing
      @HeyLetsDoAThing Před 11 měsíci

      We The People write the regulations in order to regulate ourselves. Regulations are always written by the regulated. The alternative is that our lawmakers are above the law.

  • @MateusMeurer
    @MateusMeurer Před 11 měsíci +19

    About the regulation issue: I think we need regulations, of course we do, but sadly we have the worst possible generation of politicians to write them. I don't trust these greedy, biased, bought-off men and women to regulate something this dangerous without causing more harm than good.

    • @gwen9939
      @gwen9939 Před 11 měsíci

      The world is not America. The European Union and its representatives aren't bought the same way US politicians are. They are a bunch of boomers who work extremely slowly, don't get me wrong, but they're not corrupt by default.
      What we need is an international committee specifically for AI, preferably a mix of state representatives and AI safety experts, and the US can have exactly one seat at the table, just like everyone else.

    • @suicune2001
      @suicune2001 Před 11 měsíci +1

      Agreed!

    • @anachronity9002
      @anachronity9002 Před 11 měsíci

      The other issue is they're all too damn old and uninformed to comprehend the issues. They don't know how the internet works, and their understanding of AI and technology comes from 80s and 90s movies. I was generally happy with the Obama presidency and yet he sold out to ISP companies because internet freedom just isn't a topic most politicians understand or care about.

  • @nobodyspecial2053
    @nobodyspecial2053 Před 11 měsíci +24

    You should talk to one of the exploited Kenyan workers used to make these AIs function, or about how generative AI self-destructs when fed its own data, or about the stupid amount of water used to cool these machines.

  • @hypersapien
    @hypersapien Před 11 měsíci +6

    Re: AI generating creative content, I think of the song Tears in Heaven by Eric Clapton. I won't say it's one of my favorite songs, but it is the song that has the most emotional impact on me, knowing the context of it being written about the death of his son, who fell from a 53rd-floor window. If Eric Clapton had never existed and wasn't around to write that song, but an AI made that EXACT song down to the waveform, it wouldn't have the impact that his original version does. Art is more than brush strokes, notes, or text. There's a heart behind it that AI can't reproduce.

  • @down-to-earth-mystery-school
    @down-to-earth-mystery-school Před 11 měsíci +6

    I'm in several online writers' groups and I've seen numerous people posting that their employer decided to replace them with AI; it's one of the main reasons the Writers Guild is striking. There already are short-term consequences.

  • @trainluvr
    @trainluvr Před 11 měsíci +17

    Adam, the un-flawed Geraldo Rivera, hits a home run with that intro.

    • @rorylynch1203
      @rorylynch1203 Před 11 měsíci +4

      Now that Geraldo is full fash, Adam should go full stash

  • @OSCARMlLDE
    @OSCARMlLDE Před 11 měsíci +5

    Love an episode of a podcast where one of the talking points is how a technology should be regulated and the sponsor is an unregulated dietary "supplement"

  • @MorningDusk7734
    @MorningDusk7734 Před 5 měsíci +2

    my biggest gripe with driverless cars: how the hell are they planning on dealing with snow? Snow covers literally any mark or sign that a visual-based system could use to identify how a car is supposed to behave on the road, and blocks remote signals from reaching things like wireless antennas. You can't tell me a driverless car can safely navigate that scenario better than a human from the area, you would have to show me, and even then I would be very skeptical. I've had to drive in conditions where I was relying on my memory of where the road is supposed to be relative to the trees and houses on either side, and what traffic signs are supposed to be followed along the way. Times where you're good to roll through a stop sign if no one is there, because if you stop, you're stuck. You're telling me you can automate when a car learns to break the law for road safety's sake?

  • @timwells5776
    @timwells5776 Před 11 měsíci +23

    Awesome show! Thank you for breathing some sanity into these AI discussions. And yes, copyright laws should be strictly enforced. These huge companies DO NOT and SHOULD NOT have the right to scrape digital content from the web without the permission of the owners!!

    • @djzacmaniac
      @djzacmaniac Před 11 měsíci

      Don't release your work publicly if you don't want it included in the zeitgeist of the current era. If it's not a 1 for 1 copy of your work, being sold by an unauthorized person, shut up. IP=Imaginary Property

    • @timwells5776
      @timwells5776 Před 11 měsíci

      A commercial entity does not have the right to take your hard work and turn it into 1s and 0s, store it in their databases, and use it for their profit without your permission. Just because something's on the internet, doesn't mean it's free to steal. It doesn't matter whether it's a one for one copy.

    • @telesniper2
      @telesniper2 Před 4 měsíci

      I'm gonna make an AI that makes trivially modified versions of Mickey Mouse. That should get it banned pretty damn quick.

  • @edwardlwittlif
    @edwardlwittlif Před 11 měsíci +6

    While listening to this podcast, I opened Chat GPT and fed it the Sherlock Holmes story "The Hound of the Baskervilles" piece by piece with an order to shorten the passages. I repeated this process until Chat GPT had given me its most succinct summary of the story, which it couldn't shorten any further. I then asked Chat GPT to rewrite this nugget of Baskervilles as a limerick. That was my limit of requests per hour, so I switched over to Google Translate, and I translated the limerick from English to Spanish to Latin to Azerbaijani to Chinese and back to English.
    I don't know why I did any of this. Here is the finished poem.
    "They fear in Baskerville;
    Holmes and Mortimer start fighting again.
    They found Henry dead;
    Dogs and marriages are gone.
    Now Holmes suspects his accomplice, you see;
    I happily waded into the Badlands.
    After Stapleton died, he was buried in the grass.
    He should be happy when the case is resolved;
    Holmes and his friends fear nothing.
    They want to relax.
    You can find it in a theater;
    Joyful moments bring you closer together."
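    (The procedure described above is easy to script; here is a rough sketch with hypothetical shorten() and translate() helpers standing in for whichever chatbot and translation service you use. The helper names and the language chain are illustrative only, not any particular API.)

    ```python
    def compress_until_fixed(text, shorten):
        """Repeatedly ask the model to shorten the text until it stops getting shorter."""
        while True:
            shorter = shorten(text)           # e.g. prompt: "Shorten this passage: ..."
            if len(shorter) >= len(text):     # no further compression possible
                return text
            text = shorter

    def telephone_game(text, translate, languages):
        """Round-trip the text through a chain of languages and back to English."""
        for src, dst in zip(["en"] + languages, languages + ["en"]):
            text = translate(text, source=src, target=dst)
        return text

    # Usage sketch -- shorten() and translate() are whatever tools you plug in:
    # nugget = compress_until_fixed(full_story_text, shorten)
    # limerick = shorten("Rewrite this as a limerick: " + nugget)
    # poem = telephone_game(limerick, translate, ["es", "la", "az", "zh"])
    ```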

    • @sr2291
      @sr2291 Před 11 měsíci

      Tell it to turn your results into a Haiku.

  • @critterkarma
    @critterkarma Před 11 měsíci +7

    Now that SAG-AFTRA is on strike, it seems those will be groundbreaking negotiations to set standards. Streaming services being able "to own" a person's image to create content forever and ever, without any usage compensation to that "working actor/extra," is another case of corporate greed.

  • @maiabones
    @maiabones Před 11 měsíci +6

    My favourite thing about this episode is how much fun Gary clearly had talking about this with you.

    • @maiabones
      @maiabones Před 11 měsíci

      P.S. In the script book accompanying the Holy Grail DVD, "ecky ecky ecky pitang" is followed by "zoopoing goodem owli zhiv"; in the actual take, it's close enough.

    • @MynameisBrianZX
      @MynameisBrianZX Před 10 měsíci

      I also appreciated how often Gary pushes back on Adam’s preconceptions with more informed opinions and Adam doesn’t retaliate, which is getting rarer in interviews because outrage is promoted.

  • @danepatterson8107
    @danepatterson8107 Před 11 měsíci +39

    We already live in such a corporate dystopia that I don't know what hope there is. Our society is controlled and run by non-violent sociopaths who love money.

    • @lynemac2539
      @lynemac2539 Před 11 měsíci +2

      Insurance companies tell everyone what they can and can't do, and where.

    • @mynameis9389
      @mynameis9389 Před 11 měsíci +9

      I wouldn't say non-violent. Forcing people to pay for living expenses or else live exposed to the elements, or literally destroying land and poisoning water to build factories for technologies or industries, is not non-violent.

    • @user-zz5je1ry1o
      @user-zz5je1ry1o Před 11 měsíci

      Easy. Boycott. It’s not the companies, it’s your friends and colleagues and society who don’t care enough.

    • @gadget2622
      @gadget2622 Před 10 měsíci

      @@user-zz5je1ry1o Relying on voting with your dollar just means you're participating in a democracy where the rich have infinitely more votes than you.

    • @kenj4136
      @kenj4136 Před 10 měsíci

      ​@user-zz5je1ry1o we are drinking from the fire hose now. In a single sitting I can be given a dozen or more things to care about.

  • @jennkellie7341
    @jennkellie7341 Před 11 měsíci +4

    All of these people who think self-driving cars work must live down in eternal-summer land. Where I live, we get snow for about 8 months of the year. The memes joking about playing a game called "where's the road" are not lying. I have no faith that a self-driving car could handle a Canadian winter without getting stuck in a ditch every 5 minutes.

  • @manfredkandlbinder3752
    @manfredkandlbinder3752 Před 11 měsíci +2

    I really enjoy the refreshing and important clarification that language models and image-generating algorithms are not intelligent. Everyone who has had even remote contact with the field, through people in IT academia, knew this already, but the whole world just goes nuts over these false claims and misunderstandings.

  • @janewaysmom
    @janewaysmom Před 11 měsíci +3

    22:33 This bit about outliers is so true. My new car has a lane-sensing feature that is supposed to help steer me into the center of the lane, but the first time I used it on the highway, it started to steer me into the adjacent lane, which was full of vehicles going 100 km/h, because the highway was damaged enough that it couldn't tell where the proper lane was. It felt like I was driving in high winds, trying to keep myself righted, until I shut the system off. They really, truly cannot predict outliers.

  • @haroldpierre1726
    @haroldpierre1726 Před 6 měsíci +1

    When companies are asking to be regulated, they are basically saying, "Now that we are the leader, make regulations that make it harder for our competition."

  • @nadamuchu
    @nadamuchu Před 11 měsíci +21

    Adam, I'm deaf and would love to see this captioned. I've personally used open-source AI (MacWhisper) to create transcripts; please look into using this and/or other tools for captions. Thanks. Love ur stuff!
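    (For reference, the open-source model behind MacWhisper can also be run directly. A minimal sketch using the openai-whisper Python package, assuming ffmpeg is installed and "episode.mp3" is the audio to be captioned:)

    ```python
    import whisper

    # Load a pretrained speech-to-text model (larger models are slower but more accurate).
    model = whisper.load_model("base")

    # Transcribe the audio; the result has the full text plus timestamped segments.
    result = model.transcribe("episode.mp3")

    print(result["text"])           # the whole transcript
    for seg in result["segments"]:  # segments carry start/end times, useful for captions
        print(f"{seg['start']:.1f}-{seg['end']:.1f} {seg['text']}")
    ```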

    • @avidrucker
      @avidrucker Před 11 měsíci +3

      Accessibility matters!

    • @erintraicene7422
      @erintraicene7422 Před 11 měsíci +4

      If I hit CC on the screen, closed captioning did pop right up (I'm hearing, but I enjoy reading while I watch TV/YouTube etc. and keeping the volume low due to my noise sensitivity).
      Maybe there is a delay until it becomes available? I hope you try again. This is a great episode.

    • @nadamuchu
      @nadamuchu Před 11 měsíci +2

      @@erintraicene7422 Try watching the entire thing with automated captions and the sound off. There is a reason people call them automated CRAPtions! :) They're full of mistakes and don't include other information like who is saying what, how they're saying it, and other auditory info. Not to mention that proper closed captions are wayyy less fatiguing to read, since they show full lines of text instead of revealing one word after another.

    • @erintraicene7422
      @erintraicene7422 Před 11 měsíci +2

      @@nadamuchu very great points. I wasn’t aware of how all that works.
      Again showing why artificial intelligence isn’t so intelligent.
      Thanks for sharing this insight.
      I hope Adam will consider having CC done then.
      It’s SO important that everyone can read/hear his viewpoints.

    • @nadamuchu
      @nadamuchu Před 11 měsíci +1

      @@erintraicene7422 💛

  • @raydgreenwald7788
    @raydgreenwald7788 Před 11 měsíci +3

    I'm more afraid of not being able to buy a car in the future that doesn't have an iPad hookup or smart tech. Especially with how hot the world is getting, it is very dangerous to have a car entirely dependent on screen tech.

  • @vexorian
    @vexorian Před 11 měsíci +3

    The problem of "the executives don't know what the job is" is actually something I feel safe in saying is the generalized issue with these things, even for what seems like ChatGPT's main strength: programming. I'm a professional programmer and I can tell you that it has the same problem. You can ask GPT to write code following a list of things it should do, and it is impressive that it can. But the problem is that's not programming, hahaha. A lot of people with surface-level knowledge can look at the Python code it generates and say WOW, IT CAN PROGRAM. But in reality, even setting aside the fact that ChatGPT will often generate code that is quite wrong (even though it looks correct), the real issue is that writing lines of code is a very small part of the job.
    I am not lying when I say that there are weeks where I only write 10 lines of code in total, because my job is much more about finding out why something is not working as intended and then figuring out how to fix it while causing as little disruption as possible. Even when the goal is to write a completely new program, the challenge is to first understand exactly what has to be coded. This is ultimately a job about understanding problems, and therefore understanding people. When ChatGPT is useful, it's useful in that it lets me save some time on the most repetitive part of the job, the one part that even ChatGPT can do.
    Programming is a job where language models already have the maximum amount of data possible, and even then they cannot do it. To me it's absurd to think that they could replace a writer or an artist just by getting more data and more processing power.

    • @brokenbreaks8029
      @brokenbreaks8029 Před 11 měsíci

      Hey dude, I'm a 17 year old teenager with a bright future ahead. I'm jumping into programming for video games. Is it safe coming from you?

    • @vexorian
      @vexorian Před 11 měsíci

      @@brokenbreaks8029 Sorry for the late reply.
      In my opinion (I'm not an AI expert), all jobs that involve typing stuff with a keyboard are equally safe or unsafe. I think you'll find jobs, but it will be harder than right now.
      If you are good at it and you love it, there will always be jobs for you. But you need to be able to adapt.

  • @Kevfactor
    @Kevfactor Před 11 měsíci +15

    I just watched this guy on HBO Max. Way ahead of his time for 2015. I'm hoping there are more nerd fact series like that.

  • @rsalbreiter
    @rsalbreiter Před 11 měsíci +12

    I'm curious whether the increase in "self-driving" accidents is in part because the humans assume the car doesn't need intervention, so they're not paying attention either.

    • @moxiebombshell
      @moxiebombshell Před 11 měsíci +4

      Maybe I'm missing / misunderstanding something, but I thought "drivers don't pay sufficient attention while driving a car with 'self-driving' features enabled" was both a known issue and a given?

    • @stephenpittman4291
      @stephenpittman4291 Před 11 měsíci +2

      Yes, just look at the aircraft autopilot situation... we have had CAT III autoland (fully automatic landing onto the runway with no visibility) since 1969 (the BAC Trident III, and later the L-1011). It still requires 2 pilots in the cockpit to monitor.

  • @steverrobbins10
    @steverrobbins10 Před 11 měsíci +3

    I like the optimistic thoughts at the end, but that optimism is the same optimism we heard about television, PCs, networking, the internet, social media, etc. All of this tech is being developed in an economic system in which the rational choice for the people in charge is to allocate all of the benefit of the tech advancements to themselves, through firing workers or algorithmically diminishing them (as you discuss with the WGA strategy of having AI write first drafts). If we don't change the way we distribute the benefits of this technology, it doesn't matter how good it gets, it won't actually make our lives any better.

  • @lostnumber08
    @lostnumber08 Před 11 měsíci +25

    I love your show Adam. You are so much cooler now that you are doing your own thing.

  • @Bizarro69
    @Bizarro69 Před 11 měsíci +3

    Adam notoriously interrupts guests; this guest was great, he ploughs right through it 🤣

  • @glassmonkeyface8609
    @glassmonkeyface8609 Před 11 měsíci +5

    The AI stuff, however, is another growing addition to the automation of our society, and our society is not in a place where this can be sustained. AI in various forms will cut, and has already cut, MORE jobs: self-checkouts, order screens at restaurants, the fully automated fast food locations that chains like Wendy's are trying, automated service over the phone and internet, etc., etc. It's going to get to the point where automation and AI cut so many jobs out of our lives that we as humans have to decide whether the greedy rich continue to be the only ones who have money and survive, or whether we make a society where people have living wages and are cared for while less work is needed. But the way it looks, the rich are going to continue to hoard everything, continue to cut labor forces with advanced tech, and expect the government, which they don't pay any taxes into, to keep people barely above water with food and shelter while they continue their competition to be the biggest billionaires and TRILLIONAIRES off all the money they save on labor. We are not ready for the next steps, because we are not ready to eat the rich.

  • @WifeWantsAWizard
    @WifeWantsAWizard Před 11 měsíci +15

    One of the problems is that an AI that can actually learn via the Internet becomes a mirror that shows us the parts of humanity many of us don't want to acknowledge. We have not evolved enough as a species to then give birth to actual digital intelligence that can operate safely.

    • @johngibson4874
      @johngibson4874 Před 11 měsíci +14

      That is why they have been paying foreign workers to clean the training data. Some poor person somewhere is constantly looking at the worst of humanity to prevent AIs from duplicating it. Terrible.

    • @altermann6753
      @altermann6753 Před 11 měsíci +3

      It's also not learning. It's just outputting data that has been fed into its dataset. Please do not personify generative AI; it does not actually possess intelligence or consciousness.

  • @Scriven42
    @Scriven42 Před 11 měsíci +1

    First 4 minutes and it's already refreshing to hear the basics laid out so clearly.

    • @erintraicene7422
      @erintraicene7422 Před 11 měsíci +2

      That’s how I feel. These are all the common sense arguments and points that when I mention them people roll their eyes and walk away. Which means they can’t argue with blatant facts and common sense.
      Grateful to Adam for using his platform to be the voice of reason .

  • @shadowfax731
    @shadowfax731 Před 9 měsíci

    Long-time fan; mucho respect for all your chutzpah and hard work, Adam! I merely hope to offer a heads-up about what I fear might be a dangerous trap that may have blindsided you. Sports betting is perhaps the most pernicious media threat yet to life, liberty and the pursuit of happiness... even to the hope, health and general wellbeing of us all. Please fight, Adam, to keep your integrity and dignity by saying "NO!" to "NFL Draft Kings" and the myriad evils of sports betting and gambling of all varieties! Whatever they are paying you can never be worth trashing trust and integrity. Thanks again for all the good and vital work you do, and for hearing my heartfelt concerns and hopes for your continued success!

  • @Gaffeghan
    @Gaffeghan Před 11 měsíci +2

    AI is a field of study in Computer Science. GPT is a form of AI that uses deep learning to produce a model that can output a "best guess" based on input.
    It isn't AI in the sense of Asimov or I, Robot-level sentience/sapience.
    That is a different form of AI that may well be impossible.
    The danger lies (IMO) in trusting deep learning AI with critical tasks that require a distinction between fact and fiction. Something which GPT lacks, among other things.
    It can guess the statistically most likely next word in a sentence, or next pixel in an image. It can't guess facts.
    It doesn't know you exist. It doesn't know it exists. It isn't self aware, let alone capable of critical thinking or decision making. It can't self motivate. It won't decide one day to sit down and write a poem all on its own. It's a computer program, like any other. It starts working when you turn it on and stops when you turn it off. It won't remember past conversations and doesn't know it ever ran in the past. It's a very sophisticated parrot, capable of convincing chatter, but lacking even the most basic awareness even a bird possesses.
    As it stands, is it a threat to us? Only if we make it one.
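    (A toy sketch of that "guess the statistically most likely next word" idea: a simple bigram counter, nowhere near a real LLM's scale, but the same basic principle of predicting the next token from observed statistics.)

    ```python
    from collections import Counter, defaultdict

    # Count which word follows which in a tiny corpus.
    corpus = "the cat sat on the mat and the cat slept on the sofa".split()
    next_counts = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        next_counts[word][nxt] += 1

    def most_likely_next(word):
        """Predict the continuation seen most often after `word` -- no meaning involved."""
        return next_counts[word].most_common(1)[0][0]

    print(dict(next_counts["the"]))  # {'cat': 2, 'mat': 1, 'sofa': 1}
    print(most_likely_next("the"))   # 'cat', purely because it was the most frequent follower
    ```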

  • @starbaron5506
    @starbaron5506 Před 11 měsíci +8

    Gary Marcus was an awesome guest! ❤

  • @trenomas1
    @trenomas1 Před 11 měsíci +5

    I think it's important to imagine what a deliberately slow roll out of technology would mean for the world.
    Personally, I think every technology ever could have been developed slower and more carefully and the world would be a better place.

  • @Multihuntr0
    @Multihuntr0 Před 11 měsíci +3

    A good and interesting conversation. I really like how Gary brings nuance to it all and even called out Adam when he was being unfair to AI. There are some really important decisions coming, and they need to be made soon. I also hope that companies are not the ones writing the regulations.
    A comment and a question.
    First, as a programming educator, my personal experience with ChatGPT for programming has all the same trappings as for general language. It makes code that *looks* good but subtly doesn't work, or changes the goals slightly. So, at 07:15, when you suggest that they are only good at helping programmers code more efficiently, I am worried. I don't want AI anywhere near my coding; more than anything, if the problem is in any way tricky, it is likely to mislead you rather than help.
    And a question: is there a meaningful difference between "AI", "computers" and "any form of digital automation" at this stage? All the ethical discussions are the same, aren't they?

  • @chawaphiri1196
    @chawaphiri1196 Před 7 měsíci

    Adam does good interviews that flow like conversations between people who have known each other for a long time. I enjoyed this video.

  • @louisvictor3473
    @louisvictor3473 Před 11 měsíci +1

    I like the mountain-climbing metaphor; here is my take. AGI is reaching the real Mount Olympus where the Greek gods live. We have now climbed a small but difficult mountain, and "we" are screaming to the four corners that we are developing climbing techniques that will surely help us reach the real Olympus very soon, maybe even on the next mountain we climb... Except we don't know if it exists; and if it does, where it actually is, whether it is in this plane of existence or elsewhere, or whether it is even a mountain. And whether it is technically a mountain or not, it is the home of gods, and we have no clue whether their hiding and protection measures are comprehensible to a human mind or surpassable by any means available to us, or whether it follows the physics of our universe so that our climbing methods could even apply. Yet some people are convinced that climbing this tricky overgrown hill is going to help for sure, that it's simply part of the path, just because the name we gave it also contains "Mount." Seriously, it is not even hubris; it would need to be far less insane to qualify for that.

  • @gnustep
    @gnustep Před 6 měsíci +1

    Another potential danger is for these technologies (GPT-4 etc.) to be used in interviews and hiring. Many managers will simply believe what the AI tells them, no matter what its biases may be. Also, it is possible that the tech industry is MORE susceptible to this sort of hiring bias. I really appreciate you guys' discussion. It was very interesting and I am sure it helped to enlighten a lot of people who were clueless.

  • @ryanthomastew
    @ryanthomastew Před 10 měsíci +2

    This was very helpful for better understanding some of the serious pitfalls of AI and how the greed of mega-billionaires is driving attempts at regulatory capture. Great work!!

  • @doommustard8818
    @doommustard8818 Před 11 měsíci +2

    I just want to say this: if you are having an AI write a draft of something and then fact-checking, sorting, or editing it to replace the "bad stuff" or the "wrong stuff" with "good stuff" or "correct stuff", then YOU ARE DOING THE WRITING. What the AI has written is AN OUTLINE, something your high school English teacher should have handed you before you graduated.
    For any fact-based piece of writing, 90% of the work is research, and in order to fact-check the AI you still need to do all that research.
    For any creative writing, 90% of the work is figuring out which of your ideas are worth keeping and which ones are just stupid; if you are sorting the "good stuff" the AI wrote from the "bad stuff", you are still doing that work. (Creative writing involves a lot of other things AI is bad at emulating that you will have to manually inject into the work as well, but I'll keep it simple.)
    So regardless, with the help of AI you are still doing 90% of the work; what the AI has given you is an outline with no substance.

  • @JustJanitor
    @JustJanitor Před 11 měsíci +1

    Haven't listened to any of these yet, haven't been in the mood I guess. But this was good, thanks Adam.

  • @williamcharnley5558
    @williamcharnley5558 Před 5 měsíci

    Great episode thanks Adam and Gary, really interesting and fun

  • @dolliscrawford280
    @dolliscrawford280 Před 9 měsíci +2

    A CEO often doesn't know how the sausage is made. We need scientists, programmers, and testers to be able to do small-scale anonymous whistleblowing to a regulatory agency, so problems can be looked into before a disaster or monitored during development.

  • @Ianpact
    @Ianpact Před 11 měsíci +1

    Thank you, Gary and Adam.

  • @michaelgalligan1187
    @michaelgalligan1187 Před 11 měsíci +2

    When will we get the next monologue episode? Those are the best ones you have made on the channel.

  • @ArtyGal
    @ArtyGal Před 11 měsíci +1

    The nail-on-the-head moment for me was when you said, "they don't have an abstract ability to reason."
    My fear is that this AI will be plugged into a quantum computer, which all sorts of tech companies are scrambling to get up and running. There are a couple that concern me. One of these quantum computers runs on refraction mirrors in China.
    Michio Kaku (hope I spelled his name correctly) says that combining a quantum computer with this language program would be incredibly dangerous, and frankly I worry about it quite a bit.
    Please get Michio Kaku (physics, quantum string theory) to talk on your show.

  • @DaveShap
    @DaveShap Před 11 měsíci

    Thanks for amplifying the signal.

  • @claffert
    @claffert Před 11 měsíci +1

    On people playing with ChatGPT and thinking we're on the verge of having Data from Star Trek: I'd say it's closer to (though not nearly as good as) the AI of the holodeck, where they gave voice prompts and the holodeck kept getting things a bit wrong, needing revision, and sometimes causing a crisis that threatened the entire ship.
    "Oh no! I asked the holodeck to make me a worthy opponent for Data when I really meant a worthy opponent for Sherlock Holmes! Oopsie! Now the ship is threatened by a rogue holodeck character!"

  • @michaelhgravesjr9608
    @michaelhgravesjr9608 Před 10 měsíci +1

    That point about inference is really key. In fact, I would argue that inference and extrapolation are the key cornerstones of true intelligence. Holding data and spitting it back out on command is nothing; the earliest computers could manage that much flawlessly. The ability to look at data, and make theories about unknowns based on that data that can be tested, well... that's what separates humans and machines.

  • @Yournamehere368
    @Yournamehere368 Před 11 měsíci +1

    When will we reach AGI? Answer: if AGI is possible, and we can achieve it with the models we are currently using, it would likely happen sometime in the next 10 years. Longer than that, and it is probably multiple decades away. As with the initial ML boom, if the field fails to deliver the promised advances, funding and research time are likely to start drying up, and if that happens the timeline will lengthen substantially. So 0-10 years, or 20+, are the most likely guesses.
    That's predicated on a few big IFs: if AGI is possible with current models, and if AGI is possible with current hardware. AGI might take entirely new approaches to ML to develop. If that's the case, the timeline is anyone's guess; we can't predict anything where we don't have a baseline, and new approaches would lack a baseline for any meaningful prediction.
    We say ML models have neurons, but they are nothing like human neurons (see the sketch after this comment), so we don't know whether current computer hardware is capable of the complexity needed to reach AGI. It might take a whole new type of hardware to even have a chance of reaching AGI status, but again, that's an unknown factor. If we need a new type of hardware, there are so many variables that any prediction we make is at best a complete guess.
    So what should we be worried about with AI? There are two main issues: the safety of current models, and the potential for self-improving models. Both could lead to unpredictable outcomes, and none of those outcomes are ideal. Self-improving models could possibly lead to AGI, but they could also behave in dangerous ways we can't predict. There are other concerns as well, like extracting the training data from an LLM. Given that some LLMs use user input to train the model, that creates a vulnerability: people might put sensitive information into an LLM, the model would then be retrained on that data, and a bad actor could later extract it. That's a new type of data breach, and something to be concerned about. So the waters are dangerous, and I agree we need a governmental agency dedicated to AI: possibly a new cyber division of the military, the intelligence apparatus, and/or a regulatory body devoted to AI. The problem is we don't have 5 years to make it happen; we needed it yesterday.
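
    To make the "neurons in name only" point concrete, here is a minimal sketch of what a single unit in a typical ML model amounts to. This is plain NumPy with purely illustrative values, not any particular production architecture: the whole unit is a weighted sum pushed through a fixed nonlinearity, with none of the spiking dynamics, chemistry, or plasticity of a biological neuron.

    ```python
    import numpy as np

    def artificial_neuron(inputs, weights, bias):
        """An ML 'neuron': a weighted sum of its inputs passed through a fixed nonlinearity (ReLU here)."""
        return max(0.0, float(np.dot(weights, inputs) + bias))

    x = np.array([0.2, -1.0, 0.5])   # incoming activations (made-up values)
    w = np.array([0.7, 0.1, -0.4])   # learned weights (made-up values)
    print(artificial_neuron(x, w, bias=0.05))   # 0.0 here, since ReLU clips the negative sum
    ```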

  • @TheVallin
    @TheVallin Před 7 měsíci +2

    I personally think modern "AI" companies should be put on notice, and a round of artists should be lining up to sue over AI using their copyrighted works without license or permission.

  • @MsReclusivity
    @MsReclusivity Před 11 měsíci +10

    Honestly the most interesting part about the "rules for chess" thing you were talking about is that you can ask the AI whether the move it made is against the rules; it will go back over what it did and tell you whether it was wrong.

    • @davidelmkies6343
      @davidelmkies6343 Před 11 měsíci +1

      Based on the idea that it's a "complete the next bit of text" kind of thing, maybe you just clued it in by asking.

    • @CuriousKey
      @CuriousKey Před 10 měsíci +2

      And if it's an LLM, there's a good chance it will be wrong in its assessment. Also keep in mind that the big game-playing AIs like AlphaGo and AlphaZero are not LLMs.

    • @GreyTaube
      @GreyTaube Před 6 měsíci

      No, it can't. That's the whole issue. That whole "are you sure?" back-and-forth was specifically added once ChatGPT became popular, basically so the model agrees with the person behind the screen. A "yes man" fallback, if you will, to stop it from turning into a brainwashing session from the AI to the person.
      The technology behind it cannot do anything like that. There is no reasoning process that goes back over existing prompts and reviews them.
      Instead, OpenAI is just tricking people with that as well, by making the AI a "yes man" as soon as it hears phrases like "are you sure". If you play around with it a bit more, you can easily make it go back to its initial wrong state by questioning whether the "fixed" state was right and laying out some supposed arguments against it.
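
      Worth making concrete: checking whether a chess move is legal is a deterministic lookup against the game state, which is exactly the kind of check an LLM is not performing when it "reviews" its own move. A minimal sketch, assuming the third-party python-chess package is installed (this is an external tool, not something ChatGPT runs internally):

      ```python
      import chess

      board = chess.Board()                   # standard starting position
      illegal = chess.Move.from_uci("e2e5")   # a pawn can't jump three squares
      legal = chess.Move.from_uci("e2e4")

      print(illegal in board.legal_moves)     # False: rejected by the rules engine
      print(legal in board.legal_moves)       # True
      board.push(legal)                       # apply the legal move to the board
      ```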

  • @Mikkelltheimmortal
    @Mikkelltheimmortal Před 11 měsíci +1

    I did not know you have a Netflix show.
    I'm going to watch it for certain.

  • @1805movie
    @1805movie Před 11 měsíci +2

    It should really be called "I.A." (Imitative Algorithm).

  • @MaryamMaqdisi
    @MaryamMaqdisi Před 11 měsíci

    Love your shirt Adam, also very interesting video

  • @vbywrde
    @vbywrde Před 11 měsíci +3

    17:45 on the topic of copyright regulations. Jaron Lanier pointed all of this out in 2013 in his book "Who Owns the Future" in which he said that all the creativity of the world was going to be used as free training data for LLMs and made similar prescient observations. Of course, he was summarily pooh-poohed by the Big Tech industry leaders, and the Mass Media completely ignored him.

  • @peterwolf4157
    @peterwolf4157 Před 11 měsíci

    On your tour, are you going to make it to Canada? It would be nice getting some of your insights and humour up here.

  • @ItWasSaucerShaped
    @ItWasSaucerShaped Před 11 měsíci +1

    Just a note about the Pentagon example:
    Sure, even 10+ years ago you could have had someone competently photoshop the Pentagon exploding. And just a few years ago probably it would have been possible for someone to create a convincing enough composite video of the same thing.
    But those would require having specialized expertise and software.
    AI changes things by axing those requirements. All I have to do is tell a generative AI model to create the video, and if the model is good enough, suddenly I have a convincing video of the Pentagon being destroyed or a politician being assassinated or an October Surprise that didn't actually happen. I don't need anything other than the motivation to do it and access to the AI model.

  • @WilliamHaynesTV
    @WilliamHaynesTV Před 11 měsíci

    He had me with the "7 dilithium crystals" Star Trek comment 🤣🤣🤣

  • @vtr8427
    @vtr8427 Před 11 měsíci +1

    Awesome to get Gary Marcus

  • @larrywest4130
    @larrywest4130 Před 10 měsíci

    I think a group of experts brainstorming together on the show at the same time would be great.

  • @maficstudios
    @maficstudios Před 11 měsíci +2

    The problem with AI is less that it will replace humans; it's that it will replace enough of a human that corporations will happily take the loss of function to save a buck. But they'll all do it, so when you arrive at the intellectual restaurant you've loved all your life, they'll serve you a grey gruel that costs more than the T-bone you used to eat. And you'll eat it, because that's what everyone serves, and you have no choice.

  • @AndrewEwzzyRayburn
    @AndrewEwzzyRayburn Před 11 měsíci +6

    Gary was a great guest. Glad to have him as the one taking all those DC meetings.

  • @garyclouse4164
    @garyclouse4164 Před 7 měsíci

    I work with an early AI application that used neural-net simulation and fuzzy logic for reading handwriting. The AI gave the software the ability to guess the intended meaning of ambiguous squiggles.
    That ability to guess came with a side effect: the ability to make mistakes. The current algorithms have added the ability to fabricate lies.

  • @Alex-cw3rz
    @Alex-cw3rz Před 11 měsíci +2

    Fantastic guest

  • @GregorySaintJean
    @GregorySaintJean Před 11 měsíci

    @TheAdamConover is there a video link for when Gary Marcus made a testimony to Congress?

  • @animefan25
    @animefan25 Před 11 měsíci

    Where can I find the full episodes of Adam Ruins Everything?

  • @PawsWithClaws_
    @PawsWithClaws_ Před 11 měsíci +2

    Kind of a weird comment, but your character in Adam Ruins Everything was really comforting to me as an autistic kid. The entire trope of someone who lays out large amounts of information even if other people don't want it, and has people get annoyed because of it, was so comforting. It just kinda made me feel seen??? 😭😭
    Like I know it's not that deep, but I just wanted to say that.

  • @robhoneycutt
    @robhoneycutt Před 8 měsíci +1

    There's something of a parallel with manufacturing where, years ago (well before Tesla), some auto companies attempted to create fully autonomous factories. They quickly discovered that, no matter what level of precision in their engineering, there was a certain level of nuance that machines were just incapable of reproducing relative to what a human could achieve. Musk tried the same at Tesla, not fully understanding the lessons previously learned, and also failed. Companies building autonomous vehicles are learning this exact same lesson and spending, as Gary points out, $100B+ learning it yet again.
    I think companies digging into so-called AI are also going to have to relearn these same lessons. LLMs are fascinating, but they can't do what humans do. I think AGI will eventually happen, but my own guess is it's a century in the future. We'll eventually have autonomous vehicles and factories too; those are probably at the very least a decade or two (maybe three) in the future.
    Suffice it to say, what tech companies are attempting to compete with is a few hundred million years of evolution, and that's going to take a while to surpass. I also think there's a level of bravado and over-confidence that actually inhibits their capacity to achieve such tasks.

    • @GioGio14412
      @GioGio14412 Před 8 měsíci

      Automating factories wasn't possible when the technology wasn't advanced enough. It happens with every technology: it takes some development to achieve certain objectives. Language and drawing couldn't be automated before either, and everyone thought it was impossible and would take 100 years.

  • @cowboyuniverse7258
    @cowboyuniverse7258 Před 11 měsíci +1

    Finally someone to talk to who isn't totally agreeing with everything said.

  • @beratnabodhi
    @beratnabodhi Před 11 měsíci

    I read an article several months back about studies to determine the safest/least affected region of the U.S., and the scientists determined it was northern Vermont.

  • @peterpodgorski
    @peterpodgorski Před 11 měsíci +3

    19:46 This is basically the "color scientist" (Mary's Room) philosophical thought experiment playing out in reality. It's broadly true that all humans do is mix and connect information, in some very, very remote sense similar to a generative model, but there's one huge difference: stuff made by other people is not our only source of input. There's also our lives, experiences, walks in the park, and shitty days at work. And also our internal life, which a model is also devoid of. So yeah... that argument is absolutely stupid.

  • @gogongagis3395
    @gogongagis3395 Před 11 měsíci

    Be honest, Adam! Was the video description written by ChatGPT? It absolutely loves describing people as “esteemed”.

  • @prettyflyforacompsci7725
    @prettyflyforacompsci7725 Před 11 měsíci +1

    In order for self-driving cars to work, they would need to start installing data gatherers on pretty much every new car for years, until they have captured every reasonable outlier. It still won't be perfect, but at least that would give them a good enough data set to work with.

  • @bgiv2010
    @bgiv2010 Před 11 měsíci +2

    "It's not the new technology that's scary. It's the way the technology is owned and managed that's scary."
    Yes! This is exactly what Luddism is all about!

  • @JoeyDCote
    @JoeyDCote Před 11 měsíci +1

    "Build elder care robots", wow, that is a flashback to Roujin Z.

  • @EatTheRichAndTheState
    @EatTheRichAndTheState Před 11 měsíci

    Love the conversation and love your points. Though I didn't like the misrepresentation of anarchism, it was still a great talk.

  • @MonsterPrincessLala
    @MonsterPrincessLala Před 11 měsíci

    My dude, you are going from Baltimore to NYC without stopping in Philly😢😢😢

  • @jomo9454
    @jomo9454 Před 11 měsíci +8

    Lately I've been seeing Teslas suddenly brake and start to swerve with nothing ahead of them close enough to necessitate that kind of maneuver, especially when there are barrels around or the lane stripes are screwy due to construction.

  • @Stumdra
    @Stumdra Před 11 měsíci +3

    10:01 Gary Marcus is not well informed on the current technology. GPT-4 is very well able to differentiate between "own a Tesla" and "owns Tesla" (the company). He should read up on the transformer architecture which is responsible for providing the necessary context. Seeing him make such basic misjudgments makes you wonder how much you can trust his other assessments.

    • @adamestrada7610
      @adamestrada7610 Před 11 měsíci

      Source?

    • @Stumdra
      @Stumdra Před 11 měsíci +2

      @@adamestrada7610 This is something you can check for yourself without having to take anyone else's word for it. I just tried it with ChatGPT Plus. The answer clearly shows that GPT-4 can distinguish the two concepts.
      *Q: Does Elon Musk own a Tesla?*
      A: Yes, as of my knowledge cutoff in September 2021, Elon Musk, the CEO of Tesla Inc., owns multiple Tesla vehicles. It's common for owners and top executives of car companies to use vehicles produced by their own company. However, for the most current information, please refer to the most recent sources available.
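
      For anyone who wants to repeat this check programmatically rather than through the chat UI, here is a minimal sketch using the OpenAI Python SDK (the v1-style client; the exact interface depends on your SDK version, and it assumes OPENAI_API_KEY is set and you have access to a GPT-4-class model). Outputs vary from run to run, so treat it as a probe, not a proof.

      ```python
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      for question in ("Does Elon Musk own a Tesla?", "Does Elon Musk own Tesla?"):
          reply = client.chat.completions.create(
              model="gpt-4",  # any GPT-4-class model you have access to
              messages=[{"role": "user", "content": question}],
          )
          print(question)
          print(reply.choices[0].message.content)
          print()
      ```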

  • @Reinkai1
    @Reinkai1 Před 11 měsíci +2

    What does the term "paperclip" mean in this context?

    • @gemmapeter7173
      @gemmapeter7173 Před 11 měsíci +6

      "The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when it is programmed to pursue even seemingly harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design." from Wikipedia