Intelligence is not Enough | Bryan Cantrill | Monktoberfest 2023

  • Published Nov 16, 2023
  • One of the most common attitudes with respect to AI today is the so-called “doomerism,” the idea that AI technologies are inevitably fated to present an existential risk to humanity. This talk takes that idea on head first, systematically examining the theoretical risks versus the reality on the ground, taking a skeptical but thoughtful view to how we balance the potential of the technology with the varying risks AI may - or may not - represent.
  • Science & Technology

Comments • 45

  • @edgeeffect · 7 months ago · +14

    "Everything is a conspiracy when you don't understand how anything works." - some guy on The Internet.
    "It's either firmware OR humanity and YOU HAVE TO pick a side" - Bryan Cantrill

  • @nblr2342 · 7 months ago · +5

    Once again, a terrific talk. Very dense and with an enjoyable pace. Glad to hear they got the acoustics fixed - even if it's just by using a handheld mic. Pro tip: invest in a good DPA head mic setup.

  • @_ingoknito · 8 months ago · +11

    AI as force multiplier for human flaws: absolutely!

  • @VivekHaldar · 8 months ago · +33

    Yudkowsky says insane things with a straight face ("bomb the datacenters").
    Cantrill says sane things with the veins on his neck popping out.
    Still prefer the latter.

    • @bcantrill · 8 months ago · +2

      🤣

    • @gJonii · 8 months ago · +1

      Given that neither the talk nor your comment managed to actually get Yudkowsky's claims in context, I'm kind of unsure if this is deliberate lying or if the basic concept is just that hard to grasp. The basic concept is fairly simple: you have to make a choice. Either you ban making things that kill all of us, backed by the threat of force, or you don't. A ban means you have to be ready to bomb data centers if they are used to endanger humanity. If you are not prepared to do that, there is no ban, and none of this discussion matters.
      Yudkowsky stated he doesn't think a ban is realistic, so any talk of slowing down the extinction of humanity is meaningless, and the bombing of data centers was largely in the context of demonstrating how far we are from treating AI seriously.
      But yeah, reassuring lies are about all we have left. I'm just sad the anger is directed at the folks that tried to prevent the disaster.

    • @edgeeffect · 7 months ago · +1

      That's the best speaker-biog for Bryan Cantrill I've ever seen.

  • @kamikaz1k · 8 months ago · +4

    Loved where it was going, but then it ended with "it's our humanity," which is a bit b/s - especially since he was talking about concrete reasons why it'll be OK. The final reason should be that reality has too much detail, so until AI has an accelerated way to experiment in reality and learn from the physical world, there is always going to be a gap/bottleneck.

  • @nbuuck · 7 months ago · +2

    I had an utterly wrong preconception about the argument Cantrill would make here, partly given the framing of the talk when mentioned on social media, but also given how often tech entrepreneurs and venture capital investments are discussed on the Oxide and Friends series. I expected this argument would be a slightly different economic one, with the premise that we shouldn't abandon the economic opportunities simply out of fear or concern.
    I also had a different inferred understanding of what "AI doomerism" means: I thought those of us in the IT security space, given our concerns and stereotypical skepticism, were being labeled as doomers _a la_ Suppressive Persons in the eyes of the Church of Scientology. I was relieved that Cantrill acknowledged some of the risks at the end of the talk, if not in my preferred placement. Once one understands that, at least from Cantrill's perspective, AI doomers are those with perhaps irrational, actual-Doomsday-scenario concerns, the premise and the term "AI doomers" feel less threatening.
    That said, I lament that we're spending time addressing irrational doomerism (I guess that doesn't sound redundant in my head, hence my misconceptions) given that it is nominally irrational, when we could put more air time and dialog toward the security, privacy, and social concerns and maybe even theorize solutions - the latter of which I've heard very little of in the oceans of worry being written about AI. That doesn't mean I think Cantrill shouldn't have focused on the irrational nature of AI doomerism... he may not feel like a sufficient authority on AI security and privacy to compose such a talk. I certainly have a bit better understanding of some of the schools of thought about AI and how we label them after listening to this. Thanks, Bryan!

    • @skurtz97 · 7 months ago · +1

      I agree with this take. It's been said before but I think it's worth repeating that the issue of AI doom is obscuring the more serious concerns surrounding the negative social consequences AI might have that fall short of what people would be likely to classify as "doom".
      So the question should be completely reframed to "How do we prevent AI from making the world a much shittier place?" In some respects, this is impossible, because what makes the world shitty for one group will probably be favored by others, so it is inherently subjective. Those that have a vested interest in the success of AI projects (notably, those that have invested in the commercialization of the technology) are obviously much more likely to think the success of AI would make their own lives significantly less shitty, because it would mean they would presumably come out with a lot of money.
      I suspect this is the fundamental concern: do you like your job, think it's important that it be done by a human, and also think that AI will make it easy for others to take the easy path and get a cheap, AI-generated replacement much more quickly, thereby destroying some amount of purpose in your life and making the world a less artistically, aesthetically, or spiritually beautiful place? If you fall into that camp (and I do as a programmer, but I suspect many artists, writers, and similar professions fall into the same camp), then you probably feel that AI is just another step down a long and grinding path to some sort of philosophical doom or death of the human spirit, rather than an actual physical doom. I think if you really pushed, you would find that many who speak of a physical doom really don't mean it like that at all. On a purely technical level, the physical doom scenario is much less likely, and there are problems that remain totally unsolved with no real evidence of significant progress, so the embodiment of AI, or its manifestation in the physical world in a way that would enable it to cause our physical destruction, seems to me highly unlikely at this time.

  • @patmelsen · 8 months ago · +2

    36:48 Interestingly, this train of thought also kind of summarizes the position that climate change naturalists hold, where they say that we should not let an unspecified fear of climate change stop us from making the best of this planet (which may involve burning mineral oil).

  • @cowabunga2597 · 8 months ago · +16

    He is gonna have a heart attack in the middle of the talk. Nice talk btw.

    • @GeorgeTsiros · 2 months ago

      No, he won't. This is what gives him life. I am like him when I am explaining stuff.

  • @datenkopf · 8 months ago · +1

    What does he say about Lex Fridman at 39:24? (I think the subtitles are wrong, or I don't get it.)

  • @maxcohn3228 · 8 months ago

    Really solid audio on this talk

  • @vmachacek · 8 months ago

    I'm watching this talk for the 10th time now; still entertaining...

  • @allesarfint · 7 months ago

    "Intelligence is not Enough", tell me about it. Suffering my whole life because of this.

  • @chmod0644 · 8 months ago · +3

    Cantrill you magnificent bastard!

  • @navicore · 8 months ago · +1

    Thanks for this reasoned sanity.

  • @BspVfxzVraPQ · 6 months ago

    If my autocompletion causes an "existential threat," then that is on you, not me.
    If you hook up my autocomplete to the nuclear button... like, oh, blame the autocomplete.
    That is so robophobic...

  • @masonlee9109 · 5 months ago · +3

    Love Cantrill, but it is a pretty short-sighted take on AI x-risk to dismiss the possibility of agentified superintelligence.

  • @yugo_ · 8 months ago · +1

    Thank you, Bryan, I needed to hear this.

  • @dlalchannel · a month ago · +1

    Is his claim that AI will *never* be able to solve the engineering problem(s) that he and his team did?

  • @theyruinedyoutubeagain · 4 months ago · +3

    Bryan is one of the most brilliant people I know and, while I wholeheartedly agree with his stance on the idiocy of AI scaremongering, this reflects a shockingly poor understanding of the opposing point of view and reeks of stunted thinking. It feels like an application of the common trope of exceptional people having unwarranted confidence when discussing things outside their domain.

  • @a2aaron · a month ago

    What if it turns out that firmware is actually super reliable, and it's just that Bryan was cursed by a wizard at birth to always have firmware issues?

  • @jscoppe · 6 months ago · +3

    Argument by YELLING REALLY LOUDLY. Bryan seems like the Cenk Uygur of AI debate. "OF COURSE!!"
    Also, I loved when the nerd told the other nerds to touch grass.

  • @420_gunna · 8 months ago · +1

    stimulant check *banging credit card on table*

  • @Ergzay · 8 months ago

    Pretty good talk until the ending part where he suddenly re-invokes a bunch of nebulous "dangers".

  • @ginogarcia8730 · 7 months ago · +1

    I want what this guy's smoking.

  • @cepamoa1749 · 7 months ago · +1

    He only knows how to scream... tiring...

  • @ahabkapitany · 2 months ago · +2

    This was really embarrassing to listen to.
    1. Take a midwit tweet.
    2. Use it as a strawman.
    3. Shout for half an hour arguing with said midwit tweet.
    I came here expecting him to take this topic seriously; instead I just found Don't Look Up energy.

  • @GeorgeTsiros · 2 months ago

    Once again, the software was the problem.
    Once again, shit coding is to blame.
    We're never going to be engineers. We're just keyboard jockeys.

  • @captainobvious9188 · 8 months ago · +2

    Learn even a little bit about how modern AI works. It's nowhere near any of the AI in fiction, as believable as they are.

  • @ts4gv · 4 months ago · +2

    Introducing x-risk with such a dismissive tone won't work for much longer (I hope).
    This was a frustrating and bad presentation.
    :/

  • @palindromial · 8 months ago · +3

    Skip to 15:30 if you want to avoid the cringy bits. The engineering bits are pure gold, though. A+++, would watch again.

    • @aeriquewastaken · 8 months ago · +1

      Cringy bits?! Those were great!

    • @palindromial · 8 months ago

      @aeriquewastaken I didn't find what Bryan has to say cringy, but the bits he cites are nevertheless cringy to me. So overall, I much preferred the rest of the talk.

  • @jeffg4686 · 7 months ago · +1

    Capitalism versus Socialism, head to head. This is the real discussion. Everyone's too afraid -- too programmed to see past capitalism.