Waiting for the Betterness Explosion | Robin Hanson & Richard Hanania

  • Published March 12, 2023
  • Robin Hanson joins the podcast to talk about the AI debate. He explains his reasons for being skeptical about “foom,” the idea that a superintelligence will suddenly emerge, rapidly improve itself, and potentially destroy humanity in the service of its goals. Among his arguments are:
    * We should start with a very low prior on something like this happening, given the history of the world. We already have “superintelligences” in the form of firms, and they improve only slowly and incrementally.
    * There are different levels of abstraction with regard to intelligence and knowledge. A machine that can reason very fast may not have the specific knowledge necessary to do important things.
    * We may be erring in thinking of intelligence as a general quality rather than as something more domain-specific.
    Hanania presents various arguments made by AI doomers, and Hanson responds to each in turn, eventually giving a less than 1% chance that something like the doomsday scenario imagined by Eliezer Yudkowsky and others will come to pass.
    He also discusses why he thinks it is a waste of time worrying about the control problem before we know what the supposed superintelligence will even look like. The conversation also includes a discussion about why so many smart people seem drawn to AI doomerism, and why you shouldn’t worry all that much about the principal-agent problem in this area.
    A transcript of the conversation is available here: richardhanania.substack.com/p...
    Subscribe to our YouTube channel: / @cspi
    Follow CSPI on Twitter: / cspicenterorg
    Subscribe to the CSPI Podcast: www.cspicenter.com/podcast
    Read our research at www.cspicenter.com
    _ _ _
    The Hanson-Yudkowsky AI-Foom Debate www.amazon.com/Hanson-Yudkows...
    Previous Hanson appearance on CSPI podcast
    - audio: www.cspicenter.com/p/18-how-t...
    - transcript: richardhanania.substack.com/p...
    Eric Drexler, Engines of Creation www.amazon.com/dp/0471575186?...
    Eric Drexler, Nanosystems www.amazon.com/Nanosystems-P-...
    Robin Hanson, “Explain the Sacred” www.overcomingbias.com/p/expl...
    Robin Hanson, “We See the Sacred from Afar, to See It the Same.” mason.gmu.edu/~rhanson/Sacred...
    Articles by Robin on AI alignment:
    - “Prefer Law to Values” (October 10, 2009) www.overcomingbias.com/p/pref...
    - “The Betterness Explosion” (June 21, 2011) www.overcomingbias.com/p/the-...
    - “Foom Debate, Again” (February 8, 2013) www.overcomingbias.com/p/foom...
    - “How Lumpy AI Services?” (February 14, 2019) www.overcomingbias.com/p/how-...
    - “Agency Failure AI Apocalypse?” (April 10, 2019) www.overcomingbias.com/p/agen...
    - “Foom Update” (May 6, 2022) www.overcomingbias.com/p/foom...
    - “Why Not Wait?” (June 30, 2022) www.overcomingbias.com/p/why-...

Comments • 6

  • @vincentduhamel7037
    A year ago +4

    Excellent podcast. My favorite from CSPI as of yet. Thanks!

  • @RaphShirley
    A year ago

    Likewise, saying "don't make me regret this" to an enemy in a peace deal wouldn't necessarily work.

  • @TheDeamonLo
    A year ago

    I see this as more of a binary path before us. Either we can create an artificial general intelligence that is for all intents and purposes humanlike but smarter, or we can't. In the latter case, maybe we can create AIs that aren't meaningfully smarter or more self-aware than us but offer greater amalgamation, like a corporation, and can "think" faster. In that case we probably don't have much to worry about (existentially, anyway). But if we can create something that sees us the way we see ants, then we are by definition doomed at some point.