Robin Hanson on Predicting the Future of Artificial Intelligence

  • Date added 24. 07. 2024
  • Robin Hanson joins the podcast to discuss AI development and safety.
    Timestamps:
    00:00 Introduction
    00:49 Robin's experience working with AI
    06:04 Robin's views on AI development
    10:41 Should we care about metrics for AI progress?
    16:56 Is it useful to track AI progress?
    22:02 When should we begin worrying about AI safety?
    29:16 The history of AI development
    39:52 AI progress that deviates from current trends
    43:34 Is this AI boom different than past booms?
    48:26 Different metrics for predicting AI
    Social Media Links:
    ➡️ WEBSITE: futureoflife.org
    ➡️ TWITTER: / flixrisk
    ➡️ INSTAGRAM: / futureoflifeinstitute
    ➡️ META: / futureoflifeinstitute
    ➡️ LINKEDIN: / future-of-life-institute
  • Science & Technology

Comments • 21

  • @dgeorgaras4444 • 1 year ago • +3

    This "slow takeoff of AI" view seems to rest on the same logic that Kodak or Blockbuster Video used to keep their business models unchanged.

  • @ZippyLeroux • 1 year ago • +1

    Dude, OK, check it out... I have terrible memory problems and I'm subscribed to a million channels, so every day I load up my feed and add all the videos I want to watch to their respective playlists. Anyway, my podcast playlist, for example, is like 300 videos long, so I'm months behind, with memory problems.
    This preamble is to inform you that every time I come across one of your videos in the feed, I think, "wtf is this? It looks... culty, or spiritual... Did I subscribe to this out of curiosity and then forget? What are they peddling?"
    I take full responsibility for this problem being on my end, along with the fact that I have no idea how to properly name or describe it, nor do I have any proposed solution. It's just something about the thumbnail design in conjunction with the channel name. Every time this happens I have to look through your video list to find credible guests, and I go, "Oh, OK, David Chalmers, that's why." And then by the time a new video appears in my feed I have to go through this whole thing again, lol!
    As I say, it's my problem, and thus far I'm still enjoying your excellent videos. Perhaps by some conditioning process over time I will learn to recognize the videos like I do for Brian Keating or Lex Fridman. But if what I'm saying makes sense in the context of any other feedback you've received, then by all means feel free to experiment with your thumbnail design.

  • @AleksOniszczak • 1 year ago • +9

    Well, this didn't age well.

  • @Dan-dy8zp • 5 months ago

    This came out *6 days* before ChatGPT.

  • @soniasilva9637 • 1 year ago • +3

    So basically, Robin Hanson is going by the rule of "the more you fuck around, the more you find out." A misaligned AI is not a boo-boo; it's game over for mankind. His analogy with car accidents is downright misdirection. A car can't take over the world; an AI can. A car isn't orders of magnitude smarter than the smartest humans. Besides, with a car you can pop the hood and see what's going on. An AI is a black box: they have no idea how or why it does what it does, and as for code the AI itself writes, they have no clue what it is. They just throw the dice and hope for the best. As Connor Leahy would say, "Don't do stupid."

  • @carlosmendes7 • 1 year ago • +6

    This way of dealing with possible future problems, waiting for more "incontrovertible" evidence... how different is this from what climate denialists do?

    • @carlosmendes7 • 1 year ago • +1

      I acknowledge, of course, the fact that many anticipated problems never happen. See en.wikipedia.org/wiki/Great_horse_manure_crisis_of_1894

    • @carlosmendes7 • 1 year ago • +1

      Oh, and great talk! Thanks for that!

    • @Dan-dy8zp • 5 months ago • +1

      Waiting around on AI alignment is a lot more dangerous than waiting around on global warming, I'd say.

  • @travisfitzwater8093 • 1 year ago • +1

    Everything will be different (or ought to be) by 2037.

  • @robs0070 • 1 year ago • +1

    The problem with this guy's arguments is that he's thinking in terms of linear growth rather than the exponential growth of AI. Going from trillion-function neural networks to 500-trillion-function ones won't take the same time it took to go from a single function to a trillion (500 trillion functions being roughly equivalent to human-level intelligence). You cannot extrapolate future progress from the past, because the growth is exponential in nature.
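
    The arithmetic behind this comment is easy to check. Below is a minimal back-of-the-envelope sketch in Python; the doubling time and the "years to the first trillion" figure are illustrative assumptions, not numbers from the comment or the episode:

      import math

      # Under exponential growth with a fixed doubling time, the jump from
      # 1e12 to 5e14 "functions" takes only ~9 doublings, however large the
      # absolute numbers look.
      start = 1e12                 # trillion-function network (commenter's figure)
      target = 5e14                # 500 trillion, the commenter's human-level estimate
      doubling_time_years = 1.5    # ASSUMED doubling time, for illustration only

      doublings = math.log2(target / start)          # log2(500) ~= 8.97
      years_exponential = doublings * doubling_time_years

      # A naive linear extrapolation (capacity added at the historical average
      # rate) predicts a vastly longer wait for the same jump.
      years_to_first_trillion = 10.0                 # ASSUMED historical span
      linear_rate = start / years_to_first_trillion  # functions added per year
      years_linear = (target - start) / linear_rate

      print(f"exponential: {years_exponential:.1f} years ({doublings:.2f} doublings)")
      print(f"linear:      {years_linear:.0f} years")

    Under these assumptions the exponential path covers the gap in roughly 13 years, while the linear extrapolation predicts close to 5,000, which is the commenter's point in one contrast.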

  • @Sporkomat • 1 year ago • +1

    His arguments and analogies are surprisingly not very logical.

  • @sgramstrup • 1 year ago • +4

    Interesting that we fear a future AI that doesn't align with humanity's goals, when we already use an economic system, capitalism, that doesn't align with humanity's goals.

    • @Alex-fh4my • 1 year ago

      Connor Leahy made a good point in saying that we are just stupid: "Imagine the world if the average person had 200 IQ." Capitalism is a stupid system for a stupid society that is vaguely better aligned to human values than some other awful systems.

    • @lysander3846 • 1 year ago • +1

      No matter what system is in place, it aligns with someone's goals. You might want to be more specific.

    • @gregoryn3780 • 1 year ago • +2

      What goals of humanity do you have in mind here? Capitalism is the same as freedom of cooperation... freedom seems to work pretty remarkably, in my opinion.

  • @ramakrishna5480 • 1 year ago • +1

    My suggestion is to get more optimistic guests.

    • @Alex-fh4my • 1 year ago • +1

      "optimistic" more like 'better aligned' with logical thinking

    • @Dan-dy8zp • 5 months ago

      My suggestion is humanity should be a little more pessimistic. At least regarding AI safety.