Katja Grace on the Largest Survey of AI Researchers

  • Published 13 Mar 2024
  • Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at aiimpacts.org/.
    Timestamps:
    0:20 AI Impacts surveys
    18:11 What AI will look like in 20 years
    22:43 Experts’ extinction risk predictions
    29:35 Opinions on slowing down AI development
    31:25 AI “arms races”
    34:00 AI risk areas with the most agreement
    40:41 Do “high hopes and dire concerns” go hand-in-hand?
    42:00 Intelligence explosions
    45:37 Discontinuous progress
    49:43 Impacts of AI crossing the human-level intelligence threshold
    59:39 What does AI learn from human culture?
    1:02:59 AI scaling
    1:05:04 What should we do?
  • Science & Technology

Comments • 6

  • @martinnielsen2498 • 1 month ago

    Fantastic episode! 👏 Katja Grace’s insights on the potential risks and opportunities of AI are incredibly valuable. Her ability to articulate complex ideas clearly and accessibly makes this a must-watch for anyone interested in the future of technology. Thank you for highlighting such important perspectives and for a thorough discussion that truly illuminates the critical aspects of AI development. Greetings from Sweden 🇸🇪👋

  • @PauseAI • 2 months ago +3

    The AI Impacts surveys are perhaps the most useful studies that exist for convincing politicians that they need to act urgently. Thank you for that, Katja!
    Some of the stats from the latest one that we use all the time:
    - 86% believe the control problem is real and important
    - Average p(doom) is 14 to 18% (depending on how you phrase the question)
    As Katja says around 25:20, most people would be shocked by these numbers. It's beyond insanity that we're still allowing these companies to risk all our lives by building increasingly large digital brains.

  • @blahblahsaurus2458 • 2 months ago

    To me, one of the most obvious factors making AI dangerous is that it can be copied indefinitely to run on more hardware. Toward the end of the interview (52:00) you describe a scenario in which your productivity is 10% that of an AI, but suggest that still doesn't necessarily mean you can't contribute to the economy (at a much reduced salary).
    Yes, it does. It's an artificial, external constraint to say that your boss can't use as much AI as they want. Maybe they will be price-gouged by the provider of the AI. But since an AI is much cheaper to run than a human (who, beyond the energy requirements of the brain, still needs housing, transportation, healthcare, 16 hours off a day, two days off a week, etc.), it is likely that your boss and the provider will eventually agree on a price that is slightly less than your salary.
    But even ignoring that: if your boss can't get as much AI as they want, the provider DOES have as much AI as THEY want. And if your boss is in such a bad situation that they must rely on human labor for the indefinite future, they won't be in business for long.

  • @nowithinkyouknowyourewrong8675 • 2 months ago +2

    Experts might be our best predictors, but our best might still be junk. Historically, how good have experts been at predicting new things?

  • @dancingdog2790 • 2 months ago

    So much nervous laughter 😞