Darren McKee on Uncontrollable Superintelligence

  • Published June 3, 2024
  • Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment.
    Timestamps:
    00:00 Uncontrollable superintelligence
    16:41 AI goals and the "virus analogy"
    28:36 Speed of AI cognition
    39:25 Narrow AI and autonomy
    52:23 Reliability of current and future AI
    1:02:33 Planning for multiple AI scenarios
    1:18:57 Will AIs seek self-preservation?
    1:27:57 Is there a unified solution to AI alignment?
    1:30:26 Concrete AI safety proposals
  • Science & Technology

Comments • 4

  • @k14pc • 6 months ago +5

    great guest. fwiw my views essentially match his. will definitely check out the book

  • @AbsoluteDefiance • 6 months ago +2

    Very sharp fellow.

  • @dougg1075 • 5 months ago +2

    Since when does the human race care that much about safety? Especially when there is money to be made or a goal to be accomplished

  • @Pearlylove • 2 months ago

    Listened for 30 minutes, and no real info yet, just that people are so dumb it's hard to explain GI to them? People are not that dumb, but they might lose interest at this pace, so hopefully you soon explode with info in the next minutes!