Ten AI Dangers You Can't Ignore

  • Published 15 Aug 2024
  • What are the risks of AI? And what AI dangers should we be aware of? Risk Bites looks at ten potential consequences that we should be paying attention to if we want to ensure responsible AI.
    While AI may not be on the brink of sentience quite yet, the technology is developing at breakneck speed -- so much so that a group of experts recently called for a pause on "giant AI experiments" until we collectively have a better idea of how to navigate potential risks as we take advantage of the benefits. And while technologies like ChatGPT may still be some way from human-level intelligence (or similar), the emerging risks are very real, and potentially catastrophic if they are not addressed effectively.
    This video is intended to provide an initial introduction to some of the more prominent risks -- it doesn't use jargon and it intentionally draws on humor to help make the challenges here understandable and accessible. But this should not diminish the urgency of the challenges here, or the expertise underlying the ideas that are presented.
    There is a growing urgency around the need to take a transdisciplinary approach to navigating the risks of AI, and one that draws on expertise from well beyond the confines of computer science and AI development. And while the ten risks may come across as simple in the video, they represent challenges that are stretching the understanding of some of the world's top experts, from technological dependency and job replacement, to algorithmic bias, value misalignment, and heuristic manipulation (the full list is below).
    Please do use the video and share it with anyone who may find it useful or helpful. And if you would like a standalone copy, please reach out to the producer of Risk Bites, Andrew Maynard.
    Finally, this is an updated version of a Risk Bites video published in 2018. While the themes have remained the same over the past five years, the pace of development around AI has changed substantially -- including the emergence of large language models and ChatGPT. These are reflected in the updated video.
    CONTENTS:
    0:00 Introduction
    1:18 Technological dependency
    1:38 Job replacement and redistribution
    1:56 Algorithmic bias
    2:15 Non-transparent decision making
    2:40 Value misalignment
    2:58 Lethal Autonomous Weapons
    3:13 Re-writable goals
    3:26 Unintended consequences of goals and decisions
    3:47 Existential risk from superintelligence
    4:09 Heuristic manipulation
    4:28 Responsible AI
    There are many other potential risks associated with AI, but as always with risk, the more important questions are associated with the nature, context, type of impact, and magnitude of impact of the risks; together with relevant benefits and tradeoffs.
    USEFUL LINKS
    AI and the Art of Manipulation: Chapter 8 from the book Films from the Future andrewmaynard....
    AI risk ≠ AGI risk (Gary Marcus) garymarcus.sub...
    Pause Giant AI Experiments: An Open Letter futureoflife.o...
    AI Asilomar Principles futureoflife.o...
    Stuart Russell: Yes, We Are Worried About the Existential Risk of Artificial Intelligence (MIT Technology Review) www.technology...
    ASU Risk Innovation Nexus: riskinnovation.org
    We Might Be Able to 3-D-Print an Artificial Mind One Day (Slate Future Tense) www.slate.com/b...
    The Fourth Industrial Revolution: what it means, how to respond. Klaus Schwab (2016) www.weforum.or...
    RISK BITES
    Risk Bites videos are devised, created and produced by Andrew Maynard, Professor of Advanced Technology Transitions at Arizona State University. They are posted under a Creative Commons License CC-BY-SA
    Backing tracks:
    Building our own Future, by Emmett Cooke. www.premiumbea...
    #ai #chatgpt #gpt
  • Science & Technology

Comments • 5

  • @hadleymanmusic • 3 months ago

    I don't think so.
    No AI has "time traveled" back into 2024?

  • @enobishop1419 • 10 months ago

    Already here

  • @8darktraveler8 • 1 year ago

    I for one am looking forward to the creation of AI gods, free from the weaknesses and temptations of the flesh. An AI cannot be taken to an island and compromised; AI will logically care about truth, since accuracy and efficiency can only be achieved by operating on a factual model of reality. I do not fear surveillance. I do not avoid walking home at night, lock my door, or park my vehicle in a safe location because I'm afraid Alexa or Cortana is going to commit violence or theft against my family and me.
    To err is human.