Rebecca Gorman | This House Believes Artificial Intelligence Is An Existential Threat | CUS

  • Date added: 21 Oct 2023
  • Rebecca Gorman speaks as the Second Proposition speaker on the motion in the Debating Chamber on Thursday 12th October 2023.
    The rapid growth in the capabilities of AI has struck fear into the hearts of many, while others herald it as mankind's greatest innovation. From autonomous weapons to cancer-curing algorithms to a malicious superintelligence, we aim to discover whether AI will be the end of us or the beginning of a new era.
    ............................................................................................................................
    Rebecca Gorman
    Rebecca Gorman is the Founder and CEO of Aligned AI, an IT consulting group working to make sure AI functions in alignment with human ethics and values. She was named in REWork's Top 100 Women Advancing AI in 2023, nominated for VentureBeat's Women in AI Award for Responsibility and Ethics in AI, and is a member of Fortune's Founders Forum.
    Thumbnail Photographer: Nordin Catic
    ............................................................................................................................
    SUBSCRIBE for more speakers:
    / @cambridgeunionsoc1815
    ............................................................................................................................
    Connect with us on:
    Facebook: / thecambridgeunion
    Instagram: / cambridgeunion
    Twitter: / cambridgeunion
    LinkedIn: / cambridge-union-society

Comments • 16

  • @singingway
    @singingway 5 months ago +1

    Her points: 1. AI does not currently always do what we intend it to do; her example is that AI applications meant to add entertainment to social media instead cause deaths among teenagers. 2. AI systems have been deployed at scale for 20 years; some people have benefited, some have been harmed. 3. Machine learning doesn't work in edge cases. 4. Example: if we allow AI to decide when to fire nuclear missiles. 5. It is not built for the purposes to which it is being deployed. 6. The key is to build it such that it follows our instructions, and then give it good instructions.

  • @The7dioses
    @The7dioses 4 months ago +3

    Can someone please explain how these systems kill teenagers? Hello?

    • @cleitondecarvalho431
      @cleitondecarvalho431 4 months ago

      Be patient; you'll know how when the TV news starts noticing it.

    • @The7dioses
      @The7dioses 4 months ago +2

      @@cleitondecarvalho431 I don't watch TV news. Since you already know how, why don't you share what you know instead?

    • @maxmustermann7794
      @maxmustermann7794 3 months ago +1

      Encouraging criminal behaviour or even suicide. GPT-J already did that last year. That does not mean any human can be talked into such behaviour, but it is certainly possible, as you can find out from the numerous articles about what happened when someone committed suicide after talking to a chatbot for months. I do not watch the news either, but I like to understand whatever sparks my curiosity.
      Long story short, a man in Belgium ended his life after GPT-J encouraged him over the course of months.
      Something that could maybe have been prevented, maybe not; but it surely did not discourage the person from taking these actions, on the contrary, as I said above.
      Search for it; you'll even find the transcript of their conversations, which I can describe with one simple word: horrifying.

    • @The7dioses
      @The7dioses 3 months ago

      @@maxmustermann7794 Thank you for the valuable information and for responding. I will definitely look into this.

    • @singingway
      @singingway 3 months ago +1

      Algorithms can lead a person down a path of illogic, "driving straight to the bottom of the brain stem," as they control what the user sees in response to the user's comments, choices, and actions. Young minds are particularly vulnerable to being influenced to be dissatisfied with the self: insecure, unsure, confused about self-identity (who am I and what do I really believe?). Teens who have taken drastic actions or self-harmed have sometimes been shown to be heavy media users made depressed by that media consumption.

  • @jimhiggs6281
    @jimhiggs6281 9 months ago +4

    Now, her we can listen to!

  • @AntonioVergine
    @AntonioVergine 5 months ago +1

    No, the point is not that AI is safe if it does exactly what we asked it to do. The point is that we do not know what the AI understood about our values and intentions. So an AI, if instructed to do so, could solve world hunger, but at the cost of something else we did not expect.
    The problem is that we can't know the "reasoning" behind the AI's choices. So we can't tell whether the AI's reasoning is flawed when the AI is more intelligent than we are.
    An example? In chess, you will see an AI making moves that look very bad. But you consider them bad only because you're not smart enough to see the full picture, while the AI is.
    In a similar way, we will give autonomous powers to AI, but we will not be able to be sure that the final results of what we ask will not carry a threat to humanity.

  • @MrMick560
    @MrMick560 6 months ago +2

    Can't say she put my mind at ease.

  • @richmacinnes4173
    @richmacinnes4173 6 months ago +1

    9 billion people on the planet, and it only takes 1 person to make a mistake. At best, it's guaranteed someone will use it for their own goals, and within hours of it being released.

  • @danremenyi1179
    @danremenyi1179 4 months ago

    Poor