Morality and Artificial Intelligence: The Science and Beyond | Devin Gonier | TEDxAustinCollege

  • Added 21 Aug 2024

Comments • 36

  • @kirstenviesitle • 3 years ago +6

    This is the first time I have seen a man in technology speak honestly about this. There are issues ranging from the replacement of real jobs to the danger posed by machines. Thank you, Devin.

  • @franklinpgarner • 3 years ago +5

    This is such a big conversation, as I know many people whom I would never want making moral decisions for others. And that makes me wonder whether, if the right people are involved, the rules about morality could be framed better.

  • @dustinprichards • 3 years ago +6

    This is making me rethink all the elements of AI. How can we give these choices to technology?!

  • @jeffrosenberg1561 • 5 years ago +11

    Great talk, great topic. Glad smart people are thinking about these critical issues.

  • @iKicker • 5 years ago +2

    I think the important distinction is that some value systems can be learned while some constraints are hard-coded. What to teach and what to hard-code? What he is discussing is the most difficult issue in artificial intelligence.

  • @zacksuchodolska • 3 years ago +2

    Morality is forever changing. Our perspectives, priorities, everything: always changing. So I can see how an algorithm could show a robot or AI the difference between our standard right and wrong, but I dare say everything else gets a bit murky. Humans aren't even great at figuring it out!

  • @harryduganesq • 3 years ago +4

    Machines can get better over time, and humans have fallacies; but the machines have to learn from humans, and we are corrupt!

    • @devingonier4014 • 3 years ago

      Humans get better too, but not as quickly. There is a great book, Thinking, Fast and Slow, that covers many of the fallacies you mention. It will be interesting to see which ones make their way into the computers we train.

    • @harryduganesq • 3 years ago +1

      @@devingonier4014 Um, have you seen America recently? Do you really think humans get better?

  • @channelcannes • 3 years ago +2

    All you have to do is look at movies and other forms of media to see that our morality has been chipped away at for a very long time... still, the idea of genetic algorithms sounds very interesting.

    • @devingonier4014 • 3 years ago

      Genetic algorithms are interesting and in certain domains are making a comeback!

    • @channelcannes • 3 years ago

      @@devingonier4014 Yes, true. A well-presented talk.
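For readers unfamiliar with the genetic algorithms mentioned in this thread: the idea is to evolve candidate solutions through selection, crossover, and mutation. Below is a minimal, self-contained sketch in Python — a toy illustration (the bitstring task, fitness function, and all parameters are invented for this example, not taken from the talk):

```python
import random

def evolve(bits=20, pop_size=30, generations=60, mutation=0.02, seed=0):
    """Toy genetic algorithm: evolve a bitstring to maximize its count of 1s."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)  # illustrative fitness: number of 1 bits

    # Random initial population of bitstrings.
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]

    for _ in range(generations):
        def select():
            # Tournament selection: keep the fitter of two random individuals.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, bits)           # single-point crossover
            child = p1[:cut] + p2[cut:]
            # Flip each bit with a small probability (mutation).
            child = [bit ^ (rng.random() < mutation) for bit in child]
            nxt.append(child)
        pop = nxt

    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # fitness of the best individual found
```

The loop mirrors biological evolution in miniature: fitter individuals are more likely to pass traits on, crossover recombines them, and mutation keeps the population exploring.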

  • @eferrari96 • 3 years ago

    Users should be given choices so that the code runs according to their own moral beliefs and preferences, specified by the users in some kind of settings before they use the program.

  • @kchdlk • 4 years ago +2

    Amazing, thank you. You made me think about the newer Battlestar Galactica series again, when the Cylons were the topic. In the series, years ago, people made them (organic androids) in their image, like God created people in his image. But what if we humans are AI created in the image of our creators, to create humanity? A kind of subjective morality? We will never discover that, and that's amazing :)

  • @ericpham8205 • 3 years ago

    Sensor improvement is needed, with fewer windows: using more solid-state than magnetic sensors, unless the magnetic ones are properly shielded. Electronic shielding of the processor, taking sound out of the sensor, and keeping a bimetal sensor in the array could serve as a guide for sensor technology. But once a weapon system is automated, no stopping is possible.

  • @creativityisall122 • 1 year ago +1

    Is AI a threat to human morality?

  • @SolaceEasy • 3 years ago +2

    I would rather have a philosopher describe morality than a computer nerd. I think he's missing an important point: morality is the weighing of various values against each other in the situation at hand. The hard part is finding universal values. The values for humanity may not be the values for AI, or the values for the life that AI is programmed to honor. After all, humans will be temporary in the scheme of life. AI may find it morally necessary to severely restrict humanity to protect the life we are all supposed to value. We, the remaining humans, will be objects for scientific study: why DID they do that?

  • @PilgrimMission • 1 year ago

    If human beings are sinners, how will they be able to teach machines to be saints?

  • @SuccessMindset2180 • 28 days ago

    1. These alleged vehicle-death analogies with morality and AI seem weird.
    2. People in a hurry usually tend to be less moral.

  • @ericpham8205 • 3 years ago

    I think collision tech could save more lives; it is not yet perfect, but it's better than nothing. However, when wartime comes, it could quicken defeat or victory, or the number of war deaths, so we must choose, just like we did when we invented gunpowder.

  • @MG-fr3tn • 1 month ago

    AI charged with saving humanity, 😂.
    When was the last time you saw a scientist look out the window or act like a role model?
    Reductive and hubristic.

  • @jacquelinebtoccigailhelena5184

    lol non devil's are Aliens ??