Machine intelligence makes human morals more important | Zeynep Tufekci

Share
Embed
  • Added 19 Jun 2024
  • Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."
    TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and much more.
    Find closed captions and translated subtitles in many languages at www.ted.com/translate
    Follow TED news on Twitter: / tednews
    Like TED on Facebook: / ted
    Subscribe to our channel: / tedtalksdirector
  • Science & Technology

Comments • 215

  • @twstdelf
    @twstdelf 7 years ago +33

    Wasn't sure where she was going at first, but she landed it nicely - well done - and an important point for sure!

  • @fburton8
    @fburton8 7 years ago +40

    Well worth the standing ovation, I'd say!

  • @TheHeavyModd
    @TheHeavyModd 7 years ago +5

    A great talk, for once! I thought this was going to be one of those philosophical talks with minimal objectivity and high subjectivity, but instead I was pleasantly surprised by the examples and evidence she provided for her argument.
    This was truly eye-opening and interesting. Thank you, Ms. Tufekci!

  • @AbhishekNigam
    @AbhishekNigam 7 years ago +6

    One of the most important and best talks I have ever seen. Highlights some grave mistakes in our all-powerful, all-great system.

  • @lollsazz
    @lollsazz 7 years ago +22

    I wonder who downvoted this... this is both true and important

  • @philtripe
    @philtripe 7 years ago +15

    wow... 15:45 really got me with the "lethal autonomous weapon" at the end... here are the people who make the programs telling us it's not perfect and can never be perfect

  • @nivolord
    @nivolord 7 years ago +30

    Tip: Don't watch too many of these videos. Machine learning algorithms might judge you dangerous for their survival, or worse, they might try and sell you philosophical books on moral decisions.

  • @locouk
    @locouk 7 years ago +35

    How bizarre, I was watching Fox News live on YouTube earlier today, noticing the racist and hate comments in the chat thread scroll up. I left one comment saying "Google have an algorithm that identifies hate comments and logs the users." Several of the "keyboard warriors" instantly cleaned up their act.

    • @DeoMachina
      @DeoMachina 7 years ago +3

      This happened.

    • @DILINGER0
      @DILINGER0 7 years ago +1

      lol

    • @sj8948
      @sj8948 7 years ago +6

      I'll take things that didn't happen for 500, Alex.

  • @e1iason
    @e1iason 7 years ago

    Incredibly insightful and meaningful -- a discussion that doesn't usually accompany traditional conversations about machine intelligence.

  • @fatmaaydn9290
    @fatmaaydn9290 6 years ago +2

    great talk, great topic. Congratulations, Zeynep Tüfekçi!

  • @buzz10014
    @buzz10014 7 years ago +7

    this was an incredible video. impressive perspective, and she was 100% right. what she is talking about will, without a doubt, be the future we will all belong to and ...(gulp) be controlled by. so we had better realize that the software we write today will be used against or for us in the future. we must ensure that the ethics programmed into machine learning are suitable for all people to live by comfortably and fairly, reasonably and kindly.

  • @mannyverse6158
    @mannyverse6158 7 years ago +64

    Unchecked algorithms are ruining the world. This is a profound talk.

  • @freediscussions3743
    @freediscussions3743 7 years ago +1

    Great talk Zeynep! Thank you

  • @ShortsHound
    @ShortsHound 7 years ago

    Thought-provoking! ... and a well-presented treatise on scrutiny ... opens some interesting lines of reasoning about what form the scrutinizers may take

  • @tarekabuaita666
    @tarekabuaita666 7 years ago +2

    great presentation and a beautiful personality.

  • @joetaylor486
    @joetaylor486 7 years ago

    Outstanding! I have no other adequate words.

  • @gnarlin4964
    @gnarlin4964 7 years ago +15

    and this is why all software must respect user freedom.

  • @Alex-sx8et
    @Alex-sx8et 7 years ago

    Eagerly awaiting the French subtitles ☺

  • @crimsoncorsair9250
    @crimsoncorsair9250 7 years ago +34

    I sometimes doubt that humans have any morals left...

    • @vaibhavgupta20
      @vaibhavgupta20 7 years ago

      Crimson Corsair why do you feel that way?

    • @MaskofPoesy
      @MaskofPoesy 7 years ago +5

      Left? Left from what? The Middle Ages? If anything we're improving in every way. Humanity is a concept we aspire to and nurture, not a physical attribute we have been losing since birth.

    • @Sakhmeov
      @Sakhmeov 7 years ago +1

      They do. And computing can help. What we call "goodness" is actually pretty much interchangeable with "efficiency", just overlaid somewhere down the line with some idiosyncratic concept of "fairness". This is totally modelable. And if I understand the game theory right, it also means that "good" is an emergent trait.
      However if you want to look at the source of "evil", look at the SJW undertones here. "Messy value-laden human affairs" end up not being about efficiency, thus precisely the step away from ultimate good. And this is validated and codified in law, rather than the goal of giving people the truth as objectively and scientifically as possible, and sticking to it. It's the system that we currently run the West on.

    • @DeoMachina
      @DeoMachina 7 years ago +3

      "Hey guys we might have to make some difficult choices about how we program these machines"
      "OMG EVIL SJW"
      gb2/reddit

    • @DeoMachina
      @DeoMachina 7 years ago

      Sakhmeov
      Fuzzy about the details? She gave you a specific scenario that really happened, with the people it really happened to.
      Hippy dippy moral argument? I guess, if you think "This is immoral" is something only hippies say. But there's a clear pragmatic argument here too.

  • @kght222
    @kght222 7 years ago +2

    6:10 When it comes to the data that the machine might use to hire someone, that data was input by a human. Its accuracy is itself subjective, but the computer doesn't know that. As humans we can recognize where the machine would have deficiencies like that, and not use it for such things until we have fed it objective information (security cameras are a thing ;P).

  • @CurlyChrizz
    @CurlyChrizz 7 years ago

    Very important topic to talk about!

  • @rogeliomoisescastaneda7396

    I totally agree: computing machines, like any other tool humanity has created, are just an extension of our capabilities, not a replacement.

  • @RosellArriolaEvangelist
    @RosellArriolaEvangelist 7 years ago +5

    So right on point, so good!

  • @AhmedAbdAllahSalem
    @AhmedAbdAllahSalem 4 years ago +2

    an excellent talk by a beautiful person

  • @allanlam7669
    @allanlam7669 7 years ago +3

    Has anyone done any work on a 'neural net' model? I heard this term mentioned in a book on neuroscience as a way forward, I guess as it currently stands in theoretical neuroscience.
    I love the idea of machine learning and ethical algorithms. There is an example of this in early education settings. Using a set of questions as Zeynep describes, running through 3 or 4 topics - personal response and consequences, parents' response and consequences, legal response and consequences - and then, with that score, deciding on the best course of action. Say the situation were ambiguous, like a child turning up at a costume party dressed in military uniform. What would the other parents think? Let's weigh the score for each element, total the score, decision.
    What Zeynep is looking for, I guess, is the difference between context-dependent decisions, such as choosing between a rotten apple or a rotten pear (both probably full of extra antioxidants, btw), and context-independent decisions, that is, ones based on that black box she mentions. And the auditing of such is her main impetus. The clear definition of the differences between the two modes of decision is what she requires.
    And thus the auditing of said black boxes, or rather, unbalanced taxonomies, as the studies and data entered into the machine are of one aspect of a whole, rather than the full gamut. Wow, actually, so I guess what she has identified here is an engram model for extracting information from a black box. If one asks the right questions of the black box, one can expose the limitations of said black box's data, and hence be able to conduct further research, or perform it oneself (the research).
    Brilliant talk, Zeynep. It has further inspired me to continue my journey in computation, and it was very entertaining and insightful. You are a brave woman!

    • @michellelam5717
      @michellelam5717 7 years ago

      Allan Shing Wai Lam

    • @pieter2627
      @pieter2627 7 years ago +2

      This 'black box' is the 'neural network model' she identifies, and it is a very popular form of AI. Google's 'deep dream' is one way we can barely get a look inside this box so far, but things are still very complex surrounding the internal understanding of it (as she mentioned).

  • @GTaichou
    @GTaichou 6 years ago

    Code a "show your work" output.
    What do we do when someone comes to a conclusion we don't understand? We ask them how they arrived at it, and often get an answer, even if it's messy and doesn't make sense. I understand everything is easier said than done, but if we can program a computer to learn, why not program a computer to explain its logic? If this, if this, if this, if this all at once, then that ad, that application, that identifier.
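That "show your work" idea is easy to demonstrate for a hand-written rule system, where logging the rules that fired is trivial; the hard part, as the talk argues, is that learned models don't come with such legible rules. A minimal sketch in Python - every feature name, threshold, and weight below is invented for illustration:

```python
# A toy "show your work" screener: alongside its verdict, it records
# every rule that fired, so a human can audit the reasoning afterwards.
def screen_applicant(applicant):
    reasons = []
    score = 0
    if applicant["years_experience"] >= 3:
        score += 2
        reasons.append("3+ years of experience (+2)")
    if applicant["typos_in_resume"] > 5:
        score -= 1
        reasons.append("more than 5 typos in resume (-1)")
    if applicant["referred_by_employee"]:
        score += 1
        reasons.append("referred by a current employee (+1)")
    verdict = "interview" if score >= 2 else "reject"
    return verdict, reasons

verdict, reasons = screen_applicant(
    {"years_experience": 4, "typos_in_resume": 1, "referred_by_employee": False}
)
print(verdict)        # interview
for r in reasons:     # each rule that contributed to the decision
    print("-", r)
```

For a trained neural network there is no such list of rules to print out, which is exactly the "black box" problem this comment is reacting to.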

  • @zionformulasmagicas
    @zionformulasmagicas 6 years ago

    Incredible!

  • @stemfactory7312
    @stemfactory7312 6 years ago +1

    5:11, It's just something we'll have to figure out together.

  • @stemfactory7312
    @stemfactory7312 6 years ago

    1:16, Just faces, I assumed vocal analysis would be a part of it too
    Maybe the networks could broadcast that info during the next presidential debates

  • @stephanesurprenant60
    @stephanesurprenant60 7 years ago +1

    I believe that the point of this talk extends beyond computer algorithms. Many people do not sufficiently appreciate the power they have over the lives of others. The executive who turned her back on the expert here, likely because her questions and doubts were uncomfortable, likely behaves like this routinely. I like to say, contra the Godfather, that nothing truly is business and everything is personal.

  • @MrGrapha
    @MrGrapha 7 years ago

    what you are talking about is important

  • @irethoronar34
    @irethoronar34 4 years ago +2

    Good to hear Turks on TED. Congratulations :)

  • @florbz5821
    @florbz5821 7 years ago

    I had no idea this was a thing! Is it just in America or everywhere in the world? Do employers not interview people anymore?

  • @leo959
    @leo959 7 years ago

    the title explains the entire show

  • @vikramardham
    @vikramardham 7 years ago +3

    Following her argument, one can draw similar conclusions about our own human brain: our brain is a black box too, and we are just starting to understand its functioning. Does that mean humans can't make any subjective decisions? (Well, we probably can't, but the question is, should we not?)
    Nevertheless, an interesting talk. It raises incredibly valid points, but takes the anecdotal evidence to an exaggerated level and makes a huge leap towards the end in drawing conclusions. We have to remember, machine intelligence is still in its very infancy, and making predictions about what role it can or cannot play in our lives is rather a terrible idea.

  • @stemfactory7312
    @stemfactory7312 6 years ago

    7:12, That's what I'm hoping for.

  • @RahulOne1
    @RahulOne1 7 years ago

    Super cool. I was really worried when Google's AI defeated the Go champion.

  • @1rkthevar
    @1rkthevar 7 years ago +1

    exactly, she is 100% right

  • @alexthekunz
    @alexthekunz 7 years ago +6

    So we need an algorithm that checks to see if other algorithms are biased. :p

  • @Meir017
    @Meir017 7 years ago +9

    Person of Interest...

  • @MB-fh1dc
    @MB-fh1dc 7 years ago

    we should build programs that could decrypt and explain to us how these algorithms are working

  • @stemfactory7312
    @stemfactory7312 6 years ago

    4:57, that's probably how it thinks of us.

  • @steverubio6072
    @steverubio6072 5 years ago

    She is 69 years old, but still delivers it powerfully.

  • @caseyharrington4947
    @caseyharrington4947 7 years ago +3

    If you've given it unbiased data then it's objective. You've just made an argument for prejudice.

    • @Ramblingroundys
      @Ramblingroundys 7 years ago +7

      Basically the problem isn't the machine learning system. The problem is what you teach it (or don't teach it). If you feed it data about top corporate performers, but the history has been prejudiced, then the data is going to be prejudiced and the machine will therefore be prejudiced. In the end, the computer system isn't the problem; it's the human element.

    • @Illlium
      @Illlium 7 years ago

      But the history hasn't been prejudiced, it just hasn't been equitable, unless the systems were pulling data from the 60's, and I highly doubt that. Of course you can find data points that are completely out of whack, like the one she presented, but on average the program is probably going to still be more accurate than a human, which she even admits, although it is marginal in that case. The point she brought up at the end though was a very good one, I don't think this can be used as means of dispensing lethal force, or any kind of force at all for that matter, unless it's absolutely immaculate, because it's a very dangerously easy way of absolving responsibility.

    • @caseyharrington4947
      @caseyharrington4947 7 years ago

      The problem faced here isn't the slim-to-none prejudiced data we would give these machines, but the 'prejudice' that these machines would learn on their own that we are not capable of. To use her examples: we as employers can't tell which candidate will be pregnant in a year or who is prone to depressive disorders; machines would be able to. It wouldn't be our system per se; it would be a new system.
      I for one am a huge fan of the fictional robotic rule of never being used to kill a human. The counter-argument to this would be military use, taking the humanity out of war. Which I suppose in part is a good thing since we're overpopulated anyway hahaha
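The point running through this thread - prejudiced history in, prejudiced model out - can be made concrete with a toy "model" that does nothing but memorize historical hiring rates per group. All data below is fabricated for illustration:

```python
# A "model" fit to prejudiced history reproduces the prejudice exactly:
# it has no way to know that the labels themselves were unfair.
history = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def fit_hire_rate(records):
    counts = {}
    for group, hired in records:
        seen, yes = counts.get(group, (0, 0))
        counts[group] = (seen + 1, yes + (1 if hired else 0))
    # the learned "score" per group is just the historical hire rate
    return {g: yes / seen for g, (seen, yes) in counts.items()}

rates = fit_hire_rate(history)
print(rates["group_a"] > rates["group_b"])  # True: the skew survives training
```

Any real learner trained to imitate those labels inherits the same skew; collecting more data from the same process doesn't fix it.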

  • @AISOCIETY
    @AISOCIETY 4 years ago +1

    does that mean it is dangerous to put my thoughts online?

  • @JuanToFear
    @JuanToFear 7 years ago

    Thank goodness this was a reasonable argument on artificial intelligence and not "The robots are taking over!" again... 😧

  • @dahyuhun
    @dahyuhun 7 years ago +2

    agreed :)

  • @beshr1993
    @beshr1993 6 years ago +1

    But why don't we just let AI do its thing, then check the results, and if we don't like them we can impose certain pre-programmed rules on the AI?
    For example, if we find the AI is weeding out people with potential for depression (to use her example), we can impose a pre-programmed rule on the AI not to weed out people who could potentially get depressed in the coming 3 years or whatever.
    My point is that we should not stop progress just because we fear the potential consequences. In fact, history has shown that science will progress anyway, regardless of our fears. Instead, I say we go ahead with progress in an iterative, trial-and-error manner, as in the example above.
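The iterate-then-constrain loop described above can be sketched as a post-hoc rule wrapped around an opaque scorer. The scorer, the attribute names, and the threshold are all hypothetical stand-ins:

```python
def model_score(candidate):
    # stand-in for an opaque learned model; real models are not this legible
    return 0.8 if candidate.get("years_experience", 0) > 2 else 0.4

def constrained_decision(candidate, banned_keys=("depression_risk",)):
    # the imposed rule: strip attributes the model is forbidden to use,
    # then apply the usual acceptance threshold to what remains
    filtered = {k: v for k, v in candidate.items() if k not in banned_keys}
    return model_score(filtered) >= 0.5

print(constrained_decision({"years_experience": 5, "depression_risk": 0.9}))  # True
```

One caveat the talk itself raises: dropping a field doesn't stop a model from inferring it through correlated proxies, so checking the results on each iteration - as the comment suggests - is the essential step, not the rule alone.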

  • @panchri
    @panchri 7 years ago

    9:53 Eddi from RBTV?!

  • @FlavioAmoedoFilho
    @FlavioAmoedoFilho 7 years ago +6

    Computers are just tools.
    You won't ask if a hammer has feelings, just as you won't ask a computer what it thinks about human issues.

    • @AnimalAce
      @AnimalAce 7 years ago +1

      Just a really complicated hammer....that's usually smarter than us.

    • @georgeglez9872
      @georgeglez9872 7 years ago +2

      Flavio Amoedo human brains are just tools also...

    • @FlavioAmoedoFilho
      @FlavioAmoedoFilho 7 years ago

      George Glez Humans, and many other animals, can't live without a brain. So I think it is more than just a tool.

    • @troutdaletim
      @troutdaletim 6 years ago

      Nefarious people program computers

  • @Overonator
    @Overonator 7 years ago +2

    The speaker cites an anecdotal example of how the algorithm screwed over a black woman with no priors and favored a white man with priors. Unless there is evidence that there is a systemic bias in the algorithm to do this over and over, have the people who found this problem forgotten that these algorithms are probabilistic? Meaning that there will be a certain amount of false positives as in the anecdote? Whoever thinks that you can design a system without reflecting the subjective judgements and values of its creators, needs a remedial lesson in philosophy.

  • @IDislikeTheNewYoutube
    @IDislikeTheNewYoutube 7 years ago +5

    Find me two people that agree on every subject of morality and you may just have a salient point.

    • @IDislikeTheNewYoutube
      @IDislikeTheNewYoutube 7 years ago

      Not sure what you are going after here, but my point was that morality is a human invention of the purely hypothetical, and we are all going to have our own conflicting definitions and limits. It's basically all bullshit.

    • @gbiota1
      @gbiota1 7 years ago

      I Dislike the new YouTube I don't think morality is purely an invention; I think there is lots of evidence that it is an evolved trait shared by most *animals* but not all forms of life. The way the impulse is channeled does usually involve some type of engineering, as is seen in religions. The fact that there are things that are alive without moral systems indicates you could have intelligent life without it too, so long as the mechanism of its creation is not biological evolution. Morality is a crutch used by nature to allow groups of non-relatives to form. If an intelligence is operative in creating a new type of life, no such crutch is necessary.

    • @IDislikeTheNewYoutube
      @IDislikeTheNewYoutube 7 years ago

      Intriguing point about other life not necessarily creating it, but you are mashing together concepts of all types of social paradigms and concepts of "getting along" within a group or society. Morality has some spillover into most venues of, let's call it, "tribal" behavior and interactions between humans, but it's not the only effect in play.
      Nor, to my original point, is it altogether as important, meaningful or universal as folks make it out to be. We each have our own definitions and delineations that shift daily and situationally, so most conversations about morality are simply moot.

    • @Winchestro
      @Winchestro 7 years ago

      Our morality is based in instincts, but most of it comes from and is the inevitable consequence of our general intelligence. We can also always override our instincts if we want, and it's even required as our instincts are specific to our own evolutionary history and merely shortcuts for specific situations, most of which aren't even relevant any more. The ability to solve any kind of problem and even reconfigure ourselves to become anyone else. I'd argue there is a core morality that is inevitable for any general intelligence, no matter where it came from and what hardware it's running on.

    • @Winchestro
      @Winchestro 7 years ago

      If there's a truly objective core to morality people wouldn't need to agree on it. Just like people don't need to agree on math. And it also really doesn't matter this much what we do here on earth as long as we don't do something stupid like fucking up our climate or start a nuclear war. We basically won earth, there's no point in further struggle.
      But going forward and looking at this quite big universe, those questions become more relevant. Math would be a great way to communicate with other forms of general intelligence, as it's something they may very well have discovered independently. An objective core to morality would also be tremendously helpful in that regard.
      That's one of the most important questions AI will hopefully help us to answer. In a universe so tremendously huge it would be essential for our long term survival to figure out if we can expect to meet lots of potential friends or inevitable enemies. If we should head for other stars or look for the darkest and most remote places to hide in.

  • @keithbell9348
    @keithbell9348 6 years ago +1

    Notice the reaction of her co-worker. The reality of the problems in her efforts was too much to bear, so she immediately "ran away" as fast as she could. Not just insulting, but you know the potential damage it could produce also disturbed her.
    Doubtful that any "perfect" machine could have explained what she said in this presentation as well as she did.
    Unless of course an "imperfect" HUMAN programmed it to say it...

  • @ManicMindTrick
    @ManicMindTrick 7 years ago +1

    In the end we are going to have to give over control, as digital systems are so much better than the human brain at collecting data and making correct decisions. We are just seeing the beginning of this trend, and it's no surprise the systems are pretty unsophisticated and make beginner mistakes at this point. As time passes it's going to be obvious that not handing over control is foolish, just like it's going to look foolish to drive yourself and not let your self-driving system make all the decisions, which makes you statistically 1000% safer on the road.
    I can definitely see a future where the world is coordinated and controlled by a single super AI.
    We can just hope such a thing finds value in human life in the end as it evolves far beyond human intelligence.

    • @Winchestro
      @Winchestro 7 years ago

      Single super AI seems reasonable until you realize in reality there's a thing called "latency". If I were to take one part of your brain and pull it out I would slow down your entire thought process. You'd be better off without it entirely very quickly. We will most likely have a bunch of local installations who will be more or less like humans with hopefully a little less of our inherited evolutionary bullshit.

    • @ManicMindTrick
      @ManicMindTrick 7 years ago

      Winchestro
      Considering the human brain is able to produce general intelligence at roughly 100 m/s signal speed, I don't think the speed of light in a digital system is going to be a factor here...
      I'm not sure I follow your point about latency in a machine substrate.

    • @Winchestro
      @Winchestro 7 years ago

      You don't need to compare it to humans, but other AI. It wouldn't really make sense to talk about it in terms of general AI as it doesn't exist. So let's talk about humans. Our brain goes great lengths and "wastes" a lot of processing power on sophisticated prediction algorithms just so it can hide latency ( especially visual ) from our consciousness. This is probably because the speed at which we perceive time is attributed more to latency than to processing power. It's such a huge advantage to perceive time to pass slower that it's even worth dedicating processing power on "fake" input for us to work with.

    • @troutdaletim
      @troutdaletim 6 years ago

      Watch "Colossus: The Forbin Project" and then wonder how close we actually are.

  • @Vikingofriz
    @Vikingofriz 7 years ago +1

    I think there's no problem really. We can make these algorithms 'more moral' by changing the weights of the so-called neurons, but the question is 'Is it worth it?'. I mean, if a program shows that a pregnant woman is more likely to be depressive, then it is true. It's just statistics; we can't argue with that. And usually programmers need this true information, not a version filtered through morals etc. But if it's necessary, it can be changed to work the way we want; it can take into account everything we call moral.

    • @voxlz
      @voxlz 7 years ago +1

      The problem is the consequences. What if every algorithm decides you are not good at working, and therefore you never get hired? If we don't think about this, some people may never get jobs just because some bias says they make less money. No program should exclude people because of bias, just like we humans should not exclude because of age, gender or chance of depression. Because it's just a chance, nothing more.

    • @Jarb2104
      @Jarb2104 7 years ago +1

      +Torben Nordtorp
      The problem comes when you black-box the answer into a simple yes or no from the computer.
      If it showed a very detailed description of what it evaluated, the weight it gave each evaluation, and the conclusion it reached, then the person hiring could make a more informed decision as to whether or not a person should be hired.
      And no, the person talking here is making a BS argument when she says we don't know what is going on behind the scenes. Because we do know what is going on, and the people asking for the software should request this information and should refuse to use any software where the company withholds it.

    • @Vikingofriz
      @Vikingofriz 7 years ago

      ***** that was just a mistake, a wrong algorithm, so I think it was silly to show this in the video as an example of the problem

    • @Vikingofriz
      @Vikingofriz 7 years ago

      ***** omfg, dude, where do you see BIASES in my words? It's not biases, it's just truth that is proved by statistics

  • @mc780
    @mc780 6 years ago

    7:20 9:28 9:55

  • @heylisten917
    @heylisten917 7 years ago

    Again, people complaining about other people's biases because... they are not their own biases. Which are, of course, perfectly moral and don't need to be questioned. What some call bias is someone else's morals, and vice versa.

  • @5xmasterx548
    @5xmasterx548 5 years ago

    If you're in Demet's class then good job

  • @swordwaker7749
    @swordwaker7749 7 years ago

    OK, for open-ended questions we may program a computer to answer from different angles, teaching it a procedure of thinking by splitting a problem into parts that it can interconnect and split again, so machines can follow procedures that are logical like humans. For example:
    "what is ten plus nine plus twenty?"
    The system can connect and split, and after hundreds of hours of computation, a translation system turns it into 10+9+20. There's a little misspelling there, but a real AI should be able to handle that.
    Now, with a calculation system and memorized rules that it can create or destroy: 10+9+20=39.
    Then it translates back.
    After hours of speaking with humans it would detect that the program isn't supposed to show its translation system, so it goes straight to the calculation. And here's a thing: it may explain how to calculate if the asker doesn't know, but if the asker does know, then change the question to "what's 5748963121547*783146952". That question is too hard for a human to calculate, so instead it would say
    "this question is too hard for a human to calculate", or whatever; if speaking to another machine, it can transfer the calculation procedure.
    To name the procedure it might use a dictionary and its experience of using it.
    As for the people here, I think it's right to hire or pick out; it's just important to insert unbiased values, or make them ourselves. Google has an algorithm that must be connected to show ads to the proper age, gender, or whatever.
    And the failure is everyone's, because everyone also has a different way of thinking, so connection might solve all this.

  • @MatheusPB
    @MatheusPB 7 years ago

    sorry, but nowadays many, many people from abroad think Buenos Aires is the federal capital of Brazil. No trouble, actually; Watson says Toronto is a US city....

    • @eFrog27
      @eFrog27 7 years ago

      Idk who you know from abroad, but I don't know anybody who doesn't know the capital of Brasil. LOL

  • @duyminh5219
    @duyminh5219 2 years ago

    Some notes: AI decision making --> we cannot outsource ethical responsibilities to intelligent machines (at the end of the video)

  • @Heavenlydreamer
    @Heavenlydreamer 7 years ago

    #Human #morals, the subjective decisions. But of the complex control of life's viewing of intelligent, of self machine. as explain in the body. If only 2 be left to it's own cosmic outsources of responsibilities. AI it understands self physics. the Q is, but those it still dream. for the end of life's black hole is the sleeping of the eyes that close. : ] sorry I con-put with worlds.

  • @kathywolf4558
    @kathywolf4558 7 years ago

    AI MUST learn there are humans who are not violent immoral people who seek the destruction of others. Not all humans hate other humans and want to do harm for greed, power etc.... AI must be able to determine the difference, especially the AI that is being developed for military activities. What happens when AI does not distinguish between aggressive violence and a population that is not aggressively violent?

  • @youtubeuserbg
    @youtubeuserbg 5 years ago +4

    The moment when you realize, Ted Kaczynski was right all along! lol

  • @mariateresavergara4090
    @mariateresavergara4090 7 years ago +5

    Can we replace lawyers and judges with a computer? It will be much cheaper.

    • @pianotube2163
      @pianotube2163 7 years ago

      maria teresa Vergara if you like the piano🎹🎼please check my account🎼

  • @Ultrajuiced
    @Ultrajuiced 7 years ago

    Maybe the people with higher risk of depression have that risk because there are such machine learning "algorithms" that deny them jobs?

  • @leunamtzam
    @leunamtzam 6 years ago +1

    42...

  • @mzatmaca
    @mzatmaca 7 years ago

    1:53 nice laughing

  • @BlastofFreshAir
    @BlastofFreshAir 7 years ago

    we should design a computer to compute philosophy before we create ones for war etc. and see what it finds

  • @danthedingo
    @danthedingo 3 years ago

    I think the goal is to shut out depressive people from the market. Why would you want that?

  • @Ka9radio_Mobile9
    @Ka9radio_Mobile9 6 years ago +4

    I think she's super cute! :-)

  • @4relevants
    @4relevants 7 years ago

    It's not hard to create a system which can explain its own decision process and then improve the algorithm. Let's not put PC vs AI.

  • @ChisanguMatome
    @ChisanguMatome 7 years ago

    I for one welcome your machine overlords and hope they are merciful.

  • @arianna2243
    @arianna2243 3 years ago

    Shouldn't machines be programmed to understand that some issues, especially human morality and mortality, are too complex? Some human issues need human resolution; in these cases it would be best for machines to alert the proper individuals - multiple people, for accountability.
    Ultimately, machines make sense; they are logical and constant, but binary may be the one thing humans are not.

  • @corescopeplays2789
    @corescopeplays2789 7 years ago

    Computers gonna rule da worldddddddddd

  • @prathameshsonar
    @prathameshsonar 6 years ago +2

    facial expressions :)

  • @LeonidasGGG
    @LeonidasGGG Před 7 lety

    "Oh my God! Change!" - fearmongering talk. All machines fail; that's why we keep building better ones... Live with it.

  • @danthedingo
    @danthedingo 3 years ago

    Bro, everyone wants to complain about the ethics, but nobody wants to put in the work. As if it were easy to build algos. Corps are hiring people to build fancy stuff as fast as possible and putting ethics to the side because, obviously, money. You need to either appeal to the money side of these people or go build better programs.

  • @OZAN220
    @OZAN220 7 years ago +1

    RAISE THE FLAGS, RAISE, RAISE, RAISE

  • @ldohlj1
    @ldohlj1 7 years ago +1

    No disrespect, but I didn't enjoy her presentation style...

  • @Klayhamn
    @Klayhamn 7 years ago

    Westworld brought me here.

  • @huseyinaslan1690
    @huseyinaslan1690 7 years ago

    Turkish subtitles, please.

  • @yang4420
    @yang4420 a year ago

    Awesome

  • @user-ke9nh6pw5t
    @user-ke9nh6pw5t 7 years ago +1

    I'm first

  • @duckdumbsmartpplimnotbored5175

    26 855th view
    1001st like
    162nd comment

  • @amarnour8959
    @amarnour8959 7 years ago

    third

  • @DrMateen36
    @DrMateen36 7 years ago +14

    dem thighs

  • @evionlast
    @evionlast 7 years ago

    Steven Universe's Dogcopter

  • @abhimanyukarnawat7441
    @abhimanyukarnawat7441 7 years ago

    Stop it, we'll die.

  • @MedvedPrevedPoka
    @MedvedPrevedPoka 7 years ago

    Nonsense. There are ML algorithms which are not "black boxes". There are clustering ML algorithms which do not need training data, so being biased is not an issue. She brings up examples of poor ML algorithms with obvious flaws - that does not mean there are no good ones. It is ridiculous how biased her speech is, considering its theme. Moreover, if human ethics is not solvable, then there is no point in using it in the first place; if it is solvable, then it's not a problem for ML algorithms to use it - it's just a matter of time.

  • @garrikcook5940
    @garrikcook5940 7 years ago +1

    "Do we really want a machine learning HR robot that can leave out depressed people?" me - "Well yeah, if it naturally weeds out the undesirables, then of course. This is called market Darwinism, the natural state of capitalism."

  • @TonyStark-lt7uv
    @TonyStark-lt7uv 6 years ago

    This topic isn't that meaningful, and it has nothing to do with computers or machine learning. You are suggesting how to make a bias-free decision when we humans are ourselves coded to be biased. Every single human that lives or has lived is biased, as is everything we have invented - and so is your presentation. Understanding that bias is unavoidable and not necessarily bad, we only need to learn how to live with bias instead of saying "bias is bad and we need to invent machines to make unbiased decisions". Thinking that machines make better decisions than humans is just dumb; if a company hires people by machine, it will not be very far from bankruptcy. Trust me.

  • @MrC0MPUT3R
    @MrC0MPUT3R 7 years ago

    Moist

  • @MarkMagill
    @MarkMagill 7 years ago

    Machines will displace workers at a much faster rate. Time to take up homesteading!

  • @drackar
    @drackar 7 years ago

    I came in to ask whose morality she's talking about... but in reality, this isn't about morality at all.

  • @neotronextrem
    @neotronextrem 7 years ago

    Morals... are for the weak.
    Morals are a structure, made by society to work more effectively.
    The problem is: our morals are old.
    Not relevant anymore.
    We don't need morals anymore.

    • @Hotsnown
      @Hotsnown 7 years ago +8

      Edgy

    • @CinereousDove
      @CinereousDove 7 years ago +8

      If you make decisions - even the decision to blindly follow a computer like some god - you have morals. And not taking responsibility for the morals you decide upon, that's what's weak.

    • @neotronextrem
      @neotronextrem 7 years ago

      Xenophone I'll give you that, but I think your philosophy is called ethics.
      You say an amoral person is not loyal to his beliefs; I say: sure he is, just in a more rational way.

    • @CinereousDove
      @CinereousDove 7 years ago

      *philosophy - and ethics is *a philosophical discipline, not a philosophical school/movement. And (bluntly apparent as you will see) I am leaning towards existentialism.
      You mean an amoral person - a person who has no morals? And no, that's not what I meant - I meant that claiming you have no morals is inauthentic, since in every act you take, you construct an idea of how to act simply by the way you act. So rather than saying someone is not loyal to his beliefs, I would say he's not "loyal" (that's kind of a problematic word to use in this context, but English isn't my mother tongue and I won't look this up right now) to himself, insofar as he is constituted through the actions he takes.
      And how can there be a more rational way of being loyal to your principles? A more direct way, probably - but taking the "direct way" no matter what is already a principle...
      Rationality is a very arbitrary moral construct anyway (I thought you didn't like those...), used to make something seem "necessary" and not take responsibility.
      btw. If you read up to this point, congratulations.

    • @neotronextrem
      @neotronextrem 7 years ago

      Xenophone We see morals differently, then.
      I see morals as a human construct to build on, even if not completely convinced by it.
      But I can only say: yes, you are right, I don't argue against my own beliefs.

  • @Ben_D.
    @Ben_D. 6 years ago

    It's an interesting talk, as most TEDs are. But cherry-picking examples like those two criminals is pretty weak. A sample of two people is bad science. It may or may not be accurate, but the method is bad.

  • @natham10
    @natham10 7 years ago

    She is treating computers as self-learning creatures that can change our moral values. And although this can be true, machines are controlled by us! So, if controlled correctly, artificial intelligence could be used for unbiased decisions! I get her point, although I don't understand why she is creating the idea that we are not "in control" of those systems.

    • @georgeglez9872
      @georgeglez9872 7 years ago +3

      Natham Coracini some AI systems are already self-learning "creatures" and can't be controlled by us.

    • @stell4you
      @stell4you 7 years ago +3

      No one at DeepMind understands how AlphaGo managed to win.

  • @ghostfifth
    @ghostfifth 7 years ago

    Wait, they took her out to lunch because they didn't want her there or something? That sounds like a benefit.

  • @ramrod60th30
    @ramrod60th30 5 years ago

    If computers could tell when we're lying, we would have no more politicians and no more marriages, and nobody could talk to anybody on YouTube. Wouldn't life be boring?