The era of blind faith in big data must end | Cathy O'Neil

Share
Embed
  • Added 19 Jun 2024
  • Algorithms decide who gets a loan, who gets a job interview, who gets insurance and much more -- but they don't automatically make things fair. Mathematician and data scientist Cathy O'Neil coined a term for algorithms that are secret, important and harmful: "weapons of math destruction." Learn more about the hidden agendas behind the formulas.
    The TED Talks channel features the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and more.
    Follow TED on Twitter: / tedtalks
    Like TED on Facebook: / ted
    Subscribe to our channel: / ted
  • Science & Technology

Comments • 594

  • @dragoncurveenthusiast
    @dragoncurveenthusiast 5 years ago +280

    5:51 "Algorithms don't make things fair... They repeat our past practices, our patterns. They automate the status quo.
    That would be great if we had a perfect world, but we don't."
    The perfect summary of the talk

    • @coolbuddyshivam
      @coolbuddyshivam 4 years ago

      That's bullshit. There was an experiment with AlphaGo. The AI was set to compete with itself, and it started generating new patterns after some time. Her arguments are all over the place. Black-box algorithms should be banned, but algorithmic biases are removed from an AI over time as it learns from more datasets.

    • @edawg792
      @edawg792 4 years ago +11

      @@coolbuddyshivam I don't think you understand the kinds of algorithms she's talking about. AlphaGo is not remotely the same thing, and you cannot generalize patterns you observe in that very limited use case to algorithms in general.

    • @galacticthreat1236
      @galacticthreat1236 3 years ago +1

      I agree. If algorithms are just echoes (pun intended) of how we think, how does that represent the maker or user of them?

    • @AmoghSarangdhar
      @AmoghSarangdhar 3 years ago +3

      Thank you, saved me 13 minutes
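The quote in the thread above ("they automate the status quo") can be made concrete with a toy sketch in plain Python. Everything here is hypothetical — the "school X" feature and the decision history are invented purely for illustration: a rule learned from biased past decisions scores perfectly on that past, and so reproduces the bias on new cases.

```python
# Toy sketch: a "hiring model" learned from past decisions (hypothetical data).
# Each record: (years_experience, went_to_school_X, was_hired).
# Past managers only ever hired people from school X, so that pattern is baked in.
history = [
    (5, True, True),
    (2, True, True),
    (7, False, False),  # equally qualified, different school: rejected
    (9, False, False),
]

def learned_rule(candidate):
    """A 'model' that replays the strongest pattern in the history:
    every hire went to school X, every rejection did not."""
    _, school_x = candidate
    return school_x  # experience never mattered in the training data

# The rule is 'accurate' on the past...
assert all(learned_rule((exp, sx)) == hired for exp, sx, hired in history)
# ...and therefore automates the old bias on new candidates:
print(learned_rule((10, False)))  # highly experienced, wrong school -> False
```

The "model" is deliberately trivial; a real learner fit to the same data would pick up the same pattern, which is the point of the quote.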

  • @animefreek217
    @animefreek217 6 years ago +43

    YouTube is a great example of an algorithm getting it wrong.

    • @kawaii_princess_castle
      @kawaii_princess_castle 2 years ago

      Of course we need to be very careful with the videos the YouTube algorithm shows us. I use Socrates' three questions before choosing a video:
      Is this 100% true?
      Is this good?
      Is this useful?

  • @ryusm92
    @ryusm92 6 years ago +154

    Her message is, as she said, that "blind faith in the algorithm only sustains the status quo". This is not promoting any feminist or SJW ideas; it's an inconvenient truth that we should wake up to.

    • @sTL45oUw
      @sTL45oUw 5 years ago +5

      Well, it suggests that there's something wrong with the status quo, and that's the core of SJW ideology.

    • @berettam92f
      @berettam92f 5 years ago +21

      @@sTL45oUw What you are referring to is called progression, and anyone who rejects the notion can go back to the Stone Age and be happy with it.

    • @spidermonkey8430
      @spidermonkey8430 3 years ago +8

      I just don't like how she used this to push her leftist agenda

    • @thatgui88
      @thatgui88 1 year ago +1

      YouTube has this problem. The algorithm recommends videos based on what you watch and keeps it that way. The site doesn't recommend videos outside of your views.

    • @bogdanmirzakabilov
      @bogdanmirzakabilov 9 months ago

      @@berettam92f Calling SJW stuff progression. Do you still think so?

  • @pphuangyi
    @pphuangyi 4 years ago +44

    I worked as a graduate math teaching assistant at a very ethnically diverse university. I am not proud of it, but I should admit that I came to the US with an unfounded idea that some ethnic groups are not as good at math as others. What I found through firsthand experience, however, was that I was very wrong. I found only one thing correlated with improvement in math proficiency: how willing you are to work at mastering the subject. It was the most valuable lesson for me as a potential math teacher. And I am also glad that I was able to be open and humble, and didn't perpetuate my unfounded idea through my own mental algorithm for differentiating my students.

    • @jhonshephard921
      @jhonshephard921 2 years ago +3

      There is also the issue of who is enrolling in STEM classes and the cost barrier to entry. Before I started spending my own money on the stuff I had been using for both school and personal projects, I didn't know it was that expensive. Right down to the gaming laptop I was using vs. the crappy Dell ones other students used (and yes, some did have a desktop at home or in a dorm). Since we know there is a historical economic gap among races in the US, dating back to events like the Tulsa massacre, we should understand why there are fewer Black students in Computer Science with us. And I admit I was prejudiced, because I resented that those people weren't grabbing the opportunities offered in this field and joining me in class.

    • @thatgui88
      @thatgui88 1 year ago +2

      So you basically admit you had a bias against certain races? Thank you for confirming that; I'll keep an eye out for racist teachers

  • @mnaftw
    @mnaftw 6 years ago +87

    I work with big data and the exact same algorithms she's talking about, and she's right. This isn't perfect data we work with, and the code isn't made by divinely objective superhumans. It's just us: a team of overworked, underpaid data scientists who are all flawed human beings. Honestly, given the ways you are pressured to simplify or fix code at the last moment, you don't always have the time, the computing power, or even the permission from higher-ups to do the job 100% perfectly every time. Data can be a good tool, but it's as imperfect as anything out there. Math and statistics aren't gods; they are developed by humans and are less perfect than you might think. Also, math isn't objective or subjective. It isn't a complex living mind, and we are far from making it one; we can't even agree on how such a mind should behave, let alone how to build it. So trust big data about as much as you'd trust any salesman.

    • @doubled6490
      @doubled6490 6 years ago +2

      yes because data is wrong.
      lol

    • @DinhoPilot
      @DinhoPilot 5 years ago +1

      @@doubled6490 Underpaid? More like swimming in money... 100k at least

    • @nanox25x
      @nanox25x 4 years ago

      @@DinhoPilot More like overpaid.

  • @berettam92f
    @berettam92f 6 years ago +77

    Surprised to see so many dislikes, and shocked to find out the reasons behind them.

    • @doubled6490
      @doubled6490 6 years ago +5

      Facts are an awful reason.

    • @yunhaozou8012
      @yunhaozou8012 4 years ago

      It's not that the algorithms or big data did something wrong; I just think algorithms and big data shouldn't be used in those situations (the cases presented in the talk).

    • @spidermonkey8430
      @spidermonkey8430 3 years ago +3

      I just disliked it because she used this to push her leftist agenda. Otherwise she made some good points.

    • @theheeze
      @theheeze 3 years ago +5

      @@spidermonkey8430 What are some ways algorithms create outcomes that the "right" is opposed to?

    • @bodysuitguy
      @bodysuitguy 7 months ago +1

      Fat Karen with blue hair trying to tell people what to do. Thumbs down

  • @admagnificat
    @admagnificat 6 years ago +4

    Thank you. As someone who works in a very human field where so-called "Value Added Measures" (VAM) are used to rate the vast majority of employees, I can corroborate that this practice can lead to some very, very unexpected and very, very unjust outcomes.
    I think that people are starting to realize this now, but I'm not sure how ratings will be handled as we move forward -- especially when the rating systems are often encoded into state law (which means that they can be very hard to change, and can stick around long after their fairness has been called into question).

  • @ricardopickman
    @ricardopickman 6 years ago +47

    "Weapons of math destruction" is one of the most enlightening concepts I've heard in recent times. Thank you Cathy O'Neil!

  • @FunkyBukkyo
    @FunkyBukkyo 6 years ago +12

    Watch until the end; there's a conclusion and recommendations... algorithm audits, data integrity checks, feedback, etc.
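The "algorithm audit" recommendation mentioned above can be sketched in a few lines of plain Python. All numbers are hypothetical and the check is deliberately minimal: compare a model's error rates across two groups to surface possible disparate impact.

```python
# Minimal disparate-impact check (hypothetical predictions and labels).
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
actuals     = [1, 0, 0, 1, 0, 1, 1, 1]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

def error_rate(group):
    """Fraction of wrong predictions for members of one group."""
    pairs = [(p, a) for p, a, g in zip(predictions, actuals, groups) if g == group]
    return sum(p != a for p, a in pairs) / len(pairs)

# A large gap between groups is a red flag worth investigating,
# not proof of bias by itself.
print(error_rate("A"), error_rate("B"))  # 0.25 0.5
```

Real audits (error-rate parity, calibration, data-integrity checks) are more involved, but they start from exactly this kind of group-by-group comparison.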

  • @petalss5325
    @petalss5325 6 years ago +11

    My mind is blown. So glad I clicked and thanks for such an insight!!!!

  • @wymanspace4173
    @wymanspace4173 6 years ago +13

    Strange to see the number of dislikes.
    Good insight on the digital world.

    • @orientalmemes6964
      @orientalmemes6964 2 years ago

      Probably people who are judging the content of her TED talk by her appearance. Pathetic

  • @addy1gautam
    @addy1gautam 6 years ago +3

    It's funny how people are assuming she's against algorithms. She's not against algorithms, but against BLIND FAITH in them! They should be used only for assistance, not as the final word.

  • @mrjdavidt
    @mrjdavidt 2 years ago +5

    Literally wrote a paper about ethics in AI and used this argument as the base for my research. Instructor gave me an F and said racial bias and discrimination in healthcare systems has nothing to do with AI 🤦🏾‍♂️.
    Had to resubmit my paper, still waiting on the results. 🤷🏾‍♂️

    • @kombuchas4684
      @kombuchas4684 4 months ago +1

      Are you kidding me? This was 2 years ago. What happened after? I am fuming for past you.

  • @roriksavant
    @roriksavant 6 years ago +19

    They make an excellent point! We put way too much faith in numbers we see.
    Edit: Majority of dislikes, there's a misleading number right there. Did the majority of people watch and deem the video bad? Or did they click the video with a preconception of the lecturer and mathematician, and then dump their hate on them in any way they could?

    • @LuxiBelle
      @LuxiBelle 6 years ago +2

      I disliked because I read your comment.

    • @bogdanmirzakabilov
      @bogdanmirzakabilov 9 months ago

      I disliked the video because it's just bla-bla-bla. Here's the plot: badly designed algorithms make the lives of some people worse and more unfair. Oh my.

  • @xerxes666
    @xerxes666 6 years ago +3

    ✨EXCELLENT.. .Video ! Thank you!!✨🌟💫✨🎉🎊🎉🎉🎊🎉🌹🌹🌹🌹🌹💗🌹🌹🌹🌹✨🌟💫✨🎊🎉🎊✨🌟💫✨

  • @jaytsecan
    @jaytsecan 2 years ago +1

    My favorite Ted video - so important to think about this in our present age and culture

  • @ViniciusClaro2012
    @ViniciusClaro2012 5 years ago +2

    It's important to distinguish algorithms from models.
    The model carries the concept, and algorithms are part of models.
    Models include entities and rules, while algorithms follow those rules.

  • @Red-sh8wl
    @Red-sh8wl 6 years ago +2

    This is awesome. Something new to think about!

  • @antbryant1
    @antbryant1 6 years ago +5

    Wow, excellent and she is spot on!

  • @YagamiKou
    @YagamiKou 6 years ago +2

    I study programming at university, and all the points she brings up are very common, but we're also taught ways around them....

    • @pookz3067
      @pookz3067 2 years ago

      There is no general technique that fixes all such biases and misuses of data. The techniques get applied in set ways, based on what you yourself happen to think of and your experience of the field.

  • @andreariecken8536
    @andreariecken8536 6 years ago +1

    Ms. O'Neil's work opened my mind to a deeper understanding of what's happening nowadays. Not the laughable game of getting ads on Facebook, but the seriously unethical world we are feeding while using all the tools this new era gave us. It is so scary! I hope smart (and honest) people will soon find a better way of keeping human beings natural beings.

  • @tthecooljose4674
    @tthecooljose4674 6 years ago +5

    Algorithms will never account for the inherent randomness and non-intuitiveness of real-world scenarios. Hence, all those predictive Facebook ads might very well just be mimicking parasites for advertisers

  • @CullenCraft
    @CullenCraft 6 years ago +245

    I'm just curious where all these statistics are coming from if algorithms cannot be trusted

    • @MrGiovaneGuerreiro
      @MrGiovaneGuerreiro 6 years ago +41

      I don't take her message as "we cannot trust algorithms", but as an alert to the risks of someone not using them properly.

    • @johannesstandoft760
      @johannesstandoft760 6 years ago +37

      Her point is that you should look at statistics with the same skepticism and criticism as you would news or anything else. Being aware that statistics can be manipulated, or even unintentionally biased because of how the data was collected, is an important critical thinking skill.

    • @thealayaseverus
      @thealayaseverus 6 years ago +3

      From the past. They are not made to predict the future; they are made to highlight peaks, lows and trends. And nobody can make a sober decision based on that alone =/ Not even the algorithm

    • @Wombat7777777
      @Wombat7777777 6 years ago +4

      Algorithms use statistics, or data in general, to predict things. Statistics and data are just as biased as algorithms, a problem which stems from their creators: us.

    • @summertime69
      @summertime69 6 years ago +2

      Giovane Guerreiro, but that's it, isn't it? If someone uses them wrong. Algorithms are a science, and if you use that science poorly, you get bad results. Math and science are tools that can be used for ill or for good. She states that all these algorithms are bad and math is scary. Weapons of math destruction? Bah! She says bad people use neutral tools, and that we should stop using the tools because she can't think of a time when algorithms are good.

  • @benmaharaj6854
    @benmaharaj6854 2 years ago

    I once worked for a company taking calls from customers. They used to judge customer satisfaction through a follow-up call and an automated survey. Sure, customers who were angry were more likely to take that survey, but at least you were hearing it from them. Then they decided to turn this over to an algorithm that listened to the calls and scored the worker based on that. It was utter garbage, demonstrably inaccurate. A customer could profusely thank you for your help at the end of the call, and this junk code would say they had a bad experience. We employees and local management had no access to the algorithm and very little data on what it was actually looking for; we were just supposed to trust the process. What it came up with factored into our performance scores and ultimately our raises. It wasn't long before I left.

  • @koayact
    @koayact 6 years ago +1

    Good work on data ethics! It plants thinking seeds for those who may not know they are holding on to false assumptions used to sustain confidence in the system. Like Oneself-Check-Ownself.

  • @monicahhinga6726
    @monicahhinga6726 1 year ago +3

    This is so amazing. There is such a great need to take into consideration the results produced by AI algorithms

  • @shooshydooshy
    @shooshydooshy 6 years ago +185

    Wow! I'll admit, when I saw the thumbnail I was afraid this would be some sort of weird speech from the deep depths of tumblr. Then I listened, no bias in between and... she's right! Machines work like they're told to work. And who tells a machine how to work? Humans. So if that human doesn't think about their own prejudices, or about past prejudices as well... the machine won't, either. And the algorithm that human made will act accordingly to how it was told to act.

    • @DeusExAnonymus
      @DeusExAnonymus 6 years ago +13

      Shoosh, like when Google uses its algorithms to silence political dissent?

    • @jamisonleckenby2355
      @jamisonleckenby2355 6 years ago +9

      Shoosh, I don't have time for a full response, but I felt the same. Now all I can say is: look at how YouTube is censoring people in Myanmar and tell me this lady doesn't have a good point about secretive algorithms.

    • @DeusExAnonymus
      @DeusExAnonymus 6 years ago +1

      Jamison Leckenby, the idea that TED would intentionally post a video that criticized YouTube is laughable. This was in no way about YouTube's algorithms.

    • @jamisonleckenby2355
      @jamisonleckenby2355 6 years ago +3

      DeusEx Anonymus They didn't specifically call out YouTube. While specific algorithms were criticized, the point was that all algorithms can't be blindly trusted simply because someone else made them. I pointed this out myself using YouTube as an example.

    • @loukas371
      @loukas371 6 years ago +14

      Shoosh finally someone who focused on the point of the talk instead of stereotyping her... You are exactly right.

  • @solanahWP
    @solanahWP 5 years ago +11

    Well, she spoke about obscure algorithms targeting voters back in 2015! (That video is still on YouTube.) Long before you-know-who was elected. So, basically, she called out Cambridge Analytica before it even was (fake or not) news.

  • @_infinitedomain
    @_infinitedomain 6 years ago +3

    Great talk, important critical thinking

  • @sagniksinha5831
    @sagniksinha5831 9 months ago +1

    Extensive checks are done for use cases in the insurance and finance industries. The examples about credit card offers and insurance premiums are not correct; checking by gender is extensively tested.

  • @unklarnamenpflicht
    @unklarnamenpflicht 2 years ago

    Great talk, thanks!

  • @docskate4312
    @docskate4312 6 years ago +7

    The era of blind faith in big data should end.
    But it won't.
    The stuff this speaker spoke about is the key to making big money in our days, where making ends meet has become more and more difficult. Algorithms are the tools to extract more money out of people and will always be shaped for this purpose. All aspects of the matter, socially and also politically can be broken down to this one goal.
    Money.
    If we don't change - why not picture our species ending up in some skynet-like crap?
    Control over data is an ongoing war not just since yesterday.
    Here in Germany I sometimes get the feeling that we have already lost this fight by simply obeying and just continue walking our path, consuming all offers and giving away all of our personal data thankfully.
    And yes - I use the internet, but hate social media.

  • @nileab5717
    @nileab5717 6 years ago +73

    Just a reminder that she holds a PhD in math from Harvard, and she is not biased or anti-science at all. She might sound like feminists and postmodernists, but don't forget she is talking about fairness. I'm not a mathematician, but I know there is always a level of subjectivity in modeling and algorithms, and it comes from the variables used or omitted, the methods chosen versus alternative methods, etc.

    • @torvaderon
      @torvaderon 6 years ago +4

      When it comes to reality you should consult a physicist not a mathematician.

    • @SummaPlusANumberGrrr
      @SummaPlusANumberGrrr 6 years ago +4

      Sure, but she's just very bad at forming a strong and consistent argument. Therefore it's not clear what her message is, which is surprising since the "TED way of talking" is to have one very clearly shared message. In this talk, there's tons of half-thoughts and half-examples, and therefore her "big conclusions" appear on wobbly ground.

    • @doubled6490
      @doubled6490 6 years ago +5

      PhD in math = god and unquestionable overlord of our pitiful race we call humans

    • @nileab5717
      @nileab5717 6 years ago +7

      Double D I'm way more iconoclastic, pessimistic and critical than you ever were or will be. I mentioned her math degree to show that she is not a dummy philosopher or sociologist talking about math. She is an expert mathematician talking about math.

    • @doubled6490
      @doubled6490 6 years ago +3

      Except there is no math in the video.
      Please stop being such a dummy.

  • @ViniciusClaro2012
    @ViniciusClaro2012 5 years ago

    Another point is the difference between weak AI and strong AI. This is important for evaluating the scale and influence of the kinds of algorithms O'Neil addresses.

  • @jacobomunozcervero6971

    So what are key implications of a data-driven strategy for managing?

  • @mikehart4172
    @mikehart4172 6 years ago +1

    While some of the terminology and presentation of Cathy's argument were polarizing for many viewers (as the like/dislike ratio shows), her point about how data collection and interpretation can be skewed is valid and needs to be addressed. While I cannot say I am a good source to rely on, I am an informed person who understands the importance of correcting skewed data. I support her view on having more oversight over the inputs and outputs of these algorithms, since many companies will not change them unless someone can prove they are losing money or forces them to correct it.
    As for the presentation itself, Ms. O'Neil should have gone for a more objective, more informative title to increase viewership and keep political and social bias from becoming a factor. The same problem arose from her dress and terminology causing viewers to ignore her point, which could have been remedied by more experience speaking publicly (but she is a scientist, not a speaker), so that she could properly trim and present her point without appearing nervous or biased.

  • @carlycarlucci1301
    @carlycarlucci1301 3 years ago +1

    I’ve now seen her in at least two documentaries! Persona on HBO Max being the latest.

  • @ViniciusClaro2012
    @ViniciusClaro2012 5 years ago +3

    The most important thing I heard from O'Neil is that algorithms "include" opinions.
    Perhaps algorithms should be analyzed by psychoanalysts... [?]

  • @johnperry5933
    @johnperry5933 4 years ago +2

    A possible victim of unethical insurance practices brought to light: my wife and I bought a convenience store on a corner of the city that had the second-highest crime rate. To be fair, the downtown area in general had statistically low crime. We applied for Obamacare because my wife was leaving the private sector to work with me in the store. Within three months of being on Obamacare, our premium increased 68% for no reason. We couldn't afford to stay on that coverage, so she went back to work in the private sector. This left us with a gaping hole in our management. After running the business by myself for 18 months, I had run it into the ground and my health started to suffer. The algorithm the insurance company used to assess our risk created the very problem it was designed to protect against.

  • @KateLate____
    @KateLate____ 6 years ago +5

    There's an assumption here that a single algorithm is being used, and that it's simple enough to be written on a piece of paper.
    Algorithms are very complex now, and cannot be evaluated by "eyeballing" them. Whoever developed them would need to disclose their full data set and their approaches. Maybe this should be done. But it's not as simple as sending an email.

  • @daanush468
    @daanush468 1 year ago +1

    Anyone who found the Ted Talk to be good, definitely needs to go through her book, "Weapons of Math Destruction". A worthy, concise read that covers a lot of sectors where algorithms are biased.

  • @Shub99
    @Shub99 6 years ago

    Her book, Weapons of Math Destruction, came out in 2016... strange that, for such an important subject, her talk came a year later...

  • @helenateresinhareinehrstof874

    We need to learn more about algorithms, they can do amazing things and also very dangerous things

  • @Jaiven
    @Jaiven 1 year ago

    Couldn't agree more. "Algorithms don't have systems of appeal." We need to change that.

  • @vishualee
    @vishualee 6 years ago +1

    "Doing Data Science" brought me here! :D

  • @leomintu2351
    @leomintu2351 3 years ago +1

    Wish she was one of my professors!

  • @bri4njeff3rs0n
    @bri4njeff3rs0n 3 years ago +1

    9:28 Most people confuse US racism and thoughts of supremacy to mean Neo-nazism; most of the time it doesn't, as that is only the extreme end of the spectrum. The concepts refer to outcomes based on widespread attributions of both conscious and primarily unconscious propensities for treatment based on visual cues relative to someone's background. That is either a sense of someone deserving full services, being given unearned trust, and being shown respectful, appreciative, friendly interaction OR contrarily any thoughts of unworthiness, seeking excessive control over the behavior of someone (mistrust), corner cutting when servicing someone, and/or dismissal of someone based on traditional visual stereotypes.

  • @annjuurinen9484
    @annjuurinen9484 6 years ago +5

    This is excellent. Many of the crappy decisions I see every day in business are based on terrible data. Not only that, the people who depend on them are basically lazy. Logic is about laziness: using a system instead of actually thinking.
    We didn't get this far using data or algorithms. We used our collective and individual brains and experience. People are lazy thinkers. They prefer systems because they don't want to actually think.
    Logic is about excluding data, and not including data or information leads to all the really poor decision-making around me. It is also about the past: not the present, not the future. The past. And it is a terrible narrowing of human thought.

    • @doubled6490
      @doubled6490 6 years ago

      please read the top comments of this video: they should change your mind a little.

  • @diyarawat7634
    @diyarawat7634 1 year ago

    The best TED talk on data. Truly inspiring

  • @VikashKumar-kw7de
    @VikashKumar-kw7de 6 years ago

    Just amazing

  • @akplayer007
    @akplayer007 5 years ago +4

    Why would any sane person want to work at Fox News?

    • @spidermonkey8430
      @spidermonkey8430 3 years ago +3

      Of course a blue-haired liberal would call out Fox News whenever they can.

  • @ashajjar
    @ashajjar 3 years ago

    It is not a problem with the algorithm... it is what you feed it that causes the problem. In my early days at university (Computer Programming 101, I guess it was), the instructor once said: this machine is a GIGO machine. If you feed it garbage in, it will produce garbage out.
    Of course, she does have a great point here. However, what I am stressing is that our modern human societies are so ideological that we are not even able to recognize it anymore
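The instructor's GIGO point in the comment above can be shown in a tiny sketch (toy numbers, purely illustrative): the computation itself is flawless, but one garbage input makes the flawless computation give a useless answer.

```python
# GIGO in miniature: the math is correct; the input is not.
readings = [21.3, 21.5, 21.4, 999.0]  # last sensor reading is garbage
average = sum(readings) / len(readings)
print(round(average, 1))  # a 'correct' average of bad data: 265.8
```

Nothing in the arithmetic flags the problem; only inspecting the input data can.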

  • @josht9518
    @josht9518 6 years ago +3

    She's like a mathematical Immortal Technique

  • @brandonswitzer3907
    @brandonswitzer3907 6 years ago

    At first I was like "you're wrong", but after some listening, she has a point.

  • @KateLate____
    @KateLate____ 6 years ago

    One big bias in how many likes and dislikes a video gets is that we can see the results before we vote, before we even watch the video. If I see a lot of dislikes, I'm biased before I even start watching, looking for flaws, and I react more harshly when I do see something that is a flaw.

  • @arthipex8512
    @arthipex8512 6 years ago

    To be honest, the point she's making is correct. Bias in input caused by humans will cause bias in output. However, doesn't an algorithm that was biased in such a way correspond more to our human nature? The solutions it might come up with might not always be the best, but they are for sure more "human" in their nature.

  • @CalLadyQED
    @CalLadyQED 6 years ago

    It prompts the question, but it does not beg the question

  • @vorlonagent
    @vorlonagent 6 years ago

    This sounds like a job for a corpus callosum. No single algorithm is perfect, but three different thinking methods might work together to create better results than any one individually.

  • @MacoveiVlad
    @MacoveiVlad 6 years ago

    It highlights some dangers talked about in other TED Talks, but the tone and "activism" irk me. Overall the message is correct: we should not let computers do dumb stuff. If we input garbage, the computer outputs garbage. But that is why smart people tackle this problem. If society reaches a point where it depends too much on stupid algorithms, lawmakers should intervene, but in that scenario we likely have even less capable lawmakers, since they are part of the society that allowed those stupid algorithms to happen. So we need to be careful, but if it happens we are screwed.

  • @singularitybound
    @singularitybound 6 years ago

    Man, I think people just reacted to the title, lol...
    which the talk isn't actually about. It's ironic, really, because doing so is kind of her exact point.

  • @Ayplus
    @Ayplus 6 years ago

    Made perfect sense to me.

  • @user-el6zt2ox3e
    @user-el6zt2ox3e 6 years ago

    It's true that passing something through an algorithm makes it seem objective. Rather than blindly believing what an AI produces, we humans will have to keep asking, from now on, whether it is really correct.

  • @TheWarrrenator
    @TheWarrrenator 6 years ago +1

    I agree with the argument and premise presented by the speaker, but her public speaking style left something to be desired and made this video tedious to watch. O'Neil is most likely a brilliant writer and researcher, and she was probably just nervous. I love that her outfit matches her hair! But those shoes.... If it weren't for the importance of the subject matter, this should have been a TEDx talk. This is a huge problem we need to solve, and the mathematical tools we use need to be open source so that anyone, and not just corporations with their "secret sauce," can make sure they are used correctly.

  • @malek6021
    @malek6021 6 years ago

    Big data is just a tool, like a knife, and indeed it can be dangerous or amazing depending on our level of consciousness and wisdom.

  • @rohitkohok885
    @rohitkohok885 6 years ago

    EXCEPTIONAL.............

  • @tresenpartychef1891
    @tresenpartychef1891 6 years ago +6

    Is she cosplaying Sadness from Inside Out?

  • @kunstturndiego
    @kunstturndiego 6 years ago +1

    A "Normative Data Inquiry Panel" could be implemented, and every algorithm would be vetted for racist, sexual and economic biases. But the panel's jury members could become the loophole for big data again. Dystopia...

  • @saikee2404
    @saikee2404 2 years ago

    this needs more views

  • @timothytam3203
    @timothytam3203 6 years ago

    I believe what she said is right. However, those isolated examples cannot change the fact that humankind has to live with data. If humans want to achieve precise fairness around the whole world, we will pay an immense and unlimited cost for it. In conclusion, she does state a very real truth, but it is not helpful for building a better society.

  • @asianpursuasion6202
    @asianpursuasion6202 6 years ago +2

    Shouldn’t blind faith end everywhere?

  • @alexanderliu9376
    @alexanderliu9376 4 years ago +6

    This has aged unexpectedly well

  • @Suitswonderland
    @Suitswonderland 6 years ago

    It's the world that made me a loser, not because I am a fucking loser or anything; it's the system. brb, dyeing my hair again.

  • @hauptmannbalalaika
    @hauptmannbalalaika 3 months ago

    When wrong data gets into a computer, including a government computer, you are done for, because you cannot correct it. That is not progress.

  • @EANTYcrown
    @EANTYcrown 6 years ago

    To be fair, the point she is making is not wrong; it is just, in my opinion, poorly expressed due to its deliberate political agenda, which clouds a solid argument. Badly designed algorithms will lead us to bad results (not the most insightful idea, but a logical one nonetheless).
    As a result, I think a more useful conclusion is that we must all work together to make sure the algorithms we use are properly designed, so that they don't repeat the mistakes of our past but rather improve in the future: defining with greater accuracy what we deem success, improving our methods, and eventually arriving at a methodology that decides based only on people's expected results and capabilities, instead of who they are.

  • @Gameboob
    @Gameboob Před 2 lety

    Algorithms are not necessarily objective. They can have errors that create random outputs (like the teacher ratings), and they don't automatically produce fair outcomes but rather yield more of what has already worked (as in Fox News's hiring practices and crime prediction).

  • @SuperMsmystery
    @SuperMsmystery Před 3 lety

    I am a data scientist, and part of my job is to make sure the dynamite can't be used for building a bomb.
    I do algorithm audits.

  • @maya-amf3325
    @maya-amf3325 Před 3 lety

    I read her book, Weapons of Math Destruction. Fairly good book with many convincing examples.
    But I feel this presentation didn't have much substance to it. It felt like one grievance after another without much supporting evidence.

  • @GingerGingie
    @GingerGingie Před 6 lety

    My kids are little, so I tend to think in toddler movie terms... but this beautiful woman totally reminds me of Sadness from the movie "Inside Out," who (is totally blue, love it!) ends up being the key to a healthy interpretation and processing of what happens in a person's life. She's spot on and a brilliant speaker. Thank you so much for another wonderful TED talk!

  • @OBtheamazing
    @OBtheamazing Před 6 lety +1

    The algorithms she is talking about are designed to do one thing only: look at correlations.
    Ah, I see you have had two car wrecks. Statistically speaking, people who have had two car wrecks are more likely to have a third, so your insurance goes up.
    Algorithms look at one thing and see if there is a correlation with another thing. It isn't causation, but it is significant. Everyone uses these. Stereotypes are a great example: they may not represent everyone, but they do represent a fair portion of that population. Just another algorithm.
    The hard part is using enough data to get enough correlations and make it more accurate. Compare two people, one Black and one Asian. Who is more likely to succeed? Statistics would probably say the Asian person if that were the only data you used. Now add more data: the Black person went to college and the Asian person didn't. Now the Black person has a statistically higher chance of succeeding. It is all about the data; usually, the more relevant data, the better.
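    The correlation-and-conditioning logic this comment describes can be sketched with a toy dataset (the groups, the "college" feature, and all the numbers below are invented for illustration, not taken from the talk): a prediction based on one coarse feature can flip once a more relevant feature is conditioned on.

    ```python
    # Hypothetical toy data: (group, went_to_college, succeeded).
    # Values are made up purely to illustrate conditioning on more data.
    people = [
        ("A", False, False),
        ("A", False, False),
        ("A", True,  True),
        ("B", False, True),
        ("B", False, False),
        ("B", True,  True),
    ]

    def success_rate(rows):
        """Fraction of rows whose last field (succeeded) is True."""
        rows = list(rows)
        return sum(1 for *_, s in rows if s) / len(rows)

    # Using only the group, group B looks more likely to succeed:
    rate_a = success_rate(r for r in people if r[0] == "A")  # 1/3
    rate_b = success_rate(r for r in people if r[0] == "B")  # 2/3

    # Conditioning on the more relevant feature (college) changes the
    # picture: in this toy data, every college-goer succeeded.
    rate_college = success_rate(r for r in people if r[1])   # 1.0
    ```

    The point is not the specific numbers but the mechanism: a correlation-based system only ever answers with the patterns present in whatever slice of data it was given.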

  • @FunkyPrince
    @FunkyPrince Před 6 lety

    "Nobody in NYC had access to that formula, no one understood it." If no one has access to it, that doesn't mean no one understood it.
    Anyway, she's only covering one branch of machine learning, the supervised one, while we actually have more learning paradigms, like unsupervised and reinforcement learning.
    Learning algorithms were designed to simulate natural learning processes, and a bias system is essential to learning. Humans do exactly the same. She's talking about humans training algorithms with the wrong biases; at that point, wouldn't it be the same to let machines decide?

  • @mrjdavidt
    @mrjdavidt Před 2 lety

    Last update:
    Apparently my professor for this course DOES consider racial bias in AI a serious issue; however, the sources I provided were lacking.
    *Lesson learned:
    Be specific, to the point of boilerplate explanations.

  • @dcphillips1991
    @dcphillips1991 Před 6 lety +2

    I'd argue the algorithms themselves are objective; it's the objectives of their creators that are subjective.

  • @davidlikhovodov2854
    @davidlikhovodov2854 Před 6 lety

    I saw the caption and picture and immediately knew this will be a fun ride.

  • @oldones59
    @oldones59 Před 15 dny

    The marketing of algorithms isn't only intimidation. It's also predatory behavior by people and entities that want to take something from others.

  • @shreeyatyagi
    @shreeyatyagi Před 5 lety

    oh yeah!

  • @hadanamahanah7961
    @hadanamahanah7961 Před 6 lety

    Isn't it a great thing, then, that Fox News doesn't use a learning algorithm for its hiring process?

  • @Synthminator
    @Synthminator Před 6 lety +189

    Postmodernism 101, today on TED!

  • @kylekeogh2731
    @kylekeogh2731 Před 6 lety

    Big data says not to match your hair with your sweater with your shirt....

  • @Wegnerrobert2
    @Wegnerrobert2 Před 6 lety

    I don't really see her point...
    She is effectively stating that machine learning algorithms / big data are being badly implemented.
    An algorithm should only be used with knowledge of what it can and can't do, how it was trained, and what it didn't see during training.
    But this seems like a very basic and obvious concept to me. In the cases where it was violated, I'm sure the people involved would have done a better job if it were just that easy. But even if the algorithm wasn't "perfect," it probably returned better results than previous methods.
    It's not like human judgement isn't frequently biased too, so expecting objectivity from algorithms is naive; but that's okay, everything is progress, and the quality of big data processing will only increase.
    This talk is way too aggressive for my taste ("weapons of math destruction," c'mon).

  • @veradragilyova3122
    @veradragilyova3122 Před 6 lety

    BRAVOOOO!!!!!!!!! I cannot cheer enough this overdue discussion!

  • @imapollard7857
    @imapollard7857 Před 6 lety

    She's right.

  • @rhysmcgreal8786
    @rhysmcgreal8786 Před rokem

    I can be truthful in this modern age: I am discriminated against because I am a male. My point is we shouldn't look at gender. I am all for equal rights, but equality of opportunity, not of outcome.

  • @quanghoancu4154
    @quanghoancu4154 Před 6 lety

    it affects someone's life, so don't do it carelessly.

  • @farvision
    @farvision Před 6 lety

    What should Faux Nudes do to turn over a new leaf? Shut down.

  • @yunhaozou8012
    @yunhaozou8012 Před 4 lety

    The information big data provides isn't 100% correct, and neither are the answers algorithms give. The speaker keeps citing individual cases; they do exist, but they don't represent the whole. An answer that is wrong under one algorithm may well be right under another. Moreover, artificial intelligence is ultimately something humans created; it doesn't think with emotions, and we already know that very well. So, regarding the examples the speaker gave: the fault lies not with the algorithms or the big data, but with using algorithms or big data to make judgments in those situations in the first place. What's wrong isn't big data or the algorithm; the key is under what circumstances data algorithms are applied. So, dislike.

  • @skoky76
    @skoky76 Před 6 lety

    Algorithmology studies?

  • @JanPospisilArt
    @JanPospisilArt Před 6 lety

    0:38 That's not how you use "begs the question". That's not what it means. Stop.

  • @NizarElZarif
    @NizarElZarif Před 6 lety +1

    The problem is not the learning algorithm. SVMs, neural networks, HMMs, and the rest are meant to give the highest-accuracy classification or prediction based on the data; generally, the larger the dataset, the more accurate the result. However, the problem might be how the data was gathered. These machine learning algorithms aren't meant to replace humans but to make their work easier. For example, an algorithm can reduce the time a doctor needs to give a diagnosis, or help calculate the interest a bank or an insurer might charge. I never heard of someone being fired because an algorithm said so.
    So the main point is how we gather the data rather than the machine learning algorithms themselves. And generally, the most popular datasets are clear about how the information was gathered.
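    A minimal sketch of this commenter's point, using invented data: the same trivial "learner" reaches opposite conclusions depending on how its training sample was gathered, so the gathering, not the learning rule, drives the outcome. The labels and sampling scenarios below are hypothetical.

    ```python
    from collections import Counter

    def train_majority(labels):
        """Return the most common label: a stand-in for any learner
        that reproduces the dominant pattern in its training data."""
        return Counter(labels).most_common(1)[0][0]

    # Data gathered from a skewed source (e.g., only one past cohort):
    skewed = ["reject", "reject", "reject", "accept"]
    # Data gathered from a more representative sample:
    balanced = ["accept", "accept", "accept", "reject"]

    # Identical algorithm, opposite decisions:
    model_skewed = train_majority(skewed)      # "reject"
    model_balanced = train_majority(balanced)  # "accept"
    ```

    Any more sophisticated model exhibits the same dependence; the majority vote just makes it visible in four lines.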

  • @harrybarber3255
    @harrybarber3255 Před 6 lety +1

    I can't understand why this video is so disliked. The talk brings up a fascinating problem I had never considered, and it should be looked into on a wide scale. It's just a shame no solution was offered for improving the situation other than scrutiny of algorithms; maybe a way of selecting parameters from unbiased sources?

    • @torvaderon
      @torvaderon Před 6 lety

      Disagreement is actually the best thing that can happen to a good idea, because it can show how good it really is. If you dismiss critics because of the skin or hair color, or the politics, of their originator, you are in the wrong. The only difference that should matter to you is the one between reason and unreason.

  • @kashatnick
    @kashatnick Před 6 lety

    And the problem is that she complains about oversimplified metrics being an incomplete model of the world, yet her presuppositions rest on the very same oversimplification; things like the wage gap are based entirely on crude algorithms comparing apples to oranges. On policing, the crime stats are simply damning once you look at them: the prevalence of suspects willing to shoot back at the police, at rates several times higher than other groups, will skew the data in a way I'm sure she wouldn't accept.
    She's right in a way, but I'm sure her ilk simply wish to skew the data to paint an incomplete picture in the way they prefer.