The era of blind faith in big data must end | Cathy O'Neil
- Algorithms decide who gets a loan, who gets a job interview, who gets insurance and much more -- but they don't automatically make things fair. Mathematician and data scientist Cathy O'Neil coined a term for algorithms that are secret, important and harmful: "weapons of math destruction." Learn more about the hidden agendas behind the formulas.
The TED Talks channel features the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and more.
5:51 "Algorithms don't make things fair... They repeat our past practices, our patterns. They automate the status quo.
That would be great if we had a perfect world, but we don't."
The perfect summary of the talk
That's bullshit. There was an experiment with AlphaGo: the AI was set to compete with itself, and after some time it started generating new patterns. Her arguments are all over the place. Black-box algorithms should be banned, but algorithmic biases are removed from AI over time as it learns from more datasets.
@@coolbuddyshivam I don't think you understand the kinds of algorithms she's talking about. AlphaGo is not remotely the same thing, and you cannot generalize patterns you observe in that very limited use case to algorithms in general.
I agree. If algorithms are just echoes (pun intended) of how we think, how does that represent the maker or user of them?
Thank you saved my 13 minutes
YouTube is a great example of an algorithm getting it wrong.
Of course we need to be very careful with the videos the YouTube algorithm shows us. I use Socrates' three questions before choosing a video:
Is this 100% true?
Is this good?
Is this useful?
Her message is, as she said, "a blind faith towards the algorithm only sustains the status quo". This is not promoting any feminist or sjw ideas, it's an inconvenient truth that we should wake up to.
Well it suggests that there's something wrong with the status quo and that's the core of SJW ideology.
@@sTL45oUw what you are referring to is called progression and anyone who rejects the notion could go to the stone age and be happy with it.
i just dont like how she used this to push her leftist agenda
YouTube has this problem. The algorithms recommend videos based on what you watch and keep it that way. The site does not recommend videos outside of your views.
Calling SJW a progression. Do you still think so?@@berettam92f
I worked as a graduate math teaching assistant at a very ethnically diverse university. I am not proud of it, but I should admit that I came to the US with an unfounded idea that some ethnic groups are not as good at math as others. However, what I found out through firsthand experience was that I was very wrong. I found only one thing that correlated with improvement in math proficiency: how much you are willing to try to master the subject. It was the most valuable lesson for me as a potential math teacher. And I am also glad that I was able to be open and humble, and didn't perpetuate my unfounded idea through my mental algorithm for differentiating my students.
There is also the issue of who is enrolling in STEM classes and the cost barrier to entry. Before I started spending my own money on the stuff I had been using for both school and personal projects, I didn't know it was that expensive, right down to the gaming laptop I was using vs. the crappy Dell ones other students used (and yes, some did have a desktop at home or in a dorm). Since we know there is a historical economic gap among races in the US, dating back to events like the Tulsa massacre by the KKK, we should know why there are fewer Black students in Computer Science with us. And I admit I was racist, because I didn't like that those people weren't grabbing the opportunities offered in this field and joining me in class.
So you basically admit you had a bias against certain races? Thank you for confirming that, I'll keep an eye out for racist teachers
I work with big data and the exact same algorithms she's talking about, and she's right. This isn't perfect data we work with, and the code isn't made by divine, objective superhumans. It's just us: a team of overworked, underpaid data scientists who are all flawed human beings. Honestly, given the ways you are pressured to simplify or fix code at the last moment, you don't always have the time, the computing power, or the straight-up permission of higher-ups to do the job 100% perfectly every time. Data can be a good tool, but it's as imperfect as anything out there. Maths and statistics aren't gods; they are developed by humans and are less perfect than you might think. Also, maths isn't objective or subjective. It isn't a complex living mind, and we are far from making it one. We can't even agree on what that mind should behave like, let alone how to make it. So trust big data about as much as you trust any salesman, for example.
yes because data is wrong.
lol
@@doubled6490 Underpaid? More like swimming in money... 100k at least
@@DinhoPilot More like overpaid.
surprised to see so many dislikes, and shocked to find out the reasons behind the dislikes.
facts are awful reason.
It's not that the algorithms or big data did something wrong; I think algorithms and big data simply shouldn't have been used in those situations (the cases presented in the talk).
i just disliked it because she used this to push her leftist agenda. otherwise she made some good points.
@@spidermonkey8430 what are some ways algorithms create outcomes that the "right" is opposed to?
Fat Karen with blue hair trying to tell people what to do. Thumbs down
Thank you. As someone who works in a very human field where so-called "Value Added Measures" (VAM) are used to rate the vast majority of employees, I can corroborate that this practice can lead to some very, very unexpected and very, very unjust outcomes.
I think that people are starting to realize this now, but I'm not sure how ratings will be handled as we move forward -- especially when the rating systems are often encoded into state law (which means that they can be very hard to change, and can stick around long after their fairness has been called into question).
"Weapons of math destruction" is one of the most enlightening concepts I've heard in recent times. Thank you Cathy O'Neil!
Watch until the end, there's a conclusion and recommendations... Algorithm audit, data integrity check, feedback, etc...
My mind is blown. So glad I clicked and thanks for such an insight!!!!
Strange to see the number of dislikes.
Good insight on the digital world.
Probably people who are judging the content of her TED talk by her appearance. Pathetic
It's funny how people assume she's against algorithms. She's not against algorithms, but against BLIND FAITH in them! They should be used only for assistance, not as the final word.
Literally wrote a paper about ethics in AI and used this argument as the base for my research. Instructor gave me an F and said racial bias and discrimination in healthcare systems has nothing to do with AI 🤦🏾♂️.
Had to resubmit my paper, still waiting on the results. 🤷🏾♂️
Are you kidding me? This was 2 years ago. What happened after? I am fuming for past you.
They make an excellent point! We put way too much faith in numbers we see.
Edit: Majority of dislikes, there's a misleading number right there. Did the majority of people watch and deem the video bad? Or did they click the video with a preconception of the lecturer and mathematician, and then dump their hate on them in any way they could?
I disliked because I read your comment.
I disliked the video because it's just bla-bla-bla. Here's the plot: badly designed algos make the lives of some people worse and more unfair. Oh my.
✨ Excellent video! Thank you!! 🎉
My favorite Ted video - so important to think about this in our present age and culture
It's important to distinguish algorithms from models.
Models embody the concepts, and algorithms are part of models.
Models include entities and rules, while algorithms follow those rules.
This is awesome . Something new to think about !
Wow, excellent and she is spot on!
I do programming at university, and all the points she brings up are very common, but we're also taught ways around them...
There is no general technique that fixes all such biases and misuses of data. The techniques are applied in set ways based on what you think of/experience of yourself and the field.
Ms. O'Neil's work opened my mind to a deeper understanding of what's happening nowadays. Not the laughable game of getting ads on Facebook, but the seriously unethical world we are feeding while using all the tools the new era gave us. It is so scary! I hope smart (and honest) people will soon find a better way of keeping human beings natural beings.
Algorithms will never account for the inherent randomness and non-intuitiveness of real-world scenarios. Hence, all those predictive Facebook ads might very well be simply mimicking parasites for advertisers.
I'm just curious where all these statistics are coming from if algorithms cannot be trusted
I don't take her message as "we cannot trust algorithms", but as an alert to the risks of someone not using them properly.
Her point is that you should look at statistics with the same skepticism and criticism as you would news or anything else. Being aware that statistics can be manipulated, or even be unintentionally biased because of how the data was collected, is an important critical thinking skill.
From the past. They are not made to predict the future; they are made to highlight peaks, lows, and trends. And nobody can make a sober decision based on that =/ Not even the algorithm.
Algorithms use statistics, or data in general, to predict stuff. Statistics and data are just as biased as algorithms, a problem which stems from their creators: us.
Giovane Guerreiro, but that's it, isn't it? If someone uses them wrong. Algorithms are a science, and if you use that science poorly, you get bad results. Math and science are tools that can be used for ill or for good. She states that all these algorithms are bad and math is scary. Weapons of math destruction? Bah! She said bad people use neutral tools and that we should stop using the tools, because she can't think of a time when algorithms are good.
I once worked for a company taking calls from customers. They used to judge customer satisfaction by a follow-up call and an automated survey. Sure, customers who were angry were more likely to take that survey, but you were hearing it from them. Then they decided to turn this over to an algorithm that listened to the calls and scored the worker based on that. It was utter garbage, demonstrably inaccurate. A customer could profusely thank you for your help at the end of the call, and this junk code would say they had a bad experience. We employees and local management had no access to the algorithm and very little data on what it was actually looking for; we were just supposed to trust the process. What it came up with factored into our performance scores and ultimately our raises. It wasn't long before I left.
Good work on data ethics! It plants seeds of thought for those who may not know they are holding on to false assumptions used to sustain confidence in the system. Like Oneself-Check-Ownself.
This is so amazing. There is so much need to scrutinize the results produced by AI algorithms.
Wow! I'll admit, when I saw the thumbnail I was afraid this would be some sort of weird speech from the deep depths of tumblr. Then I listened, no bias in between and... she's right! Machines work like they're told to work. And who tells a machine how to work? Humans. So if that human doesn't think about their own prejudices, or about past prejudices as well... the machine won't, either. And the algorithm that human made will act accordingly to how it was told to act.
Shoosh like when Google uses its algorithms to silence political dissent?
Shoosh I don't have time for a full response, but I felt the same. Now all I can say is: look at how YouTube is censoring people in Myanmar and tell me that this lady doesn't have a good point about secretive algorithms.
Jamison Leckenby, the idea that TED would intentionally post a video that criticized YouTube is laughable. This was in no way about YouTube's algorithms.
DeusEx Anonymus They didn't specifically call out YouTube. While specific algorithms were criticized, the point was that all algorithms can't be trusted simply because someone else made them. I pointed this out myself using YouTube as an example.
Shoosh finally someone who focused on the point of the talk instead of stereotyping her... You are exactly right.
Well, she spoke about obscure algorithms targeting voters in 2015! (That video is still on YouTube.) Long before you-know-who was elected. So basically she called out Cambridge Analytica even before it was (fake or not) news.
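The "machines work like they're told to work" point above can be made concrete with a toy sketch. Everything here is invented (the groups, the records, and the counting "model"); it only shows that a system fit to prejudiced historical decisions reproduces them:

```python
from collections import defaultdict

# Hypothetical past hiring records: (group, qualified, hired).
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# "Training": learn the historical hire rate per group.
hire_rate = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, _qualified, hired in history:
    hire_rate[group][0] += hired
    hire_rate[group][1] += 1

def predict_hire(group):
    # Hire whenever the group was hired at least half the time before.
    hired, total = hire_rate[group]
    return hired / total >= 0.5

# Equally qualified candidates from each group get different outcomes:
print(predict_hire("A"))  # True  - group A was favoured historically
print(predict_hire("B"))  # False - group B's past rejections are automated
```

The model never sees the `qualified` field at all; it faithfully automates the status quo, which is exactly the talk's point.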
Great talk, important critical thinking
Extensive checks are done for use cases in the insurance and finance industries. The examples about credit card offers and insurance premiums are not correct; checking by gender is extensively tested.
Great talk, thanks!
The era of blind faith in big data should end.
But it won't.
The stuff this speaker spoke about is the key to making big money in our days, where making ends meet has become more and more difficult. Algorithms are the tools to extract more money out of people and will always be shaped for this purpose. All aspects of the matter, socially and also politically can be broken down to this one goal.
Money.
If we don't change - why not picture our species ending up in some skynet-like crap?
Control over data is an ongoing war not just since yesterday.
Here in Germany I sometimes get the feeling that we have already lost this fight by simply obeying and just continue walking our path, consuming all offers and giving away all of our personal data thankfully.
And yes - I use the internet, but hate social media.
Just to remind you that she holds a PhD in math from Harvard and she is not biased or anti science at all. She might sound like feminists and post modernists but don't forget she is talking about fairness. I'm not mathematician but I know there is always a level of subjectivity in modeling/algorithm and it comes from variables used or omitted, methods used vs alternative methods, etc.
When it comes to reality you should consult a physicist not a mathematician.
Sure, but she's just very bad at forming a strong and consistent argument. Therefore it's not clear what her message is, which is surprising since the "TED way of talking" is to have one very clearly shared message. In this talk, there's tons of half-thoughts and half-examples, and therefore her "big conclusions" appear on wobbly ground.
PhD in math = god and unquestionable overlord of our pity race we call humans
Double D I'm way more iconoclastic, pessimistic, and critical than you ever were or will be. I mentioned her math degree to show that she is not a dummy philosopher or sociologist talking about math. She is an expert mathematician talking about math.
Except there is no math in the video.
Please stop being such a dummy.
Another point is the difference between weak AI and strong AI. This is important for evaluating the scale and influence of the kind of algorithms addressed by O'Neil.
What do you mean? could you elaborate?
So what are key implications of a data-driven strategy for managing?
While some of the terminology and presentation of Cathy's argument were polarizing for many viewers (as the like/dislike ratio suggests), her point about how data collection and interpretation can be skewed is valid and needs to be addressed. While I cannot say I am a good source to rely on, I am an informed person who understands the importance of correcting skewed data. I support her view on having more oversight over the inputs and outputs of these algorithms, since many companies will not change them unless someone can prove that they are losing money or forces them to correct it.
As for the presentation itself, Ms. O'Neil should have gone for a more objective, more informative title to increase viewership and prevent political or social bias from factoring in. The problem was compounded by her dress and terminology causing viewers to ignore her point, which could have been remedied by more experience speaking publicly (but she is a scientist, not a speaker), so that she could properly trim and present her point without appearing nervous or biased.
I’ve now seen her in at least two documentaries! Persona on HBO Max being the latest.
The most important thing I heard from O'Neil is that algorithms "include" opinions.
Perhaps the algorithms should be analyzed by psychoanalysts... [?]
Possible victim of unethical insurance practices brought to light: my wife and I bought a convenience store on a corner of the city that had the second-highest crime rate. To be fair, the downtown area in general had statistically low crime. My wife and I applied for Obamacare because she was leaving the private sector to work with me in the store. Within 3 months of being on Obamacare, the cost of our coverage increased 68% for no reason. We couldn't afford to stay on this coverage, so she went back to work in the private sector. This left us with a gaping hole in our management. After running the business by myself for 18 months, I ran the business into the ground and my health started to suffer. The algorithm the insurance company used to assess our risk created the very problem it was designed to protect against.
There's an assumption here that a single algorithm is being used, and that it's simple enough to be written on a piece of paper.
Algorithms are very complex now, and cannot be evaluated by "eyeballing" them. Whoever developed them would need to disclose their full data set and their approaches. Maybe this should be done. But it's not as simple as sending an email.
Anyone who found the Ted Talk to be good, definitely needs to go through her book, "Weapons of Math Destruction". A worthy, concise read that covers a lot of sectors where algorithms are biased.
Her book, Weapons of Math Destruction, came out in 2016... strange, given the importance of this understanding, that her talk came a year later.
We need to learn more about algorithms, they can do amazing things and also very dangerous things
Couldn't agree more. "Algorithms don't have systems of appeal.", we need to change that.
"Doing Data Science" brought me here! :D
Wish she was one of my professors!
9:28 Most people confuse US racism and thoughts of supremacy to mean Neo-nazism; most of the time it doesn't, as that is only the extreme end of the spectrum. The concepts refer to outcomes based on widespread attributions of both conscious and primarily unconscious propensities for treatment based on visual cues relative to someone's background. That is either a sense of someone deserving full services, being given unearned trust, and being shown respectful, appreciative, friendly interaction OR contrarily any thoughts of unworthiness, seeking excessive control over the behavior of someone (mistrust), corner cutting when servicing someone, and/or dismissal of someone based on traditional visual stereotypes.
This is excellent. Many of the crappy decisions I see every day in business are based on terrible data. Not only that, the people that depend on them are basically lazy. Logic is about laziness: using a system instead of actually thinking.
We didn't get this far using data or algorithms. We used our collective and individual brains and experience. People are lazy thinkers. They prefer systems because they don't want to begin to actually think.
Logic is about excluding data. Not including data or information leads to all the really poor decision-making around me. It is also about the past, not the present, not the future. The past. And it is a terrible narrowing of human thought.
please read the top comments of this video: they should change your mind a little.
the best ted talk on data. truly inspiring
just Amazing
Why would any sane person want to work at fox news.
ofc a blue haired liberal would call out fox news whenever they can.
it is not the problem of the algorithm ... it is what you feed it that is causing the problem ... in my early days in university (101 computer programming I guess it was) the instructor once said, this machine is GIGO machine ... if you feed it garbage in it will produce garbage out.
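The GIGO point above lends itself to a tiny illustration. The numbers are made up; the point is that perfectly correct code over bad input still yields a bad output:

```python
# A toy "garbage in, garbage out" demonstration: the averaging code is
# correct, but a single bad input record corrupts the output anyway.

def average_score(scores):
    return sum(scores) / len(scores)

clean = [72, 75, 78, 74]
garbage = clean + [9999]  # a data-entry error slips into the pipeline

print(average_score(clean))    # 74.75 - sensible
print(average_score(garbage))  # 2059.6 - garbage out, though the code is fine
```

No amount of auditing the `average_score` function itself would find this problem; only auditing the data would.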
Of course, she does have a great point here, however what I am stressing here is that our modern human societies are so ideological that we are not even able to recognise it anymore
She's like a mathematical Immortal Technique
At first I was like "you're wrong", but after some listening, she has a point.
One big bias in how many likes and dislikes a video gets is that we can see the results before we vote, before we even watch the video. If I see a lot of dislikes, I'm biased before I even start watching, looking for flaws. And I react more harshly when I see something that is a flaw.
To be honest, the point she's making is correct. Bias in input caused by humans will cause bias in output. However, doesn't an algorithm that was biased in such a way correspond more to our human nature? The solutions it might come up with might not always be the best, but they are for sure more "human" in their nature.
It prompts the question, but it does not beg the question
This sounds like a job for a corpus callosum. No single algorithm is perfect, but three different thinking methods might work together to create better results than any one individually.
It highlights some dangers talked about in other TED Talks, but the tone and "activism" irk me. Overall the message is correct: we should not let computers do dumb stuff. If we input garbage, the computer outputs garbage. But that is why smart people tackle this problem. If society reaches a point where it depends too much on stupid algorithms, lawmakers should intervene; but in that scenario we likely have even stupider lawmakers, as they are part of the society that allowed those stupid algorithms to happen. So we need to be careful, because if it happens we are screwed.
Man I think people just looked right at the title lol...
Which it isn't about at all... It's ironic, really, because doing so is kind of her exact point.
Made perfect sense to me.
It's true that passing something through an algorithm makes it seem objective. Rather than blindly believing what an AI produces, humans will have to keep asking whether it is really correct.
I agree with the argument and premise presented by the speaker, but her public speaking style left something to be desired and made this video tedious to watch. O'Neil is most likely a brilliant writer and researcher, but she was probably just nervous. I love that her outfit matches her hair! But those shoes... If it wasn't for the importance of the subject matter, this should have been a TEDx talk. This is a huge problem we need to solve, and the mathematical tools we use need to be open source, so that anyone, not just corporations with their "secret sauce", can make sure they are used correctly.
Big data is just a tool, like a knife, and indeed it can be dangerous or amazing depending on our level of consciousness and wisdom.
Exceptional.
Is she cosplaying Sadness from Inside Out?
A "Normative Data Inquiry panel "could be implemented and every algorithm will be vetted for racist, sexual, economic biases. The jury panel members could be the loophole for big data again. Dystopia...
this needs more views
I believe what she said is right. However, those isolated examples cannot change the fact that humankind has to live with data. If humans want precise fairness across the whole world, we will pay an immense and unlimited cost for it. In conclusion, she claimed a very real truth, but it is not helpful for building a better society.
Shouldn’t blind faith end everywhere?
I'll take your word that it should.
This has aged unexpectedly well
its the world that made me a loser, not because I am a fucking loser or anything, its the system, brb dying my hair again.
When wrong data get into a computer, including government computer, you are done for, because you cannot correct it. That is not progress.
To be fair, the point she is making is not wrong; it is just, in my opinion, poorly expressed due to a deliberate political agenda which clouds a solid argument. Badly designed algorithms will lead us to bad results (not the most insightful idea, but a logical one nonetheless).
As a result, I think a more useful conclusion is that we must all work together to make sure the algorithms we use are properly designed, so that they do not repeat the mistakes of our past but rather improve in the future: defining with greater accuracy what we deem success, improving our methods, and eventually achieving a methodology that decides based only on the expected results and capabilities of people, instead of who they are.
Algorithms are not necessarily objective. They can have errors that create random outputs (like the teacher ratings), they don't automatically produce fair outcomes but rather yield more of what has already worked (in the case of Fox News' hiring practices and predicting crime).
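One concrete form the "algorithm audit" mentioned earlier in the thread can take is a disparate-impact check, such as the four-fifths ratio used in US employment-selection guidance. A minimal sketch over hypothetical model decisions (the groups and outcomes are invented):

```python
# A minimal audit sketch: the "four-fifths" disparate-impact ratio applied
# to hypothetical (group, approved) decisions emitted by some model.

decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# The disadvantaged group's rate should be at least 80% of the other's.
ratio = approval_rate("B") / approval_rate("A")
print(round(ratio, 2))  # 0.33
print(ratio >= 0.8)     # False - flags the model for human review
```

A check like this treats the model as a black box: it needs only the decisions, not the formula, which is why audits remain possible even when the algorithm itself is secret.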
I am a data scientist, and part of my job is to build dynamite that can't be used for building a bomb.
I do algorithm audits
I read her book, Weapons of Math Destruction. Fairly good book with many convincing examples.
But I feel this presentation didn't have much substance to it. It felt like just one grievance after the other without much supporting substantiation.
My kids are little, so I tend to think in toddler movie terms... but this beautiful woman totally reminds me of "sadness" from the "Inside Out" movie, who (is totally blue, love it!) ends up being the key to a healthy interpretation and processing of what happens in a persons' life. She's spot on and a brilliant speaker. Thank you so much for another wonderful TED talk!
The algorithms she is talking about are designed to do one thing only. Look at correlations.
Ah I see you have has two car wrecks. Statistically speaking people that have had two car wrecks are more likely to have a third, so your insurance goes up.
Algorithms look at one thing and see if there is a correlation with another thing. It isn't causation, but it is significant. Everyone uses these. Stereotypes are a great example: they may not represent everyone, but they do represent a fair portion of that population. Just another algorithm.
The hard part is to use enough data to get enough correlations and make it more accurate. Compare 2 people. One is black, another is Asian. Who is more likely to succeed. statistics would probably say the Asian if you only use that data. Now add more data. The black went to college and the Asian didn't. Now the black has a statistically higher chance of succeeding. It is all about the data, usually the more relevant data, the better.
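The comment's claim that adding more relevant data changes the answer can be sketched with a toy correlational scorer. The features and weights are invented for illustration; note how adding one feature reverses the ranking between the two people:

```python
# A toy correlational risk scorer (invented weights): each feature nudges
# the score, and adding a more relevant feature can reverse a ranking.

def risk_score(person, weights):
    return sum(weights[k] * v for k, v in person.items() if k in weights)

alice = {"prior_wrecks": 2, "years_licensed": 10}
bob   = {"prior_wrecks": 0, "years_licensed": 1}

coarse = {"prior_wrecks": 5.0}                          # one feature only
finer  = {"prior_wrecks": 5.0, "years_licensed": -1.5}  # add experience

print(risk_score(alice, coarse) > risk_score(bob, coarse))  # True
print(risk_score(alice, finer) > risk_score(bob, finer))    # False
```

With only the wreck count, Alice looks riskier; once her ten years of driving experience enter the model, the ranking flips, exactly the "more relevant data, better result" dynamic described above.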
"Nobody in NYC had access to that formula, no one understood it" If no one has access to it, it doesn't mean no one understood it.
Anyway, she's covering only one branch of machine learning, the supervised one, while we actually have more learning paradigms, like unsupervised and reinforcement learning.
Learning algorithms were designed to simulate natural learning processes, and a bias system is essential to learning. Humans do exactly the same. She's talking about humans training algorithms with the wrong bias; at that point, wouldn't it be the same to let machines decide?
Last update
Apparently my professor for this course DOES consider racial bias in AI a serious issue; however, the sources I provided were lacking.
*lesson learned
Be specific to the point of boilerplate explanations
I'd argue the algorithms themselves are objective, it's the objectives of the creators that are subjective.
I saw the caption and picture and immediately knew this will be a fun ride.
The marketing of algorithms isn't only intimidation. It's also predatory behavior by people and entities that want to take something from others.
oh yeah!
Isn't it a great thing then that Fox News doesn't use a learning algorithm for the hiring process
Postmodernism 101, today on TED!
I, too, watch Jordan Peterson's lectures.
RonMD Do Dominance hi-arky
You don't even know what you're talking about.
+Brenda Rua
Clean your room.
Martín Varela 😂👌🏻
Big data says not to match your hair with your sweater with your shirt....
I don't really see her point...
She is effectively stating that machine learning algorithms and big data are being badly implemented.
An algorithm should only be used knowing what it can and can't do, how it was trained, and what it didn't see during training.
But this seems like a very basic and obvious concept to me. In the cases where it was violated, I'm sure the people involved would have done a better job if it were that easy. But even if the algorithm wasn't "perfect", it probably returned better results than previous methods.
It's not like human judgement isn't frequently biased too, so thinking that by using algorithms we can expect objectivity is naive. But that's OK; everything is a process, and the quality of big data processing will only increase.
This talk is way too aggressive for my taste ("weapons of math destruction", c'mon).
BRAVOOOO!!!!!!!!! I cannot cheer enough this overdue discussion!
OMG, nice to meet u here Vera~
I am Coolio(Shiqi)
She's right.
I can be truthful in this modern age: I am discriminated against because I am a male. My point is that we shouldn't look at gender. I am all for equal rights, but equality of opportunity, not of outcome.
it affects someone's life, so don't do it carelessly.
What should Faux Nudes do to turn over a new leaf? Shut down.
The information big data provides won't be 100% correct, and the same goes for the answers algorithms give. The speaker keeps citing individual cases; they do exist, but they don't represent the whole. An answer may be wrong under one algorithm yet correct under another. Besides, AI is ultimately something humans created; it doesn't think with emotion, and we already know this very well. So, regarding the examples the speaker gave: the fault lies not with the algorithms or big data, but with using algorithms or big data to make judgments in those situations at all. The key is knowing when to apply data and algorithms. So, dislike.
Algorithmology studies?
0:38 That's not how you use "begs the question". That's not what it means. Stop.
The problem is not the learning algorithm. SVMs, neural networks, HMMs, and others are meant to give the highest-accuracy classification or prediction. It all depends on the data; generally, the larger the dataset, the more accurate the result. However, the problem might be how the data was gathered. These machine learning algorithms aren't meant to replace humans, but to make their work easier. For example, an algorithm can reduce the time it takes a doctor to give a diagnosis, or help calculate the amount of interest a bank or an insurer might charge. I never heard of someone being fired because an algorithm said so.
So the main point is how we gather the data, rather than the machine learning algorithms themselves. And generally, the most popular datasets are clear about how they gather their information.
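A small sketch of that point: the learning rule stays fixed, but a skewed sample shifts what it learns. All numbers here are synthetic, and the "algorithm" is just a 1-D midpoint threshold:

```python
# Same learning rule, two differently gathered samples. A 1-D threshold
# classifier is "trained" by taking the midpoint between the class means;
# points above the threshold are classified as positive.

def train_threshold(pos, neg):
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(pos) + mean(neg)) / 2

# Representative sample of both populations:
good = train_threshold(pos=[8, 9, 10, 11], neg=[1, 2, 3, 4])

# Same algorithm, but the positives were gathered only from one extreme:
skewed = train_threshold(pos=[13, 13, 13, 13], neg=[1, 2, 3, 4])

x = 7  # a borderline positive case
print(x > good)    # True  - classified correctly
print(x > skewed)  # False - the gathering bias, not the algorithm, fails it
```

The `train_threshold` code is identical in both runs; only the sampling changed, which is exactly the comment's "how the data was gathered" argument.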
I can't understand why this video is so disliked, this talk brings up a fascinating problem I had never considered and should be looked into on a wide scale. Just a shame no solution was offered into how to improve the situation other than scrutiny of algorithms, maybe a way of selecting parameters from unbiased sources?
Disagreement is actually the best thing that can happen to a good idea, because it can show how good it really is. If you dismiss critics because of the skin/hair color, politics of its originator you are in the wrong. The only difference that should matter to you is the one between reason and unreason.
And the problem is that she complains about over simplified metrics being an incomplete model of the world, yet her presuppositions are based on the very same over simplification, things like the wage gap are based entirely on crude algorithms comparing apples to oranges. On policing, the crime stats are simply damning once you look at them, simply the prevalence of suspects willing to shoot back at the police at rates several times more than other groups will skew the data in a way I'm sure she wouldn't accept.
She's right in a way, but I'm sure her ilk simply wish to skew the data to paint an incomplete picture in the way they prefer.