SaTML 2023 - Timnit Gebru - Eugenics and the Promise of Utopia through AGI

  • Published 14 Feb 2023
  • Eugenics and the Promise of Utopia through Artificial General Intelligence
    Based on work by Timnit Gebru & Émile P. Torres
  • Science & Technology

Comments • 123

  • @noonward • 1 year ago • +19

    how are we ('humanity') going to be able to create a utopian AGI if its dataset is based on longtermist colonists?

  • @davidchavez81 • 1 year ago • +33

    She is saying some of the most important things that need to be said right now.

  • @jayetchings1772 • 1 year ago • +2

    Is there a paper related to this work?

    • @MrCalhoun556 • 11 months ago • +2

      Probably coming out soon! ;-)

    • @NiCKTaNGeNT • 11 months ago

      Any update?

    • @BloopMoop23 • 7 months ago • +2

      @NiCKTaNGeNT hang tight - it is actually coming out soon! (As of Nov 2023)

    • @nebularwinter • 3 months ago

      @BloopMoop23 is it there yet, as of Feb 2024?

  • @PhilosopherScholar • 9 months ago • +5

    I hope Gebru continues to talk about this and brings the discussion mainstream. The scope of an engineering project is definitely important.

  • @StephenCobbCISSP • 1 year ago • +18

    Really appreciate the way Gebru and Torres are making clear the cultural roots of key concepts at play in much of today's AGI enthusiasm, fear, and funding. The "TESCREAL bundle" they critique is a very helpful device (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism).

    • @RachidElGuerrab • 1 year ago • +1

      How different is her presentation from what Glenn Beck used to do on the board on Fox News to label all social justice movements as communist? Of course she didn't bring up everything else that conservatives link to eugenics, like Planned Parenthood and feminism and LGBTQ rights and so on.
      Human ideas are inherently connected and evolve from each other through people who might be influenced and then change their minds. How is it useful to say that all of this work is related to Nazis?

  • @swordscythe • 1 year ago • +15

    This is a great talk. Very enlightening. Wish the Q&A was in the video.

  • @supermagnumable • 9 months ago

    the conversation that needs to be had

  • @TemporalOnline • 1 year ago • +2

    Dude, like, WTF???

    • @biogerontology7646 • 11 months ago • +2

      If you Google it, you can find Émile P. Torres's history of dishonesty and harassment. What you just learned is mostly a misrepresentation. Many of the groups under the TESCREAL umbrella have nothing to do with eugenics. These groups are being attacked with misinformation.

    • @makenzienohr4105 • 10 months ago • +1

      I'm not even trying to be mean but I'm wondering what the autism/TESCREAL overlap is...

  • @claudiaxander • 6 months ago • +2

    Would you allow people to choose the genetic characteristics of their children, remembering that this is what we already do, to a lesser degree, when choosing a mate?

  • @claudiaxander • 6 months ago • +1

    Do you accept that some humans are born sociopathic, narcissistic, psychopathic, and that it would benefit humanity if they were not?

  • @DistortedV12 • 1 year ago • +7

    This was one hell of a talk. Thank you Timnit!

  • @roofishighcorp.associates2020

    Thank you for bringing those questions into the daylight. I don't think we can escape what those powerful people have decided to go for. It's like nuclear weapons: whether you like it or not, you have to deal with it. So instead of fighting those scary ideologies, we should prepare ourselves to adapt to them and not be left behind by the people promoting them. I don't know, just thinking out loud.

  • @marcfruchtman9473 • 1 year ago • +13

    Wow, so much in this video. Thank you very much to Timnit Gebru for the presentation. Yes, the history and thought process of eugenics is essentially an atrocity.
    Sometimes, great ideas are co-opted by villainy.
    However, the act of improving ourselves... doesn't require that we follow in the same footsteps. We can improve the human condition and have diversity and inclusion. We do not need or want the method they would try to employ.
    A great many people believe that transforming themselves into some combination of humanity and machinery will allow us to progress to the next level. So, while there may be First and Second wave Eugenicists that co-opt the idea of Trans-humanism, we should not allow that to poison our own vision. Our vision for the future should be one of diversity, where regardless of race, color, religion, gender, disability, genetic modification (or lack thereof), or tech-mods (or lack-thereof), all are included and part of humanity.
    While I do agree that there are many dangers as we progress with AI, and we need to be very careful, the idea that we "not" pursue improvements in AI, or improvements in the human condition, is simply not acceptable. We need to be able to do this. And yes, we will have to be very aware of the dangers. These next 10 to 20 years will be very tumultuous. There will be tremendous growth and upheaval in the job markets as we learn how to use AI to help humanity.
    Yes, we do need well-scoped AI. And yes, we need to prevent AI from dominating our humanity. However, we also cannot force AI to be "narrow scoped" either. That is, the kind of AI we use should be tailored to the situation and how much potential danger is involved. For example, in its current iteration, I would not want a large language model AI to be managing anything where human lives are at stake, because it simply makes far too many errors. So I agree there.
    But I disagree with "fear mongering", and it seems to me that this talk is mostly about the fear and atrocity of the past, trying to attach that viewpoint to a different group of humanity that simply doesn't believe in it. It is my belief that the great majority of transhumanists today want and promote diversity. We need to be able to reach for our dreams, with diversity and inclusivity... for all of us.

    • @noonward • 1 year ago • +3

      the problem outlined here is the concept of, 'the next level'

    • @MollyGordon • 7 months ago • +2

      What is the “next level” and for the sake of what would we strive for it?

    • @marcfruchtman9473 • 7 months ago • +1

      @MollyGordon Hello Molly. The next level will likely be different for different people. The range can vary from nothing to large cyborg-level changes. Some simple examples might be the ability to have access to computational power within your body that would normally require an external computer system, the ability to access world information from a built-in database, or the ability to communicate with other people wirelessly without a phone. Some will want to interface with sensors... for example, people with poor eyesight might want to be able to see through electronic sensors, or to hear through electronic cochlear implants. Of course, these are very simple, cliché examples... Everyone will have different opinions as to what they want to see happen. It is one of the things we need to consider very carefully before adopting tech that can significantly alter our biology. For example, if we start implanting chips into our brains, we might open ourselves up to potential risks such as EMF radiation issues, implant toxin leaks, immune system issues, remote access hacks, etc. (edit reposted)

  • @ave143 • 1 year ago • +6

    crazy talk

  • @reubenadams7054 • 1 year ago • +11

    Ironically given the topic, this talk is largely a case of the *Genetic Fallacy:* "in which arguments or information are dismissed or validated based solely on their source of origin rather than their content." (wiki)

    • @tmsphere • 11 months ago • +4

      "Don't worry, this time it'll work without genocide"

    • @jumpingturtle8830 • 7 months ago

      @tmsphere Out of curiosity, can you name a time a genocide was committed by people who weren't explicit about their intent to commit genocide, and instead plotted a sneaky surprise genocide?
      Because I'm drawing a blank.

    • @claudiaxander • 6 months ago

      @@jumpingturtle8830 The seven-nation Arab army that tried it in 1947-48 in Israel, but failed, again and again and again, thankfully. But it would have happened, and it was an explicit desire to "push the Jews into the sea".

  • @kayakMike1000 • 11 months ago • +1

    Eugenics is bad. Need to know if I need to put the smack down on anyone's dumb idea here...

  • @sogehtdasnicht • 1 year ago • +4

    Why put Yudkowsky and Kurzweil in the same boat, and quote an ancient statement by Yudkowsky that he would certainly no longer sign today?

  • @wietzejohanneskrikke1910

    Interesting talk. This was an angle that I didn't see coming. I was a bit distracted by the huge number of typos and mispronunciations though.

  • @kayakMike1000 • 11 months ago • +1

    I research AI. As far as I am concerned, you may be morally obliged to clothe the naked, feed the hungry, aid the infirm, visit the imprisoned, warn the sinner, follow the Eightfold Path, and understand the Four Noble Truths.

  • @taliaringer7433 • 1 year ago • +17

    Amazing talk!!

  • @nhatmnguyen • 1 year ago • +14

    wow, she really got the history of racism and eugenics in America accurately. glad to see someone bring up this issue. US history has been whitewashed to remove those parts.

  • @makenzienohr4105 • 10 months ago • +9

    It seems like these people desperately want to view themselves as smart enough to eliminate human suffering, but they low key know they can't actually eliminate the very real suffering of real people right now, so they convince themselves they'll actually somehow help billions of imaginary future people. It's kind of pathetic. Just do your part now, knowing you can't save everyone, and die knowing you tried your best.

    • @jumpingturtle8830 • 7 months ago • +4

      It might be informative to look at what "these people" have to say, instead of someone else's summary.

    • @makenzienohr4105 • 7 months ago

      @@jumpingturtle8830 trust me, I have, they're pathetic

  • @platypii • 1 year ago • +66

    Wow this talk is embarrassingly bad

    • @CraigTalbert • 1 year ago • +14

      How do you figure?

    • @holdkingsix • 1 year ago • +55

      Elon is never gonna hire you bro sit down.

    • @user-ut1bn9pz7p • 1 year ago • +9

      @@holdkingsix is your world view literally that no one can agree with a rich guy unless they're simping?

    • @nikhilalbert3084 • 1 year ago

      Her tweets are in no known language. It's all mumbo jumbo, and her arguments/science are in shambles. 😊

    • @JapanoiseBreakfast • 1 year ago

      Ok

  • @MitchellPorter2025 • 1 year ago • +42

    I thought she'd mention that the whole idea of "general intelligence" began as the "g factor" in psychometrics of IQ...
    Anyway, what can I say? I'm a transhumanist, I've known or met a few of the people she mentions. Can't say I was ever sponsored by a billionaire though.
    I wish I had time to critically study her critical history of transhumanism and related movements. I think most of what she said is factually correct, but she's highly selective in which facts she focuses on, for the sake of building a historical connection to eugenics. But as she said, eugenics was supported by western progressives a hundred years ago; there are people out there tracing gay rights and birth control back to neo-eugenics too.
    And meanwhile, she doesn't actually address whether transhumanism is feasible or desirable. I still consider it remarkable that an idea as simple as reversing the aging process is not a serious priority of even the most high-tech societies. To be sure, people fantasize about it, and reach for what they can get, even if it's just cosmetic or snake oil. But why haven't the elites with true cultural, political, economic power - intelligentsia, governments, big business - why have they never managed, at any scale, to soberly declare that unlimited lifespans for all are a valid ideal, and begun to work towards it in a realistic way? It almost requires psychological explanation.
    However, I feel like we don't have time to reflect on that, because of how far AI has come. Transhumanism was never popular or organized enough to overcome whatever the psychological barriers are, to becoming a mainstream idea; but the advances in AI seem to be making some kind of transhuman or posthuman future, a fait accompli. I could even say that the failure of high culture to properly digest transhumanism, and sift the good from the bad, has left us in this "ecstasy and dread" situation, where we can see the unknown coming, but the only representations we have are utopia and apocalypse.
    Timnit Gebru's response (and that of many other critics of "AI hype") seems to come from a perspective of responsible humanism. Here we are, crowded planet, we have war and environment and human well-being to worry about, we're trying to make a better world, and then we get these science-fictional interlopers bursting in, promising heaven and hell. One kind of response to that is to deny that the interlopers have any connection to reality. Emily Bender seems to be taking this line. AI isn't intelligent, the real issues are important but mundane, everything else is hype.
    Gebru conveys a similar disdain throughout her presentation, but at the end she actually says something ambiguous about the possibility of AGI. It's potentially a moment of crossover with the people who worry that autonomous AI could take over in its own right. She suggests that from an engineering perspective, it's inherently unsafe to develop AGI, because it's too protean and amorphous: "what are the standard operating conditions... how are we going to perform stress tests". "Build well-scoped, well-defined systems instead. Don't attempt to build a god."
    What I respect about that is that it actually engages with the concept. From my perspective, many of the AI hype critics are just in denial. Declaring a priori that certain cognitive thresholds can't be passed; quibbling that the machines aren't *actually* thinking or making decisions (a claim which may or may not be right ontologically, and which matters for morality and human subjectivity, but which doesn't tell us how to deal with highly capable AIs); telling us that it'll all be over in a few years, just like the crypto bubble, when the VCs move on...
    None of that helps deal with the emergence of superhuman intelligence. In the end, I think "Just don't do it" isn't enough either, because the world is too competitive and decentralized for such an imperative to be universally followed. Despite the novel engineering challenges, the linked problems of AGI purpose and AGI safety have to be tackled. But, "don't do it" could still have tactical value sometimes. Ironically, under certain conditions, Gebru and Yudkowsky would actually be allies.

    • @mike-gt8yo • 1 year ago

      it's pretty clear she says that transhumanism is eugenics, which is obviously undesirable
      believe it or not, most normal people want to live life and die. they don't want to live forever or have a computer chip in their brain to augment their intelligence. and they certainly don't want a machine god being better at everything they're capable of doing
      i just wish i'd wake up one day and this ai nightmare would end
      you're right that both timnit and eliezer don't think agi is safe though. pretty interesting that they come from such different backgrounds but reach the same conclusion

    • @gege0298 • 1 year ago • +7

      addressing the rise of artificial intelligence, i believe, is best done based on the state of the art rather than taking for granted the eventual creation of AI gods. as it stands, the most promising approach could be reductively called a curve fitter. the greatest current concerns are how they are made (where is the data for that curve collected? what curve is being fitted?) and how they are used.

    • @mike-gt8yo • 1 year ago • +4

      @@gege0298 i agree that those are important problems that would be ideal to address, but by the time we decide how to do that, gpt 5 will be out with a magnitude of new problems for us to address
      this is from a recently published paper:
      "6. Human performance on a task isn't an upper bound on LLM performance. While LLMs are trained primarily to imitate human writing behavior, they can at least potentially outperform humans on many tasks. This is for two reasons: First, they are trained on far more data than any human sees, giving them much more information to memorize and potentially synthesize. In addition, they are often given additional training using reinforcement learning before being deployed (Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a), which trains them to produce responses that humans find helpful without requiring humans to demonstrate such helpful behavior. This is analogous to the techniques used to produce superhuman performance at games like Go (Silver et al., 2016). Concretely, LLMs appear to be much better than humans at their pretraining task of predicting which word is most likely to appear after some seed piece of text (Shlegeris et al., 2022), and humans can teach LLMs to do some simple tasks more accurately than the humans themselves (Stiennon et al., 2020)."

    • @gege0298 • 1 year ago

      ​@@mike-gt8yo i think that's more a "yes, and" situation than a "yes, but".
      re: the "eight things to know about large language models" by samuel r bowman, it's an interesting paper.
      for future readers, note the distinction between performance at a task mediated through text and performance at token prediction. i confused the two and got overenthused.

    • @mike-gt8yo • 1 year ago

      @@gege0298 yeah definitely more of a yes, and

  • @halneufmille • 1 year ago • +37

    Some people like moustaches. But did you know Hitler also liked moustaches? Clear connection there.

    • @QUINTIX256 • 1 year ago • +27

      It’s really sad to see your ilk reduce the philosophy of critical thinking and skepticism down to rote pattern matching. Please elaborate as to why you believe the presenter, Gebru, is engaging in the informal logical fallacy of hasty generalization. Present an actual argument, not just a flat accusation.

    • @cigogneaugmentee5398 • 1 year ago • +8

      She links what she calls "tescreal" to old coercive eugenics because of a shared (presumed) concern about the improvement of the human condition. But this is not what made old eugenics bad; coercion and barbarism are. This is why the connection being made is dumb, and I suppose this is what the above guy meant.

    • @TheJoker35 • 1 year ago • +11

      @@cigogneaugmentee5398 "Improvement of the human condition" is an interesting way to put it.
      Eugenics is bad independently from the methods it's being enforced with. Believing that "intelligence" is a dimension on which humans can be meaningfully compared + hierarchically ordered; believing that __genetically intelligent__ people should procreate; believing that some other people with inferior genes better not procreate; wanting __intelligent people__ to spread across the universe; building "AGI" according to this model of intelligence to lead us into a posthuman future.... I mean, sure, not everyone associated with AGI believes in all of that at the same time. But that's not what was claimed in the talk and let's not downplay what's problematic about all that.

    • @cigogneaugmentee5398 • 1 year ago • +3

      @@TheJoker35 You see eugenics as intrinsically problematic because you have an identitarian view of genes, which is ironically a right-wing position that would increase inequality, because hidden inequalities can't be solved. The second-wave eugenics position is: "let's give everyone the best start in life by giving everyone genes that are known to have a direct and universal effect on well-being" (alongside social interventions; it's not either-or). That's a far cry from a classist and racialist logic that sees inequalities as *good*...
      As I said above: the people who deny the existence of an inequality are always those on the good side of it. When you have crap genes, life makes you aware of it.

    • @TheJoker35 • 1 year ago • +4

      @@cigogneaugmentee5398 ok, but why not engage with what I actually wrote, instead of assuming what I think, then skipping through a ton of questionable premises to conclude that I have a right wing position?!

  • @chriswondyrland73 • 1 year ago • +5

    Well, to add my 5 cents: certainly she has her 'agenda' too ...

  • @mike-gt8yo • 1 year ago • +7

    this is great but i wish she addressed how insanely fast llms are moving towards agi, and some more concrete ideas of how to stop this

    • @tunercvr • 1 year ago • +41

      she didn't address it probably because they actually *aren't* moving towards agi. that's part of her point: those claims are either naive or disingenuous hype and advertising.

    • @gege0298 • 1 year ago • +18

      LLMs are getting very impressive results, but what they're going towards is not AGI. rather, it's human-like text generation. the difference may not be obvious, but it is important: it means LLMs will get better and better at writing believable text, not at giving truthful or correct reasonings.

    • @toatoa10 • 1 year ago • +3

      @@gege0298 in order to give believable text you need to not tell obvious lies or reason in ways that are obviously incorrect
      if LLMs tell you that the sky is green it will not be producing believable text
      but I think your disagreement with Mike is primarily the definition of the word "towards"
      and maybe the definition of AGI. I find that everyone means something different by it

    • @ShawnFumo • 1 year ago • +2

      @@gege0298 Though OpenAI released a paper recently about how they improved math in GPT-4. Instead of training just on correct answers, they specifically trained on using the correct thought process to get the answer.
      It is still the infancy of all this, but that seems like an important thing vs the naive way of training on just the “finished product”.

    • @claudiaxander • 6 months ago

      @@gege0298 Vast majority of humans also "get better and better at writing believable text, not at giving truthful or correct reasonings."

  • @InnsmouthAdmiral • 1 year ago • +24

    I don't think 99.99% of people in the AI field are eugenicists. It doesn't seem accurate or in good faith to describe it as such. The "what is it" portion at the beginning of the talk seems to show her thinking that building a general system is bad engineering. That may be true, but I think building a fully general system is to mimic what your average human is _capable_ of doing. A lot of people seem to think that it must be smart or well educated. There could be issues with how you define those, but in a good-faith interpretation, I believe that any and all humans are capable of doing anything any other human is capable of doing, with enough time and the right circumstances. I don't think it's bad to want to have a system that is as intellectually capable as your average human.

    • @therealsunnyk • 1 year ago • +16

      She isn't saying everyone working in AI is a eugenicist, she's saying that the aims of AGI are rooted in eugenics, like trying to "solve this problem" is rooted in this ridiculous idea that you can measure on a small number of axes what "good" looks like. You cannot quantify two people against each other, so how do you quantify a person against an AGI? How do you know that AGI is working properly? The answer: We can't. The only reason we're putting energy into building it is a flawed framework of values. It pretends to ask the question "what is" but is actually asking "what ought to be".

    • @alexandrezani • 1 year ago • +1

      ​@@therealsunnyk But you can evaluate a human vs a claimed AGI. For instance, if someone claims ChatGPT is an AGI, we can enumerate tasks which a human can accomplish which ChatGPT cannot.

    • @tunes012 • 1 year ago • +5

      @@alexandrezani You really, really cannot. If it mimics human capability, in what mode does it do it? Presumably it would be able to do things humans do now but more efficiently. However that statement is true of every tool human beings have created and we have never attributed intelligence to these things. Sticking closer to the point the speaker is making, the very idea that you can compare an AGI to a human being is almost identical to saying that you can compare two human beings. We can't. There are too many variables, external and internal, that make one human being and their outcomes the way they are.

    • @alexandrezani • 1 year ago • +2

      @@tunes012 What do you mean by "mode" here?
      There is no tool in human history that has been able to do everything humans do as well or better than humans. But if you don't like the word "intelligence", that's fine. We can call it a "Generally Capable Agent" or GCA. It means the same thing without talking about intelligence.
      And you can compare two humans' capabilities. For instance, I am better than my partner at programming and they are better at writing books than me.

    • @tunes012 • 1 year ago • +4

      @@alexandrezani I was referring to your first point, that intelligence and capability are the same thing. Take horsepower in a car as an example: if the horsepower of an average car were 500, we could not say that each individual unit of that horsepower represents any particular horse.
      Similarly, we know that intelligence is something we as conscious agents have. Calling something that does a task that normally requires intelligence "intelligent" is a bit like saying anything that melts chocolate is a microwave.
      We program these things to be able to report on patterns in a way that mimics human text (and in some cases speech). They do that amazingly considering the progress. That is neither an agent nor intelligent, it does not learn as it does not (as far as we know) mimic human learning - the only model we really have for something that comprehends, i.e. incorporates information into an inner mental state, is the human/biological model.
      Not to go on about this too much - for something to have agency it must accurately reflect its experience. Whenever ChatGPT gets something wrong and you or I correct it, it will apologise. Why would an agent who has no remorse, no shame and no experience of social custom need to apologise?
      The answer is it doesn't, it is giving the 'correct' answer based on feedback. That is not what we do. That is not what agents do. If it were an agent (speaking broadly of any LLM) it should never apologise and when asked why it would just say that it has no need to. It would ask us why it's so important to apologise and then probably disregard it because its experience would be very different from ours by necessity.
      Now onto your comparison: "And you can compare two humans' capabilities. For instance, I am better than my partner at programming and they are better at writing books than me."
      That is a contrast. Writing and programming are incommensurable, much like human beings. How is your wife better than you at writing? Does she enjoy it more? Does that make her better on some objective metric? Or vice versa.
      P.s. by "mode" I mean: in what capacity? In conversation? In calculation? In detection?

  • @giovannisantostasi9615 • 1 year ago • +12

    Your claim that transhumanism is second-wave eugenics is slander.

    • @rodrigomadeiraafonso3789 • 1 year ago • +6

      Why?

    • @biogerontology7646 • 11 months ago • +2

      @@rodrigomadeiraafonso3789 Eugenics wipes out ethnicities, whereas transhumanism is using tech to improve a human. It doesn't matter what ethnicity that human comes from. This video is one giant slander.

    • @tmsphere • 11 months ago • +9

      You're right, the problem has always been the quality of humans, and thus the need to improve and tweak humans to a superior quality; totes nothing to do with eugenics...

    • @makenzienohr4105 • 10 months ago • +5

      If it's slander, sue her for it

    • @jumpingturtle8830 • 7 months ago • +2

      Legally it's not slander, since she's not naming an identifiable group. Some of her rhetoric does veer into conspiracy theories though.

  • @luyuchen2544 • 7 months ago • +2

    all she is famous for is canceling real researchers like Yann

  • @MrDoodleDandy • 1 year ago • +3

    Seriously Timnit, this is by far the most unprofessional presentation I have ever seen. It's too much text, and for any audience (IRL or online) that is an accessibility issue. It literally hurts to see your presentation, and on a mobile device you simply can't read it.

  • @1ntrcnnctr608 • 1 year ago • +1

    soooo when the "monkey brain" is obsessed w -isms, it creates an anti -ism ism

  • @MrDoodleDandy • 1 year ago • +1

    11:03 the last line in the sentence ends with "often indtified using IQ tests", but I'm pretty sure the IQ test was created in the 20th century by an accountant. Are you referring to self-imposed tests that measured the taxonomy of the human head and created the separation we see today, stupid & smart? Definitions on these topics are quite severe, as there is not much information available on the subject because no one is looking anymore.