Yet Another AI Blind Spot: Biased Images | The Big Take

  • Published 13. 06. 2023
  • As pressure mounts on lawmakers to regulate artificial intelligence, another problem area of the technology is emerging: AI-generated images. Early research shows these images can be biased and perpetuate stereotypes. Bloomberg reporters Dina Bass (www.bloomberg.com/authors/ACZ...) and Leonardo Nicoletti dug deep into the data that powers this technology, and they join this episode to talk about how AI image generation works, and whether it's possible to train the models to produce better results.
    Read more: Humans Are Biased. Generative AI Is Even Worse (www.bloomberg.com/graphics/20...)
    Listen to The Big Take podcast every weekday and subscribe to our daily newsletter: bloom.bg/3F3EJAK (bloom.bg/3F3EJAK)
    Have questions or comments for Wes and the team? Reach us at bigtake@bloomberg.net (mailto:bigtake@bloomberg.net).
    See omnystudio.com/listener (omnystudio.com/listener) for privacy information.
    Each weekday, The Big Take brings you one story: one big, important story. Host Wes Kosova talks to Bloomberg journalists around the world, experts and the people at the center of the news to help you understand what's happening, what it means and why it matters. Money, politics, the economy and business, energy, the environment, technology: we cover it all on The Big Take.
    Subscribe to Bloomberg Podcasts: bit.ly/BloombergPodcasts
    Listen to more of The Big Take: • The Big Take
    #Bloomberg #Podcast #TheBigTake
    Visit us: www.bloomberg.com/podcasts
    Follow Bloomberg Podcasts on Twitter: / podcasts
    For coverage on news, markets and more: www.bloomberg.com/video

Comments • 2

  • @Granzteelvids, 10 months ago

    1:53 You just contradicted your point there. Also, you can add tags for the diversity you prefer. AI is a tool that takes in your input, and you have to be hyperspecific. Diversity itself is subjective, so you have to add it to your input explicitly.