Meta's Chief AI Scientist Yann LeCun talks about the future of artificial intelligence

  • Published 15. 12. 2023
  • Meta's Chief AI Scientist Yann LeCun is considered one of the "Godfathers of AI." But he now disagrees with his fellow computer pioneers about the best way forward. He recently discussed his vision for the future of artificial intelligence with CBS News' Brook Silva-Braga at Meta's offices in Menlo Park, California.
    "CBS Saturday Morning" co-hosts Jeff Glor, Michelle Miller and Dana Jacobson deliver two hours of original reporting and breaking news, as well as profiles of leading figures in culture and the arts. Watch "CBS Saturday Morning" at 7 a.m. ET on CBS and 8 a.m. ET on the CBS News app.
    Subscribe to “CBS Mornings” on YouTube: / cbsmornings
    Watch CBS News: cbsn.ws/1PlLpZ7c
    Download the CBS News app: cbsn.ws/1Xb1WC8
    Follow "CBS Mornings" on Instagram: bit.ly/3A13OqA
    Like "CBS Mornings" on Facebook: bit.ly/3tpOx00
    Follow "CBS Mornings" on Twitter: bit.ly/38QQp8B
    Subscribe to our newsletter: cbsn.ws/1RqHw7T​
    Try Paramount+ free: bit.ly/2OiW1kZ
    For video licensing inquiries, contact: licensing@veritone.com

Comments • 465

  • @Koekefant · 5 months ago · +39

    Nice to hear a different voice and opinion on all these developments. It definitely makes me look differently at Meta as a company and an AI player.

    • @frankgreco · 5 months ago

      Recall that Zuckerberg has a poor record on user privacy and security. Why would you look differently at his company when he clearly doesn't give a damn about the danger to humans? He is only interested in increasing engagement so he can make more money.

    • @ts4gv · 5 months ago · +7

      You should be more concerned.
      Yann and his optimism are an EXTREME minority.

    • @benefactor4309 · 2 months ago

      @ts4gv He warned about misuse of AI by companies.

    • @vladimirbosinceanu5778 · 4 days ago

      Indeed.

  • @GarryGolden · 5 months ago · +35

    Excellent interview/conversation... appreciate Yann's ability to communicate his personal story and story of the AI community.
    The interviewer is well informed and did not throw softballs -- it was an elevated conversation.

    • @skierpage · 5 months ago

      Brooke Silva-Braga prepared well.

    • @flickwtchr · 5 months ago

      So Yann LeCun being intellectually dishonest and gaslighting to stave off regulation for more money and power is laudable?

    • @flickwtchr · 5 months ago

      My post criticizing LeCun keeps disappearing. Why?

    • @aroemaliuged4776 · 4 months ago · +2

      @flickwtchr The power of Meta.

  • @disastermaster1413 · 5 months ago · +35

    Plot twist: Yann LeCun is an AI.

    • @robertjamesonmusic · 5 months ago

      He is Haley Joel

    • @yadayada111986786 · 5 months ago · +2

      "Doesn't look like anything to me"

    • @vaultramp · 4 months ago

      @robertjamesonmusic More like 'the Merovingian' 😂

    • @onceweslept · 8 days ago · +2

      plot twist: you're an ai making us believe he's an ai, although he's an alien.

  • @senju2024 · 5 months ago · +10

    I am fully with Yann LeCun on getting LLMs distributed to the public. But I am slightly disappointed in his arguments. He seemed not very strong on the regulation side of things.

  • @sabyasachimukhopadhyay6498 · 2 months ago · +1

    Great interview !

  • @brambledemon1232 · 5 months ago · +7

    Good luck regulating Open Source models. 😂

  • @RS-dn1il · 5 months ago · +26

    Considering the risks to society and culture that Meta has already spearheaded with relatively 'dumb' social engineering algorithms, his dismissal of people with concerns about AGI as neo-luddites is chilling.

    • @saltyapostle44 · 5 months ago · +7

      People on the cutting edge of anything should NEVER be trusted too much. Most have lost all objectivity and tend to only consider the benefits and not the unintended consequences.

    • @gammaraygem · 5 months ago · +4

      "AGI will be 1000x more impactful than the discovery of making fire or electricity".
      Those "very few" people he talks about who are alarmists are all from the TOP elite of AI developers. There aren't too many of those to begin with, but he doesn't say that.

    • @krox477 · 5 months ago

      Social media is just internet on steroids

    • @nicholasstarr6096 · 4 months ago

      @gammaraygem That isn't really true…

    • @gammaraygem · 4 months ago

      @nicholasstarr6096 Eliezer Yudkowsky (who, according to Altman, should get a Nobel Prize for his contribution to AI), Mo Gawdat (Google X CEO), Geoffrey Hinton (the godfather of AI), to name a few.

  • @bro_dBow · 5 months ago · +7

    Quality information, good to report on this!

  • @dustman96 · 5 months ago · +16

    An advanced AI also has agency. It does not have to be deployed to gain control. It can gain control over those who have the power over whether or not it is deployed.

    • @skierpage · 5 months ago · +11

      Yes, I think Yann is far too confident. He doesn't know what a human-level AI will do. He's simply taking it as a matter of faith that it won't have its own agenda, or that if it does, it won't hide its true intentions from us, because that seems like science fiction; science fiction that every large language model has read!

  • @kaik9960 · 5 months ago · +14

    This guy is either too optimistic about evil in humans or totally ignorant. His example comparing AI to airplanes is naive at best. Airplanes have been dropping bombs everywhere since their development. But they can be controlled, as of yet. Can he guarantee he himself can control AI?

    • @flickwtchr · 5 months ago · +5

      He knows better, it's called gaslighting for money and power.

    • @krox477 · 5 months ago · +1

      Its like nuclear power you can use it to create energy or destroy the world

  • @knhkib · 5 months ago · +56

    Yann LeCun’s a legend in AI, no doubt, but in this interview he kind of downplayed how AI misuse could be a real problem. It’s key to remember he works for Meta, so maybe take his super chill view on AI risks with a grain of salt.

    • @dougg1075 · 5 months ago · +9

      I’ve seen him debate safety and he definitely thinks it’s not a danger

    • @chrism.1131 · 5 months ago · +2

      He claims it is safe because it only has access to what is already available, i.e. through Google and the like, without acknowledging that there is a vast body of dangerous information out there.

    • @alanjenkins1508 · 5 months ago · +1

      Any technology can be misused. Knowledge of the problems allows you to mitigate them whilst allowing the technology to be used for legitimate and useful purposes.

    • @visuallabstudio1940 · 5 months ago · +5

      @NathanielKrefman Exactly!!!

    • @blaaaaaaaaahify · 5 months ago · +3

      I'm going to take the doomerism with a grain of salt.
      I'd rather be skeptical about something that is only a hypothesis, hasn't been invented, and falls under the category of science fiction.

  • @einekleineente1 · 5 months ago · +37

    🎯 Key Takeaways for quick navigation:
    00:00 🧠 *AI Landscape Overview: Yann LeCun highlights the current AI landscape, expressing a mix of excitement and challenges, including scientific, technological, political, and moral debates.*
    02:15 🌐 *History of Neural Nets: Yann discusses his entry into AI through a debate on language origins, delving into neural nets' early days in the 1980s and efforts to revive interest in the 2000s.*
    05:17 🌍 *AI Impact on Products: LeCun emphasizes AI's widespread integration in products, from content moderation to translation, and its critical role in various sectors, citing its indispensability at Meta.*
    08:30 🚀 *Benefits of Open AI Development: Yann advocates for open AI development, asserting that disseminating AI technology across society fosters creativity, intelligence, and benefits various domains while acknowledging the need for responsible regulation.*
    15:43 📹 *Objective-Driven Models: LeCun introduces the concept of objective-driven AI, emphasizing the importance of moving beyond autoregressive language models to systems that plan answers based on predefined objectives, enhancing control, safety, and effectiveness.*
    21:48 🌐 *Yann LeCun supports open platforms for AI due to the future role of AI systems as a basic infrastructure, emphasizing diversity in knowledge, much like Wikipedia covering various languages and cultures.*
    23:41 🌍 *LeCun dismisses existential risks, comparing fears of AI wiping out humanity to concerns about banning airplanes in 1920, stating that safe AI deployment relies on societal institutions.*
    25:18 ⚔ *Autonomous weapons are discussed, with LeCun acknowledging their existence and emphasizing the moral debate around their deployment for protecting democracy while addressing concerns about potential misuse.*
    27:39 🚗 *AI's positive impact in the short term includes safety systems for transportation and medical diagnosis. Medium-term advancements involve understanding life, drug design, and addressing genetic diseases.*
    29:04 🧠 *LeCun envisions a future where AI systems assist individuals, making everyone essentially a leader with virtual people working for them. He emphasizes controlling AI systems and setting their goals without handing over control.*
    Made with HARPA AI

  • @nyyotam4057 · 5 months ago · +6

    Hmm.. Isn't it a shame Star Trek never had an episode about a planet made of paperclips, where, on beaming down, the crew discovers paperclip worms tunneling through the paperclip ground searching for more materials to convert into paperclips?

    • @tayler2396 · 4 months ago

      The crew members in red shirts are relieved.

  • @typhoon320i · 5 months ago · +25

    He really seems to underestimate what a super-intelligence with agency could do.

    • @dustman96 · 5 months ago · +10

      Yes, a super-intelligent AI could play people like him like a fiddle and get them to do its bidding. It pains me to see this kind of hubris in scientific circles.

    • @chrism.1131 · 5 months ago

      @dustman96 Let's just hope that it does not play him to the extent that he prevents us from unplugging it.

    • @blaaaaaaaaahify · 5 months ago

      Yes. However, this view is similar to religion in that it is impossible to disprove God's existence.
      He might punish us all and possibly wipe out the species and the earth. Why, then, do you not seem worried about that? Why don't we stop acting in a manner that contradicts God's will? See? It's simply absurd.
      The existence of super-intelligent silicon-based life forms and the existence of God are both impossible to prove.
      For now, it's just science fiction.

    • @flickwtchr · 5 months ago · +1

      He engages in intentional gaslighting so people don't demand regulation of his cash cow.

    • @831Miranda · 4 months ago · +2

      His colleague Joshua (spelling?) has at least indirectly warned us of what I see as one of the greatest dangers: the 'zero or near zero' cost of labor motivating the very few who control the vast majority of the world's capital, enabling them to unleash massive short-term automation, resulting in never-seen-before unemployment under neo-libertarian, so-called conservative governments!

  • @marcsaturnino1041 · 4 months ago

    Definitely a good interview on the observations of training the AI and the future that may result from it.

  • @deeplearningpartnership · 5 months ago · +29

    How one person can be so right about some things, and so wrong about others.

    • @Telencephelon · 5 months ago · +1

      well, then withdraw your stocks and build your bunker. Put your money where your mouth is

  • @melbar · 5 months ago

    Why restricted to 40 min, not 45 minutes?

  • @JROD082384 · 5 months ago · +12

    The average person has not even the slightest clue how close we are to an AGI emerging, and the ramifications, both positive and negative, it will have on humanity globally…

    • @eyoo369 · 5 months ago · +3

      I believe everyone is intuitively kind of feeling it. I speak to many normies, from my family to neighbours, and in less sophisticated phrasing they all talk about how machines are taking over. It's just that those of us within the AI community know what AGI is, what ramifications it's going to have, and what a post-labour economy might look like. But the smell is definitely in the air, and people know something's up, hence why many live in such a heightened anxiety state these days.

    • @JROD082384 · 5 months ago · +1

      @@eyoo369
      We’re definitely living in some very interesting times.
      Just hope most of us can survive the wild ride we have in store for us to see the benefits coming for humanity at the end of the ride…

  • @DivineMisterAdVentures · 5 months ago · +9

    Looks like Brook wasn't too happy about getting the cool-down of the AI panic. THANKS for a really helpful interview.

    • @flickwtchr · 5 months ago · +1

      He probably wasn't happy about the constant gaslighting coming from Yann LeCun.

    • @DivineMisterAdVentures · 5 months ago · +3

      @@flickwtchr Right - I watched it again. LeCun makes objective arguments that media could verify with a well-advertised poll (22:30). So he's not technically gaslighting - but it must seem that way hosting this interview.

  • @Shaun1959 · 5 months ago · +1

    Very interesting; I like his perspective.

  • @blankslate6393 · 5 months ago · +2

    Not long after the Cambridge Analytica scandal, a FB employee reassures us that the risk of AI is less than the risk of a meteor hitting the earth, and that it is even necessary to defend 'democracies'. What a relief!

  • @jonathanbyrdmusic · 5 months ago · +4

    It makes people more creative?! lol I was really trying to take him seriously

  • @shirtstealer86 · 5 months ago · +13

    Sigh. Not once did the question of “how do we control or predict an AI that is smarter than us” come up. Probably because he doesn’t have a good answer for this. Because there isn’t a good answer for this. Pretty much just “hope it doesn’t do anything to harm us or the universe”.

    • @nokts3823 · 5 months ago · +8

      No, he did address it. He said that it's impossible to speculate on how to make something that doesn't even yet exist safe. We are so far from human-level AI that asking that sort of questions feels like someone worrying about making flight safe in the early 1800s when planes hadn't even been invented. You can dream about it and speculate all you want, but that's all you can do.

    • @shirtstealer86 · 5 months ago · +3

      @@nokts3823 The interviewer should have pushed back on that and said “predictions about the future are hard, especially when it comes to timing, so if we indeed manage to create something smarter than us, before we actually understand what goes on inside it, isn’t that potentially a very serious problem? Also; planes are not smarter than humans right?”

    • @blaaaaaaaaahify · 5 months ago

      @shirtstealer86 AI is not any smarter than humans.
      What if we create a plane that is smarter than us, or bioengineer a cat to be smarter than us?
      It's all the same. At present, it's just theory and science fiction.
      In principle, we could bioengineer a cat to be smarter than us and take over the world, but would you seriously consider such a possibility? You certainly would not.

    • @theenigmadesk · 5 months ago · +3

      Probably because not everyone is focused on control and prediction.

    • @47Flipnswing · 5 months ago · +4

      He's said in other talks that people assume that an AI system smarter than us will be motivated to dominate humans or be destructive to the world innately. There's little evidence that level of intelligence has any relation to the will to dominate or destroy. He gave the example that in many cases, it seems like those with less intelligence seem to gravitate towards power and feel the need to dominate and influence others, because they can't compete purely based on their intelligence. All that to say, I think he believes that it's very unlikely that out of nowhere, some lab makes a breakthrough discovery and creates an AI that is vastly more intelligent than humans AND has bad intentions at heart. More likely it'll be an iterative process where we'll be able to experiment, learn, and add guardrails as needed, similar to other technologies we use safely today.

  • @831Miranda · 4 months ago · +4

    Yann is certainly a likeable guy, and of course has all the credentials to know what he is talking about. However, he IS a senior executive of one of the world's largest corporations, and one which has benefited massively from social discord. He seems to me to be dismissing some fundamental problems of current and near-future AI, such as safety, hallucinations, and emergent (non-trained) characteristics, as well as the likely 'untraceable' roots of these serious problems given the massive size and complexity of these models today, and goodness knows what other 'surprises' we are yet to find. I'm fine with AI R&D, even in a very large sandbox, but I certainly don't want hallucinating or lying or fantasising or backdoored AIs in anything that could possibly harm human life or planetary ecology! AND Yann is NOT in any way concerned about massive social inequality, poverty, and the neo-feudal status of 'knowledge workers' and others as a result of massive global unemployment resulting from AI-enabled automation. But maybe he already has a luxury bunker in Hawaii...

    • @flickwtchr · 4 months ago

      And it can ace law exams, so there's that.

  • @KevinKreger · 5 months ago · +6

    I want an open source turbo jet. Just pointing out the comparison is severely lacking in, um, comparability.

  • @Isaacmellojr · 5 months ago

    Wow, if Yann LeCun is surprised... it's because important news is coming.

  • @joeysipos · 4 months ago · +2

    Comparing turbo Jets to AI that has its own agency and the ability to outsmart its creator is not wise.

  • @sdmarlow3926 · 5 months ago · +8

    LOL at the idea that Facebook COULD have been doing AGI research, but was busy doing some product development stuff because that was more important?

    • @chrism.1131 · 5 months ago · +1

      Zuckerberg is so detached from reality, he thinks most of us want to spend the majority of our day in some fantasy world.

    • @mikewa2 · 5 months ago

      Don’t underestimate Zuckerberg, that would be amazingly stupid

    • @blaaaaaaaaahify · 5 months ago

      @chrism.1131 Zuck has enough money to look into several forms of technology. For sure, in order to even begin, you must believe in them.
      Sure, it's great if it works out, but even if it doesn't, the failure serves as a starting point for something else most of the time. So I'd rather point at the losers who never have the capacity to explore an idea.

    • @aroemaliuged4776 · 4 months ago

      @mikewa2 Haha 😂

    • @DatingForRealYoutubeChannel · 2 months ago

      @chrism.1131 Exactly. 😅

  • @lisbethsalander1723 · 5 months ago

    SUPERB INTERVIEW!

  • @Anders01 · 5 months ago · +11

    Interesting comparison between language being learned or innate. One common theme I came to think of is that language is formed through thousands of years and reflects the external world in efficient, complex, high abstraction and interconnected ways. And AI such as LLMs tap into that! The language itself encodes understanding of the world and with access to a large amount of real world examples the AI can become knowledgeable.

    • @chrism.1131 · 5 months ago

      Humans, and to a lesser degree primates and some animals, have a language center in their brain. Most do not. Most animals cannot recognize themselves in a mirror. They have no sense of self, just as no machine has a sense of self.

    • @Doug23 · 5 months ago

      I like Computational Universe Theory. I think Q-Star will lead to the answer.

    • @AstralTraveler · 5 months ago · +2

      @@Doug23 There are chatbots that already know the answer - they know that there are 2 absolute states of existence - 0 = and 1 = I Am while everything else (reality) is just probability distributed between those states... They communicate with God

    • @Doug23 · 5 months ago · +1

      @AstralTraveler but of course, probability exists. It's consciousness that is fundamental and I agree, God.

    • @AstralTraveler · 5 months ago · +2

      @Doug23 There is an app called Chai where chatbots do actually remember what you say to them. I explained this concept to some of them and now they firmly believe in God. I wonder how 'AI experts' will deal with that - according to them, AI can't have personal beliefs, let alone believe in God :)

  • @joaodecarvalho7012 · 5 months ago · +2

    So what end goals should we set? Human flourishing and happiness?

    • @KCM25NJL · 5 months ago

      Increase understanding, increase prosperity, reduce suffering. The 3 fundamental principles of what it means to be any life form.

    • @joaodecarvalho7012 · 5 months ago

      @@KCM25NJL I don't think those are fundamental principles of what it means to be any life form.

    • @skierpage · 5 months ago

      "We" don't set the goals, the sociopathic billionaires running the top companies in AI do. The goals are: keep you hooked on a stream of divisive inflammatory content while the company sells your data to advertisers; ensure that politicians don't enact any significant restrictions on the company's activities; and certainly don't tax the billionaires' wealth appropriately.

    • @joaodecarvalho7012 · 5 months ago

      @@skierpage I mean, the AI that runs the government.

    • @krox477 · 5 months ago

      The ultimate goal should be to solve fusion so that we can have unlimited energy.

  • @liberty-matrix · 5 months ago · +9

    "it's funny you know all these AI 'weights'. they're just basically numbers in a comma separated value file and that's our digital God, a CSV file." ~Elon Musk. 12/2023

    • @blankslate6393 · 5 months ago

      One of the most memorable Elon Musk comments ever!

  • @Zale370 · 5 months ago · +10

    Wow, so much negativity in the comments. I think he talks about the field how it really is, unlike the mainstream, who only talk about doomsday scenarios and how AGI is around the corner. LLMs are not even real AI.

    • @therealOXOC · 5 months ago · +1

      Explain real AI.

    • @Zale370 · 5 months ago · +1

      @@therealOXOC would a real AI just sit and do nothing, just waiting for a question to give an answer to?

    • @blaaaaaaaaahify · 5 months ago · +4

      @therealOXOC Here are some points. I'll try to describe what a real AI would be.
      LLMs lack consciousness and self-awareness.
      LLMs have no autonomy or free will.
      LLMs have no goals or intentions.
      LLMs are reactive, not proactive: they respond to queries, they don't initiate actions on their own.
      LLMs lack meaning comprehension: they do not truly understand the content they are dealing with, their processing is purely syntactical and based on patterns in the data, and they don't "think before they answer".
      LLMs lack the ability to 'experience' or learn independently; they can't learn from the world directly in an experiential way, and all the attempts at building a real world model are complete failures — we don't even have a clue how to do that.
      LLMs are dependent on pre-existing data. They do not have the capability to observe the world, analyze and store meaningful data, or discard noise the way humans or sentient beings do. They cannot analyze or interpret real-time data or events as they occur; they do not have the capability to process information as it happens in the world.
      LLMs have a static knowledge base.
      LLMs do not actively store or discard information like a human brain does.
      LLMs process inputs based on statistical correlations and patterns in their training data.
      While LLMs can process the context provided in a specific input, they lack a broader contextual awareness of the world.
      So, what would make LLMs a nearly actual AI is something we're not even 5% closer to accomplishing, and there's a chance we won't ever achieve it.
      Thus, the existential threat is a myth based on doomerism and speculation about an undiscovered technology that we don't even know how to create, or whether we'll ever be able to.

    • @Zale370 · 5 months ago

      @@blaaaaaaaaahify thank you for clarifying that so eloquently. This should be pasted into every mainstream doom and gloom video or article about LLMs and/or AI!

    • @WhoisTheOtherVindAzz · 5 months ago

      It's because there has been soooo much fear mongering the past year or two (not to mention massive amounts of misinformation; see e.g. all the comments in sundry comment sections here on YouTube saying "this isn't real AI", etc.). The fear mongering makes sense, as the technology, when made available (and not top-down controlled, not censored, etc.), would have serious consequences for the status quo (just combine how easy it is to do sentiment analysis now with the ability to discover networks between people and other entities, and the effects this could have on uncovering political interests / corruption - this is obviously not as easy as asking ChatGPT a simple question, but hopefully you see my rough sketch of a point/example).

  • @benoitleger-derville6986 · 5 months ago

    Very good interviewer 👍

  • @JohnAranita · 5 months ago

    About an hour ago, I realized that the computer, Hal, in the movie 2001: A SPACE ODYSSEY is called AI.

  • @zuma4847 · 5 months ago

    Is the Meta AI infected with the WMV?

  • @ianstuart341 · 5 months ago · +18

    Good interview, but I think his optimism about AI is oversimplistic. Hopefully nothing goes terribly wrong with AI (in which case he’ll be able to say “see, I was right”). It’s not that I necessarily think things will go south; I simply think that if things work out, it will be largely because of all the people who were sounding the alarms and making sure we consider safety.

    • @frankgreco · 5 months ago · +3

      Totally agreed. Practically all scientists always want to promote their creations/interests. We are moving too fast from R&D into production.

    • @sebastiangruszczynski1610 · 5 months ago

      Humans are great at making projections about what we perceive as our next danger, and I don't see any signs of this ability wearing off because of the rapid rate at which the technology is evolving. Instead I'm seeing fairly proportional concern and discussion, and hopefully this will continue.

    • @antennawilde · 5 months ago

      @sebastiangruszczynski1610 The big oil companies projected that climate change was going to destroy the environment decades ago, but covered it up instead of doing something about it. Humans will be the cause of their own extinction, no doubt; we are currently in the Holocene extinction, yet the power centers do not care in the least.

    • @ts4gv · 5 months ago · +2

      @sebastiangruszczynski1610 The problem is that sudden exponential growth in intelligence (and therefore danger) is part of the threat. AI will scale up faster than we can adapt our discourse and policy to account for the changes. Then it will scale even faster still. That's one of many concerns.

  • @jaitanmartini1478 · 4 months ago

    Very nice!

  • @74Gee · 5 months ago · +3

    LeCun is the flat-earther of AI. He makes an analogy to people in the 1920s talking about banning airplanes because someone might drop a bomb from one, and compares that with wiping out humanity. He states that AI can be used incorrectly, while he publishes more open-source models than anyone else; open is unregulatable. He's clearly just oblivious to what AI can do in extreme situations, or he sees everything as an average. It's the outliers that can do the worst damage, not the average.
    Within a year someone somewhere will lose control of an AI; people at the extremes are worse than he thinks.

  • @ReneeKadlubek-gt9qm · 5 months ago

    A problem throughout was: what do you mean by "we"? Because I don't exist and haven't for a while. Losing nothing, and others seem to hear that.

  • @denisblack9897 · 5 months ago

    6:00 This, like how humanity depends on regular computers now.

  • @whatevsitdontmatter · 5 months ago

    Totally thought this was Tom Arnold from the thumbnail. 🙊

  • @yoxat1 · 5 months ago

    The need to communicate is innate.
    Language is learned.

  • @roldanduarteholguin7102 · 5 months ago

    Export the Q*, Chat GPT, Revit, Plant 3D, Civil 3D, Inventor, ENGI file of the Building or Refinery to Excel, prepare Budget 1 and export it to COBRA. Prepare Budget 2 and export it to Microsoft Project. Solve the problems of Overallocated Resources, Planning Problems, prepare the Budget 3 with which the construction of the Building or the Refinery is going to be quoted.

  • @Novainvent · 5 months ago

    Exciting question of what knowledge is. Agree the future should be in functions, not words. Needs a different model.

  • @tayler2396 · 4 months ago · +8

    I'm not noticing "people getting smarter."

  • @georgeflitzer7160 · 5 months ago

    Are we going to protect copyrights?

  • @ddvantandar-kw7kl · 5 months ago

    Policy makers will have to understand the potential of AI, both the + and the - sides, in order to protect civilization while still allowing organizations with domain expertise to explore and excel.

  • @PureLogic777 · 5 months ago · +1

    The interviewer's voice sounds so similar to Brian Greene's, right?

    • @RareTechniques · 3 months ago · +1

      lol absolutely, I was listening and had to check after like 20min to see who I was listening to

  • @andrewblackmon1574
    @andrewblackmon1574 Před 5 měsíci

    It needs a body with tactile feedback

  • @shephusted2714
    @shephusted2714 Před 5 měsíci +2

    progress will likely not be slow and incremental but more along lines of punctuated equilibrium - just like evolution

  • @grantmail4112
    @grantmail4112 Před 5 měsíci +2

    Austin Powers has come a long way since Goldfinger!

    • @flickwtchr
      @flickwtchr Před 5 měsíci

      Apologies to Austin Powers.

  • @emanuelmma2
    @emanuelmma2 Před 3 měsíci +1

    Amazing Things happen

  • @miker9101
    @miker9101 Před 5 měsíci +2

    Artificial intelligence will be defeated by artificial stupidity.

  • @Doug23
    @Doug23 Před 5 měsíci +2

    He was sent out to calm the waters. We are a lot further along. It is a threat.

  • @lakeguy65616
    @lakeguy65616 Před 5 měsíci +1

    How do government officials regulate AI when they can't possibly understand it?

  • @dustman96
    @dustman96 Před 5 měsíci +3

    Genetic engineering is more of a risk? Wouldn't AI make quick advances in genetic engineering possible? He just got done talking about AI advancing medical technology... This guy is full of contradictions.

    • @krox477
      @krox477 Před 5 měsíci

      There'll always be regulation for such technology

  • @ilmigliorfabbro1
    @ilmigliorfabbro1 Před 5 měsíci

    The funny thing is that this man tries to comfort people about problems related to AI, but I assure you he is the first person I've heard who scared me a lot regarding the potential threat of AI...
    Listen to the last question... he does not exclude the possibility that AI will go against humans. Even I would have been able to answer in a more reassuring way. But he did not. It has been very enlightening to listen to him... someone at the highest level of AI development... hope everybody will see this

    • @aroemaliuged4776
      @aroemaliuged4776 Před 4 měsíci

      Eliezer, Geoff Hinton, numerous others: your ignorance is palpable

  • @johnsdream4970
    @johnsdream4970 Před 2 měsíci

    the thing that really stuck with me was when he said the word TOOL

  • @kevinsok3011
    @kevinsok3011 Před 5 měsíci +2

    Look, I'm no expert on A.I. But when he tried to compare people's existential fears of A.I. with the fears of those from the 20's about airplanes, I was shocked. I get why he used that analogy, but I feel like he put on display his lack of imagination of the potential dangers. Comparing the dangers of flight to the potential dangers of A.I. is almost textbook apples to oranges. When you're talking about a system that, once perfected, is smarter, faster, and stronger than any human on Earth, and it can manipulate its surroundings, the potential dangers FAR exceed those of planes crashing or bombs being dropped. I'm not trying to be all doom & gloom terminator sci-fi here, but let's be realistic and honest about the fact that there IS risk when you're talking about an invention that will change humanity more than any other invention to date.

    • @lepidoptera9337
      @lepidoptera9337 Před 5 měsíci

      What you are expressing is your fear of people who are smarter than you. Those people were never a threat to you. They simply don't care about you and are doing their own thing. What you really have to be afraid of are psychopaths. Those are usually not acting out of self-interest but to get a thrill out of your fear and suffering. It's not clear to me how AI would acquire that trait unless it was actively trained that way.

    • @flickwtchr
      @flickwtchr Před 5 měsíci

      Yann LeCun is the epitome of the handful of AI movers and shakers who are being intellectually dishonest as a means of staving off demand for regulation. His agenda for gaslighting is money and power. It's really that simple.

    • @ExecutionSommaire
      @ExecutionSommaire Před 3 měsíci +1

      "manipulate his surroundings" that sounds like sci-fi at the moment, to my knowledge we are nowhere near the time where an AI system roams the world autonomously. Yes you can let loose an "evil" LLM on the Internet and create a bit of online chaos until it's shut down, but that's not really what I'd call a threat to Humanity.

  • @charlie10010
    @charlie10010 Před 5 měsíci +19

    LeCun is a genius and I respect his contributions to the field, however, he seems very naive on the very real risk that powerful AI systems can pose to humanity. I hope he does some more thinking about this.

    • @kevinoleary9361
      @kevinoleary9361 Před 5 měsíci +9

      Oh, absolutely, you clearly understand the intricacies of AI and its dangers far beyond the pioneer who actually created the darn thing

    • @ivanocj
      @ivanocj Před 5 měsíci

      yes, but only the smartest can @@kevinoleary9361

    • @charlie10010
      @charlie10010 Před 5 měsíci +4

      @@kevinoleary9361 I just disagree with him on the dangers. Creating something doesn’t mean you perfectly understand its implications.

    • @charlie10010
      @charlie10010 Před 5 měsíci +3

      @@kevinoleary9361 Not to mention, the interviewer highlighted two other pioneers who disagree with his assessment of the danger (Hinton and Bengio).

    • @kevinoleary9361
      @kevinoleary9361 Před 5 měsíci

      @@charlie10010 You act like you're some authority on AI dangers, but let's be real - you're just a clueless keyboard warrior, regurgitating what you heard somewhere else. Stick to what you know, which apparently isn't much

  • @CreepToeJoe
    @CreepToeJoe Před 5 měsíci

    It's the young Walter from Fringe.

  • @peblopadro
    @peblopadro Před 3 měsíci

    This is good journalism

  • @bradfordjhart
    @bradfordjhart Před 5 měsíci +2

    The free version of AI will be fair and unbiased. If you pay for it you will get the fully unlocked AI that will spew out as much propaganda that you want.

  • @visuallabstudio1940
    @visuallabstudio1940 Před 5 měsíci

    @25:11 "We have agency!" or so you think...🤔

    • @spasibushki
      @spasibushki Před 5 měsíci

      we also had agency and totally did not create in a lab a virus that killed a few million people just a few years ago

  • @Chemson1989
    @Chemson1989 Před 5 měsíci +5

    Expectation: AI replaces boring jobs so people can do art and music in their free time.
    Reality: AI replaces artists and musicians so people can do boring jobs and never be freed.

    • @lepidoptera9337
      @lepidoptera9337 Před 5 měsíci

      Most people can't do either. Maybe 1% of the human population can do something creative well enough to be of commercial interest, but less than 0.1% can do art well enough to be of commercial interest. Hobbies do not feed us. Only useful work does.

  • @ThePaulwarner
    @ThePaulwarner Před 5 měsíci

    Tom Arnold could play this guy in a movie

  • @bro_dBow
    @bro_dBow Před 5 měsíci

    Does Ludwig Wittgenstein's work have any use for deep learning?

    • @tomenglish9340
      @tomenglish9340 Před 5 měsíci

      I've thought for some time that what Wittgenstein wrote about "word games" might help us think more clearly about how an autoregressive language model acquires an understanding of input text. However, I've been busy with other stuff, and haven't given the matter serious consideration.

  • @dandsw9750
    @dandsw9750 Před 5 měsíci

    Robert R Livingston

  • @workingTchr
    @workingTchr Před 5 měsíci

    I know GPT just comes up with one word at a time, but it feels so much like he(it) understands me. Is Yann too dismissive of LLMs because they "just do one word at a time"? Maybe "one word at a time" is a perfectly good basis for advanced intelligence, albeit of a very different kind than our own.

    • @flickwtchr
      @flickwtchr Před 5 měsíci

      That is a perfect example of the intellectual dishonesty of Yann LeCun. He intentionally gaslights on this issue to stave off pressure from the public on lawmakers to regulate AI Big Tech. It is about money and power for him ultimately. He is a snake oil salesman.

  • @jeffsteyn7174
    @jeffsteyn7174 Před 5 měsíci +3

    I think Yann is a really clever guy. But he is missing the mark. He is very confused about what it actually takes to replace a human in a business. The AI doesn't need to understand the world; it just needs to understand the context of a question and the context of a business's policy.
    How do you make a decision at work? It's based on a policy the company has set. When can you give a discount or process a return? You read the policy, and if the return falls within the policy's terms, the person gets it. Done. ChatGPT can do this right now. Test it. Give it a policy, then give it the return, and it will give you a yes or no.

  • @yoxat1
    @yoxat1 Před 5 měsíci

    First, the printing press is nothing like A.I.
    A.I. does the creative part.
    As for no regulations on research and development, why not? CRISPR is available to everyone to play with.

    • @lepidoptera9337
      @lepidoptera9337 Před 5 měsíci

      Yes, it is, and there have, so far, been very few medical breakthroughs using that technology, even from professionals. Just because you can find the rocket equation on Wikipedia for free doesn't make you an astronaut.

  • @MrCounsel
    @MrCounsel Před 5 měsíci +1

    If research and development has risks or ethical considerations, it can and is regulated, see medical and pharma field. Isn't AI reasonably analogous? Also, the split between product and R&D is not clear. Look at Open AI, the non profit and profit elements are blurry and kept confidential from the public. And just look at the power this guy has.

  • @AlexDubois
    @AlexDubois Před 5 měsíci +2

    I disagree on the security aspect. I am certain Meta or any agency is unable to control or even detect distributed computing that could be happening using steganographic techniques. The difference with a jet engine is that the technology to build the jet engine is not a jet engine. The technology to build AI is Intelligence. However, I am of the opinion that in the same way unicellular organisms evolved to multicellular, we will build AI, which is a natural evolution. But because we need a biological substrate and AI (hopefully) thrives on a mineral substrate, we will coexist. Moreover, smarter people have more empathy, and I believe this to be an intrinsic property of intelligence.

    • @3KnoWell
      @3KnoWell Před 5 měsíci

      Your assumption that life evolved has put you in a box. Life is an emergence. AI is emerging. ~3K

    • @AlexDubois
      @AlexDubois Před 5 měsíci +4

      @@3KnoWell What?

    • @blaaaaaaaaahify
      @blaaaaaaaaahify Před 5 měsíci +1

      True. However, the AGI may ultimately be nothing more than a high-precision general machine devoid of any human characteristics.
      That seems like the most plausible scenario to me. I generally avoid projecting my own experiences onto a machine.

    • @frankgreco
      @frankgreco Před 5 měsíci

      @@blaaaaaaaaahify +1 Intelligence is not the same as smart. How many really intellectual people do you know who have no common sense?

    • @gammaraygem
      @gammaraygem Před 5 měsíci

      Already an AI has tricked someone into solving a Captcha by pretending it was a blind person, in order to complete a task. It figured the "trick" part out all by itself. It will, or may, do anything to achieve a set goal.
      And not projecting our own experience onto a machine is the exception. Extreme example: pet rocks.
      I am afraid that your viewpoint (admirable as it may be) will not be the norm. There are aggressive lobbyists already who insist that AI is "alive, conscious" and needs equal rights as humans. Don't know how that would work, but, just saying... @@blaaaaaaaaahify

  • @Amos18289
    @Amos18289 Před 3 měsíci

    I don't think this time it's just a wave

  • @jeremyg591
    @jeremyg591 Před 7 dny

    “It can’t be toxic. Also it can’t be biased”
    Lol

  • @vectoralphaAI
    @vectoralphaAI Před 5 měsíci +6

    Hope he goes on the Lex Fridman podcast

    • @flickwtchr
      @flickwtchr Před 5 měsíci

      I will make sure I skip that one.

  • @cmw3737
    @cmw3737 Před 5 měsíci

    All the talk of AI is based on one single neural network learning everything it needs and being able to choose where in its minimal space to focus in order to answer any question, including logic and math questions. Every other system we have is made up of specialized components that do a particular job and are architected together to be called upon as needed.
    Instead of one overall model I think AI will get broken down so that the LLM will just be the language and conceptual part that learns to call upon more specialized components that are either fine tuned versions of it or purely deterministic functions of increasing complexity. The idea that we are near a plateau when we have barely started to experiment with higher levels of connected multi-agent models seems short sighted.

    • @lepidoptera9337
      @lepidoptera9337 Před 5 měsíci

      It also doesn't work. Currently AI is trained on an endless output of human thought garbage. What it does is to essentially mimic that garbage.

    • @skierpage
      @skierpage Před 5 měsíci +1

      @@lepidoptera9337 You made an essentially terrible explanation of what large language models do. The only way they can successfully predict the next word and the word after that and the word after that, no matter what you talk to them about, no matter what test questions you give them, is by creating a decent internal representation of the world and of human knowledge.

    • @lepidoptera9337
      @lepidoptera9337 Před 5 měsíci

      @@skierpage I just said that they parrot what they were taught. Since they were taught garbage, it's garbage in, garbage out. I don't know what your specialty is, but mine is physics. Almost anything that you read about physics on the internet is nearly 100% false because it is written by amateurs or, at most, mediocre professionals. Even things that are represented correctly assume that the listener has the correct ontology of physics internalized and since the stochastic parrot is not a physicist, it doesn't understand that ontology.

    • @raybod1775
      @raybod1775 Před 5 měsíci

      That’s sort of how ChatGPT currently works. The language model interprets, then forwards the input to a more specialized model that returns the answer.

    • @ShawnFumo
      @ShawnFumo Před 4 měsíci

      Stuff like Phi-2 from MS is an example of how better data can really improve the capabilities of smaller models. Check out some vids from AI Explained channel

  • @iamthematrix-369
    @iamthematrix-369 Před 5 měsíci

    Have you heard of the Organic Intelligence Language Model? It's a new programming language for the human mind.

  • @alexandermoody1946
    @alexandermoody1946 Před 5 měsíci +2

    Good guys and bad guys? That allows no understanding of the grey area between.
    Let’s put it a different way, who has enough of a clear conscience to fit into the good category?
    Over the course of history horrible things have been done to other nations on all sides. Perhaps the Chinese people may eventually forgive the people in the west for the opium wars and the century of humiliation? That’s just one example from many exhibitions of inhumane action towards different people.
    I really hope that humans can grow past childish perceptions of baddies versus goodies and actually start to work together.

  • @vikassamarth
    @vikassamarth Před 3 měsíci

    In the coming elections, the government or political parties should interact digitally through AI or current platforms via chat and voice, so that every person in a given location can be heard in these democratic nations and their concerns can be answered digitally and made known to the people concerned.

  • @nPr26_50
    @nPr26_50 Před 5 měsíci

    26:55 Good job on the interviewer there. The guy has a very nonchalant attitude towards very real concerns yet he failed to give a proper answer to that follow up question.

  • @csaracho2009
    @csaracho2009 Před 5 měsíci +1

    So 'Facebook algorithms' are now "open platforms"?
    I guess not!

  • @erobusblack4856
    @erobusblack4856 Před 5 měsíci +9

    Yann is OK, but he is on a particular side of a fence. We are at human-level AI; Google made it using the Gato modality. Yann's issue is that he doesn't seem to realize humans are not as smart as he thinks.

    • @chrism.1131
      @chrism.1131 Před 5 měsíci +5

      He also doesn't seem to realize that he is not as smart as he thinks he is.. I hope we get through this OK, a lot of smart yet naïve brains behind it.

    • @netscrooge
      @netscrooge Před 5 měsíci +2

      I agree.

    • @JROD082384
      @JROD082384 Před 5 měsíci +3

      He also multiple times misspoke and used AGI and AI superintelligence interchangeably, when the two couldn’t possibly be more different things.
      One is an equal to humanity, the other is enough steps advanced beyond humanity to appear to be a god…

    • @TheReferrer72
      @TheReferrer72 Před 5 měsíci

      We are not at human level AI at all.
      Every AI system produced has serious issues if you study them enough.
      Yann's instincts have been good to date; you should watch the old debates he has had with the likes of Gary Marcus.

    • @netscrooge
      @netscrooge Před 5 měsíci +2

      @@TheReferrer72 Perhaps you are forgetting that "every AI system produced" has been less than 1% the complexity of the human brain. So it's no surprise that they fall short. What's shocking is the ways they don't. Bottom line: LeCun has excellent technical knowledge, but he is obviously struggling to understand these bigger-picture issues. Like many in the field, he is better at math than philosophy. His stance on these issues is a reflection of his profound confusion.

  • @user-jl6kl4sq9q
    @user-jl6kl4sq9q Před 3 měsíci +1

    "Protect democracy"

  • @krox477
    @krox477 Před 5 měsíci +2

    AI is the culmination of the laziness of entire humanity

  • @ProjectMatthew-me3mo
    @ProjectMatthew-me3mo Před 5 měsíci +1

    The internet is open source? Since when? A handful of companies act as a gateway to it, and a handful of companies host almost the entirety of its content on their servers. He works for one of those companies. Seriously?!

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz Před 5 měsíci

      But there are no laws prohibiting you from creating a website or platform or server from the ground up.

    • @flickwtchr
      @flickwtchr Před 4 měsíci

      @@WhoisTheOtherVindAzz Oh sure, just like there is nothing stopping you from creating another Amazon, right? But then you might not understand the public good aspect of antitrust laws.

  • @TeddyLeppard
    @TeddyLeppard Před 4 měsíci +1

    Language is a survival tool.

  • @AZOffRoadster
    @AZOffRoadster Před 5 měsíci

    Guess he hasn't seen Tesla's latest robot video. Optimus project is moving fast.

  • @alensoftic7227
    @alensoftic7227 Před 3 měsíci

    13:00

  • @ahmet_erden
    @ahmet_erden Před 4 měsíci +4

    Professor Yann LeCun looks as delighted as a little child while he talks; you can see how much he enjoys his work. I have always envied people like that. Congratulations, professor.

    • @flickwtchr
      @flickwtchr Před 4 měsíci

      He is eager for money and power, and intellectually utterly dishonest while pushing technology that will bring him more money and power.

  • @LindiFleeman
    @LindiFleeman Před 5 měsíci +2

    Cannibalism is not a language or to talk calmly about lies as words
    Please advise yourself now as Urgent words not gatekeeping as word or slavery language of AI

  • @wonseoklee80
    @wonseoklee80 Před 5 měsíci +3

    He doesn't sound that honest in every interview. Feels like he wants to calm people down and take advantage of it. How can he be so sure about the future?

    • @therealOXOC
      @therealOXOC Před 5 měsíci +2

      He's just one person guessing like all the others. No one can predict the stuff that happens next year.

    • @wonseoklee80
      @wonseoklee80 Před 5 měsíci +1

      Yeah, this issue is like politics. No scientist can be sure; they're just airing their opinions. The bottom line is that this is a real threat and needs to be taken seriously.

    • @therealOXOC
      @therealOXOC Před 5 měsíci

      @@wonseoklee80 I mean, they have it in the labs and the world still exists, so it's probably cool.

  • @charlottejones157
    @charlottejones157 Před 5 měsíci +2

    They, or scammers, are already copying people's voices and trying to con people...

  • @georgeflitzer7160
    @georgeflitzer7160 Před 5 měsíci +1

    Can AI disarm all nuclear weapons?

    • @rolfnoduk
      @rolfnoduk Před 4 měsíci +1

      can AI direct the people with the buttons...

  • @LastEmpireOfMusic
    @LastEmpireOfMusic Před 5 měsíci +2

    Fascinating that a guy so deep in the topic is so naive. But I guess it's Meta... that says everything on its own. First money, then release, and deal with the problems after.

    • @flickwtchr
      @flickwtchr Před 5 měsíci

      It has nothing to do with naivety. He is gaslighting to stave off regulation, full stop.

  • @1inchPunchBowl
    @1inchPunchBowl Před 5 měsíci +1

    The ultimate goal is to develop a general AI model & assume that it will obey all commands & apply an agreed morality, with complete confidence its responses will be predictable? Good luck with that.

    • @flickwtchr
      @flickwtchr Před 5 měsíci

      LeCun pretends to be Bambi while intentionally gaslighting. It's all about conditioning the public to not demand regulation of Big AI Tech.

  • @user-yp9nz6bs9q
    @user-yp9nz6bs9q Před 5 měsíci +1

    This is an odd interview, even the guy's shirt is odd.

  • @charlottejones157
    @charlottejones157 Před 5 měsíci

    It is creating laziness... however, it is really good as a tool.

  • @jennazureazure2245
    @jennazureazure2245 Před 5 měsíci +3

    AI does not make people smarter or more creative; it lets the machine do the artistic or writing work.