Mindscape 272 | Leslie Valiant on Learning and Educability in Computers and People

  • Added 14. 04. 2024
  • Patreon: / seanmcarroll
    Blog post with audio player, show notes, and transcript: www.preposterousuniverse.com/...
    Science is enabled by the fact that the natural world exhibits predictability and regularity, at least to some extent. Scientists collect data about what happens in the world, then try to suggest "laws" that capture many phenomena in simple rules. A small irony is that, while we are looking for nice compact rules, there aren't really nice compact rules about how to go about doing that. Today's guest, Leslie Valiant, has been a pioneer in understanding how computers can and do learn things about the world. And in his new book, The Importance of Being Educable, he pinpoints this ability to learn new things as the crucial feature that distinguishes us as human beings. We talk about where that capability came from and what its role is as artificial intelligence becomes ever more prevalent.
    Leslie Valiant received his Ph.D. in computer science from Warwick University. He is currently the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University. He has been awarded a Guggenheim Fellowship, the Knuth Prize, and the Turing Award, and he is a member of the National Academy of Sciences as well as a Fellow of the Royal Society and the American Association for the Advancement of Science. He is the pioneer of "Probably Approximately Correct" learning, which he wrote about in a book of the same name.
    Mindscape Podcast playlist: • Mindscape Podcast
    Sean Carroll channel: / seancarroll
    #podcast #ideas #science #philosophy #culture
  • Science & Technology

Comments • 30

  • @MayaChose-eu5fj
    @MayaChose-eu5fj 2 months ago +1

    Don't fluid intelligence tests aim to measure / get at educability?

  • @JonDurand-xu6py
    @JonDurand-xu6py 2 months ago +4

    I don't know if he'd even be interested in being on the Mindscape podcast, but I request that you try to get Robert Harper as a guest to talk about Computational Trinitarianism.

    • @JH-pl8ih
      @JH-pl8ih 2 months ago +3

      Robert Harper would be great. While we're recommending computer scientist guests: Shafi Goldwasser (on the wider implications of cryptography) and Yoshua Bengio (on developing higher forms of reasoning within the connectionist paradigm).

    • @seionne85
      @seionne85 2 months ago

      First time hearing about computational trinitarianism, but it sounds like a digital proof of Christianity 😂

    • @JonDurand-xu6py
      @JonDurand-xu6py 2 months ago +1

      @@seionne85 Lol, it's a funny name for sure. It's his way of pointing out that something special is going on at the intersection of logic's proof theory, computer science's type theory, and mathematics' category theory.

    • @seionne85
      @seionne85 2 months ago

      @@JonDurand-xu6py That sounds very interesting, thank you for the new rabbit hole lol!

    • @JonDurand-xu6py
      @JonDurand-xu6py 2 months ago

      @@seionne85 He has lectures on YouTube; search for "Robert Harper Type Theory".

  • @gtziavelis
    @gtziavelis 2 months ago

    In the context of the idea that consciousness is not a computational process, we will never have AGI (artificial general intelligence). It would have had to co-evolve with us throughout past history up until now, which implies that building a time machine would be easier.

    • @PNNYRFACE
      @PNNYRFACE 2 months ago

      Big dong and prosper

    • @trevorcrowley5748
      @trevorcrowley5748 2 months ago

      My interpretation from the talk is that we do not know how consciousness works or agree on how to measure intelligence, but we do know that the human genome has not changed appreciably in 300k years. This implies that recent exponential human developments may be due to the gradual accumulation of knowledge through learning until certain tipping points are reached. While current AI learning by example is useful, it is not until it can logically chain different methods together and then communicate / bootstrap them to future agents that we will be on the exponential path toward general intelligence. (I'd be curious to know whether emotion and embodiment are also factors in educability.) I agree that this is difficult, and that evolution took 6M years to guide us from chimps to humans. We are already in a time machine moving one second per second into the future -- let's check back in about 20 years.

  • @cashkaval
    @cashkaval 2 months ago +1

    Is it just me, or does Leslie Valiant sound a lot like Christopher Hitchens?

  • @user-cr5dn2bl2y
    @user-cr5dn2bl2y 2 months ago

    1:53 In which universe do theoretical physicists make the big bucks? 🤔

  • @jessenyokabi4290
    @jessenyokabi4290 2 months ago

    Looking forward.

  • @OBGynKenobi
    @OBGynKenobi 2 months ago +2

    These AIs are not thinking; they are calculating.
    An AI cannot deduce the subtleties of poetry and the hierarchical meanings hidden within. It doesn't understand sarcasm. It doesn't have feelings based on past events, etc.

    • @Kolinnor
      @Kolinnor 2 months ago +1

      If you make that argument, you must specify how the human brain works

    • @OBGynKenobi
      @OBGynKenobi 2 months ago +1

      @@Kolinnor You don't have to know how brains work. You only have to test the AI. Ask it how it feels about being your friend.

    • @Kolinnor
      @Kolinnor 2 months ago +1

      @@OBGynKenobi You're talking about thinking, feelings, and understanding meaning, which I think are 3 distinct concepts. I was especially answering the "understanding" part. I agree, I don't think it has feelings

    • @OBGynKenobi
      @OBGynKenobi 2 months ago

      @@Kolinnor I'm also saying it doesn't think, because thinking, I suggest, emerges from the input of all parts of the brain, including the subconscious, which is not understood.

    • @Kolinnor
      @Kolinnor 2 months ago

      @@OBGynKenobi Mhm, fair enough. I'm not sure about thinking either, actually. However, I'd say they understand things.

  • @lukegratrix
    @lukegratrix 2 months ago +2

    I've been entertained by AI but not really impressed. They are wrong surprisingly often. Keep working, computer geeks and mathematicians! You're on the right path!

    • @lukegratrix
      @lukegratrix 2 months ago +2

      Like railroad workers in Blazing Saddles. Quicksand!

    • @yeezythabest
      @yeezythabest 2 months ago +1

      They're not "wrong" they delivered the statistically probable next token based on your prompt. What you mean is they're not always aligned with your intent. Those are two different things and knowing that can help you help them to get you what you want