Geoffrey Hinton | On working with Ilya, choosing problems, and the power of intuition

  • Added 16 Jun 2024
  • This conversation between Geoffrey Hinton and Joel Hellermark was recorded in April 2024 at the Royal Institute of Great Britain in London. An edited version was premiered at Sana AI Summit on May 15 2024 in Stockholm, Sweden.
    Geoffrey Hinton has been called “the godfather of AI” and is considered one of the most prominent thought leaders on the emergence of artificial intelligence. He has served as a faculty member at Carnegie-Mellon and a fellow of the Canadian Institute for Advanced Research. He is now Emeritus Professor at the University of Toronto. In 2023, Geoffrey left his position at Google so that he could speak freely about AI’s impact on humankind.
    Joel Hellermark is the founder and CEO of Sana. An enterprising child, Joel taught himself to code in C at age 13 and founded his first company, a video recommendation technology, at 16. In 2021, Joel topped the Forbes 30 Under 30. This year, Sana was recognized on the Forbes AI 50 as one of the startups developing the most promising business use cases of artificial intelligence.
    Timestamps
    Early inspirations (00:00:00)
    Meeting Ilya Sutskever (00:05:05)
    Ilya’s intuition (00:06:12)
    Understanding of LLMs (00:09:00)
    Scaling neural networks (00:15:15)
    What is language? (00:18:30)
    The GPU revolution (00:21:35)
    Human brain insights (00:25:05)
    Feelings & analogies (00:29:05)
    Problem selection (00:32:58)
    Gradient processing (00:35:21)
    Ethical implications (00:36:52)
    Selecting talent (00:40:15)
    Developing intuition (00:41:49)
    The road to AGI (00:43:50)
    Proudest moment (00:45:00)
    Follow Sana
    X - x.com/sanalabs
    LinkedIn - / sana-labs
    Instagram - / sanalabs
    Try Sana AI for free - sana.ai

Comments • 215

  • @ItCanAlwaysGetWorse 27 days ago +143

    Listening to Jeffrey Hinton is such a joy. He seems to me one of the most authentic and transparent souls among famous people. There are many talented beings worthy of admiration. But if I were given the choice to meet and share time talking with some of them, he would be the absolute first on my list.

    • @magnuslysfjord423 14 days ago +2

      Agreed 👏. hehe it's Geoffrey btw.

    • @13371138 13 days ago +1

      He's a fantastic speaker, but has a nasty streak for sure. All mind, no heart.

  • @jamesperez6964 24 days ago +106

    The most pleasant and articulate voice in the entire AI space right now. Absolutely no jargon, just clean and crisp explanations, only using terms for big concepts when necessary.

    • @Hamza-qs7ez 24 days ago +7

      Anti jibber jabber fella

    • @taijistar9052 24 days ago +6

      That is how a real expert speaks: simple language, simple explanations

    • @kutay8421 21 days ago +1

      Kind warnings from the Godfather who once 'slept' with the Devil

    • @webgpu 9 days ago +1

      people like him want to address a larger audience, so they use less jargon :)

    • @phixvsm1999 8 days ago

      As Einstein said, if you can explain a complex subject to a little kid, it means that you understand that subject sufficiently.

  • @SkysMomma 9 days ago +6

    Fantastic interview! The questions were awesome, the answers profound.

  • @Llllllaaaa959 9 days ago +6

    The way that Hinton breaks complicated things down is on another level. It proves again that if someone can't explain something to ANYONE, they don't understand it that well either

  • @kimyunmi452 27 days ago +51

    Nice to see that the interviewer was also standing for the whole interview, in honour of Hinton's back pain. Always remember the PSR (principle of sufficient reason): there is a reason behind every event.

  • @mreza5632 9 days ago +4

    Wow. All the questions were just great. Thanks for asking them.

  • @peterwang2872 24 days ago +18

    Purely by speaking, he transfers much more insight and information than I would see in most papers

  • @xuanchili 25 days ago +30

    "these big neural nets can actually do much better than their training data." Things like this mentioned in this talk challenge us to look over the concepts we previously missed. This is by far one of the best interviews from Hinton.

    • @goldnutter412 22 days ago +1

      Almost as if all we need is subconscious and an outer loop with more time...
      And a lot of data management. And.. and and and
      Maybe one day.. synthetic biology as a hard drive.. can we make it non volatile memory/processing ? if it's not programmable, not interesting for on demand data. Going to be great for large storage either way.. low power.. but having a programmable array like the brain could be.. handy😝🤪🤔

    • @xuanchili 22 days ago

      @@goldnutter412 forgot to say it is called an organoid

  • @davidderidder2667 23 days ago +51

    Geoffrey strikes me as a genuine ethical human, I hope he never hesitates to be open about observed dilemmas.

    • @goldnutter412 22 days ago +1

      I see a man from the past.. with strong beliefs.. but I also think he is wrong.
      The brain does not learn anything. The mind does.. like everything else here, the brain is just data. Information ? we make that.. we can decide to force some new connections with routine, like memorizing. Or we do it "subconsciously" and habitually.. but it's always by choice.
      It's always up to the individual to interpret data coming in, most often through the eyes data..
      And then we choose what is important; what relative connections stick in the brain ? that's the extremely long and personal question isn't it. Some people become psychopaths.. some become OCD or depressed.. the mind leads but the body has a say in things too.
      Biology is.. a fuzzy constraint. Not a physics one. If you *know* you're going to get better from that sugar pill placebo.. the probability you render yourself healed in the morning ? higher than someone who is entropic - fearful..
      Still a legend and specialist.. as we all are. Bravo old man.. see you on the other side ! haha some of those things he said went in one ear and out the other.
      I got over maths when it got too complicated. Loved it more than anything, played with square/rectangle numbered blocks all day as soon as I could crawl. (Mother was a teacher).
      When we start believing/indoctrinating each other, we fail. I leave the nerd stuff to the nerds. Focus is required.. not context switching.. and I got bored of numbers long ago. 6 is key.
      And again, give Wolfram the Nobel. He's basically spot on. Computationally irreducible outputs.. that we walk around in, making choices.

    • @davidderidder2667 22 days ago

      @@goldnutter412 what is the educational and subject matter canvas that you used to write what you just wrote? Just so I can understand your context better

    • @BR-hi6yt 21 days ago

      He is naïve politically because he believes central planning is OK for human governance, but for subtle reasons it's not. He has no knowledge of those reasons; they are behavioural-science types of things. Having said that, maybe an ASI could manage to pull off central planning, idk.

    • @Apollo1.618 19 days ago +3

      @goldnut. Slow down lad. That speech wasn't as great as you thought. The situations you describe don't necessarily negate the speaker's ideas; they are just a different subtopic that, albeit related, misses the point of the conversation

    • @Bronco541 19 days ago

      What does it mean to make a choice?

  • @jianjielu9835 23 days ago +37

    one of the best interviews I have ever watched!

  • @ReflectionOcean 25 days ago +23

    By YouSum Live
    00:01:16 Early disappointments in brain understanding.
    00:01:59 Influence of Donald Hebb and John Fornoyman.
    00:02:33 Brain learning through neural net connections.
    00:04:13 Collaborations with Terry Sejnowski and Peter Brown.
    00:05:08 Encounter with a young, intuitive student, Ilya.
    00:06:00 Ilya's unique perspective on gradient optimization.
    00:08:02 Scale and computation's impact on AI progress.
    00:08:25 Breakthrough in character-level prediction models.
    00:09:01 Neural net language models' training insights.
    00:10:36 Integration of reasoning and intuition in models.
    00:12:46 Potential for models to surpass human knowledge.
    00:17:16 Multimodal learning enhancing spatial understanding.
    00:18:18 Impact of multimodality on model reasoning abilities.
    00:18:40 Evolutionary perspective on language and brain synergy.
    00:18:41 Evolution of language and cognition.
    00:18:57 Three views of language and cognition.
    00:20:12 Transition from symbolic to vector-based cognition.
    00:21:36 Impact of GPUs on neural net training.
    00:23:13 Exploration of analog computation in hardware.
    00:25:31 Importance of diverse time scales in learning.
    00:27:37 Validation of neural networks' learning capabilities.
    00:29:12 Inquiry into simulating human consciousness.
    00:36:01 Brain's potential use of backpropagation for learning.
    00:37:42 Brain's learning potential and beneficial failures.
    00:38:02 AI advancements in healthcare for societal benefit.
    00:39:00 Concerns about misuse of AI by malevolent actors.
    00:39:23 International AI competition driving rapid progress.
    00:40:03 AI assistants enhancing research efficiency and problem-solving.
    00:41:51 Intuition in talent selection and diverse student profiles.
    00:42:00 Developing intuition by filtering information effectively.
    00:43:26 Focus on big models and multimodal data for AI progress.
    00:44:08 Exploration of various learning algorithms for AI advancement.
    00:45:11 Pride in developing the learning algorithm for Boltzmann machines.
    By YouSum Live

    • @jessiejess-sj8pv 11 days ago

      this is nice and useful, but minor critique: it's John von Neumann

  • @dimzen5406 25 days ago +30

    The ability to explain complicated things in a simple way is a sign of deep understanding. Best interview I know of so far in this field.
    And of course the right questions supported it.

    • @jasoniannone9675 24 days ago

      Which makes the Connor O'Malley special all the more insightful and important... as difficult as he is to watch.

    • @briancase9527 22 days ago

      I recommend listening to Hinton's other, solo talks if you like this one.

  • @executivelifehacks6747 25 days ago +32

    Hinton is great. If you, with 4k subs, can get Hinton, please try to get Sutskever.

    • @webgpu 9 days ago

      aka "Ilia" (easier to type! 😀)

  • @amritbro 18 days ago +4

    The work of Geoffrey Hinton on backpropagation is remarkable and has significantly accelerated the progress of AI, as we are experiencing today.

  • @JamesFlint4092 3 days ago

    What a great interview. It actually captured some genuine conceptual insights - very rare for an interview on the subject these days!

  • @dhamovjan4760 20 days ago +3

    For the first time in my life, I am watching an interview where the sensation of "Fantastic questions" keeps popping up.
    Thank you!

  • @offchan 20 days ago +4

    My insights from this video:
    1. digital computers are better than analog computers because they can share knowledge efficiently
    2. you can set half of your training data's labels to be wrong and still get very high accuracy (95% accuracy on MNIST)
    3. AlphaGo and humans are similar in the sense that we have intuition then we use reasoning using our intuition, then the result of the reasoning will be used to correct our intuition
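    [Editor's note: the label-noise claim in point 2 can be sketched in a few lines. The talk refers to MNIST; the toy below substitutes synthetic 2-D Gaussian data (an assumption, not the original experiment) to show why symmetric label noise largely cancels out during training.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian classes (a stand-in for MNIST digits).
n = 1000
X0 = rng.normal(loc=(-2.0, 0.0), scale=1.0, size=(n, 2))
X1 = rng.normal(loc=(+2.0, 0.0), scale=1.0, size=(n, 2))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# Corrupt half of the training labels with uniformly random labels.
noisy = y.copy()
idx = rng.choice(len(y), size=len(y) // 2, replace=False)
noisy[idx] = rng.integers(0, 2, size=len(idx))

# Plain logistic regression trained by gradient descent on the NOISY labels.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
    w -= lr * (X.T @ (p - noisy) / len(y))
    b -= lr * np.mean(p - noisy)

# Evaluate against the CLEAN labels: because the noise is symmetric,
# its gradient contributions average out and the decision boundary
# stays close to the noise-free optimum.
pred = (X @ w + b) > 0
acc = np.mean(pred == y)
print(f"clean-label accuracy: {acc:.3f}")
```

    The accuracy lands far above the roughly 75% of labels that are actually correct after corruption, which is the effect the comment (and the talk) describes.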

  • @eamram 11 days ago +1

    Awesome questions and brilliant answers! Congrats to both.

  • @maxwang2537 3 days ago

    Geoffrey is so pleasant and such a joy to listen to. Love him.

  • @mbrochh82 24 days ago +5

    Here's a ChatGPT summary:
    - Geoffrey Hinton reflects on his intuitive approach to identifying talent, mentioning Ilya Sutskever's persistence and raw intuition.
    - Hinton describes his early experiences at Carnegie Mellon, including late-night programming sessions and the collaborative environment.
    - He discusses his transition from neuroscience to AI, influenced by books from Donald Hebb and John von Neumann.
    - Hinton emphasizes the importance of understanding how the brain learns and modifies connections in neural networks.
    - He recalls collaborations with Terry Sejnowski and Peter Brown, highlighting their contributions to his understanding of neural networks and speech recognition.
    - Hinton shares the story of Ilya Sutskever's first meeting with him and Sutskever's intuitive approach to problem-solving.
    - He discusses the evolution of AI models, emphasizing the importance of scale and data in improving performance.
    - Hinton explains the concept of neural net language models and their ability to understand and predict the next symbol in a sequence.
    - He highlights the potential of large language models like GPT-4 to find common structures and make creative analogies.
    - Hinton discusses the potential for AI to go beyond human knowledge, citing examples like AlphaGo's creative moves.
    - He reflects on the importance of multimodal models in improving AI's understanding and reasoning capabilities.
    - Hinton shares his views on the relationship between language and cognition, favoring a model that combines symbolic and vector-based representations.
    - He recounts his early intuition about using GPUs for training neural networks and the subsequent impact on the field.
    - Hinton discusses the potential for analog computation to reduce power consumption in AI models.
    - He emphasizes the importance of fast weights and multiple timescales in neural networks, drawing parallels to the brain's temporary memory.
    - Hinton reflects on the impact of AI on his thinking and the validation of stochastic gradient descent as a learning method.
    - He discusses the potential for AI to simulate human consciousness and feelings, drawing on examples from robotics.
    - Hinton shares his approach to selecting research problems, focusing on challenging widely accepted ideas.
    - He highlights the importance of curiosity-driven research and the potential for AI to benefit society, particularly in healthcare.
    - Hinton expresses concerns about the misuse of AI by bad actors for harmful purposes.
    - He discusses the role of intuition in selecting talent and the importance of having a strong framework for understanding reality.
    - Hinton advocates for focusing on large models and multimodal data as a promising direction for AI research.
    - He reflects on the importance of learning algorithms and the potential for alternative methods to achieve human-level intelligence.
    - Hinton expresses pride in the learning algorithm for Boltzmann machines, despite its practical limitations.
    - Main message: Geoffrey Hinton emphasizes the importance of intuition, collaboration, and curiosity-driven research in advancing AI, while acknowledging the potential benefits and risks of AI technology.

  • @capitanalegria 12 days ago +2

    A brilliant, brilliant man with enough imagination to fill up the universe, perhaps even decipher the essence of thoughts, yet the humility of someone who still holds ingenuity and admiration in every observation. A treat to listen to. ty

    • @webgpu 9 days ago +1

      really intelligent people are just led to be humble, naturally 🙂 (they compare themselves with other even greater minds and acknowledge their own limitations 🙂 i think less intelligent people have a bit more difficulty acknowledging theirs)

  • @interestedinstuff1499

    Such a great interview, and he explained things so clearly. I feel like LLMs are slightly less opaque now.

  • @trvs_b 24 days ago +11

    Thanks for not adding background music. ❤

  • @matt.loupe. 25 days ago +16

    “What do you think is the reason for some folks having better intuition? Do they just have better training data?”
    “I think it’s partly they don’t stand for nonsense”

    • @goldnutter412 22 days ago +2

      If you don't jam your mind full of bullshit beliefs..
      And are open minded at the same time..
      Less entropy.. more intuition (deep compute)

    • @marc-andrepiche1809 15 days ago

      The only way to explore openly is to not be afraid of being wrong.
      Hinton never fusses about changing his mind; he's excited about any revelation.

  • @naromsky 25 days ago +8

    What should I watch on Netflix is the question that half of humanity is struggling with these days. If only AI could help with that.

  • @JuergenAschenbrenner 25 days ago +1

    Just excellent, how he explains this in easy-to-understand terms

  • @dbSurfer 24 days ago +3

    Thank you both, very clear, informative and interesting.

  • @williamjmccartan8879 25 days ago +2

    It's always awesome when scientists share the credit for their own work and how important it was to have great collaborators. Thank you both very much for sharing your time and work, Geoffrey and Joel. Peace

  • @saculzemog 25 days ago +4

    fantastic questions.

  • @Franchisco7 26 days ago +5

    Great interview. Really gives an easy to understand approach to the human brain

  • @Person-hb3dv 25 days ago +4

    What a brilliant mind Mr. Hinton is

  • @gurumeetkhalsa254 18 days ago

    Wonderful interview and exchange

  • @seanivore 24 days ago +3

    This was brilliant, confirming, and predictive.

  • @alexneshmonin4743 18 days ago

    What a great interview! Well thought out questions, thank you so much for that! I miss Geoffrey's lectures... I so wish I had taken his lectures more seriously back at UofT

  • @tahir2443 14 days ago

    what a beautiful soul, learned so much from this

  • @melluin7761 17 days ago +1

    Thank you for this in-depth and very human interview. I didn't know about Joel Hellermark's CV, but it is impressive that, being smart and engaged in AI development himself, Joel uses all this to let the interviewed person shine and explain, and not to push his own topics or visions

  • @habtamugemechu8414 1 day ago

    Geoffrey Hinton went to Cambridge University to learn about the physiology of the brain, which later became great input to his work in AI.
    It's really a nice journey.

  • @RohanKumar-vx5sb 21 days ago

    Great, great talk! Love the idea of the three ways we progressively began to evolve the idea of cognition in terms of symbols and embeddings. And how fast weights are currently an issue.

  • @RangaprabhuParthasarathy 21 days ago +1

    Really fantastic interview. Very illuminating and insightful

  • @briancase9527 22 days ago +2

    It's always a pleasure listening to Hinton. There's another talk of his, Two Paths to Intelligence, that also makes you think, I think. :)

  • @nishantshrivastav3749 12 days ago

    Amazing!

  • @richardtucker5938 24 days ago +2

    great interview, interviewer and interviewee.

  • @i.c.y. 21 days ago

    ... it was a slow process. It only took 20+ years. Understatement of the century... what a brilliant and humble mind ❤️🙏

  • @sanskritclub5893 26 days ago +10

    Probably one of the best interviews I have seen in a long time. Thanks

  • @JazevoAudiosurf 23 days ago +1

    spiritual enlightenment is in essence the ability to predict the next moment in time so that time is transcended, so that thought is no longer occupied with content in time
    we are just predicting and adapting

  • @guleed33 6 days ago

    Amazing, very intuitive and informative conversation. Thank you

  • @tayler2396 25 days ago +5

    I like Hellermark's interviews, and his smiling.

  • @goldnutter412 23 days ago +2

    Oh thank you !!!
    Will post some comments when I get time to digest the data (words) and think through what he has to say :-)
    IMO we return to raw pattern matching in the next era, the golden age.. a new Renaissance. The information age is.. us awakening !
    (intuition = human deep compute.. the higher self (subconscious) sending little nudges to the entropic tip of the mind.. the cognitive part)

  • @joseinTokyo 18 days ago

    EXCELLENT INTERVIEWING

  • @GeorgeMonsour 13 days ago

    Stunned by the prediction of creativity! Yet it makes sense, because creativity is separate from intelligence or self-awareness. Hinton is a reminder of the great geniuses of the past such as Faraday, Galileo, Da Vinci... I think he has to be considered for that continuity.

  • @carvalhoribeiro 25 days ago

    Great conversation. It is possible to build a picture/image during the explanation at 18:52. He has a lot of teaching skill. Thanks for sharing this

  • @igor1591 13 days ago

    perfect 👏

  • @karigucio 25 days ago +1

    interesting point about the timescales of weight changes in brain vs computer. Interesting to find out if different timescales are needed

  • @hsiaowanglin9782 15 days ago

    Keep learning new knowledge, technology, AI, etc., and never slow down your learning; every day there are too many challenges out there, and with more people's brains we should get good solutions. That's teamwork!

  • @BR-hi6yt 21 days ago

    Love this guy's ability for conceptual thinking; his definition of feelings is remarkable. Previously I attributed feelings to hormone-like chemicals giving us feelings. I get anxious about things I am logically certain are not worthy of my anxiety (and time shows my logic was correct, not my anxiety), yet I still feel anxious. Why is that, I ask myself?

    • @legenddulululu6416 17 days ago +1

      I believe the human brain has two layers. One layer is closely connected to the external world, influenced by your surrounding environment, the traditional wisdom you have learned, and the innate tendency to obey authority. The second layer is your intuition, which inexplicably points you in a completely opposite direction without any logical reason. However, you can use logic to reverse-engineer this intuition to see if it makes sense. I think anxiety arises when both layers give you a convincing feeling, and you cannot determine which one is more correct, making it feel more like a gamble.
      Whether something is correct, I think, is very random. Perhaps I do not have a brilliant mind, as many of my intuitions have turned out to be completely wrong. Later, I realized that humans have a peculiar trait: if they believe something is right, they will automatically find various logical reasons to continuously justify what they think is right. You can see this characteristic in fanatics.
      I hold a skeptical attitude towards everything, such as atheism and theism. I believe both are possible, and it is very difficult to determine which is correct. Life is the chaotic element of the objective world; only life is unpredictable and uncomputable. You can calculate the next moment of the sun and the universe with 100% accuracy, but you cannot calculate the next moment of life with the same certainty. We are the only chaotic element in this world.

  • @KIWu-th8wr 25 days ago +2

    🔥

  • @davidwilkie9551 23 days ago +2

    The Memory Code, by Dr Lynne Kelly, has demonstrated what relevance religious repetition of actions are.., fitted to a local Calendar, and how these beliefs attached to time are the natural probabilistic part of placement, all in keeping with holistic ideas of identity.
    Mathematical rigor is the conversion by symbol of functional quality to material quantization, a perception cause-effect derived from the feedback of Gold-Silver qualitative Rules of exchange of values.
    Ie, the practical/political analogy of religious/rigorous practice to an emulation of QM-TIME relative-timing ratio-rates Perspective Principle, is absolutely fundamental.

  • @philipdante 27 days ago +10

    Absolutely the right decision to remove the background music, given how excellent this interview is. 10/10 points to whoever realized this. 👍😃

  • @AI.GopalMishra 3 days ago

    Why standing for 45 mins?
    Good questions and very good answers 👍

  • @jammystraub488 25 days ago +3

    Well done!

  • @dylanmenzies3973 19 days ago +1

    Very simple but deep explanations. Blows away the vast number of idiots who think they know what current AI is doing.

    • @Bronco541 19 days ago +1

      It's getting funny, the relentless comments of "they're just fancy auto-completes!!! It's not intelligence!!" Well, people won't be saying that for much longer, I predict.

  • @thilakcm1527 5 days ago

    It's often good to ask for things you know you can't get, just to make a point.
    This really resonated with me, because it speaks to the voice in your head that kills an idea before you ever get to testing it, or anything along those lines. So I feel like this is a hidden gem in this vid. The time stamp is 39:40

    • @maxwang2537 3 days ago

      Same here. That's called self-rejection or something like that, which should be avoided but unfortunately is common for the vast majority of us.

  • @mrpocock 7 days ago

    My intuition is that brains predict key frames and then backfill these. So not the very next token, but some sparse tokens that it then joins back to the current frame.

  • @tomlonghaymingway9396 26 days ago +2

    Great conversation, especially on the short time scale part. When I teach my 2yr old the word avocado, he will initially repeat with “cado”, omitting the ”avo” part, until he can later remember the full word. This is an interesting pattern.

  • @maxwang2537 3 days ago

    39:54 Geoffrey is so rational! Man.

  • @Alexander-ns9yv 10 days ago

    1:07 The key point about modern Britain

  • @GaryMillyz 21 days ago

    Real question: how is an account with 6k subs getting these world-class guests? Great stuff, just wondering how.

  • @diegoacostacoden8704 25 days ago +1

    Hello, does anyone know the article Geoffrey mentions, where a neural network is trained on handwritten digits from the MNIST data set but with half of the labels incorrect?

  • @marcomaiocchi5808 14 days ago

    Hinton is great, but the questions from the interviewer are spot on!

  • @maxwang2537 3 days ago

    I’m late and super surprised by the small number of views so far.

  • @maxwang2537 3 days ago

    37:26 well said!

  • @jesuscevallos4324 10 days ago

    Hello, thank you so much for this video! Could you please share the title of Fernando Pereira's paper that Prof. Hinton mentioned about human symbolic reasoning and language? Many thanks!

  • @liberty-matrix 14 days ago

    As Michael Faraday was Sir Humphry Davy's greatest discovery, so too is Ilya Sutskever Geoffrey Hinton's greatest discovery.

  • @karigucio 25 days ago +3

    I think emotions are not only as Geoffrey describes them: actions one would take if not for the frontal lobe.
    Emotions are also (and currently I cannot sensibly project the two onto a common denominator) reflections of the internal state concerned with self. I.e. sadness might be described as the feeling of something in our lives getting worse or getting on a worse path. I.e. envy could be thought of as reflecting the state "I'm worse off than someone else". So just as hunger is a reflection of some bodily state, emotions are also reflections of our state, just more abstract: concerned with the psyche, not the body; concerned with the self-reflective, goal-oriented or me-within-a-group parts of self.
    How does this connect with what Geoffrey said?

    • @henrikbergman4055 25 days ago

      Agree. I would call what he described 'an urge'. Not sure if that is a subset of 'emotion'.

  • @wolpumba4099 14 days ago

    *Summary*
    *Early Inspirations & Career:*
    * *(**0:00**)* Discusses talent selection and his experience at Carnegie Mellon.
    * *(**1:18**)* Reflects on his early days at Cambridge studying the brain, finding it disappointing and eventually turning to AI.
    * *(**1:53**)* Mentions Donald Hebb's book as a key influence on his interest in neural networks.
    *Ilya Sutskever & Scaling:*
    * *(**5:08**)* Shares the story of meeting Ilya Sutskever and being impressed by his intuition.
    * *(**7:40**)* Discusses the role of scale in AI's progress and how Ilya recognized its importance early on.
    *Language Models & Understanding:*
    * *(**8:53**)* Explains how language models are trained to predict the next symbol and why this forces them to develop understanding.
    * *(**9:01**)* Believes these models understand similarly to humans, using embeddings and vector interactions.
    * *(**11:19**)* Emphasizes the creativity of large language models in finding analogies and going beyond human knowledge.
    *GPUs & Future of Computing:*
    * *(**21:35**)* Recalls his early advocacy for using GPUs and how it accelerated the field.
    * *(**23:13**)* Explores the potential of analog computation inspired by the brain's efficiency.
    *Human Brain Insights:*
    * *(**25:05**)* Highlights the brain's use of multiple time scales for learning and memory, which is missing in current AI models. [something about Graphcore and using conductances for weights]
    * *(**27:37**)* Discusses how the success of large language models validates the power of stochastic gradient descent.
    * *(**29:05**)* Sees consciousness and feelings as explainable through actions and constraints, potentially replicable in AI.
    *Research Approach & Future Directions:*
    * *(**32:58**)* Describes his approach to research: identifying widely accepted ideas that feel intuitively wrong and trying to disprove them.
    * *(**35:21**)* Shares his current research focus: understanding how the brain uses backpropagation.
    * *(**43:26**)* Advocates for focusing research on large, multimodal models trained on vast datasets.
    *Ethical Concerns & Impact:*
    * *(**36:52**)* Expresses concerns about the potential negative impacts of AI, despite initially being driven by pure curiosity.
    * *(**37:59**)* Believes in AI's potential for positive impact in healthcare and other fields.
    *Talent & Intuition:*
    * *(**40:15**)* Discusses the importance of talent selection and his mix of intuition and observation. [Refers to David MacKay, see also czcams.com/video/CzrAOBC8ts0/video.html]
    * *(**41:49**)* Shares his belief that strong intuition comes from a strong framework for understanding the world.
    *Personal Reflections:*
    * *(**45:00**)* Reflects on his proudest achievement: developing the learning algorithm for Boltzmann machines.
    I used Gemini 1.5 Pro to summarize the transcript.

  • @klimenkor
    @klimenkor Před 25 dny +1

    Really enjoyed the interview! He's a legend and very good teacher.

  • @photorealm
    @photorealm Před 7 dny +1

    Does the brain do back propagation? Great question.
    Personally, I loop in my mind a lot when making a tough decision or analyzing something new, but am I just looking at all the possibilities, or back-propagating?
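    For context on what the comment is asking: backpropagation just applies the chain rule to compute how the loss changes with each weight, then nudges the weights downhill. A minimal one-weight sketch (illustrative only, not from the talk):

```python
# Minimal sketch of what "backpropagation" means: use the chain rule
# to get the gradient of the loss w.r.t. the weight, then step downhill.
# One neuron, one weight, squared-error loss.
w = 0.0
x, target = 2.0, 1.0
lr = 0.1

for _ in range(100):
    y = w * x                    # forward pass
    loss = (y - target) ** 2
    grad = 2 * (y - target) * x  # backward pass: chain rule
    w -= lr * grad               # gradient descent step

print(round(w, 3))  # w converges to target / x = 0.5
```

    Whatever the brain does, it must solve the same credit-assignment problem this loop solves: figuring out which connections to change, and in which direction.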

  • @13TrafalgarLaw
    @13TrafalgarLaw Před 16 dny

    Hilton is great, I enjoy my time. Cool and warm, friendly and formal. At the end I give 5 stars to the hotel. My backpropagation exploded, cannot reason well on topic.

  • @ayushman_sr
    @ayushman_sr Před 15 dny

    Basically we all carry LLMs in our head constantly learning

  • @miguelarribas9990
    @miguelarribas9990 Před 25 dny +1

    Whenever I hear about "language models" I wonder how Helen Keller learned, since she was blind and deaf and lost access to language when she was 19 months old, and didn't get a proper education until she was 7 years old. There must be an alternative way to create knowledge and reasoning that does not depend so much on words and can use other sensory input, quite limited by the way. We are so used to reading and listening, to seeing. Now imagine learning about a world you cannot see, full of objects you cannot refer to by words you do not know. And yet, intelligent she was.

  • @sapienspace8814
    @sapienspace8814 Před 25 dny +2

    @ 25:06 Interesting that there is no direct discussion of Reinforcement Learning (RL), a method of dynamic weight adjustment that Hinton hints at, and which large language models seem to nearly always need (in their "fine tuning": RL with human feedback, or RLHF), yet large language model developers seem desperate to get rid of.
    Even Yann LeCun tries to get rid of RL, yet makes a blanket exception for it "if your plan does not work out", or if you are fighting a "ninja", saying that it is too "dangerous".
    RL was funded by the USAF, at least prior to 1997, under Klopf, Sutton, and Barto, and in the lawsuit between Elon Musk and OpenAI, a 2018 email indicates that "the core technology" OpenAI is using is from the "90s". An ASU student had "early private access" to the first book on RL in 1997, where Fuzzy Logic combined with K-means clustering was used with RL in experiments with state classifiers for controlling an inverted pendulum, as a master's thesis.
    Note that Fuzzy Logic merges language and mathematics so that language inference rules, including the physics of the world, can be learned, and with RL can be learned automatically, including how to rapidly learn to balance an inverted pendulum.

  • @loopuleasa
    @loopuleasa Před 12 dny

    the goat

  • @lookslikeoldai1647
    @lookslikeoldai1647 Před 24 dny +1

    According to Geoffrey Hinton, 'feelings are actions we would do if it weren't for constraints,' but that feels wrong to me; I think he means the category of feelings we call inhibitions. Don't believe everything you're told on the tube or you'll end up with a bad framework of beliefs (this was good advice from Geoffrey Hinton).

    • @nonisco3591
      @nonisco3591 Před 24 dny

      It's certainly something that gave me pause. Something to think about.
      One of love's constraints can be distance (the action: moving towards the object of your love). One of fear's constraints can be proximity (the action: fleeing or disappearing).
      I wonder what the theory around it is called.

  • @g0d182
    @g0d182 Před 17 dny

    cool

  • @skyacaniadev2229
    @skyacaniadev2229 Před 25 dny +2

    If I claim to have an AGI model, will Sana interview me? 😉😉

  • @lucidx9443
    @lucidx9443 Před 19 dny

    Do you think move 37 from Alpha Go could have been a result of tabular data i.e. by simply scaling up the model-size?

  • @mahneh7121
    @mahneh7121 Před 17 dny

    interesting

  • @jsadecki1
    @jsadecki1 Před 20 dny +1

    Geoffrey Hinton vs Chomsky | Joe Rogan | 2024

  • @michaelt4418
    @michaelt4418 Před 25 dny

    Please ask what books influenced these thinkers the most over their careers!

  • @necromancer7712
    @necromancer7712 Před 9 dny +1

    Why is his interview on feet.

  • @shravan1791
    @shravan1791 Před 16 dny

    Here is the Summary:
    In this video they discuss Geoffrey Hinton's journey in the field of artificial intelligence, his collaboration with colleagues, and his thoughts on the development of AI. Key points include:
    Hinton's experience at Carnegie Mellon, where he found a refreshing environment compared to England, with students passionate about changing the course of computer science.
    His initial disappointment with studying physiology and philosophy, leading him to pursue AI and neural networks.
    Influential figures in Hinton's career, such as Donald Hebb and John von Neumann, who inspired his interest in how the brain learns and computes.
    Hinton's belief that the brain learns through modifying connections in neural networks rather than using logical rules of inference.
    Collaborations with colleagues, including Terry Sejnowski, with whom he worked on Boltzmann machines, and Peter Brown, who introduced him to hidden Markov models.
    The story of meeting Ilya Sutskever, a student who impressed Hinton with his intuition and raw talent.
    Hinton's thoughts on the importance of scale in data and computation for AI development.
    The evolution of language models and the misconception that they merely predict the next symbol.
    The role of analogy in AI creativity and understanding.
    The potential impact of multimodal models on AI reasoning and understanding.
    Hinton's early intuition about using GPUs for training neural networks.
    The benefits of digital computation in sharing knowledge and the potential for future research in analog computation.
    The importance of understanding the brain's use of fast weights for temporary memory and the need for multiple time scales in neural networks.
    The validation of the idea that stochastic gradient descent can learn complex tasks from data, challenging the notion of innate structure in the brain.
    The potential for AI to simulate human consciousness and the role of feelings in AI systems.
    Hinton's approach to selecting research problems, focusing on areas where there is a general agreement that feels wrong.
    The open question of whether the brain uses backpropagation or a different technique for learning.
    The potential benefits and risks of AI in healthcare, engineering, and society.
    The impact of AI assistants on research efficiency.
    Hinton's intuitive approach to selecting talented students and colleagues, emphasizing the importance of trusting one's gut instincts.
    The role of having a strong framework for understanding reality in developing good intuitions.
    The promise of focusing on big models and multimodal data for AI development.
    The importance of backpropagation as a learning algorithm and the possibility of other methods achieving similar results.
    Hinton's pride in developing the learning algorithm for Boltzmann machines, despite its impracticality.

  • @user-yf6vm4rz5g
    @user-yf6vm4rz5g Před 19 dny

    how do the vectors interact with each other?

    • @rolfnoduk
      @rolfnoduk Před 15 dny

      matrix operations using the weights
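      To unpack that one-line answer: in a transformer, each embedding vector is multiplied by learned weight matrices, and the resulting vectors then interact through dot products (attention). A toy NumPy sketch, with random matrices standing in for learned weights:

```python
import numpy as np

# Toy illustration of how token embedding vectors "interact":
# each vector is projected by weight matrices, queries and keys are
# compared by dot products, and outputs mix the value vectors.
rng = np.random.default_rng(0)

d = 4                          # embedding dimension
x = rng.normal(size=(3, d))    # 3 token embedding vectors

W_q = rng.normal(size=(d, d))  # "learned" weights (random here)
W_k = rng.normal(size=(d, d))
W_v = rng.normal(size=(d, d))

Q, K, V = x @ W_q, x @ W_k, x @ W_v   # matrix operations using the weights
scores = Q @ K.T / np.sqrt(d)          # pairwise interactions between vectors
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
out = weights @ V                      # each output mixes all value vectors

print(out.shape)  # (3, 4): same number of vectors, now context-dependent
```

      Each row of `out` is a blend of every token's value vector, which is what lets the vectors "interact" rather than being processed in isolation.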

  • @hsiaowanglin9782
    @hsiaowanglin9782 Před 15 dny

    Also, a lot of knowledge or experience depends more on actual activities at a given age.

  • @ChristopherBredow
    @ChristopherBredow Před 25 dny

    15:12 (referring to the answer) Yes, but I think the vector representations (Word2Vec, ...) need to be updated as well, even dynamically learnable. Think about creating new terminology; reasoning people do it all the time. Exclude or include meanings, specify, etc., to extract new knowledge from observations. So my guess is that an evaluation function leading to an increase in better training data isn't enough to enable proper reasoning capabilities.

  • @aminbusiness3139
    @aminbusiness3139 Před 16 dny

    E/Acc dorks need to stop disrespecting this man

  • @afterthesmash
    @afterthesmash Před 22 dny

    "It was very obvious you wanted David MacKay." ROTFL. Understatement 90210. I'm seriously dying here. That's like standing beside a Saturn V rocket engine and asking "do you think it will fly?" as if the Apollo mission was Wright Brothers 2.0. It was gonna fly. The only real question was whether it would fly in one direction or in a million directions, simultaneously.

  • @liberty-matrix
    @liberty-matrix Před 14 dny +1

    "it's funny you know all these AI 'weights'. they're just basically numbers in a comma separated value file and that's our digital God, a CSV file." ~Elon Musk. 12/2023

  • @PromptStreamer
    @PromptStreamer Před 25 dny +2

    If you know Swedish you can immediately tell the interviewer is Swedish.

  • @dr.michaelr.alvers17
    @dr.michaelr.alvers17 Před 12 dny

    Nice interview! Question: at 14:10 ... why can a NN trained on data with a 50% error rate still learn to be 95% correct (5% incorrect)? This sounds like magic ...

    • @pi4795
      @pi4795 Před 10 dny

      If I give you a list of nines like 9, 1, 9, 2, 9, 9, 3, 9, 4, 5, 9..... you quickly realize that half of them share a pattern and the other half, not so much. You can still learn from data with errors because eventually you are capable of identifying the errors.
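      The same point can be made concrete: with 10 classes, "50% of labels wrong" (flipped to random classes) still leaves the correct label as by far the most frequent one, so a learner that averages over many examples recovers it. A toy simulation, illustrative only and not from the talk:

```python
import random

random.seed(0)

# Even if half the training labels are corrupted, the *correct* label
# is still the single most frequent one, because the errors are spread
# uniformly over many classes.
n_classes = 10
true_label = 3
labels = []
for _ in range(10_000):
    if random.random() < 0.5:
        labels.append(true_label)                   # correct half
    else:
        labels.append(random.randrange(n_classes))  # random (mostly wrong) half

# The "learner" just picks the most frequent label it saw.
counts = {c: labels.count(c) for c in range(n_classes)}
learned = max(counts, key=counts.get)
print(learned, round(counts[learned] / len(labels), 2))
```

      The true class ends up with roughly 55% of the votes while every other class gets about 5%, so the signal survives the noise.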

  • @BlackHermit
    @BlackHermit Před 17 dny

    I knew he was Swedish!