Backpropagation and the brain

  • Published 23. 07. 2024
  • Geoffrey Hinton and his co-authors describe a biologically plausible variant of backpropagation and report evidence that such an algorithm might be responsible for learning in the brain.
    www.nature.com/articles/s4158...
    Abstract:
    During learning, the brain modifies synapses to improve behaviour. In the cortex, synapses are embedded within multilayered networks, making it difficult to determine the effect of an individual synaptic modification on the behaviour of the system. The backpropagation algorithm solves this problem in deep artificial neural networks, but historically it has been viewed as biologically problematic. Nonetheless, recent developments in neuroscience and the successes of artificial neural networks have reinvigorated interest in whether backpropagation offers insights for understanding learning in the cortex. The backpropagation algorithm learns quickly by computing synaptic updates using feedback connections to deliver error signals. Although feedback connections are ubiquitous in the cortex, it is difficult to see how they could deliver the error signals required by strict formulations of backpropagation. Here we build on past and recent developments to argue that feedback connections may instead induce neural activities whose differences can be used to locally approximate these signals and hence drive effective learning in deep networks in the brain.
    Authors: Timothy P. Lillicrap, Adam Santoro, Luke Marris, Colin J. Akerman & Geoffrey Hinton
    Links:
    YouTube: / yannickilcher
    Twitter: / ykilcher
    BitChute: www.bitchute.com/channel/yann...
    Minds: www.minds.com/ykilcher
  • Science & Technology

Comments • 39

  • @YannicKilcher
    @YannicKilcher  4 years ago +7

    Note: This is a reupload. Sorry for the inconvenience.

    • @Stopinvadingmyhardware
      @Stopinvadingmyhardware 1 year ago

      The brain does this thing called axon regulation. In some parts where there are reuptake axons, they self-regulate to reduce the amount of feedback when overstimulated. Basically this means they close and leave the flooded neurotransmitter in the flow stream for the dendrites. This has the effect of down-regulating the signal.
      I saw another video where you covered the direct feedback mechanism and mentioned that the neurons didn't have a backpropagation mechanism, and wanted to share that with you.

  • @MikkoRantalainen
    @MikkoRantalainen 1 year ago

    Great video! I think I've seen at least a summary of this algorithm before, and this video makes it clearer.

  • @redone9553
    @redone9553 3 years ago +4

    Thanks for the upload! But who says that we need negative voltages for a signed gradient? Why not assume high frequencies are positive and low frequencies are negative?

  • @Murmur1131
    @Murmur1131 3 years ago

    Thanks so much! Super interesting! High class content!

  • @jyotiswarupsamal1587
    @jyotiswarupsamal1587 2 years ago

    This is a good explanation. I could understand the basics.
    Thank you

  • @stephanrasp3796
    @stephanrasp3796 4 years ago +11

    I think at 4:50, the perturbation should be added to w, not x, i.e. f(x, w+n). Awesome content btw!

    • @YannicKilcher
      @YannicKilcher  4 years ago +2

      True, you want to jiggle the model itself. Thanks!
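
    As a rough sketch of the weight-perturbation idea discussed in this thread (illustrative names only, not code from the paper): jiggle the weights rather than the input, compare the perturbed loss with the clean loss, and move the weights along the noise direction if the perturbation helped.

        import numpy as np

        def forward(x, w):
            # toy model: a single linear layer
            return x @ w

        def weight_perturbation_step(x, y, w, sigma=0.01, lr=0.1):
            """One weight-perturbation update: perturb w (not x), i.e. evaluate
            f(x, w + n), and use the change in loss as a scalar learning signal
            for the whole noise vector."""
            n = sigma * np.random.randn(*w.shape)              # noise on the weights
            loss_clean = np.mean((forward(x, w) - y) ** 2)
            loss_pert = np.mean((forward(x, w + n) - y) ** 2)
            # if the perturbed weights did better, move toward the noise direction
            return w + lr * (loss_clean - loss_pert) / sigma ** 2 * n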

  • @dermitdembrot3091
    @dermitdembrot3091 4 years ago +5

    Could it be that perturbation learning is just Hebbian learning where the updates are scaled by the "reward"? So if the "reward" is always 1 it would correspond to Hebbian learning. And for negative rewards the weights are changed to reduce the activations. In the r=-1 vs r=-2 case that would give a negative update for both but a stronger one for the second "action" (comparable to the REINFORCE algorithm).

    • @YannicKilcher
      @YannicKilcher  4 years ago +2

      Yes that's exactly what's happening. Basically every unit does RL by itself.

    • @dermitdembrot3091
      @dermitdembrot3091 4 years ago

      @@YannicKilcher Thanks for confirmation!
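
    A minimal sketch of the correspondence discussed in this thread, with illustrative names: node perturbation updates each weight with a Hebbian term (pre-synaptic activity times the unit's own noise), scaled by a single scalar reward shared by the whole network, which is effectively a REINFORCE-style update performed by every unit on its own.

        import numpy as np

        def node_perturbation_step(x, y, w, sigma=0.01, lr=0.1):
            """Node perturbation as reward-scaled Hebbian learning for one
            linear layer: perturb the unit activations, measure one scalar
            reward, and scale the Hebbian outer product x^T * noise by the
            (baselined) reward."""
            clean = x @ w                                      # unperturbed activations
            noise = sigma * np.random.randn(*clean.shape)      # perturb the units, not the weights
            reward = -np.mean((clean + noise - y) ** 2)        # negative loss of the noisy pass
            baseline = -np.mean((clean - y) ** 2)              # reward of the clean pass as baseline
            # a positive advantage strengthens the noisy direction, a negative one weakens it
            return w + lr * (reward - baseline) / sigma ** 2 * (x.T @ noise)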

  • @Neural_Causality
    @Neural_Causality 4 years ago +4

    Does anyone know of an implementation of the idea proposed in the paper?
    Also, thanks a lot for sharing this paper and your comments on different papers, I think it's quite useful!

    • @YannicKilcher
      @YannicKilcher  4 years ago +1

      If you look in the comments here you'll find a link to Bengio's paper about the algorithm, they might have something.

    • @Neural_Causality
      @Neural_Causality 4 years ago

      @@YannicKilcher Thanks! Will check it

  • @terumiyuuki6488
    @terumiyuuki6488 4 years ago +3

    It does sound suspiciously like Decoupled Neural Interfaces. Think you'd like to make a video on that? It would be great.
    Keep up the great work!

  • @BuzzBizzYou
    @BuzzBizzYou 3 years ago +2

    Won’t the proposed network create a massive IIR filter?

  • @8chronos
    @8chronos 2 years ago

    Thanks for this nice video.
    One thing still seems unclear to me: does this only allow for possibly near-biological NN training, or are there also other advantages?
    E.g. is it faster than backprop?

    • @moormanjean5636
      @moormanjean5636 1 year ago +1

      This is what I would like to know as well. I would guess it's slower, but the only way to train networks in a comparable manner given certain assumptions.

  • @bzqp2
    @bzqp2 2 years ago

    I like how, the moment the paper is written by Hinton, you switched from drawing the layers horizontally to drawing them vertically xd

  • @joirnpettersen
    @joirnpettersen 4 years ago +4

    If the brain uses backpropagation, and we can some day figure out a way to model it mathematically, would adversarial attacks become a thing we might need to worry about? If not, would it be for a lack of information, or is there some difference between the way the brain does it and the way we do it on computers?

    • @YannicKilcher
      @YannicKilcher  4 years ago

      Very nice question. I think this is as yet unanswered, but definitely possible.

    • @BrtiRBaws
      @BrtiRBaws 4 years ago +11

      Maybe we can see optical illusions as a sort of adversarial attack :)

    • @maloxi1472
      @maloxi1472 4 years ago +3

      ​@@BrtiRBaws Yes, absolutely. I would argue that things like optical illusions, ideological belief structures, very elaborate lies, hallucinogens, unhealthy but tasty food... are all adversarial attacks on different substructures of the brain

    • @priyamdey3298
      @priyamdey3298 3 years ago +1

      Numenta shows that if information flow (both the inputs and the weights of neurons) is quite sparse, then a network becomes quite robust to perturbations / random noise. And they say that the brain has very sparse information flow. So maybe yes, we have yet to include more meaningful priors (like sparseness) in the right way to make them robust.

    • @bzqp2
      @bzqp2 2 years ago

      Hitting a guy in the head with a shovel can be an adversarial neural network attack.

  • @victorrielly4588
    @victorrielly4588 3 years ago +2

    Here's a link to the arXiv paper on difference target propagation, for anyone like me who doesn't want to pay to read the biology paper. Also, this paper looks like the original work describing the machine-learning aspect of this idea.
    arxiv.org/pdf/1412.7525.pdf
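
    For orientation, the core of difference target propagation from the linked paper can be sketched as follows; this is a simplified fragment with illustrative names, not the paper's full training procedure. Targets are passed down through learned approximate inverses, with a difference correction that compensates for the inverses being imperfect, and each layer then learns locally by moving its activation toward its target.

        def dtp_targets(h, target_top, g):
            """Compute layer-wise targets by difference target propagation:
                h_hat[l-1] = g[l](h_hat[l]) + h[l-1] - g[l](h[l])
            where h[0..L] are the forward activations and g[l] is a learned
            approximate inverse mapping layer l back to layer l-1 (g[0] is
            unused). Each layer l is then trained locally on (h[l], h_hat[l])."""
            targets = [None] * len(h)
            targets[-1] = target_top
            for l in range(len(h) - 1, 0, -1):
                targets[l - 1] = g[l](targets[l]) + h[l - 1] - g[l](h[l])
            return targets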

  • @Zantorc
    @Zantorc 3 years ago +6

    For perturbation learning, excitation and inhibition use completely different mechanisms in the brain - the neurotransmitter is even different and different cell types are involved. So rather than dampen all weights when the result is wrong, it can selectively dampen the excitation and/or amplify the inhibition. So there is an extra degree of freedom, which is the degree to which the correction falls on the inhibitory neurons vs. the excitatory neurons, as well as the magnitude of the correction. So this is at least a 2D correction vector - possibly more, given that individual neuron sub-types may be differently affected. Therefore my claim is that in the brain it's not so much 'scalar feedback' as 'vector feedback', at least for perturbation learning. I suspect it is the lack of distinction between neurons in ML which leads to poor results for perturbation learning.

    • @iuhh
      @iuhh 3 years ago +1

      I think the different mechanisms in a single brain neuron could probably be represented by two or more artificial neurons though, maybe in multiple layers that handle excitation and inhibition separately, so I'm not sure how that would relate to the quality of the results.

    • @Zantorc
      @Zantorc 3 years ago +3

      @@iuhh The more you know about neurons, the less likely you are to think that. The point neuron can't do what a pyramidal neuron can do: it's predictive, and synapse strength isn't the equivalent of a weight - it's one bit at most on distal and apical dendrites and doesn't cause firing; it's part of the pattern-matching process.

  • @sehbanomer8151
    @sehbanomer8151 4 years ago +1

    I thought this was part 2 or something.

  • @stefanogrillo6040
    @stefanogrillo6040 8 months ago

    Duper

  • @herp_derpingson
    @herp_derpingson 4 years ago

    DEJA VU

  • @palfers1
    @palfers1 3 months ago

    2020 is quite dated.

  • @ThinkTank255
    @ThinkTank255 1 year ago +2

    How many times do I have to tell you guys, the brain doesn't "learn"??? The brain *memorizes* verbatim. For prediction, the brain says, "What matches my memories the best?" and chooses that as a prediction. It is as simple as that. Brains are generally *not* as good as backpropagation at generalization, but that feature of brains is actually very useful for nonlinear spatio-temporal patterns, such as doing mathematics and logic. This is why, to date, ML based methods have not been able to solve extremely complex reasoning based problems. They overgeneralize when it comes to nonlinear logical processes.
    It is actually extremely easy to prove the brain doesn't use backpropagation. How many times do you have to read a book to give a good summary? Once. Etc.... The brain learns *instantly* by rote memorization. Instant learning brings many evolutionary benefits.

    • @DajesOfficial
      @DajesOfficial 1 year ago

      How many times did you have to read books before it became possible for you to give a good summary after a single read? Let's test your hypothesis by giving a book to an infant and asking them to give a good summary on the first read.

    • @ThinkTank255
      @ThinkTank255 1 year ago +1

      @@DajesOfficial You've actually proven my point. The problem is, most humans aren't particularly good at remembering factual information. This is because 99.99% of the information you are getting at any given time isn't factual information. It's random sights, sounds, and smells that your brain deems important for your survival. The reason adults are better than infants is that they have practiced the skill of honing in on factual information.