What is Back Propagation

  • Published 21 June 2023
  • Learn about watsonx→ ibm.biz/BdyEjK
    Neural networks are great for predictive modeling - everything from stock trends to language translation. But what if the answer is wrong? How do they “learn” to do better? Martin Keen explains that during a process called backward propagation, the generated output is compared to the expected output, and then the error contributed by each neuron (or “node”) is examined. By adjusting each node’s weights and biases, the error is reduced and the overall accuracy is improved. (A minimal sketch of this compare-and-adjust loop follows below.)
    Get started for free on IBM Cloud → ibm.biz/sign-up-now
    Subscribe to see more videos like this in the future → ibm.biz/subscribe-now
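
    A minimal sketch of that compare-and-adjust loop (illustrative Python, not IBM's code; the single-node setup and the numbers are made up for illustration):

        # one node learning to reduce its error (pure Python)
        x, expected = 2.0, 1.0              # one training example
        w, b = 0.2, 0.0                     # the node's weight and bias

        for step in range(100):
            generated = w * x + b           # forward pass: generated output
            error = generated - expected    # compare generated vs. expected output
            grad_w = 2 * error * x          # error contributed through the weight
            grad_b = 2 * error              # ... and through the bias
            w -= 0.01 * grad_w              # adjust the weight to reduce the error
            b -= 0.01 * grad_b              # adjust the bias as well

        print(round(w * x + b, 3))          # ~1.0, i.e. close to the expected output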

Comments • 37

  • @vencibushy
    @vencibushy 4 months ago +10

    Back propagation is to neural networks what negative feedback is to closed-loop systems. The understanding comes pretty naturally to people who have studied automation and control engineering.
    However, many articles tend to mix things up - in this case, back propagation and gradient descent. Back propagation is the process of passing the error back through the layers and using it to recalculate the weights. Gradient descent is the algorithm used for that recalculation. There are other algorithms for recalculating the weights.
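
    A minimal sketch of that split (my own illustration, assuming PyTorch; not from the comment or the video): backward() is the back-propagation step that computes the gradients, while the optimizer is the interchangeable update rule.

        import torch

        model = torch.nn.Linear(3, 1)                             # a one-layer network
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # could be Adam, RMSprop, ...

        x = torch.randn(8, 3)
        target = torch.randn(8, 1)

        loss = torch.nn.functional.mse_loss(model(x), target)
        loss.backward()        # back propagation: pass the error back through the layers
        optimizer.step()       # gradient descent (or another rule) recalculates the weights
        optimizer.zero_grad()  # clear gradients before the next iteration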

  • @Kiera9000
    @Kiera9000 10 months ago +14

    Thanks for getting me through my exams, because my professor's script helps literally nothing with understanding deep learning. Cheers mate

  • @anant1870
    @anant1870 11 months ago +11

    Thanks for this Great explanation MARK 😃

  • @hamidapremani6151
    @hamidapremani6151 2 months ago

    Brilliantly simplified explanation for a fairly complex topic. Thanks, Martin!

  • @Mary-ml5po
    @Mary-ml5po 11 months ago +7

    I can't get enough of your brilliant videos. Thank you for making what seemed complicated to me before easy to understand. Could you please post a video about loss functions and gradient descent?

    • @im-Anarchy
      @im-Anarchy 10 months ago +1

      What did he even teach, actually?

  • @sakshammishra9232
    @sakshammishra9232 9 months ago +2

    Lovely man... excellent videos, all complexities eliminated. Thanks a lot 😊

  • @ca1790
    @ca1790 2 days ago

    The gradient is passed backward using the chain rule from calculus. The gradient is just a multivariable form of the derivative. It is an actual numerical quantity for each "atomic" part of the network; usually a neuron's weights and bias.
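
    A rough numeric illustration of that chain rule for a single neuron (my own sketch, assuming y = sigmoid(w*x + b) and a squared-error loss; the values are arbitrary):

        import math

        x, t = 0.5, 1.0                      # input and target
        w, b = 0.8, 0.1                      # the neuron's weight and bias

        z = w * x + b                        # pre-activation
        y = 1 / (1 + math.exp(-z))           # sigmoid activation

        dL_dy = 2 * (y - t)                  # how the loss changes with the output
        dy_dz = y * (1 - y)                  # how the output changes with the pre-activation
        dz_dw, dz_db = x, 1.0                # how the pre-activation changes with w and b

        # chain rule: multiply the local derivatives along the path back to each parameter
        dL_dw = dL_dy * dy_dz * dz_dw
        dL_db = dL_dy * dy_dz * dz_db
        print(dL_dw, dL_db)                  # the numerical gradient for this neuron's weight and bias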

  • @1955subraj
    @1955subraj 8 months ago

    Very well explained 🎉

  • @neail5466
    @neail5466 1 year ago +1

    Thank you for the information.
    Could you please tell us whether BP is only applicable to supervised models, since we have to have a pre-computed result to compare against?
    Certainly, unsupervised models could also use this in theory, but does / could it help in a positive way?
    Additionally, how is the comparison actually performed?
    Especially for information that can't be quantized!

  • @msatyabhaskarasrinivasacha5874

    Awesome.....awesome superb explanation sir

  • @Zethuzzz
    @Zethuzzz 2 months ago +3

    Remember the chain rule that you learned in high school? Well, that's what is used in backpropagation.

  • @rigbyb
    @rigbyb 11 months ago

    Great video! 😊

  • @sweealamak628
    @sweealamak628 2 months ago +1

    Thanks Mardnin!

  • @idobleicher
    @idobleicher 2 months ago

    A great video!

  • @rishidubey8745
    @rishidubey8745 7 days ago

    thanks marvin

  • @guliyevshahriyar
    @guliyevshahriyar 11 months ago

    Thank you!

  • @pleasethink4789
    @pleasethink4789 9 months ago +2

    Hi Marklin!
    Thank you for such a great explanation.
    (btw, I know your name is Martin. 😂 )

  • @ashodapakian2788
    @ashodapakian2788 1 month ago +1

    Off topic: what drawing board setup do these IBM videos use?
    It's really great.

    • @boyyang1290
      @boyyang1290 1 month ago

      I'd like to know, too.

    • @boyyang1290
      @boyyang1290 1 month ago

      I found it: he is drawing on glass.

  • @jaffarbh
    @jaffarbh 11 months ago

    Isn't back propagation used to lower the computation needed to adjust the weights? I understand that doing so in a "forward" fashion is much more expensive than in a "backward" fashion.

  • @Ellikka1
    @Ellikka1 2 months ago

    When computing the loss function, how is the "correct" output given? Is it training data that is compared against another data file with the desired outcomes? In the example of "Martin", how does the neural network get to know that your name was not Mark?

  • @l_a_h797
    @l_a_h797 1 month ago

    5:36 Actually, convergence does not necessarily mean the network is able to do its task reliably. It just means that its reliability has reached a plateau. We hope that the plateau is high, i.e. that the network does a good job of predicting the right outputs. For many applications, NNs are currently able to reach a good level of performance. But in general, what is optimal is not always very good. For example, a network with just 1 layer of 2 nodes is not going to be successful at handwriting recognition, even if its model converges.

    • @mateusz6190
      @mateusz6190 1 month ago

      Hi, you seem to have good knowledge of this, so can I ask you a question, please? Do you know if neural networks will be good for recognizing handwritten math expressions (digits, operators, variables, all elements separated so they can be recognized individually)? I need a program that would do that, and I tried a neural network; it is good for images from the dataset but terrible for anything outside the dataset. Would you have any tips? I would be really grateful.

  • @boeng9371
    @boeng9371 4 months ago +1

    In IBM we trust ✊😔

  • @the1111011
    @the1111011 10 months ago

    Why didn't you explain how the network updates the weights?

  • @stefanfueger3487
    @stefanfueger3487 11 months ago +8

    Wait ... the video has been online for four hours ... and still no question about how he manages to write mirrored?

  • @mohslimani5716
    @mohslimani5716 11 months ago

    Thanks, but I still need to understand how it technically happens.

    • @AnjaliSharma-dv5ke
      @AnjaliSharma-dv5ke 11 months ago +2

      It’s done by calculating the derivatives of the ŷ outputs (and ultimately the loss) with respect to the weights, working backwards through the network and applying the chain rule of calculus.
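
      A hand-rolled sketch of that backwards pass through a tiny two-layer network (my own illustration; the layer sizes and the tanh activation are arbitrary assumptions, not from the reply or the video):

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.normal(size=(4, 3))        # batch of 4 inputs, 3 features each
        t = rng.normal(size=(4, 1))        # targets
        W1 = rng.normal(size=(3, 5))       # first-layer weights
        W2 = rng.normal(size=(5, 1))       # output-layer weights

        # forward pass
        h = np.tanh(x @ W1)                # hidden layer
        y_hat = h @ W2                     # network outputs (the "y hats")

        # backward pass: chain rule applied layer by layer, from the output back
        d_yhat = 2 * (y_hat - t) / len(x)  # derivative of the mean squared error
        dW2 = h.T @ d_yhat                 # gradient for the output-layer weights
        d_h = d_yhat @ W2.T                # error pushed back to the hidden layer
        d_z1 = d_h * (1 - h ** 2)          # through the tanh nonlinearity
        dW1 = x.T @ d_z1                   # gradient for the first-layer weights

        W1 -= 0.1 * dW1                    # a gradient-descent style weight update
        W2 -= 0.1 * dW2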

  • @Justme-dk7vm
    @Justme-dk7vm 2 months ago +1

    ANY CHANCE TO GIVE 1000 LIKES ???😩