Chain Rule | Deep Learning Tutorial 15 (Tensorflow2.0, Keras & Python)

  • Published 23 Aug 2020
  • This video gives a very simple explanation of the chain rule that is used while training a neural network. The chain rule is a topic covered when you study differential calculus. Don't worry if you don't know calculus; the chain rule is a rather simple mathematical concept. As a prerequisite for this video, please watch my video on derivatives in this same deep learning series (the link to the entire series is below). It is even better if you watch the entire series step by step, so that your foundations are clear when you watch this video.
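    In code, the idea looks like this (an illustrative sketch, not from the video): if z depends on y and y depends on x, the derivative of z with respect to x is the product of the two intermediate derivatives, which we can verify numerically.

    ```python
    # Chain rule sketch: z = f(y), y = g(x)  =>  dz/dx = dz/dy * dy/dx.
    # Hypothetical functions: y = 3x + 2, z = y**2.

    def g(x):
        return 3 * x + 2          # y = 3x + 2, so dy/dx = 3

    def f(y):
        return y ** 2             # z = y^2, so dz/dy = 2y

    def dz_dx_chain(x):
        y = g(x)
        return (2 * y) * 3        # dz/dy * dy/dx

    def dz_dx_numeric(x, h=1e-6):
        # central-difference estimate of d/dx f(g(x))
        return (f(g(x + h)) - f(g(x - h))) / (2 * h)

    x = 1.5
    print(dz_dx_chain(x))               # analytic: 2*(3*1.5+2)*3 = 39.0
    print(round(dz_dx_numeric(x), 3))   # numeric check, ≈ 39.0
    ```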
    🔖 Hashtags 🔖
    #chainruleneuralnetwork #chainrule #chainrulepython #neuralnetworkpythontutorial #chainrulederivatives
    Do you want to learn technology from me? Check codebasics.io/ for my affordable video courses.
    Next Video: • Tensorboard Introducti...
    Previous video: • Stochastic Gradient De...
    Deep learning playlist: • Deep Learning With Ten...
    Machine learning playlist : czcams.com/users/playlist?list...
    Khan academy video on chain rule: • Chain rule | Derivativ...
    Prerequisites for this series:
    1: Python tutorials (first 16 videos): czcams.com/users/playlist?list...
    2: Pandas tutorials(first 8 videos): • Pandas Tutorial (Data ...
    3: Machine learning playlist (first 16 videos): czcams.com/users/playlist?list...
    Website: codebasics.io/
    Facebook: / codebasicshub
    Twitter: / codebasicshub

Comments • 105

  • @codebasics
    @codebasics  2 years ago

    Do you want to learn technology from me? Check codebasics.io/ for my affordable video courses.

  • @SatyaBhambhani
    @SatyaBhambhani 2 years ago +1

    Man, this series! I love it. As an Econometrician, I can see how well he connects even the basics like derivatives to the depths of gradient descent.

  • @satishsangole9102
    @satishsangole9102 2 years ago +1

    Precise, Concrete, Complete, and Simple. You are simply amazing!

  • @sejalanand23
    @sejalanand23 a year ago +2

    As simple as it could get. Thanks a lot!!! You have a gift, sir.

  • @Ankurkumar14680
    @Ankurkumar14680 3 years ago +24

    Thanks for maintaining a great pace of releasing new videos in this series... you are really a big help to lots of students and professionals around the globe. :)

    • @codebasics
      @codebasics  3 years ago +2

      Thanks Ankur for your kind words :)

  • @chankit4392
    @chankit4392 2 years ago

    You are such a good teacher! Those complex concepts I never understood before watching your videos.

  • @RichardBronosky
    @RichardBronosky 2 years ago +4

    When I was in school, I was never able to maintain interest in this long enough to grasp it. But now that I am doing robotics and have a need to reconcile the error between the Computer Vision prediction and physical coordinates, this is extremely interesting, and therefore easy to learn.
    Put another way, my steps are this:
    * Pull an image from a camera
    * Select a point (x,y) on the image to move finger to
    * Convert that to a point in space (x,y,z)
    * Move finger
    * Pull image from camera
    * Locate the point (x,y) on the image where the finger appears
    * Correct the error
    Well that simple sounding last step is where the process in your video applies.
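    That correction step can be sketched as a tiny gradient-descent update (illustrative names and numbers, not from the video or the commenter's actual system):

    ```python
    # Sketch of "correct the error", assuming the image-to-world mapping has a
    # learnable offset we nudge by gradient descent. All values are made up.

    target = (120.0, 80.0)      # pixel where we asked the finger to go
    observed = (131.0, 74.0)    # pixel where the finger actually appeared

    offset = [0.0, 0.0]         # calibration correction applied to future targets
    lr = 0.5                    # step size

    # Squared-error loss L = (obs_x - tgt_x)^2 + (obs_y - tgt_y)^2;
    # by the chain rule dL/d(offset_i) = 2 * error_i, so one gradient step
    # moves the offset toward cancelling the observed error.
    for i in range(2):
        error = observed[i] - target[i]
        offset[i] -= lr * 2 * error

    print(offset)   # → [-11.0, 6.0]
    ```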

  • @karthikc8992
    @karthikc8992 3 years ago +4

    Sir
    Looking great today
    Hope this continues 😊

  • @vivekmodi3165
    @vivekmodi3165 3 years ago

    Your explanation is very easy to understand. Your tutorials make complex topics simple. Thank you. You're a great teacher for AI and machine learning.

  • @hugomartinez9370
    @hugomartinez9370 3 years ago +2

    thanks a lot for the explanation, now I finally understand this :)

  • @rndtest8691
    @rndtest8691 2 years ago

    Code Basics: your explanation is THE BEST... You make very complex things very simple and easy to understand.

  • @aakuthotaharibabu8244

    I feel thankful to have found a teacher like you, sir!

  • @harshsawant4946
    @harshsawant4946 a year ago +1

    You are a great teacher ... Hats off to you 🙌🙌

  • @ramsunil4317
    @ramsunil4317 3 years ago

    Thanks for explaining in a simple way. Now the difficult puzzle has been solved for me.

  • @trend_dindia6714
    @trend_dindia6714 3 years ago

    Cleanest explanation of backpropagation 👌

  • @sandiproy330
    @sandiproy330 a year ago

    Great content. Nice teaching with a lucid example. Thanks a lot for the effort.

  • @chandralekhadhanasekaran3530

    Each time I see your videos, I am learning new things and getting excited. Keep Exciting us :)

  • @shreyasb.s3819
    @shreyasb.s3819 3 years ago

    You cleared all my doubts. Thanks a lot

  • @osho2810
    @osho2810 2 years ago

    Very easy... and free from lots of confusion. Thank you Sir

  • @sooryaprakash6390
    @sooryaprakash6390 3 years ago +1

    Best video with a simple and clear explanation I have ever seen! Will share with all my ML peeps!

  • @vinaybhujbal4347
    @vinaybhujbal4347 3 months ago

    Greatly explained 🙌🏾

  • @srinivasamaddukuri282
    @srinivasamaddukuri282 2 months ago

    you nailed it man

  • @chayaravindra7649
    @chayaravindra7649 3 years ago

    Thank you sir. Because of you I got interested in learning data science and machine learning. Sir, kindly upload more videos on deep learning.

  • @fahadreda3060
    @fahadreda3060 3 years ago

    Really Nice Video, Thanks man

  • @yogeshbharadwaj6200
    @yogeshbharadwaj6200 3 years ago +1

    Never thought the chain rule concept was so simple... now I'm getting confidence in maths day by day... it's all because of you sir... you are proving that the "way of teaching" matters a lot in making students (or anyone) understand concepts... Hats off!!

    • @codebasics
      @codebasics  3 years ago +1

      👍☺️ thanks for your kind words yogesh

  • @AnkitKumar-cq4rc
    @AnkitKumar-cq4rc 3 years ago

    Sir, it's really very helpful for us.
    Hope you update the videos as soon as possible.

  • @mahalerahulm
    @mahalerahulm 2 years ago

    Very nice explanation ..

  • @RajaKumar-hg9wl
    @RajaKumar-hg9wl 2 years ago

    Hats off to you!!! Very superb. I have gone through different courses and learnt from different places, but the concept of the chain rule is not explained in such a simple way anywhere else. Thank you so much.

  • @navyak8717
    @navyak8717 3 years ago

    Very helpful

  • @GrowthPulse509
    @GrowthPulse509 6 months ago

    respect thank you so much sir

  • @nurbekss2729
    @nurbekss2729 2 years ago

    So sad that you were not my math teacher at school. You are so good.

  • @muhammednihas2218
    @muhammednihas2218 16 days ago

    Thank you so much sir

  • @anandanv2361
    @anandanv2361 3 years ago

    This is an extremely nice video

  • @achrafelkhanjari9157
    @achrafelkhanjari9157 2 years ago

    Greetings from Morocco, nice explanation

  • @nastaran1010
    @nastaran1010 4 months ago

    great!

  • @mohdsyukur1699
    @mohdsyukur1699 2 months ago

    You are the best my boss

  • @khalidal-reemi3361
    @khalidal-reemi3361 3 years ago

    Thanks a lot man.
    You said that you would make a separate video on "why we square the difference and not take the absolute value".
    Please give me the link to that video if available, because I was always asking myself this question.

  • @saadshqah8664
    @saadshqah8664 3 years ago

    Thanks a lot

  • @vijaya4740
    @vijaya4740 3 years ago +1

    Thank you for such a great explanation sir. I have seen all your ML, Python, pandas, and NumPy series;
    all were awesome. Hope you continue at the same pace!!

    • @codebasics
      @codebasics  3 years ago +3

      Yes, my plan is to upload more videos on deep learning and then start a new series on NLP

    • @oz4232
      @oz4232 3 years ago

      @@codebasics this is very good news, sir.

  • @rohanrocker
    @rohanrocker 3 years ago

    One of the best videos on YouTube.

  • @sanooosai
    @sanooosai 4 months ago

    thank you

  • @harishwarboss
    @harishwarboss 3 years ago +1

    Hey man, you look so young... Thank you for sharing the knowledge, it is absolutely helpful... keep doing that 🥳🥳

    • @codebasics
      @codebasics  3 years ago

      I am happy this was helpful to you.

  • @kirankumarb2190
    @kirankumarb2190 3 years ago

    Initially it felt quite confusing, but later when you explained with example equations, I got it. Thank you Sir.

  • @faezeabdolinejad731
    @faezeabdolinejad731 2 years ago +1

    thanks , 😃😀

  • @santoshkumarmishra441
    @santoshkumarmishra441 3 years ago

    REALLY GREAT EXPLANATIONS

  • @rushikeshdeshmukh2034
    @rushikeshdeshmukh2034 2 years ago +3

    Excellent video lecture; this entire series is superb. Thanks a lot for these methodical, step-by-step systematic videos.
    I would just like to offer constructive feedback on the chain rule formula you mention after 12:00 in this video. There are two paths from A to Z, one via X and another via Y, so the partial derivative of Z with respect to A should account for both paths.
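    The commenter's point can be checked numerically. With illustrative functions (not the video's equations) where both x and y depend on a, the multivariate chain rule sums the contribution of each path: ∂z/∂a = ∂z/∂x·∂x/∂a + ∂z/∂y·∂y/∂a.

    ```python
    # Two paths from a to z: x = a**2 and y = 3*a, with z = x + y**2.

    def z_of_a(a):
        x = a ** 2
        y = 3 * a
        return x + y ** 2

    def dz_da_chain(a):
        y = 3 * a
        dz_dx, dz_dy = 1.0, 2 * y              # partials of z = x + y^2
        dx_da, dy_da = 2 * a, 3.0              # partials of the inner functions
        return dz_dx * dx_da + dz_dy * dy_da   # sum over both paths

    def dz_da_numeric(a, h=1e-6):
        return (z_of_a(a + h) - z_of_a(a - h)) / (2 * h)

    a = 2.0
    print(dz_da_chain(a))               # 1*4 + 12*3 = 40.0
    print(round(dz_da_numeric(a), 3))   # numeric check, ≈ 40.0
    ```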

  • @oz4232
    @oz4232 3 years ago

    thanks.....

  • @MissInesM
    @MissInesM 3 years ago +1

    Back propagation seems almost trivial thanks to you. Thank you !

    • @codebasics
      @codebasics  3 years ago

      Thank you for your kind words 😊👍

    • @HarshalDev
      @HarshalDev 3 years ago

      +1
      he's an amazing teacher :')

  • @tugrulpinar16
    @tugrulpinar16 3 years ago

    The visuals help a lot!

  • @royreed9148
    @royreed9148 5 months ago

    Hi, I think this video is great. I've finished lesson 15. The part I don't understand is that at one point you talk about the derivative of loss with respect to the weights, and in the next lesson (15) you talk about derivatives between equations (and multiply them) that have nothing to do with loss. This is very confusing. I guess I'm just getting old.
    Anyway, it is true that you have a passion for this, and I wish you well.

  • @priyabratapanda1216
    @priyabratapanda1216 3 years ago

    Thank you so much sir. Deep learning seems very easy to me only because of you 🙏🙏🙏.
    Sir, one small doubt about this tutorial: you haven't mentioned the activation function after each hidden layer?
    But overall the lecture was very intuitive 🙏🙏💓

  • @zahidhassan4815
    @zahidhassan4815 3 years ago

    Man, you have given an awesome explanation. I was trying to understand this from 3Blue1Brown, but now I am able to join the puzzle and have a picture of what's happening in reality.

    • @codebasics
      @codebasics  3 years ago +1

      I am happy this was helpful to you.

  • @nriezedichisom1676
    @nriezedichisom1676 a year ago

    I love how you keep saying that this is easy whereas it is not😊

  • @1980chetansingla
    @1980chetansingla 3 years ago +1

    Sir, please do a video on optimization

  • @vinayak254
    @vinayak254 a year ago

    Sir, you explain complex things in a simple manner. Thank you very much. I hope you put up some deep learning / machine learning projects involving image processing.

    • @codebasics
      @codebasics  a year ago

      Yes, I do have an end-to-end project for potato plant disease prediction

    • @vinayak254
      @vinayak254 a year ago

      @@codebasics Thank you sir

  • @fauzisafina1827
    @fauzisafina1827 3 years ago

    Thank you, sir, very useful. May I request a video about how to convert a TensorFlow model to TensorFlow Lite?

  • @tusharpatne6863
    @tusharpatne6863 3 years ago +5

    Sir, please start the new series "Algorithms with Python"

  • @ezekielizedonmi310
    @ezekielizedonmi310 2 years ago

    I love you bro... Thanks for these videos. Please, can I take a certification exam in TensorFlow after these tutorials? What role could I fill in a company having completed this tutorial series?

  • @iitianvlogs9805
    @iitianvlogs9805 3 years ago

    How do I store PDFs' converted text in MongoDB so that, while extracting a search string, it will also tell us in which PDF, on which page, and in which paragraph the required information is available? Please help, sir. Big fan.

  • @Mathmagician73
    @Mathmagician73 3 years ago +1

    Can hardly wait for the new video 🤩🤩

  • @babachicken333
    @babachicken333 a year ago

    dayumm he a gamer

  • @yashmaheshwari8420
    @yashmaheshwari8420 2 years ago +2

    For all those asking why we square the error:
    Let's say your model outputs [0, 1] for two inputs while the actual outputs should be [1, 0].
    Calculating the error without squaring (or taking the absolute value):
    total error = ((actual 1 - predicted 1) + (actual 2 - predicted 2)) / 2
    total error = ((1 - 0) + (0 - 1)) / 2
    total error = (1 + (-1)) / 2
    total error = 0 / 2 = 0
    What? The error is 0, so our model is perfect?
    Squaring the error this time:
    total error = ((1 - 0)^2 + (0 - 1)^2) / 2
    total error = (1 + 1) / 2 = 1
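    The same arithmetic as runnable Python:

    ```python
    # Without squaring, opposite-signed errors cancel; with squaring they don't.

    actual = [1, 0]
    predicted = [0, 1]

    raw = sum(a - p for a, p in zip(actual, predicted)) / len(actual)
    squared = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

    print(raw)       # → 0.0  (errors cancel: the model looks "perfect")
    print(squared)   # → 1.0  (squaring exposes the real error)
    ```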

  • @MarshallDTeach-yr2ig
    @MarshallDTeach-yr2ig 3 years ago +1

    🙏

  • @BatBallBites
    @BatBallBites 3 years ago

    (y) chain rule very well explained

  • @tooljerk666
    @tooljerk666 2 years ago +1

    At time mark 12:53, what exactly do you do with 28? Say B is age. Is this saying with a change in B, we can expect z to be whatever age is plus 28?

  • @p1yush_meher
    @p1yush_meher a year ago

    Will we get the PPTs?

  • @orgdejavu8247
    @orgdejavu8247 2 years ago +1

    Can someone tell me why it equals the area at 5:26?

  • @joyanbhathena7251
    @joyanbhathena7251 3 years ago

    Why do we just subtract in w1 = w1 - 'something'? Shouldn't we add as well?

  • @luccahuguet
    @luccahuguet 3 years ago

    Why doesn't it have a nonlinear activation function?

  • @orgdejavu8247
    @orgdejavu8247 2 years ago

    I can't find how to derive the loss function with respect to weight 1 (or 2, 3, etc.), i.e. how the math is done behind the derivative part of the gradient descent formula. Everyone shows the result or a smaller reshaped formula, but I need the steps in between. An example where we backprop a single perceptron (with 1 or 2 weights and an L2 loss) would do it. Please, someone give me a link or a hand. Thanks!
    I do understand the essence of the chain rule, but I want to know dL/dw = dL/dPred * dPred/dw (dL/dw is the change in weight 1 with respect to the L2 loss; dPred is the derivative of the neuron's output (mx + b)).
    I don't get why the result is 2(target - pred) * input1.
    Is that because L2 is a square function and the derivative of x^2 (or error^2) is 2x, so 2*error --> 2(target - pred)? But then why do the other parts of the formula disappear?
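    A numerical check of the gradient being asked about, as a minimal sketch (illustrative values, not the video's code), assuming a single linear neuron pred = w*x + b and L2 loss L = (target - pred)^2. The chain rule gives dL/dw = dL/dpred * dpred/dw = -2*(target - pred)*x, i.e. the "other parts" don't disappear: dpred/dw is simply x.

    ```python
    # Single-neuron L2 loss as a function of one weight w
    def loss(w, x, b, target):
        pred = w * x + b
        return (target - pred) ** 2

    w, b, x, target = 0.5, 0.1, 2.0, 1.0

    pred = w * x + b
    analytic = -2 * (target - pred) * x   # chain-rule gradient dL/dw

    # Central-difference estimate to confirm the analytic gradient
    h = 1e-6
    numeric = (loss(w + h, x, b, target) - loss(w - h, x, b, target)) / (2 * h)

    print(analytic)   # ≈ 0.4
    print(numeric)    # ≈ 0.4
    ```

    The sign flips to 2(pred - target)*x if the loss is written as (pred - target)^2; either way, the chain rule multiplies the loss derivative by dpred/dw = x.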

  • @vikramyadav-fe4vj
    @vikramyadav-fe4vj 2 months ago

    How did you get the equations x = a² + 7b, y = c³ + d, and z = 4x + 3y? Please, someone explain.

  • @shah9366
    @shah9366 3 years ago

    Hi sir,
    I am a third-year computer science & engineering student and I want to build my career in data science. What should I do? How do I start? I am in Gujarat; what kinds of competitive exams should I crack? Which are the best universities in India or the world, how do I get into them, what kinds of exams do they have, and what are the admission steps? Please help me.

  • @d.s.5157
    @d.s.5157 7 months ago

    Why do you get -1?

  • @mujamilkhan714
    @mujamilkhan714 3 years ago +1

    How many tutorials are still left in the complete deep learning series?

    • @karthikc8992
      @karthikc8992 3 years ago +2

      A lot

    • @codebasics
      @codebasics  3 years ago +3

      I agree. There will be many more videos 😊 and the most interesting thing would be deep learning projects 👍

    • @mujamilkhan714
      @mujamilkhan714 3 years ago

      @codebasics thanks for the update Dhaval bhai, I'm eagerly waiting for the projects phase ✌🏻

    • @vamsinadh100
      @vamsinadh100 3 years ago

      @@codebasics Waiting!!!

  • @alok4289
    @alok4289 3 years ago

    0:25 When you squeeze your forehead, it makes the sign of Lord Mahadev 🔱

  • @aleenshrestha8119
    @aleenshrestha8119 2 years ago

    Where is the loss function used? I didn't get it; while you were teaching gradient descent there was no use of the loss function.

  • @RakeshKambojVinayak
    @RakeshKambojVinayak 3 years ago

    Sir, my question is related to gradient descent.
    Why is the activation function applied only for gradient descent ("sigmoid_numpy(weighted_sum)")
    and not for stochastic and batch gradient descent? github.com/codebasics/deep-learning-keras-tf-tutorial/blob/master/8_sgd_vs_gd/gd_and_sgd.ipynb
    Thanks for a great source of inspiration!

  • @rajdipdas1329
    @rajdipdas1329 2 years ago

    dz/dx = dz/dy * dy/dx
    this is the chain rule (for z a function of y, and y a function of x)

  • @phoneix24886
    @phoneix24886 18 days ago

    Should have followed your channel earlier before spending money on useless bootcamps to learn DL
