04 PyTorch tutorial - How do computational graphs and autograd in PyTorch work

  • Published 19 Jun 2024
  • In this tutorial, we have talked about how the autograd system in PyTorch works and about its benefits. We also reviewed how the forward and backward passes work and how gradients are calculated. (A minimal code sketch of these steps appears below, after the description.)
    If you found this video helpful, please drop a like and subscribe. If you have any questions, write them in the comment section below.
    💻 Blog: datahacker.rs/004-computationa...
    🎞 YouTube: / @datahackerrs
    🎞 Link for Google Colab script: colab.research.google.com/git...
    Timestamps
    00:00 Introduction
    00:52 Computational graph forward pass
    01:54 Python forward pass
    02:25 Gradient calculation
    02:49 Python backward pass
    03:29 Turn off gradient calculation
    04:58 Gradient accumulation
    07:01 Chain rule
    08:17 Derivative calculation visualized
    10:15 .backward() function
    #machinelearning #artificialintelligence #ai #datascience #python #deeplearning #technology #programming #coding #bigdata #computerscience #data #dataanalytics #tech #datascientist #iot #pythonprogramming #programmer #ml #developer #software #robotics #java #innovation #coder #javascript #datavisualization #analytics #neuralnetworks #bhfyp
  • Science & Technology
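
The example below is a minimal sketch of the steps covered in the timestamps (forward pass, backward pass, gradient accumulation, and turning gradient calculation off). The tensor values and the variable names x, y, z are assumptions chosen to match the gradients 20, 28 and 36 discussed in the comments, not necessarily the exact code from the notebook.

```python
import torch

# --- Forward pass: build the computational graph ---
# Leaf tensor with requires_grad=True so autograd records operations on it.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = 2 * x + 3            # intermediate node in the graph
z = (y ** 2).sum()       # scalar output, so backward() needs no argument

# --- Backward pass: apply the chain rule through the graph ---
z.backward()
print(x.grad)            # dz/dx = 2*(2x+3)*2 = 8x + 12 -> tensor([20., 28., 36.])

# --- Gradient accumulation: backward() ADDS to .grad ---
z2 = ((2 * x + 3) ** 2).sum()
z2.backward()
print(x.grad)            # tensor([40., 56., 72.]) - doubled, because gradients accumulate
x.grad.zero_()           # reset before the next iteration

# --- Turning off gradient calculation (e.g. for inference) ---
with torch.no_grad():
    z3 = ((2 * x + 3) ** 2).sum()
print(z3.requires_grad)  # False: no graph was recorded inside no_grad()
```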

Comments • 12

  • @josecuevas5814
    @josecuevas5814 1 year ago +2

    Thank you so much for the video! A practical, worry-free, and clear explanation.

  • @donfeto7636
    @donfeto7636 2 years ago +1

    Awesome video, I would like to see more in-depth videos like this. I like that you don't ignore the math. Please do the same for the loss function of any algorithm you like (ideally something related to deep learning, such as GANs or diffusion models).

  • @ashilshah3376
    @ashilshah3376 1 year ago

    Very nice explanation, thank you

  • @harshpandya9878
    @harshpandya9878 3 years ago +2

    great!!

  • @cristianarteaga
    @cristianarteaga 3 years ago +2

    Thank you for such a great explanation!

  • @AlexeyMatushevsky
    @AlexeyMatushevsky 3 years ago +1

    Very nice video! Thank you!

    • @datahackerrs
      @datahackerrs  3 years ago +1

      Thanks for the nice comment, glad you like it!

  • @xflory26x
    @xflory26x 11 months ago +1

    Can you please elaborate on what you are talking about in the backward() function step? It's not very clear what is happening when you reset the gradients with z.backward(v).

  • @bhavyabalan6225
    @bhavyabalan6225 2 years ago +2

    Hi, great video and explanation. In the last part, why do we need to define the vector of ones, and why do we divide by the length of the vector? The explanation for that change in the code is not clear to me.

    • @yusun5722
      @yusun5722 2 years ago

      The reason it needs a vector is that, for a non-scalar output, the gradients are computed as a vector-Jacobian product: the vector you pass to backward() is multiplied with the Jacobian matrix to get the resulting gradients. Details: pytorch.org/tutorials/beginner/introyt/autogradyt_tutorial.html
      For your second question, I think it's right NOT to divide by the length. If you manually calculate the derivative dz/dx, it comes out to 20, 28 and 36.
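
      A minimal sketch of the vector-Jacobian product being discussed here; the tensors x, y, z are assumptions chosen so that dz/dx comes out to 20, 28 and 36 as mentioned above, not necessarily the video's exact code.

      ```python
      import torch

      x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
      y = 2 * x + 3
      z = y ** 2                 # z is a vector, so backward() needs a gradient argument

      # backward(v) computes the vector-Jacobian product v^T . J, where J = dz/dx.
      # A vector of ones gives the same result as z.sum().backward():
      # the full gradient, with no division by the vector's length.
      v = torch.ones_like(z)
      z.backward(v)
      print(x.grad)              # tensor([20., 28., 36.])  (= 8x + 12)
      ```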

  • @user-ig3rp7fk9c
    @user-ig3rp7fk9c 9 months ago

    What about matrices? How do we handle the backward computations (derivatives) for matrices? Can you share an article or something? Thanks.