Optimization in Deep Learning | All Major Optimizers Explained in Detail

  • Date added: 27. 07. 2024
  • In this video, we will understand all the major Optimizers in Deep Learning. We will see what Optimization in Deep Learning is and why we need it in the first place.
    Optimization in Deep Learning is a difficult concept to understand, so I have done my best to provide you with the best possible explanation, after studying it from different sources, so that you can understand it with ease.
    So I hope that after watching this video, you will no longer struggle with the concept and will understand it well.
    Optimization in Deep Learning is a technique that speeds up the training of the model.
    If you know about mini-batch gradient descent, you will know that in mini-batch gradient descent the learning takes place in a zig-zag manner. Thus, some time gets wasted moving in a zig-zag direction instead of a straight one.
    Optimization in Deep Learning makes the learning path straighter, thus reducing the time taken to train the model.
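    For reference, a minimal Python sketch of the Momentum update discussed in the video (illustrative only, not the video's code; the names lr and beta are assumptions): the exponentially weighted moving average of past gradients cancels the oscillating zig-zag component of mini-batch steps, so the path toward the minimum becomes straighter.

        def momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
            # velocity is an EWMA of past gradients: oscillating components
            # of successive mini-batch gradients cancel out, while the
            # consistent component accumulates, straightening the path.
            velocity = beta * velocity + (1 - beta) * grad
            w = w - lr * velocity
            return w, velocity

        # usage: w, v = momentum_step(w, dw, v) after each mini-batch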
    ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
    ▶ Mini Batch Gradient Descent: • Mini Batch Gradient De...
    ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
    ✔ Improving Neural Network Playlist: • Overfitting and Underf...
    ✔ Complete Neural Network Playlist: • Neural Network python ...
    ✔ Complete Logistic Regression Playlist: • Logistic Regression Ma...
    ✔ Complete Linear Regression Playlist: www.youtube.com/watch?v=mlk0r...
    ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
    Timestamp:
    0:00 Agenda
    1:02 Why do we need Optimization in Deep Learning
    2:36 What is Optimization in Deep Learning
    3:43 Exponentially Weighted Moving Average
    9:20 Momentum Optimizer Explained
    11:53 RMSprop Optimizer Explained
    15:36 Adam Optimizer Explained
    ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
    Subscribe to my channel, because I upload a new Machine Learning video every week: czcams.com/channels/JFA.html...

Comments • 27

  • @user-lj3bo8it4r
    @user-lj3bo8it4r 10 months ago +4

    Dude, for the longest time I felt like my understanding of moving averages and RMSprop was missing something, and I found it in this video. You have no idea how grateful I am to your channel. Thank you; teachers tend to jump over important concepts without explaining them 🎉

    • @CodingLane
      @CodingLane  10 months ago

      Hehe… thank you so much for expressing this 😇

  • @areegfahad5968
    @areegfahad5968 a year ago

    Thank you very much for the awesome explanation.

  • @azharhussian4326
    @azharhussian4326 2 years ago +3

    You explained it best. After many years, I finally got it.

  • @igorg4129
    @igorg4129 10 months ago +1

    Your explanations are, as always, clear and very useful. They are among the best on CZcams, and again I say this as a teacher myself (not in the AI field).
    Yet there is one issue that is mis-explained in literally all the available explanations on CZcams, and for some reason yours is among them.
    The issue is that you forget to mention that the loss surface is unique and different for EVERY observation and might potentially have minima in different places for different observations. This is extremely important to understand, especially in the context of stochastic gradient descent.
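
    A tiny sketch of that point (assuming a squared-error loss on single observations; not from the video): each observation defines its own loss surface with its own minimum, which is what one stochastic gradient step actually sees.

        # Per-example loss L_i(w) = (w * x_i - y_i)^2 is minimized at
        # w = y_i / x_i, a different point for each observation.
        data = [(1.0, 2.0), (2.0, 2.0)]   # two observations (x_i, y_i)
        for x, y in data:
            print(f"minimum of this example's loss surface: w = {y / x}")
        # -> w = 2.0 and w = 1.0: different minima for different observations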

    • @CodingLane
      @CodingLane  10 months ago

      Hi, thanks for the suggestion. Yes, it's true.

    • @igorg4129
      @igorg4129 10 months ago

      @@CodingLane I hope I haven't insulted you; if I did, my apologies. I was just trying to supply you with constructive feedback.
      As I said, I think you are one of the best on YouTube.

  • @2daymatters
    @2daymatters 2 years ago +1

    You explained it very smoothly and clearly, thank you!

  • @pavankongara792
    @pavankongara792 4 months ago

    Amazing!

  • @daniapy
    @daniapy 2 years ago +3

    A life saver! Thank you so much for sharing this with us.

  • @vijayakumarr.k.2469
    @vijayakumarr.k.2469 a year ago

    thank you

  • @ekleanthony7735
    @ekleanthony7735 a year ago +1

    This is the best explanation so far. Thanks for the great work

    • @CodingLane
      @CodingLane  a year ago

      Thank you so much! Glad it helped! 🙂

  • @DelightDomain_DB
    @DelightDomain_DB 2 years ago +1

    You made it super easy. Thanks for sharing

  • @stylish37
    @stylish37 6 months ago +1

    Best explanation out there. Thanks a lot!

  • @shubhamsinghal2756
    @shubhamsinghal2756 a year ago +1

    A small doubt: in RMSprop, since W and b are both affected by the EWMA of squared gradients, the step in the b direction is also divided by it, so the steps along b get checked as well. Won't that too decrease the learning rate and defeat our whole purpose?
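
    For context, a minimal RMSprop sketch (illustrative; beta and eps are assumed names, not the video's code). Both W and b are divided by the root of their own squared-gradient average, but the effect differs by direction: along b, where gradients oscillate and are large, the denominator is large and the steps shrink; along W, where gradients are small, the denominator is small and the steps stay relatively large, so overall progress is not defeated.

        import numpy as np

        def rmsprop_step(param, grad, sq_avg, lr=0.001, beta=0.9, eps=1e-8):
            # sq_avg is an EWMA of squared gradients: large for the
            # oscillating direction (b), small for the consistent,
            # small-gradient direction (W).
            sq_avg = beta * sq_avg + (1 - beta) * grad**2
            param = param - lr * grad / (np.sqrt(sq_avg) + eps)
            return param, sq_avg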

  • @antonykahuro8349
    @antonykahuro8349 2 years ago +1

    Very nice explanation. Keep up the good work!

    • @CodingLane
      @CodingLane  2 years ago

      Thank you so much. I appreciate it!

  • @nileshsahu6786
    @nileshsahu6786 2 years ago +1

    Nice explanation, keep it up.

  • @mugomuiruri2313
    @mugomuiruri2313 7 months ago

    good

  • @rohithdasari4888
    @rohithdasari4888 2 months ago

    I still don't understand one thing: what does B mean here? Is it the direction or the bias?