Mini Batch Gradient Descent | Deep Learning | with Stochastic Gradient Descent

  • Published 27. 07. 2024
  • Mini Batch Gradient Descent is an algorithm that helps speed up learning when dealing with a large dataset.
    Instead of updating the weight parameters after assessing the entire dataset, Mini Batch Gradient Descent updates the weight parameters after assessing a small batch of the dataset. Thus we can make a lot of progress before our model has seen the entire dataset, and learning can be very fast.
    In the video, we also saw Stochastic Gradient Descent, which updates the weight parameters after evaluating every single data point or data record. Stochastic Gradient Descent has its own disadvantages as well, which are overcome by Mini Batch Gradient Descent.
    So, in practice, we don't use Stochastic Gradient Descent; we use Mini Batch Gradient Descent when dealing with large datasets, and simple Batch Gradient Descent when dealing with small datasets.
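    To make the difference concrete, here is a minimal editorial sketch (not from the video) in NumPy: linear regression trained with mini-batch updates. Setting batch_size=1 turns it into Stochastic Gradient Descent, and setting batch_size to the full dataset size turns it into plain Batch Gradient Descent; all names and hyperparameters below are illustrative.

    import numpy as np

    def mini_batch_gradient_descent(X, y, batch_size=32, learning_rate=0.01, epochs=10):
        n_samples, n_features = X.shape
        w = np.zeros(n_features)   # weight parameters
        b = 0.0                    # bias
        for epoch in range(epochs):
            # Shuffle once per epoch so each mini-batch is a random slice of the data.
            perm = np.random.permutation(n_samples)
            for start in range(0, n_samples, batch_size):
                idx = perm[start:start + batch_size]
                X_batch, y_batch = X[idx], y[idx]
                # Gradients of the mean-squared-error loss on this mini-batch only.
                error = X_batch @ w + b - y_batch
                grad_w = X_batch.T @ error / len(idx)
                grad_b = error.mean()
                # Update the weights before the model has seen the entire dataset.
                w -= learning_rate * grad_w
                b -= learning_rate * grad_b
        return w, b

    # Toy usage: recover known weights [2, -1, 0.5] and bias 3.
    X = np.random.randn(1000, 3)
    y = X @ np.array([2.0, -1.0, 0.5]) + 3.0
    w, b = mini_batch_gradient_descent(X, y, batch_size=64, learning_rate=0.1, epochs=50)
    print(w, b)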
    ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
    ✔ Improving Neural Network Playlist: • Overfitting and Underf...
    ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
    ✔ Complete Neural Network Playlist: • Neural Network python ...
    ✔ Complete Logistic Regression Playlist: • Logistic Regression Ma...
    ✔ Complete Linear Regression Playlist: • How Neural Networks wo...
    ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
    Timestamp:
    0:00 Agenda
    0:38 When Gradient Descent will Fail
    2:24 Mini Batch Gradient Descent Deep Learning
    3:31 Drawback of Mini Batch Gradient Descent
    5:47 Stochastic Gradient Descent
    6:30 When to use Mini Batch Gradient Descent
    ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
    This is Your Lane to Machine Learning ⭐
    ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖
    Subscribe to my channel, because I upload a new Machine Learning video every week: / @codinglane

Comments • 41

  • @CodingLane
    @CodingLane  2 years ago

    If you found this video helpful, then hit the *_like_* button👍, and don't forget to *_subscribe_* ▶ to my channel as I upload a new Machine Learning Tutorial every week.

  • @chenmoasis
    @chenmoasis 1 year ago

    Amazingly helpful video!

  • @choice_of_royals5268
    @choice_of_royals5268 1 year ago

    Amazing, deeply explained. Thanks!

  • @sumitpawar000
    @sumitpawar000 2 years ago +2

    This is by far the best introductory video on optimizers.
    Very precise and articulate, and it clears all the doubts.
    Thanks a lot, brother!

  • @SaschaRobitzki
    @SaschaRobitzki 1 year ago

    Great job!

  • @afn8370
    @afn8370 2 months ago

    Very good explanation. You need more views.

  • @dr.ranjinips3103
    @dr.ranjinips3103 3 months ago

    Very nice explanation. Super!

  • @arvinflores5316
    @arvinflores5316 2 years ago +2

    Oh wow. When I started learning ML last year, your linear regression video was one of the first vids I watched. Now I'm a data scientist, and you're still uploading high-quality vids. Thank you! Hopefully we get to see LSTMs and Transformers in the future. :P Good day.

    • @CodingLane
      @CodingLane  2 years ago

      Wow... Really good to know this. Thank you for sharing your story! And yes, I will be uploading videos on LSTM.

    • @arvinflores5316
      @arvinflores5316 2 years ago +1

      @@CodingLane Wow, nice! Can I make a suggestion? Maybe in the future you could include weight initialization, like Xavier and He initialization. Those topics tend to be ignored because the framework basically covers them for us (I'm guilty of that :P) without us knowing the reason behind them, e.g. the disadvantages of initializing weights to 0.

    • @CodingLane
      @CodingLane  2 years ago

      @@arvinflores5316 Thank you for giving this suggestion. I will definitely consider making videos on these topics.

  • @GK-jw8bn
    @GK-jw8bn 2 years ago +1

    Very good explanation! Well done!

  • @hardikdas2378
    @hardikdas2378 2 years ago +1

    Very informative and precise.

  • @devanshsanghavi
    @devanshsanghavi 2 years ago +2

    Brilliant explanation... Keep it up!

  • @jordiwang
    @jordiwang 1 year ago

    You got a new sub, bro. Good video.

  • @hosseinkarimi3381
    @hosseinkarimi3381 1 year ago

    Thank you so much for your videos.

    • @CodingLane
      @CodingLane  1 year ago

      You're welcome. Glad you like them! 😇

  • @Jump_mind
    @Jump_mind 1 year ago

    thx

  • @malikhamza9286
    @malikhamza9286 3 years ago +1

    Amazing dude. Keep it up.

  • @azadehazhdari931
    @azadehazhdari931 10 months ago

    👌

  • @amirrezamousavi5139
    @amirrezamousavi5139 2 years ago +1

    Great job

  • @sharangkulkarni1759
    @sharangkulkarni1759 2 years ago +1

    ty

  • @alexkonopatski429
    @alexkonopatski429 1 year ago +1

    Good video! But I have one question: where does the noise that you mentioned at 5:11 come from?

    • @CodingLane
      @CodingLane  1 year ago

      That's how the loss changes when we have a larger number of features.

  • @mugomuiruri2313
    @mugomuiruri2313 7 months ago

    good boy

  • @AbhinavSingh-oq7dk
    @AbhinavSingh-oq7dk 2 years ago +1

    So from what I understand,
    in Mini Batch Gradient Descent, the model will train on the 1st mini batch and update the weights, then those updated weights will be used to train on the 2nd mini batch, update again, then the 3rd mini batch, ..., till the last mini batch (1 epoch); then the weights updated on the last mini batch will again be used on the 1st mini batch during the 2nd epoch, and so on?
    Do correct me if wrong.

    • @CodingLane
      @CodingLane  2 years ago +1

      You are correct in your understanding, Abhinav. I would just like to correct the words. You can say that the updated weights after training on any mini-batch are used to propagate forward, and then they are updated again in backward propagation. E.g., randomly initialise weights at the beginning. Propagate forward (perform forward propagation) using the 1st mini batch, then perform backward propagation, then update the weights. Use those updated weights to perform forward propagation using the 2nd mini batch, then backward propagation, update the weights again, and so on.
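      An editorial, runnable sketch (not part of this thread) of exactly that sequence, using a toy one-hidden-layer network in NumPy; every size, name, and hyperparameter here is illustrative:

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(256, 4))                          # toy inputs
      y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy binary targets

      # Randomly initialise the weights once, at the beginning.
      W1, b1 = rng.normal(scale=0.1, size=(4, 8)), np.zeros((1, 8))
      W2, b2 = rng.normal(scale=0.1, size=(8, 1)), np.zeros((1, 1))
      lr, batch_size = 0.1, 32

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      for epoch in range(20):
          for start in range(0, len(X), batch_size):
              Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
              # Forward propagation with the most recently updated weights.
              h = np.tanh(Xb @ W1 + b1)
              p = sigmoid(h @ W2 + b2)
              # Backward propagation: gradients of the binary cross-entropy loss.
              dz2 = (p - yb) / len(Xb)
              dW2, db2 = h.T @ dz2, dz2.sum(axis=0, keepdims=True)
              dh = dz2 @ W2.T * (1 - h ** 2)
              dW1, db1 = Xb.T @ dh, dh.sum(axis=0, keepdims=True)
              # Update the weights; the next mini-batch reuses these updated values.
              W1 -= lr * dW1; b1 -= lr * db1
              W2 -= lr * dW2; b2 -= lr * db2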

    • @AbhinavSingh-oq7dk
      @AbhinavSingh-oq7dk 2 years ago +1

      @@CodingLane Thanks :)

  • @EEDNAGELIVINAYAKSHRINIWAS

    Your videos are helpful. Can you suggest a good book on the same?...

    • @CodingLane
      @CodingLane  2 years ago

      Hi... I don't refer to any book, so I can't suggest one. Although you can search for good books on ML online. I once found an article which showed the top 10 books for learning ML.

    • @EEDNAGELIVINAYAKSHRINIWAS
      @EEDNAGELIVINAYAKSHRINIWAS 2 years ago

      @@CodingLane Ok! Thanks!... But please cover topics like batch normalisation and standard networks like LeNet, AlexNet, VGG, GoogLeNet in detail.

    • @CodingLane
      @CodingLane  2 years ago

      @@EEDNAGELIVINAYAKSHRINIWAS Have you read the original research papers for these? I think you can learn about these in their research papers only.

    • @EEDNAGELIVINAYAKSHRINIWAS
      @EEDNAGELIVINAYAKSHRINIWAS 2 years ago

      @@CodingLane Ok.. Can you mail me at my ID? Let's discuss some projects separately.

    • @CodingLane
      @CodingLane  2 years ago

      Hi Vinayak, you can mail me at codeboosterjp@gmail.com with your query. I will see what I can do to help.