Stochastic gradient descent explained | Stochastic gradient descent vs Gradient descent | Mini batch

  • added 4 Jun 2020
  • #StochasticgradientDescent #SGD
    Hello All,
    I am Aman, a data scientist by profession.
    About this video:
    In this video, I explain stochastic gradient descent in detail, show how it differs from gradient descent with an example, and also cover mini-batch gradient descent.
    The following questions are answered in this video:
    1. What is stochastic gradient descent in a neural network?
    2. SGD in machine learning.
    3. Gradient descent vs. stochastic gradient descent (SGD).
    4. What is mini-batch gradient descent?
    5. How does stochastic gradient descent (SGD) work in machine learning?
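The distinction between the three variants above can be sketched in a few lines of NumPy. This is a minimal illustration (the toy data, learning rate, and epoch count are made up, not taken from the video): the only thing that changes between gradient descent, SGD, and mini-batch gradient descent is how many rows are used per weight update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x + 2 plus a little noise (illustrative values only).
X = rng.uniform(-1, 1, size=200)
y = 3.0 * X + 2.0 + rng.normal(scale=0.1, size=200)

def gradient(m, c, xb, yb):
    """Gradient of the MSE loss mean((y - (m*x + c))**2) w.r.t. m and c."""
    err = (m * xb + c) - yb
    return 2 * np.mean(err * xb), 2 * np.mean(err)

def train(batch_size, lr=0.1, epochs=100):
    """Gradient descent where batch_size controls the flavor of the algorithm."""
    m, c = 0.0, 0.0
    n = len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)          # shuffle rows each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            gm, gc = gradient(m, c, X[b], y[b])
            m -= lr * gm
            c -= lr * gc
    return m, c

print("batch GD:  ", train(batch_size=200))  # whole dataset per update
print("SGD:       ", train(batch_size=1))    # one random point per update
print("mini-batch:", train(batch_size=32))   # compromise between the two
```

All three calls recover a slope near 3 and an intercept near 2; SGD's estimates are noisier per step, while full-batch GD does far fewer (but more expensive) updates per epoch.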
    About Unfold Data Science: This channel helps people understand the basics of data science through simple examples, in an easy way. Anybody without prior knowledge of computer programming, statistics, or machine learning and artificial intelligence can get a high-level understanding of data science through this channel. The videos uploaded are not very technical in nature, so they can be easily grasped by viewers from different backgrounds as well.
    Join Facebook group :
    groups/41022...
    Follow on medium : / amanrai77
    Follow on quora: www.quora.com/profile/Aman-Ku...
    Follow on twitter : @unfoldds
    Get connected on LinkedIn : / aman-kumar-b4881440
    Follow on Instagram : unfolddatascience
    Watch Introduction to Data Science full playlist here : • Data Science In 15 Min...
    Watch python for data science playlist here:
    • Python Basics For Data...
    Watch statistics and mathematics playlist here :
    • Measures of Central Te...
    Watch End to End Implementation of a simple machine learning model in Python here:
    • How Does Machine Learn...
    Learn Ensemble Model, Bagging and Boosting here:
    • Introduction to Ensemb...
    Access all my codes here:
    drive.google.com/drive/folder...
    Have question for me? Ask me here : docs.google.com/forms/d/1ccgl...
    My Music: www.bensound.com/royalty-free...

Comments • 49

  • @saicharan4016
    @saicharan4016 2 years ago

    Thanks a lot for such info, #Unfold Data Science, Aman

  • @kunnu223
    @kunnu223 a year ago

    What an explanation 👏👏

  • @RAJIBLOCHANDAS
    @RAJIBLOCHANDAS 2 years ago

    Nice explanation.

  • @yosefelias3619
    @yosefelias3619 2 years ago +1

    Thanks for your deep explanation. If you add code with a dataset, it will be amazing.

  • @Agrima_Art_World
    @Agrima_Art_World 4 years ago +1

    Good article on SGD. Please make a video on SGD with momentum.

  • @praveenkumarmaduri4126

    You're the best... very good explanation

  • @sandipansarkar9211
    @sandipansarkar9211 3 years ago +1

    great explanation

    • @UnfoldDataScience
      @UnfoldDataScience 3 years ago

      Thanks a lot Sandipan for motivating me through your comment.

  • @sadhnarai8757
    @sadhnarai8757 4 years ago

    Very good Aman

  • @phanikumar3136
    @phanikumar3136 4 years ago +1

    Please do it sir... Your explanation is good.

  • @mahisyena2805
    @mahisyena2805 4 years ago

    veryyy well explained

  • @vaishalideshwal2534
    @vaishalideshwal2534 4 years ago +2

    You explained every topic very well. Please make a video on how to predict COVID-19 with the help of GradientBoostingRegressor in Python, and on the ARIMA model.

    • @UnfoldDataScience
      @UnfoldDataScience 4 years ago +2

      Hi Vaishali, I have created a video on this topic; you can check it in my videos. My predictions were also right as per the current scenario. I used a time series model there. Thanks. Happy learning!

    • @vaishalideshwal2534
      @vaishalideshwal2534 4 years ago +2

      @@UnfoldDataScience I have seen the video, really good knowledge.

  • @namanverma2450
    @namanverma2450 2 years ago +1

    nice video !!

  • @santhoshgattoji7517
    @santhoshgattoji7517 4 years ago

    Understood SGD, Aman, thanks. Please make a video on momentum.

  • @laiweihandaryl8709
    @laiweihandaryl8709 3 years ago +1

    Hi Sir, thank you so much for your video. I'd like to ask, if I'm already doing Lasso Regression, should I still do gradient descent after that?

  • @mouleshm210
    @mouleshm210 3 years ago +1

    Sir, good evening. I have a small clarification (regarding SGD).
    I have a dataset of 50k rows and 15 features, which I used to perform logistic regression using SGD (with L2 regularization) from scratch in Python. I initially divided my dataset into train/test with a split of 25 percent and trained my classifier with 37,500 data points.
    When I executed the code with 500 epochs (where each epoch uses one random data point to update the weights and intercept), it took almost 20 minutes to execute. I actually wanted to reach convergence with a tolerance of 10^-3, but with 500 iterations I wasn't able to. I then increased the number of iterations to 1,500, where it takes almost 1 hour to display the output.
    Could you please tell me whether I'm going in the right direction to reach the optimal weights?
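The runtime described in this question typically comes from doing one Python-level update per data point; batching the updates with NumPy is one common fix. Below is a minimal vectorized mini-batch sketch on synthetic stand-in data (the data, and every hyperparameter such as `lr`, `batch`, `lam`, and `epochs`, are illustrative assumptions, not the commenter's actual code or settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the dataset described above: 37,500 rows, 15 features.
n_rows, n_feat = 37_500, 15
X = rng.normal(size=(n_rows, n_feat))
true_w = rng.normal(size=n_feat)
y = (X @ true_w > 0).astype(float)  # labels from a hypothetical linear rule

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_logistic(X, y, lam=1e-4, lr=0.1, batch=256, epochs=20, tol=1e-3):
    """Mini-batch SGD for L2-regularized logistic regression.

    Vectorizing over a batch replaces thousands of Python-level
    single-point updates with a handful of NumPy matrix operations.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        w_old = w.copy()
        for s in range(0, len(X), batch):
            rows = idx[s:s + batch]
            err = sigmoid(X[rows] @ w + b) - y[rows]
            w -= lr * (X[rows].T @ err / len(rows) + lam * w)
            b -= lr * err.mean()
        if np.linalg.norm(w - w_old) < tol:  # epoch-level convergence check
            return w, b
    return w, b

w, b = sgd_logistic(X, y)
```

At this scale the whole run finishes in seconds rather than minutes, and checking convergence once per epoch (rather than per point) keeps the tolerance test cheap.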

  • @kethavathlalbahadur4292
    @kethavathlalbahadur4292 2 months ago

    The explanation was good sir, but we need your notes for the following videos.

    • @kethavathlalbahadur4292
      @kethavathlalbahadur4292 2 months ago

      We need your notes for the following videos sir... could you please prepare notes, sir?

  • @sudheerrao9820
    @sudheerrao9820 4 years ago

    Please do a video on momentum also, if possible.

  • @ajiteshthawait4771
    @ajiteshthawait4771 a year ago +2

    Weights are calculated using the loss function; we optimize the loss function.

    • @ajiteshthawait4771
      @ajiteshthawait4771 a year ago

      Gradient descent takes the whole dataset, whereas stochastic gradient descent takes one record at random, but the problem is that the calculated weights might be biased. So we use mini-batch gradient descent. The goal is finding the best slope (m) and intercept (c).

    • @ajiteshthawait4771
      @ajiteshthawait4771 a year ago

      Computation will be expensive.

    • @ajiteshthawait4771
      @ajiteshthawait4771 a year ago

      The loss function is nothing but MSE; we have to optimize it by differentiating it with respect to the slope and intercept.
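The differentiation described in this comment can be checked numerically. A minimal sketch (the data points and the evaluation point (m, c) are made up for illustration): for a fitted line y_hat = m*x + c, the MSE loss L = (1/n) * sum((y - y_hat)^2) differentiates to dL/dm = -(2/n) * sum(x * (y - y_hat)) and dL/dc = -(2/n) * sum(y - y_hat).

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])  # lies exactly on y = 2x + 1

def loss(m, c):
    """MSE of the line m*x + c on the toy data."""
    return np.mean((y - (m * x + c)) ** 2)

def grad(m, c):
    """Analytic partial derivatives of the MSE w.r.t. slope m and intercept c."""
    err = y - (m * x + c)
    return -2 * np.mean(err * x), -2 * np.mean(err)

# Sanity-check the analytic gradient against central finite differences.
m, c, h = 1.5, 0.5, 1e-6
gm, gc = grad(m, c)
num_gm = (loss(m + h, c) - loss(m - h, c)) / (2 * h)
num_gc = (loss(m, c + h) - loss(m, c - h)) / (2 * h)
print(gm, num_gm)  # the two columns should agree closely
print(gc, num_gc)
```

At the true parameters (m, c) = (2, 1) both partial derivatives vanish, which is exactly the optimality condition the comment refers to.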

  • @gokulpisharody3155
    @gokulpisharody3155 3 years ago

    Sir, make a video on Adam also...

  • @coxixx
    @coxixx 3 years ago

    Please explain what bias and momentum are.

    • @UnfoldDataScience
      @UnfoldDataScience 3 years ago +1

      These things are explained in a separate video. Please watch it; it's in the same playlist.

  • @AlgoTribes
    @AlgoTribes 3 years ago

    Sir, please explain momentum as well... I've lost confidence in myself. 😁😁

  • @deeptiverma8922
    @deeptiverma8922 a year ago

    SGD with momentum

  • @neuralverseinc4341
    @neuralverseinc4341 3 years ago +1

    The notation is incorrect: loss = sum((actual - predicted)^2)

  • @d-pain4844
    @d-pain4844 2 years ago +1

    Delhi University, anyone?? Anyone who came back from the exam today and understood nothing after seeing the paper??