Tutorial 13- Global Minima and Local Minima in Depth Understanding

  • Uploaded 28. 07. 2019
  • In mathematical analysis, the maxima and minima (the respective plurals of maximum and minimum) of a function, known collectively as extrema (the plural of extremum), are the largest and smallest values of the function, either within a given range (the local or relative extrema) or on the entire domain of the function (the global or absolute extrema). Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions. A small gradient-descent sketch illustrating this follows at the end of this description.
    Below are the various playlists created on ML, Data Science and Deep Learning. Please subscribe and support the channel. Happy Learning!
    Deep Learning Playlist: • Tutorial 1- Introducti...
    Data Science Projects playlist: • Generative Adversarial...
    NLP playlist: • Natural Language Proce...
    Statistics Playlist: • Population vs Sample i...
    Feature Engineering playlist: • Feature Engineering in...
    Computer Vision playlist: • OpenCV Installation | ...
    Data Science Interview Question playlist: • Complete Life Cycle of...
    You can buy my book on Finance with Machine Learning and Deep Learning from the URL below
    amazon url: www.amazon.in/Hands-Python-Fi...
    🙏🙏🙏🙏🙏🙏🙏🙏
    YOU JUST NEED TO DO
    3 THINGS to support my channel
    LIKE
    SHARE
    &
    SUBSCRIBE
    TO MY YOUTUBE CHANNEL
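
    A minimal gradient-descent sketch (my own illustration, not code from the video), assuming a simple one-dimensional loss f(x) = x^4 - 3x^2 + x with one local and one global minimum. The update rule is the usual w_new = w_old - lr * dL/dw, and the starting point decides which minimum you land in:

    def f(x):
        return x**4 - 3*x**2 + x

    def df(x):
        # derivative: f'(x) = 4x^3 - 6x + 1
        return 4*x**3 - 6*x + 1

    def gradient_descent(x, lr=0.01, steps=2000):
        for _ in range(steps):
            x = x - lr * df(x)         # w_new = w_old - lr * dL/dw
        return x

    print(gradient_descent(-2.0))      # ~ -1.30: the global minimum (f ~ -3.51)
    print(gradient_descent(+2.0))      # ~ +1.13: only a local minimum (f ~ -1.07)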

Comments • 50

  • @saravanakumarm5647
    @saravanakumarm5647 3 years ago +8

    I am self-studying machine learning. Your videos are really amazing for getting the full overview quickly, and even a layman can understand them.

  • @sairaj6875
    @sairaj6875 10 months ago

    Stopped this video halfway through to say thank you! Your grasp on the topic is outstanding and your way of demonstration is impeccable. Now resuming the video!

  • @nithinmamidala
    @nithinmamidala 4 years ago +11

    Your videos are like a suspense movie: you need to watch another, and see it through to the end of the playlist. So much time to spend to know the final result.

  • @shalinianunay2713
    @shalinianunay2713 4 years ago +2

    You make people fall in love with deep learning.

  • @harshstrum
    @harshstrum 4 years ago +2

    Krish bhaiya, you are just awesome. Thanks for all that you are doing for us.

  • @poojarai7336
    @poojarai7336 5 days ago

    You are a blessing for new students, sir. God's gift to us students.

  • @abhishek247ai6
    @abhishek247ai6 2 years ago +1

    You are awesome... one of the gems in this field, making others' lives simpler.

  • @sahilmahajan421
    @sahilmahajan421 a year ago

    Amazing. Simple, short & crisp.

  • @vgaurav3011
    @vgaurav3011 4 years ago +1

    Very, very amazing explanation, thanks a lot!!!

  • @hiteshyerekar9810
    @hiteshyerekar9810 5 years ago +27

    Hi Krish, all your videos are really good. But please do some practical examples in those videos so we can understand how to implement things practically.

    • @SundasLatif
      @SundasLatif 4 years ago +1

      Yes, adding how to implement will make this series more helpful.

    • @aujasvimoudgil2738
      @aujasvimoudgil2738 4 years ago

      Hi Krish, please make a playlist of practical implementations of these theoretical concepts.

  • @muhammadshifa4886
    @muhammadshifa4886 a year ago

    You are always awesome! Thanks Krish Naik

  • @CoolSwag351
    @CoolSwag351 3 years ago +8

    Hi Krish. Thanks a lot for your videos. You made me fall in love with DL❤️ I took many introductory courses on Coursera and Udemy, from which I couldn't understand all the concepts. Your videos are just amazing. One request: could you please make some practical implementations of the concepts, so that it would be easy for us to understand them in practical problems?

  • @mohdazam1404
    @mohdazam1404 4 years ago +2

    Ultimate explanation, thanks Krish

  • @touseefahmad4892
    @touseefahmad4892 5 years ago +1

    Nice Explanation Krish Sir ...

  • @vishaljhaveri7565
    @vishaljhaveri7565 2 years ago

    Thank you, Krish sir. Good explanation.

  • @liudreamer8403
    @liudreamer8403 2 years ago

    Very impressive explanation. Now I have totally adapted to Indian English. So wonderful.

  • @sarahashmori8999
    @sarahashmori8999 a year ago

    I like this video; you explained this very well! Thank you!

  • @sudhasagar292
    @sudhasagar292 3 years ago +4

    This is so easily understandable, sir. I'm so lucky to have found you here. Thanks a ton for these valuable lessons, sir. Keep shining.

  • @thealgorithm7633
    @thealgorithm7633 5 years ago +1

    Very nice explanation

  • @mscsakib6203
    @mscsakib6203 4 years ago

    Awesome...

  • @baaz5642
    @baaz5642 2 years ago

    Awesome!

  • @enoshsubba5875
    @enoshsubba5875 4 years ago +9

    Never Skip Calculus Class.

  • @vikashverma7893
    @vikashverma7893 4 years ago

    Nice explanation, Krish sir ..........

  • @vishaldas6346
    @vishaldas6346 3 years ago

    I don't think the derivative of the loss function should be used for calculating new weights, because when it equals zero it makes the network's weights W(new) = W(old). Wouldn't that be related to the vanishing gradient problem? Isn't it rather that the derivative of the loss function on the network's output is used, where y actual and y hat become approximately equal and the weights are optimised iteratively? Please correct me if I'm wrong.

  • @knowledgehacker6023
    @knowledgehacker6023 5 years ago +1

    very nice

  • @xiyaul
    @xiyaul 4 years ago

    You mentioned in the previous video that you would talk about momentum in this video, but I am yet to hear it....

  • @zzzmd11
    @zzzmd11 3 years ago +2

    Hi Krish, very informative as always. Thank you so much. Can you please also do a tutorial on the Fokker-Planck equation? Thanks a lot in advance...

  • @sandipansarkar9211
    @sandipansarkar9211 4 years ago +7

    Hi Krish, that was also a great video in terms of understanding. Please make a playlist of practical implementations of these theoretical concepts, and please attach the ipynb notebook just below so that we can practice it in a Jupyter notebook.

  • @shefaligoyal3907
    @shefaligoyal3907 a year ago

    At the global minimum, if the derivative of the loss function w.r.t. w becomes 0, then w_old = w_new, which leads to no change in value. So how can the loss function value be reduced any further?

  • @ahmedpashahayathnagar5022

    Nice explanation, sir.

  • @louerleseigneur4532
    @louerleseigneur4532 3 years ago

    Thanks Krish

  • @ohn0oo
    @ohn0oo a year ago

    What if I have a decrease from 8 to infinity? Would the lowest visible point still be my global minimum?

  • @ibrahimShehzadGul
    @ibrahimShehzadGul 4 years ago

    I think at a local minimum the ∂L/∂w is not = 0, because the ANN output is not equal to the required output. If I am wrong, please correct me.

  • @munjirunjuguna5701
    @munjirunjuguna5701 2 years ago +2

    Hello Krish,
    Thanks for the amazing work you are doing.
    Quick one: you have talked about the derivative being zero when updating the weights... so how do you tell it's the global minimum and not the vanishing gradient problem?

    • @sportsoctane
      @sportsoctane a year ago

      You check the slope. Say you start from a negative slope; that means the weights are getting decreased. After reaching zero, if the slope changes to positive, that means you got your minimum. With a vanishing gradient, it just keeps shrinking. Correct me, @anyone, if I'm wrong.
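
      A quick numeric check of that sign-change idea (my own illustration), using the same f(x) = x^4 - 3x^2 + x as the sketch under the video description: around a minimum the derivative flips from negative to positive, whereas a vanishing gradient just stays near zero.

      def df(x):
          return 4*x**3 - 6*x + 1   # derivative of f(x) = x**4 - 3*x**2 + x

      x_min = -1.30                 # approximate global minimum of f
      print(df(x_min - 0.1))        # ~ -1.58: negative slope, still descending
      print(df(x_min + 0.1))        # ~ +1.29: positive slope, past the minimum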

  • @quranicscience9631
    @quranicscience9631 4 years ago

    nice

  • @jaggu6409
    @jaggu6409 3 years ago

    Krish bro, when w_new and w_old are equal, that will be the vanishing gradient problem, right??

    • @alinawaz8147
      @alinawaz8147 2 years ago

      No bro, the vanishing gradient is a problem that occurs through the chain rule when we use sigmoid or tanh; to overcome that problem we use the ReLU activation function.
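
      A small sketch of that shrinkage (my own illustration, with an arbitrary depth of 20 layers): the sigmoid derivative is at most 0.25, so the chain rule multiplies together many factors of at most 0.25 and the gradient collapses toward zero, while ReLU's derivative is 1 for positive inputs.

      import math

      def sigmoid_grad(x):
          s = 1.0 / (1.0 + math.exp(-x))
          return s * (1.0 - s)           # at most 0.25, reached at x = 0

      grad = 1.0
      for _ in range(20):                # 20 stacked sigmoid layers
          grad *= sigmoid_grad(0.0)      # best case: 0.25 per layer
      print(grad)                        # ~9.1e-13: the gradient has vanished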

  • @rafibasha1840
    @rafibasha1840 2 years ago

    Hi Krish, the slope is also zero at a local maximum, so why don't we consider the local/global maxima instead of the minima?

  • @mizgaanmasani8456
    @mizgaanmasani8456 4 years ago +1

    Why do neurons need to converge at the global minimum?

    • @ish694
      @ish694 4 years ago +5

      Neurons don't. Weights converge to some values, and those values represent the point at which the loss function is at its minimum. Our goal here is to formulate some loss function and to find the weights or parameters that optimize, i.e. minimize, that loss function. Because if we don't optimize it, then our model won't learn any input-output relationship. It won't know what to predict when given a set of inputs.
      Also, I think when he said neurons converge at the end, he meant the parameters of a neuron, not the value of the neuron itself.

  • @anindyabanerjee743
    @anindyabanerjee743 3 years ago +2

    If at the global minimum w_new is equal to w_old, what is the point of reaching there?? Am I missing something?? @krish naik

    • @bhagyashrighuge4170
      @bhagyashrighuge4170 3 years ago

      After that point the slope increases or decreases.

    • @KrishnaMishra-fl6pu
      @KrishnaMishra-fl6pu 2 years ago

      The whole point is to reach the global minimum... because at the global minimum you get the W at which you'll get the minimum loss.

  • @virkutisss3563
    @virkutisss3563 2 years ago

    Why do we need to minimize the cost function in machine learning? What's the purpose of this? Yeah, I understand that there will be fewer errors etc., but I need to understand it from a fundamental perspective. Why don't we use the global maximum, for example?

    • @aritratalapatra8452
      @aritratalapatra8452 a year ago

      You minimise the error of your prediction; the maximum is the point where the error function is highest.

  • @prerakchoksi2379
    @prerakchoksi2379 4 years ago

    How do we deal with local maxima? I am still not clear.

    • @adityaanand3065
      @adityaanand3065 3 years ago

      Look up simulated annealing... you will get your answer. There are definitely many other methods, but this is the one I know.
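
      A minimal simulated-annealing sketch (my own illustration, with made-up settings, not anything from the video), reusing f(x) = x^4 - 3x^2 + x from the description sketch: uphill moves are accepted with probability exp(-delta/temp), which lets the search climb out of the local minimum near +1.13 and usually settle near the global one at -1.30.

      import math
      import random

      def f(x):
          return x**4 - 3*x**2 + x       # local min near +1.13, global near -1.30

      def simulated_annealing(x, temp=2.0, cooling=0.995, steps=2000):
          for _ in range(steps):
              candidate = x + random.uniform(-0.5, 0.5)
              delta = f(candidate) - f(x)
              # Always accept downhill moves; accept uphill ones with
              # probability exp(-delta/temp), which shrinks as temp cools.
              if delta < 0 or random.random() < math.exp(-delta / temp):
                  x = candidate
              temp *= cooling
          return x

      random.seed(0)
      print(simulated_annealing(1.13))   # usually lands near the global minimum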