Neural Networks Part 5: ArgMax and SoftMax

  • Published 16 June 2024
  • When your Neural Network has more than one output, it is very common to train with SoftMax and, once trained, swap SoftMax out for ArgMax. This video gives you all the details on these two methods so that you'll know when and why to use ArgMax or SoftMax. (A minimal NumPy sketch of this train-with-SoftMax, classify-with-ArgMax pattern follows the chapter list below.)
    NOTE: This StatQuest assumes that you already understand:
    The main ideas behind Neural Networks: • The Essential Main Ide...
    How Neural Networks work with multiple inputs and outputs: • Neural Networks Pt. 4:...
    For a complete index of all the StatQuest videos, check out:
    statquest.org/video-index/
    If you'd like to support StatQuest, please consider...
    Buying my book, The StatQuest Illustrated Guide to Machine Learning:
    PDF - statquest.gumroad.com/l/wvtmc
    Paperback - www.amazon.com/dp/B09ZCKR4H6
    Kindle eBook - www.amazon.com/dp/B09ZG79HXC
    Patreon: / statquest
    ...or...
    CZcams Membership: / @statquest
    ...a cool StatQuest t-shirt or sweatshirt:
    shop.spreadshirt.com/statques...
    ...buying one or two of my songs (or go large and get a whole album!)
    joshuastarmer.bandcamp.com/
    ...or just donating to StatQuest!
    www.paypal.me/statquest
    Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
    / joshuastarmer
    0:00 Awesome song and introduction
    2:02 ArgMax
    4:21 SoftMax
    6:36 SoftMax properties
    9:31 SoftMax general equation
    10:20 SoftMax derivatives
    #StatQuest #NeuralNetworks #ArgMax #SoftMax
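
    As mentioned above, here is a minimal sketch of the train-with-SoftMax, classify-with-ArgMax pattern (assuming NumPy; the raw output values 1.43, -0.40 and 0.23 are the Setosa/Versicolor/Virginica example used in the video, and the video shows 0.69, 0.10 and 0.21, presumably from unrounded raw outputs):

    import numpy as np

    def softmax(raw):
        # Subtract the max before exponentiating for numerical stability;
        # this does not change the resulting "probabilities".
        e = np.exp(raw - np.max(raw))
        return e / e.sum()

    # Raw output values for Setosa, Versicolor and Virginica.
    raw = np.array([1.43, -0.40, 0.23])

    probs = softmax(raw)        # used during training, where they feed into Cross Entropy
    predicted = np.argmax(raw)  # used after training to classify a new observation

    print(probs)      # approximately [0.68, 0.11, 0.21]
    print(predicted)  # 0 -> Setosa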

Comments • 227

  • @statquest
    @statquest  Před 2 lety +9

    The full Neural Networks playlist, from the basics to deep learning, is here: czcams.com/video/CqOfi41LfDw/video.html
    Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

  • @mrglootie101
    @mrglootie101 Před 3 lety +28

    Can't wait for "cross entropy clearly explained!" BAM!

  • @AlbertHerrandoMoraira
    @AlbertHerrandoMoraira Před 3 lety +33

    Your videos are awesome! Thank you for making them, and keep up the great work! 👍

  • @bryan6aero
    @bryan6aero Před 2 lety +2

    Thank you! This is by far the clearest explanation of SoftMax I've found. I finally get it!

  • @cara1362
    @cara1362 Před 3 lety +15

    The video is so impressive, especially when you explain why we can't treat the output of softmax as a simple probability. Best tutorial ever for ML explanations!!!

  • @201pulse
    @201pulse Před 3 lety +2

    I just want to say that YOU are awesome. Best educational content on the web hands down.

  • @Aman-uk6fw
    @Aman-uk6fw Před 3 lety +8

    No words for you, man, you are doing a great job, and I totally fell in love with your music and the way you teach. Love from India ❤️

  • @karansaxena96
    @karansaxena96 Před 2 lety +3

    Your way of explaining things made me subscribe. I love to see topics explained in a simple yet funny way. Keep up the great work. And also.... *BAM*

    • @statquest
      @statquest  Před 2 lety +1

      Thank you very much! BAM! :)

  • @factsfigures2740
    @factsfigures2740 Před 3 lety +5

    Sir, the way you teach is exceptionally creative.
    Thanks to you, my deep learning exam went well.

    • @statquest
      @statquest  Před 3 lety +3

      TRIPLE BAM!!! Congratulations!!

  • @ishanbuddhika4317
    @ishanbuddhika4317 Před 2 lety +3

    Hi Josh,
    Your explanations are super awesome!!! You break down the barriers to statistics!!! They are also super creative :). Many thanks! Please keep it up. Thanks again. BAM!!!

  • @iReaperYo
    @iReaperYo Před měsícem +1

    Nice touch at the end. I didn't realise the use for ArgMax until you said it's nice for classifying new observations.

  • @aswink112
    @aswink112 Před 3 lety +1

    Thanks Josh for the crystal clear explanation.

  • @NicholasHeeralal
    @NicholasHeeralal Před 2 lety +2

    Your videos have been extremely helpful, thank you so much!!

  • @lucarauchenberger628
    @lucarauchenberger628 Před 2 lety +2

    this is all so well explained! just wow!

  • @menchenkenner
    @menchenkenner Před 3 lety +7

    Hey Josh, needless to say, your videos and tutorials are amazingly fun! Can you please create a video series on Shapley values? Those are widely used in practice.

    • @statquest
      @statquest  Před 3 lety +2

      Thanks for your support and I'll keep that topic in mind! :)

  • @AndruXa
    @AndruXa Před rokem +9

    universities offering AI/ML programs should just hire a program manager to sort and prioritize Josh Starmer's YT videos and organize exams

  • @srishylesh2935
    @srishylesh2935 Před rokem +1

    Josh. Hands-down genius. I'm crying.

  • @drccccccccc
    @drccccccccc Před 2 lety +1

    You deserve a professor title!!! Fantastic.

  • @haadialiaqat4590
    @haadialiaqat4590 Před 2 lety +1

    Excellent video. Thank you for explaining it so well.

  • @coralkuta7804
    @coralkuta7804 Před rokem +1

    Just bought your book! It's AMAZING!!! Your videos too :)

    • @statquest
      @statquest  Před rokem +1

      Thank you so much! :)

    • @coralkuta7804
      @coralkuta7804 Před rokem +1

      @@statquest I'm spreading your existence to all of my student friends ✌️

  • @palsshin
    @palsshin Před 2 lety +1

    amazing as always!!

  • @user-se8ld5nn7o
    @user-se8ld5nn7o Před 2 lety +1

    Hi! First of all, absolutely amazing video!

  • @ilkinhamid1072
    @ilkinhamid1072 Před 3 lety +1

    Thank you for the awesome explanation.

  • @faycalzaidi6459
    @faycalzaidi6459 Před 3 lety +2

    Hello Josh,
    thank you very much for this lovely explanation.

  • @jijie133
    @jijie133 Před 3 lety +1

    Predicted probabilities, probability calibration. Great video.

  • @patriciachang5079
    @patriciachang5079 Před 3 lety +5

    A thousand thanks for the explanation! Your explanation is much easier to understand compared to my lecturers'! Could you make some videos about cost functions? :)

  • @gurns681
    @gurns681 Před 2 lety +1

    Fantastic vid!

  • @amiryo8936
    @amiryo8936 Před rokem +1

    Lovely video 👌

  • @hangchen
    @hangchen Před rokem +1

    11:06 The best word of the century.

  • @naughtrussel5787
    @naughtrussel5787 Před 9 měsíci +1

    Cute bear next to formulae is the best way to explain math to me.

  • @qingfenglin
    @qingfenglin Před 6 měsíci +1

    Thanks!

    • @statquest
      @statquest  Před 6 měsíci

      Thank you so much for supporting StatQuest! TRIPLE BAM!!! :)

  • @shivamkumar-rn2ve
    @shivamkumar-rn2ve Před 2 lety +1

    BAM! You cleared all my doubts.

  • @BillHaug
    @BillHaug Před 11 měsíci +1

    I saw the thumbnail and the pirate flag and immediately knew where you were going haha.

  • @francismikaelmagueflor1749

    low key kinda proud that I did the derivative before you even asked where it came from xd

  • @AnujFalcon
    @AnujFalcon Před 2 lety +1

    Thanks.

  • @abhishekm4996
    @abhishekm4996 Před 3 lety +2

    Thanks..🥳

  • @junaidbutt3000
    @junaidbutt3000 Před 3 lety +1

    Great video as always Josh! Just to clarify something about the discussion around the 9:38 timestamp, you're taking i =1 (Setosa) as an example right? When updating all of the parameter values via backpropagation, we would need to compute the softmax derivatives for all i and with respect to all output values - is that correct? So we would also require the derivative for the softmax value Virginica with respect to raw values for setosa, versicolor and virginica and also the derivative for the softmax value Versicolor with respect to raw values for setosa, versicolor and virginica?

  • @jennycotan7080
    @jennycotan7080 Před 6 měsíci +1

    That pirate joke!
    Moving on in the fields of Maths...

  • @weisionglee360
    @weisionglee360 Před rokem +1

    First, thank you for your amazingly well-planned and prepared course videos! They are invaluable! A question about the SoftMax function: it seems to me that, for a single output, SoftMax() will always return the value "1", so it can't be used for backpropagation, no?

    • @statquest
      @statquest  Před rokem

      If you only have a single output from your NN, then you wouldn't use Softmax to begin with. However, when you have more than one output, then the derivative works out. For details, see czcams.com/video/M59JElEPgIg/video.html czcams.com/video/6ArSys5qHAU/video.html and czcams.com/video/xBEh66V9gZo/video.html
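
      (A quick check of the single-output case described in the question: SoftMax(z) = e^z / e^z = 1 for any z, so its derivative with respect to z is 0 and there is nothing for backpropagation to use, which is why SoftMax is only used when there is more than one output.)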

  • @hunterswartz6389
    @hunterswartz6389 Před 2 lety +1

    Nice

  • @fndpires
    @fndpires Před 2 lety +1

    Come on people, buy his songs, subscribe to the channel, thumbs UP, give him some money! Look what he's doing. HUGE DAMN!

    • @statquest
      @statquest  Před 2 lety +1

      Thanks for the support!!! :)

  • @breakingBro325
    @breakingBro325 Před 7 měsíci

    Hello Josh, really nice video. Could I ask what software you used to create it? I want to take notes using the same tool and learn some presentation skills from it.

    • @statquest
      @statquest  Před 7 měsíci

      I give away all of my secrets in this video: czcams.com/video/crLXJG-EAhk/video.html

  • @joaoperin8313
    @joaoperin8313 Před rokem +1

    We minimize the SSR for regression problems with a Neural Network -> when we have a quantitative response.
    We use SoftMax, ArgMax and Cross Entropy for classification problems with a Neural Network -> when we have a qualitative response. I think it's something along those lines...

    • @statquest
      @statquest  Před rokem +1

      Yep, that's pretty much the idea.

  • @martynasvenckus423
    @martynasvenckus423 Před 2 lety

    Hi Josh, thanks for a great video, as always. The only thing I wanted to ask about is the argmax function. The way you describe it implies that argmax returns a vector of 0s (with a 1 in the position of the maximum value) which is the same length as the input vector. However, the way argmax works in the numpy or pytorch libraries is by returning a scalar value indicating the position, instead of a vector. Given this difference, what is the true behaviour of argmax? Thanks

    • @statquest
      @statquest  Před 2 lety

      In both cases, argmax identifies the element with the largest value.
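
      (A minimal NumPy sketch of the two conventions discussed here; the raw values 1.43, -0.40 and 0.23 are the example from the video:)

      import numpy as np

      raw = np.array([1.43, -0.40, 0.23])

      idx = np.argmax(raw)         # numpy/pytorch style: the position of the largest value
      one_hot = np.zeros_like(raw)
      one_hot[idx] = 1.0           # video style: 1 for the largest value, 0 everywhere else

      print(idx)      # 0
      print(one_hot)  # [1. 0. 0.]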

  • @Kagmajn
    @Kagmajn Před 11 měsíci +1

    nice

  • @travel6142
    @travel6142 Před 2 lety +1

    Thank you for this video. I understood the logic behind softmax. While backpropagating from the loss to softmax and then from softmax to the raw input, for example for setosa we have 3 derivatives (as you mentioned in the video). After calculating them (the derivative of setosa with respect to the 3 classes), what do we do? Do we sum them up? Or multiply, or ... ?

    • @statquest
      @statquest  Před 2 lety +1

      See: czcams.com/video/xBEh66V9gZo/video.html

    • @travel6142
      @travel6142 Před 2 lety +1

      @@statquest I will check it, thank you!

  • @bingochipspass08
    @bingochipspass08 Před 2 lety +1

    Not all heroes wear capes!

  • @dianaayt
    @dianaayt Před 8 měsíci

    Hi! Does softmax have any limitations? It seems too good to be true, and when that happens it usually isn't good, haha. I've seen some mentioned, like being sensitive to outliers, but I don't quite understand why. Is it when the raw numbers contain an outlier?

    • @statquest
      @statquest  Před 8 měsíci

      What do you mean by "too good to be true"? What seems too good to be true about the softmax function?

  • @zhenhuahuang291
    @zhenhuahuang291 Před 3 lety

    Could you do some videos in R or SAS for Neural Networks using ReLU and Softmax activation functions?

    • @statquest
      @statquest  Před 3 lety

      I plan on doing one in R soon.

  • @pranjalpatil9659
    @pranjalpatil9659 Před 2 lety +1

    I wish Josh taught me all the maths I've ever learned

  • @elemenohpi8510
    @elemenohpi8510 Před 4 měsíci

    Thank you for the video. Quick question: as far as I understand, argmax and softmax are applied to the outputs of the last layer. Couldn't we use ArgMax, but train the network with backpropagation on the outputs before argmax is applied?

    • @statquest
      @statquest  Před 4 měsíci

      Yes, and that is often the case.

  • @anshulbisht4130
    @anshulbisht4130 Před rokem

    Hey Josh,
    Q1) If we are classifying N classes, does our NN give us N-1 decision surfaces?
    Q2) When we get our query point Xq, do we pass it through all the decision surfaces and take the value predicted by each surface?

    • @statquest
      @statquest  Před rokem

      A1) See: czcams.com/video/83LYR-1IcjA/video.html
      A2) See A1.

  • @janeli2487
    @janeli2487 Před rokem

    Hey @StatQuest,
    I am a bit confused about the ArgMax function and why its derivative is 0. The argmax function I used in Python returns the index of the max value, which I would assume is different from the ArgMax function you mention here. What is the explicit form of the ArgMax in your video?

    • @statquest
      @statquest  Před rokem

      Regardless of whether your function sets the largest output to 1 and everything else to 0, or just returns the index of the largest output and ignores everything else, the output is constant until the threshold is met, then switches at that point (it is discontinuous there) and is then constant again. Thus, either way, the derivative is 0.
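
      (A concrete example with the video's raw output values: with raw outputs (1.43, -0.40, 0.23), ArgMax gives (1, 0, 0). Nudging Setosa's raw value from 1.43 up to 1.44, or down to 0.24, leaves the output at (1, 0, 0), so the slope is 0; only when Setosa's value crosses 0.23, the largest of the other two values, does the output jump to (0, 0, 1), and at that single point the derivative is undefined. Everywhere else it is 0.)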

  • @beshosamir8978
    @beshosamir8978 Před rokem

    Hi Josh, I have some doubts here. Why did we need to use softmax at all in training? Why didn't we continue to use the SSR, as in the main backpropagation idea? Is there some problem with the SSR that forced us to transform the output into something else to work with?

    • @statquest
      @statquest  Před rokem

      SoftMax allows us to use Cross Entropy as a loss function, which I believe makes training easier when there are multiple classifications.

  • @yourfutureself4327
    @yourfutureself4327 Před rokem +1

    💚

  • @alonsomartinez9588
    @alonsomartinez9588 Před rokem

    It would be good to remind people what 'e' is in this vid, as well as what its value is! People could mistake it for the error of the network, or for entropy.

  • @AdrianDolinay
    @AdrianDolinay Před 2 lety +2

    Great thumbnail lol

  • @julescesar4779
    @julescesar4779 Před 2 lety +1

  • @EEBADUGANIVANJARIAKANKSH
    @EEBADUGANIVANJARIAKANKSH Před 2 lety +1

    Let's say I have the chance to increase your subscriber count:
    I will make it 1M (small BAM!) {10^0},
    no, no, I will change it to 10M (BAM!) {10^1},
    but I guess your channel should have at least 100M subs (Double BAM!) {10^2}.
    0, 1, 2 denote the standard of BAM!
    Jokes apart,
    I really think this is one of the most useful channels I have ever seen. I like the way he structures his videos to explain the concepts. Sometimes even my professors look at these videos for reference. That's how good the channel is!!!!!

  • @rachelcyr4306
    @rachelcyr4306 Před 3 lety +1

    Do you have anything on soft max logistic regression????

  • @alrzhr
    @alrzhr Před 11 měsíci

    This guy is different :)))

  • @tianchengsun3767
    @tianchengsun3767 Před 2 lety

    It looks like softmax is very similar to logistic regression? Correct me if I am wrong. Could you give a brief explanation? Thank you so much.

    • @statquest
      @statquest  Před 2 lety

      It's quite different. Logistic regression doesn't just take a bunch of random values and convert them into "probabilities". For details, see: czcams.com/play/PLblh5JKOoLUKxzEP5HA2d-Li7IJkHfXSe.html

  • @mountaindrew_
    @mountaindrew_ Před rokem

    Is SSR used mainly for single output neural networks?

    • @statquest
      @statquest  Před rokem

      it depends on what you are predicting.

  • @porkypig7170
    @porkypig7170 Před rokem

    I’m getting 0.11 (rounded), not 0.10, as the softmax for versicolor using this calculation: e^-0.4/(e^1.43+e^-0.4+e^0.23)
    Is it correct? Just double-checking to make sure I’m making the right calculations.
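
    (The arithmetic checks out: with the rounded raw values shown on screen, Versicolor comes out to about 0.11 rather than 0.10; the video's 0.10 presumably comes from the unrounded raw outputs. A quick check, assuming Python:)

    import math

    raw = {"setosa": 1.43, "versicolor": -0.40, "virginica": 0.23}
    denom = sum(math.exp(v) for v in raw.values())
    print({k: round(math.exp(v) / denom, 2) for k, v in raw.items()})
    # {'setosa': 0.68, 'versicolor': 0.11, 'virginica': 0.21}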

  • @ritwikpm
    @ritwikpm Před 7 měsíci

    We minimise cross entropy (= - log likelihood) to fit both Neural Networks and Logistic Regression. Logistic regression can also theoretically converge to different parameter estimates based on initial weights - just like neural networks. But we still consider their output to be a representation of probability - specifically because they are fit to maximise log likelihood. Why can't similar logic be applied to Neural Network classification. The parameter estimates might vary, but as long as we are maximising log likelihood (and minimising the most common loss cross entropy), are we not predicting probabilities...?

    • @statquest
      @statquest  Před 7 měsíci

      To be honest, I don't really know. But if I had to guess, it might have something to do with the fact that Logistic Regression fits a relatively simple and easy to understand shape to the data that doesn't allow non-linearities in the sense that the predicted probabilities don't start low, then go up and then go low again. In contrast, neural networks have no limit on the shape they can fit to the data and allow all kinds of non-linearities.

  • @averagegamer9513
    @averagegamer9513 Před rokem

    I have a question. Why is the softmax function necessary? It seems like you could directly calculate probabilities between 0 and 1 summing to 1 without the exponential function, so why do we use it?

    • @statquest
      @statquest  Před rokem

      Sure, there are other ways you could solve this problem. However, the SoftMax function has a derivative that is relatively easy to compute, and that makes it relatively easy to work with in terms of using Backpropagation.

    • @averagegamer9513
      @averagegamer9513 Před rokem +1

      @@statquest Thanks for the explanation, and great video!
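
      (For reference, the "relatively easy" derivative mentioned above is the SoftMax Jacobian covered in the 10:20 chapter; a minimal NumPy sketch, writing p_i for the SoftMax output of raw value z_i:)

      import numpy as np

      def softmax(z):
          e = np.exp(z - z.max())
          return e / e.sum()

      z = np.array([1.43, -0.40, 0.23])
      p = softmax(z)

      # d p_i / d z_j = p_i * (1 - p_i) when i == j, and -p_i * p_j when i != j,
      # which in matrix form is diag(p) - outer(p, p).
      jacobian = np.diag(p) - np.outer(p, p)
      print(jacobian)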

  • @MADaniel717
    @MADaniel717 Před 3 lety

    How do I tune the other weights and biases altogether?

    • @statquest
      @statquest  Před 3 lety

      Like this: czcams.com/video/IN2XmBhILt4/video.html czcams.com/video/iyn2zdALii8/video.html czcams.com/video/GKZoOHXGcLo/video.html czcams.com/video/xBEh66V9gZo/video.html

  • @Itachi-uchihaeterno
    @Itachi-uchihaeterno Před 5 měsíci

    More videos, please: Autoencoders and GANs.

    • @statquest
      @statquest  Před 5 měsíci +1

      I'll keep those topics in mind.

  • @Anujkumar-my1wi
    @Anujkumar-my1wi Před 3 lety

    I know that a feedforward neural net with 1 hidden layer is a universal approximator, but can you tell me why we use a nonlinear activation function in the 2nd hidden layer of a neural net with 2 hidden layers? The neurons in the 1st hidden layer have already learned nonlinear functions of the inputs, and the 2nd hidden layer is just doing a linear combination; a linear combination of nonlinear functions of the inputs is still a nonlinear function, so why do we use an activation function in the 2nd layer of a 2-layer neural net?

    • @statquest
      @statquest  Před 3 lety

      I think the more activation functions we have, the more flexibility we have in the model.

    • @Anujkumar-my1wi
      @Anujkumar-my1wi Před 3 lety

      @@statquest I mean we could use the second layer just for a linear combination of the nonlinear functions (learned by the previous layer's neurons) in order to learn a more complex nonlinear function, but this wouldn't provide as much flexibility as using an activation function on top of the linear combination.

  • @brahimmatougui1195
    @brahimmatougui1195 Před 10 měsíci

    But sometimes we need to give probabilities along with the model prediction, especially for multiclass prediction. If we cannot trust the probabilities (8:11) given by the model, what should we do? In other words, if I want to assign probabilities to each class provided in the output, how would I go about doing it?

    • @statquest
      @statquest  Před 10 měsíci +1

      These "probabilities" follow the definition of "probability" (they are between 0 and 1 and add up to 1) - so if that is good enough, then you are good to go. However, if you want to use them in a setting where you can interpret them as "given these input values, 95% of the time the species is X", then you should use a different model. Possibly logistic regression would be a better fit.

    • @brahimmatougui1195
      @brahimmatougui1195 Před 10 měsíci +1

      @@statquest Thank you for your prompt answer

  • @CreativePuppyYT
    @CreativePuppyYT Před 3 lety

    You forgot to add this video to the machine learning playlist

    • @statquest
      @statquest  Před 3 lety +1

      Thanks! I'm still in the middle of the neural network series of videos. Hopefully when they are done (in a few weeks) I'll get the playlists organized properly.

  • @tuananhvt1997
    @tuananhvt1997 Před rokem

    >Setosa, Versicolor, Virginica
    I notice that reference 🤔

    • @statquest
      @statquest  Před rokem

      I'm not sure I understand what you are getting at.

  • @ayushupadhyay9501
    @ayushupadhyay9501 Před 2 lety +1

    Bam bam bam

  • @Anujkumar-my1wi
    @Anujkumar-my1wi Před 3 lety

    I want to know: in pure mathematics, do neurons learn functions with certain superpositions, widths, heights and slopes (controlled through weights and biases) such that when we combine them we get an approximation of the function we're trying to approximate?

    • @statquest
      @statquest  Před 3 lety

      Neural Networks are considered "universal function approximators".

    • @Anujkumar-my1wi
      @Anujkumar-my1wi Před 3 lety

      @@statquest I mean, do they approximate a function by learning certain simpler functions with certain superpositions, slopes, heights and widths (controlled by weights and biases), so that when we combine them we get an approximation of the function we're trying to approximate?

    • @statquest
      @statquest  Před 3 lety

      @@Anujkumar-my1wi To be honest, I'm probably the worst person to ask about these sorts of things. I know that, through weights and biases, we create a wide variety of non-linear functions that are added together to create a complicated function that approximates the training data. However, I'm not sure that's what you're looking for.

    • @Anujkumar-my1wi
      @Anujkumar-my1wi Před 3 lety

      @@statquest No , i just wanted to ask whether that's the way a neural net works mathematically.

    • @statquest
      @statquest  Před 3 lety

      @@Anujkumar-my1wi I'm still a little confused, because mathematically, Neural Networks do exactly what I describe in these videos. I'm not dumbing down the math, this is the real deal, so what you see here is what Neural Networks do mathematically.

  • @andrewdunbar828
    @andrewdunbar828 Před rokem

    Does the output range depend on the activation function? Looks like ReLU but I think it can't happen with sigmoids.

    • @statquest
      @statquest  Před rokem

      The output range of what?

    • @andrewdunbar828
      @andrewdunbar828 Před rokem

      @@statquest The output nodes. Right at the start around 1:40

    • @statquest
      @statquest  Před rokem +1

      @@andrewdunbar828 Because the activation functions are in the middle, and after them we multiply those values by weights and add biases that, in theory, could be anything, we could definitely end up with numbers > 1 and < 0 even if the activation functions were sigmoids. For example, if the last bias term before the output for setosa were +100, then we could easily end up with output values > 100.

    • @andrewdunbar828
      @andrewdunbar828 Před rokem

      @@statquest Hmm I have much to learn (-:

  • @Rictoo
    @Rictoo Před 4 měsíci

    I have a question! At 3:35 you say "ArgMax will output 1 for any other value greater than 0.23" - but shouldn't it be "greater than 1.43", because ArgMax points to the value that is the highest in the set of outputs? A related question: is the intuition that if we know the true value of Virginica (e.g., if the training sample was truly Virginica), and the ArgMax for Virginica is 0 on that training example (because we predicted it wrong), then we essentially wouldn't know how to get to the right answer, because we have no slope pointing towards it? We're just told "You're wrong. Not telling you _how_ wrong, just wrong," which isn't helpful for learning.

    • @statquest
      @statquest  Před 4 měsíci +1

      At 3:34 I say "> 0.23", because 0.23 is the second largest number, and any number larger than it, will be the one selected by argmax. If, instead, I had said "> 1.43", then nothing would be selected, since 1.43 is the largest number and nothing is larger.
      And your intuition for the second part is correct.

    • @Rictoo
      @Rictoo Před 4 měsíci

      Ohhh, thanks. Now I understand that the Argmax function you're plotting there is the Argmax of the Setosa class, not Versicolor (I think?). I was initially under the impression it was for the Versicolor class.@@statquest

  • @csmatyi
    @csmatyi Před 2 lety

    what happens when you run the NN with softmax and 2 outputs have the same value?

    • @statquest
      @statquest  Před 2 lety

      Then they'll have the same softmax output.

  • @shubhamtalks9718
    @shubhamtalks9718 Před 3 lety

    Why not just normalize the raw output values? What is the benefit of exponentiating first and then normalizing?

    • @statquest
      @statquest  Před 3 lety +1

      I believe that the exponentiation ensures that the SoftMax function will be continuous for all input values.

    • @shubhamtalks9718
      @shubhamtalks9718 Před 3 lety

      @@statquest Will it be discontinuous if we do normalization of raw output values?

    • @statquest
      @statquest  Před 3 lety +1

      @@shubhamtalks9718 If two of the 3 outputs are 0, then we'll get ArgMax, and that's no good.

    • @shubhamtalks9718
      @shubhamtalks9718 Před 3 lety +1

      @@statquest BAM!!! Got it. Thanks.
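
      (One more concrete issue with normalizing the raw values directly, in addition to the points above: raw outputs can be negative, so the results need not lie between 0 and 1. A minimal sketch, assuming NumPy:)

      import numpy as np

      raw = np.array([1.43, -0.40, 0.23])

      print(raw / raw.sum())   # roughly [1.13, -0.32, 0.18] -> a "probability" below 0 and one above 1

      e = np.exp(raw)
      print(e / e.sum())       # roughly [0.68, 0.11, 0.21]  -> all between 0 and 1, summing to 1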

  • @felipe_marra
    @felipe_marra Před 7 měsíci +1

    up

  • @luciferpyro4057
    @luciferpyro4057 Před 2 lety

    What does e stand for in the softmax equation? Did I miss something?
    Is "e" supposed to represent Euler's number = 2.7182818284590452353602874713527... ?

    • @statquest
      @statquest  Před 2 lety

      'e' is Euler's number. 'e', and the natural log (log base 'e'), are used throughout machine learning (and statistics) because their derivatives are so easy to work with.

    • @luciferpyro4057
      @luciferpyro4057 Před 2 lety +1

      @@statquest Thanks

  • @srewashilahiri2567
    @srewashilahiri2567 Před 2 lety

    If we start with different values for the weights and biases, then why will the optimum values be different if gradient descent finds a global minimum for each? What am I missing?

    • @statquest
      @statquest  Před 2 lety

      There are lots of local minimums that we can get stuck in, and there may be several that are almost as good as the global minimum.

    • @srewashilahiri2567
      @srewashilahiri2567 Před 2 lety +1

      @@statquest Did some reading and got your point completely....thanks for the videos...not sure if learning ML could get any easier or better!

    • @statquest
      @statquest  Před 2 lety

      @@srewashilahiri2567 bam!

    • @yashikajain5997
      @yashikajain5997 Před 2 lety

      @@statquest Getting stuck in a local minimum would depend on the cost function? If we use cross-entropy as the loss function, then because it is a convex function, it will definitely converge to the global minimum. And in this case, can we trust the accuracy of these 'probabilities'?
      This is what I am thinking; please correct me if I am wrong.
      Thank You

    • @statquest
      @statquest  Před 2 lety +1

      @@yashikajain5997 Unfortunately it's not that simple. Cross-Entropy, like SSR, is convex in very simple situations, but the entire Neural Network is non-linear with respect to the parameters so regardless of the loss function, we can end up with a strange shape that has local minima that we can get stuck in.

  • @Xayuap
    @Xayuap Před rokem +2

    ¡ B A M ! 😳

  • @gummybear8883
    @gummybear8883 Před 2 lety

    Does anybody know what the equivalent of argmax is among TensorFlow's activation arguments? They only have softmax in there.

    • @statquest
      @statquest  Před 2 lety

      There's probably a base "max" function in Python or numpy you could use.

    • @gummybear8883
      @gummybear8883 Před 2 lety

      @@statquest Thanks for the suggestion, Josh. I bought your new sketch book and I think it is very clever. I thought it would have been much better if the book cover were hardbound. Overall, thank you for making these videos.

    • @statquest
      @statquest  Před 2 lety +1

      @@gummybear8883 Thanks! I would have loved to have made a hardback edition, but I'm self-publishing and it was not an option.

  • @BlackHermit
    @BlackHermit Před 2 lety +1

    Arrrrrrrg! .)

  • @YuriPedan
    @YuriPedan Před 3 lety

    Somehow "Part 4 Multiple inputs and outputs" video is not available for me :(

    • @statquest
      @statquest  Před 3 lety

      Thanks for pointing that out. I've fixed the link: czcams.com/video/83LYR-1IcjA/video.html

    • @YuriPedan
      @YuriPedan Před 3 lety +1

      @@statquest Thank you very much!

  • @Janeilliams
    @Janeilliams Před 2 lety

    Can you show or share the Python implementation?

  • @howardkennedy4540
    @howardkennedy4540 Před 3 lety

    Why is the versicolor softmax value +0.10 vs -0.10? The math indicates a negative value.

    • @statquest
      @statquest  Před 3 lety

      SoftMax values are always positive and between 0 and 1. Can you explain how you got a negative value?

    • @howardkennedy4540
      @howardkennedy4540 Před 3 lety +1

      @@statquest I misunderstood your notation and missed your comment on e raised to the power. My apologies.

  • @nelsonmcnamara
    @nelsonmcnamara Před 4 měsíci

    Hello, comment section. Would anyone know, or point me in the right direction, if I actually want the probability (no quotes) instead of the "probability"?
    Imagine I am predicting the probability of the Red Sox winning, or of Kim winning the presidential election; how would I approach that?

    • @statquest
      @statquest  Před 4 měsíci

      If you want real probabilities, then you don't want to use a neural network. Instead, consider using something like linear regression czcams.com/video/nk2CQITm_eo/video.html or logistic regression czcams.com/video/yIYKR4sgzI8/video.html

  • @austinoquinn815
    @austinoquinn815 Před rokem

    Why do we bother applying either of these? Can't we just train with the raw outputs rather than using softmax, and just take the highest-valued node as the answer rather than argmax?

    • @statquest
      @statquest  Před rokem +1

      That's a valid question and the answer has to do with how softmax feeds into Cross Entropy, and cross entropy is easier to train than the raw output values. For details on all of this, see: czcams.com/video/6ArSys5qHAU/video.html

  • @phoenixado9708
    @phoenixado9708 Před 2 lety +1

    So where's hardmax and hardplus

  • @alternativepotato
    @alternativepotato Před 3 lety +1

    heh, setosa's value after softmax is 0.69

  • @Alchemist10241
    @Alchemist10241 Před 2 lety

    6:33 This teddy bear eats raw outputs, digests them using Vitamin e (not E) and then sh*ts them between flag zero and flag one. 😁

  • @Anonymous-tm7jp
    @Anonymous-tm7jp Před 9 měsíci +1

    AAAARRRRRGGGG!!! mAx😂😂

  • @charansahitlenka6446
    @charansahitlenka6446 Před rokem

    at 6:51 softmax takes 1.43 and gives out 0.69, heavy sus

  • @terjeoseberg990
    @terjeoseberg990 Před 6 měsíci +3

    Nobody likes derivatives that are totally lame. Especially gradient descent.

  • @jijie133
    @jijie133 Před 3 lety +1

    toilet paper. so funny.

  • @allyourcode
    @allyourcode Před 3 lety

    ArgMax and SoftMax seem rather pointless since you can already tell which classification the NN is predicting from its raw output; just look for the greatest output. SoftMax is just going to lull people into the false sense that the outputs are probabilities. In reality, there is nothing super special about its choice of the exp function to force everything to be positive (plus a normalization factor to force everything to add up to 1). Any (differentiable) function f where f(x) >= 0 would have worked just as well as exp.

  • @Salmanul_
    @Salmanul_ Před 7 měsíci +1

    Thanks!