Maximum Likelihood Estimate in Machine Learning

  • Published 10. 09. 2024

Comments • 31

  • @RezoanurRahman
    @RezoanurRahman 3 years ago +2

    Bro... This is THE EXACT video I was looking for. Literally explained like I am a 5-year-old.

  • @meet_minimalist
    @meet_minimalist 3 years ago

    This is amazing. I have been using these loss functions for the last 2 years, but no one ever taught me this, nor did I find such a derivation on the internet. Love you for this video.

    • @NormalizedNerd
      @NormalizedNerd 3 years ago +1

      Glad to be able to make these videos...Keep supporting ❤️

  • @sharathkumartn5504
    @sharathkumartn5504 4 years ago +2

    It helped me understand the concept, thanks. I find the way you explain a little fast; could you please slow down a bit so that we can digest it?

    • @NormalizedNerd
      @NormalizedNerd 4 years ago +1

      Thanks for your feedback. I'll try to follow that. Keep supporting.

  • @pranalipatle2215
    @pranalipatle2215 6 months ago

    Fan of your amazing explanation, dear.

  • @Kage1128
    @Kage1128 6 months ago

    10:08 Excuse me sir, why is it not mx + c + error term?

  • @suvkaka
    @suvkaka 3 years ago

    You are great, man! If possible, please make a video on Expectation-Maximization and how k-means clustering is related to it.

  • @surajgorai618
    @surajgorai618 2 years ago

    Great explanation

  • @SumitKumar-uq3dg
    @SumitKumar-uq3dg 4 years ago +1

    Also, can you please make a quick video on linear regression assumptions? Many YouTube videos confuse this a lot.

    • @NormalizedNerd
      @NormalizedNerd 4 years ago

      In this video, I've explained linear regression in great detail. Let me know if you still have doubts after watching this one. czcams.com/video/fnDO1s4fzi4/video.html

  • @congmou856
    @congmou856 2 years ago

    Very helpful!

  • @prashantjoshi8847
    @prashantjoshi8847 3 years ago

    This is awesome... great job :-)

  • @mahesh151994
    @mahesh151994 3 years ago

    From 10:49 onwards, I think you meant to say joint density function, not probability.

  • @Pedritox0953
    @Pedritox0953 3 years ago

    Good video!!

  • @sathya_official3843
    @sathya_official3843 3 years ago +1

    Hey! Good attempt. You still have a few things to improve. Use presentation slides for clarity, or use handwriting that is legible and consistent. Give some more background for the theories: examples, more theoretical story, etc.

  • @UmmadikTas
    @UmmadikTas 3 years ago

    You should be citing some resources, my dude. Your accent is not enough :P

  • @SumitKumar-uq3dg
    @SumitKumar-uq3dg 4 years ago

    Hey man, thanks a lot for considering my request. Really appreciate it. Can you please explain how the sigmoid curve is derived in logistic regression, since we can probably have many of them? Just like we use OLS to find the best-fit line in linear regression, how is the sigmoid curve found?

    • @NormalizedNerd
      @NormalizedNerd 4 years ago

      Glad this video helped you.
      Now here's the thing...
      I think you have confused the shape of the sigmoid curve with the shape of the boundary in the case of logistic regression. As long as we pass a linear function h(x) into the sigmoid, we can only get linear boundaries (a line in 2D, a plane in 3D, and so on). So in logistic regression we are also finding a best-fit line. The difference is that in linear regression the best-fit line is the line from which the squared distance of all points is minimum, whereas in logistic regression the best-fit line is the line that acts as the best boundary between the 2 classes. The sigmoid function (non-linear) just gives the output the sense of a probability and does not affect the linearity of the boundary! In fact, this is the reason why logistic regression cannot solve the XOR problem (see the sketch at the end of this thread).

    • @SumitKumar-uq3dg
      @SumitKumar-uq3dg 4 years ago

      @@NormalizedNerd So do you mean we find the best-fit line in logistic regression and then apply the sigmoid function in order to get the linear boundary between the two classes?

    • @NormalizedNerd
      @NormalizedNerd 4 years ago

      @@SumitKumar-uq3dg No, I wanted to say that the best fit line (in the context of logistic regression) is the boundary line.

    • @SumitKumar-uq3dg
      @SumitKumar-uq3dg 4 years ago

      @@NormalizedNerd I am still confused. In almost every video I have seen, they explain restricting the values between 0 and 1 by applying the sigmoid.

    • @NormalizedNerd
      @NormalizedNerd 4 years ago

      @@SumitKumar-uq3dg Yes, that is true. Restricting the values between (0, 1) gives the sense of a probability. But where exactly is your confusion?
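
A minimal sketch of the boundary point from the reply above, assuming scikit-learn's LogisticRegression (the library choice, the toy data, and the variable names are illustrative assumptions, not code from the video): the sigmoid only squashes the linear score w·x + b into (0, 1), so the decision boundary w·x + b = 0 stays a straight line, which handles a linearly separable toy set fine but never the XOR pattern.

```python
# Sketch under the assumptions stated above; not the video's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

# A linearly separable toy problem: label is 1 when x1 + x2 > 1.
rng = np.random.default_rng(0)
X_lin = rng.uniform(0.0, 1.0, size=(200, 2))
y_lin = (X_lin.sum(axis=1) > 1.0).astype(int)
lin_model = LogisticRegression().fit(X_lin, y_lin)
print("linearly separable data, accuracy:", lin_model.score(X_lin, y_lin))  # near 1.0

# The XOR pattern: no single straight line separates the two classes.
X_xor = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])
xor_model = LogisticRegression().fit(X_xor, y_xor)
print("XOR data, accuracy:", xor_model.score(X_xor, y_xor))  # stuck around 0.5

# The boundary is the line w1*x1 + w2*x2 + b = 0; sigmoid(w.x + b) only rescales the
# signed distance from that line into a probability and never bends the boundary.
print("weights:", xor_model.coef_, "intercept:", xor_model.intercept_)
```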