Perceptron Learning Algorithm in Machine Learning | Neural Networks

  • Published 22 Jun 2020
  • Playlist: Machine Learning
    Android App (Notes+Videos): play.google.com/store/apps/de...
    Facebook: / thinkxacademy
    Twitter: / thinkxacademy
    Instagram: / thinkxacademy
    #perceptron #ml #neuralnetworks

Comments • 57

  • @srimannarayanaiyengar8914
    @srimannarayanaiyengar8914 3 years ago +57

    Excellent explanation, my friend. I loved it. I am a CSE professor, 61 years old. May God bless you. Please provide videos like this so that many in the student community can learn. May Goddess Saraswathi bless you for your bright future.

    • @ThinkXAcademy
      @ThinkXAcademy  3 years ago +7

      Thank you so much for such kind words. I will keep creating more videos to help the student community ✅😄

    • @ninasirsi2340
      @ninasirsi2340 11 months ago +1

      Sir, at the age of 61, what will you do by learning the perceptron? Are you going to teach it in your college?

    • @animezone2062
      @animezone2062 4 months ago +1

      @@ninasirsi2340 He said he's a CSE PROFESSOR.

  • @jeremyyd1258
    @jeremyyd1258 a year ago +3

    Thank you SO much for such a clear explanation, with the visuals to support it. I really appreciate it!

  • @richardnorthiii3374
    @richardnorthiii3374 4 months ago +2

    Finally a clear explanation. Thank you.

  • @mauryaashish1865
    @mauryaashish1865 a year ago +3

    Your way of explanation is so simple and organized that anyone can understand.
    I enjoyed learning the perceptron; you are an amazing educator. Thank you for such content. :)

  • @bubiubcyufg-zc4ui
    @bubiubcyufg-zc4ui a month ago

    It was super useful for me.
    Thank you, my friend!

  • @rangeenbilla
    @rangeenbilla 6 months ago

    Finally understood after hopping through so many videos. W!

  • @riki2404
    @riki2404 3 years ago +3

    Simply amazing explanation. Thanks a lot.

    • @ThinkXAcademy
      @ThinkXAcademy  3 years ago +1

      Thanks for the support ✔️ These comments make my day 😄

  • @Babygirl_S
    @Babygirl_S a year ago

    This was so good! Thank you very much.

  • @anvayawalgaonkar4119
    @anvayawalgaonkar4119 2 years ago +1

    Explained in a very easy way. Please share the basics of the perceptron in a Jupyter notebook for real hands-on experience.

  • @slainiae
    @slainiae a year ago

    Excellent explanation👍

  • @harshchindarkar5887
    @harshchindarkar5887 2 years ago

    Thanks man, now the concept is clear for me...

    • @ThinkXAcademy
      @ThinkXAcademy  2 years ago +1

      Great👍🏻Please share our videos to help this channel grow🌟

  • @csadhi
    @csadhi 2 years ago +4

    I have gone through a few videos on the topic and did not get a clear understanding. But your video was very clear and the examples were very simple to understand. Great job, and keep it up. A big thanks for explaining things clearly.

    • @ThinkXAcademy
      @ThinkXAcademy  2 years ago

      Thanks😀 Do share my videos with other students to help this channel grow🌟

  • @samarthagarwal6929
    @samarthagarwal6929 2 months ago +1

    You forgot to multiply by xi in the formula for calculating the new weight.
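
The comment is pointing at the standard perceptron rule, in which each weight's delta is scaled by its own input: wi ← wi + η·(target − output)·xi. A minimal sketch of that rule (the function names and numbers here are illustrative, not taken from the video):

```python
def step(z):
    """Threshold activation: fire (1) when the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

def update_weights(w, x, target, eta=0.1):
    """One perceptron update; each delta carries the xi factor the comment mentions."""
    output = step(sum(wi * xi for wi, xi in zip(w, x)))
    error = target - output
    return [wi + eta * error * xi for wi, xi in zip(w, x)]

# Example: a misclassified point pulls the weights toward the target.
w = update_weights([-1.0, 0.0], [1.0, 1.0], target=1)  # -> [-0.9, 0.1]
```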

  • @basab4797
    @basab4797 a year ago

    Really awesome

  • @shashwatgandhi4895
    @shashwatgandhi4895 a year ago +2

    Correct me if I am wrong.
    The weight update rule increases the weights if the target value is higher than the actual value, and vice versa. That makes the output in the next iteration closer to the target output, BUT it will not work that way if the inputs (xi) are, say, always negative. In that case the weight update has to be modified: if the input was negative, the sign of the delta weight should be reversed.
    E.g.:
    x1 = -1, w1 = 1 -> x1*w1 = -1 (actual output),
    sigmoid(-1) -> 0
    target output = 1
    delta weight = (n) * (1)
    Assuming n to be 0.1, then
    delta weight = 0.1
    So the new weight becomes w1 = w1 + delta weight,
    w1 = 1.1
    But running it again we see:
    x1 = -1, w1 = 1.1 -> x1*w1 = -1.1 (actual output)
    sigmoid(-1.1) -> 0
    This makes the algorithm even worse.
    So since the input was negative, rather than adding the delta weight we should have subtracted it:
    w1 = w1 - delta weight = 1 - 0.1 = 0.9
    x1 = -1, w1 = 0.9 -> x1*w1 = -0.9
    sigmoid(-0.9) = 0

    • @ThinkXAcademy
      @ThinkXAcademy  a year ago +1

      Yes, for negative inputs we need to handle that case.
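
Running the thread's own numbers supports a simpler view: when the xi factor is included in the delta (as in the standard rule), the negative input flips the sign of the update automatically, so no special case is needed. A sketch using a step function in place of the video's activation (an assumption on my part):

```python
eta = 0.1
x1, w1, target = -1.0, 1.0, 1

def step(z):
    # Stand-in for the activation: outputs 0 for negative sums, 1 otherwise.
    return 1 if z >= 0 else 0

output = step(x1 * w1)                # x1*w1 = -1, so output is 0
delta = eta * (target - output) * x1  # 0.1 * (1 - 0) * (-1) = -0.1
w1 = w1 + delta                       # 1.0 - 0.1 = 0.9, the value suggested above
```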

  • @decodingrules2494
    @decodingrules2494 a year ago

    Thank you man

  • @mo-ry5je
    @mo-ry5je a year ago

    Thank you

  • @kumarsourabh5862
    @kumarsourabh5862 2 years ago

    Very nice explanation. Thank you.

    • @ThinkXAcademy
      @ThinkXAcademy  2 years ago +1

      Like and share our content to support us😄

  • @amnshumansunil3371
    @amnshumansunil3371 3 years ago +1

    dude you're amazing!! keep up the good job :)

    • @ThinkXAcademy
      @ThinkXAcademy  3 years ago

      Thank you 😀 Please share to help my channel reach more students ✌🏻

  • @vamsipaidupalli7904
    @vamsipaidupalli7904 3 years ago +2

    Nice 👌 keep it up sir

  • @user-mx7sv3mo4i
    @user-mx7sv3mo4i a year ago

    Excellent!!

  • @meetpatel1011
    @meetpatel1011 6 months ago

    Thanks

  • @chamithdilshan3547
    @chamithdilshan3547 2 years ago

    Thank you!

    • @ThinkXAcademy
      @ThinkXAcademy  2 years ago

      Welcome 🌟 Share it with other students also👍

  • @srimannarayanaiyengar8914

    Please post the multilayer perceptron model with an example.

  • @amitblizer4567
    @amitblizer4567 a year ago

    Very clearly explained video, thank you!

  • @fiilixwonder7675
    @fiilixwonder7675 a year ago

    Thank you 👍

  • @shashwatgandhi4895
    @shashwatgandhi4895 a year ago +1

    Wouldn't all the weights be equal after all the iterations, since the delta we add to each weight is the same for all of them in any iteration? (Assuming the weights were the same at the start.)

    • @dragster100
      @dragster100 6 months ago

      I think the error term (yi − yi bar) takes care of that. As the iterations go on, the error term becomes smaller and smaller until it eventually converges.
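
Another way to see it: in the full rule Δwi = η·(yi − ŷi)·xi, the error term is shared across weights, but each delta is also scaled by its own input xi, so weights that start equal diverge as soon as the inputs differ. A tiny illustration (the numbers are made up):

```python
eta = 0.1
w = [0.0, 0.0]           # weights start out equal
x = [1.0, 2.0]           # but the inputs differ
target, output = 1, 0    # suppose this step misclassified

# Each weight receives a delta scaled by its own input:
w = [wi + eta * (target - output) * xi for wi, xi in zip(w, x)]
# w is now [0.1, 0.2] -- no longer equal
```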

  • @sherlockholmes2752
    @sherlockholmes2752 2 years ago +2

    Very good explanation!!

    • @ThinkXAcademy
      @ThinkXAcademy  2 years ago

      Thanks😄 Share our videos to help this channel grow💯

  • @bhavikprajapati2614
    @bhavikprajapati2614 a year ago

    How can a set of data be classified using a simple perceptron? Using a simple perceptron with weights w0, w1, and w2 as −1, 2, and 1, respectively, classify the data points (3, 4), (5, 2), (1, −3), (−8, −3), and (−3, 0).
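
One way to work this out, assuming w0 is a bias applied to a constant input of 1 and a step activation that outputs 1 when the weighted sum is non-negative (the question doesn't pin down either convention):

```python
def classify(x1, x2, w0=-1.0, w1=2.0, w2=1.0):
    """Simple perceptron: 1 if w0 + w1*x1 + w2*x2 >= 0, else 0."""
    return 1 if w0 + w1 * x1 + w2 * x2 >= 0 else 0

points = [(3, 4), (5, 2), (1, -3), (-8, -3), (-3, 0)]
labels = [classify(x1, x2) for x1, x2 in points]
# Weighted sums: 9, 11, -2, -20, -7  ->  labels [1, 1, 0, 0, 0]
```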

  • @kabirbaghel8835
    @kabirbaghel8835 a year ago

    amazing 10/10

  • @praveenchristopher7776

    Thank you for the very clear explanation; it was a pleasure to learn. I have a question on the activation function x·w + b: since we are using a squashing function, should it not be x·w + b < 0.5 for 0, and x·w + b > 0.5 for it to be classified as 1? Thanks again.

    • @ThinkXAcademy
      @ThinkXAcademy  2 years ago +1

      No, I have rechecked the conditions, it is correct in the video.

  • @mrkhan3188
    @mrkhan3188 2 years ago

    Thanks dude... I have an exam tomorrow.

  • @mih2965
    @mih2965 a year ago

    Basically.

  • @dashsingh30095
    @dashsingh30095 2 years ago

    Very well explained 😀😀

    • @ThinkXAcademy
      @ThinkXAcademy  2 years ago

      Thanks..please share my videos to help me grow😄