#12 Machine Learning Specialization [Course 1, Week 1, Lesson 3]

  • Uploaded Sep 12, 2024
  • The Machine Learning Specialization is a foundational online program created in collaboration between DeepLearning.AI and Stanford Online. This beginner-friendly program will teach you the fundamentals of machine learning and how to use these techniques to build real-world AI applications.
    This Specialization is taught by Andrew Ng, an AI visionary who has led critical research at Stanford University and groundbreaking work at Google Brain, Baidu, and Landing.AI to advance the AI field.
    This video is from Course 1 (Supervised Machine Learning: Regression and Classification), Week 1 (Introduction to Machine Learning), Lesson 3 (Regression Model), Video 4 (Cost function intuition).
    To learn more and access the full course videos and assignments, enroll in the Machine Learning Specialization here: bit.ly/3ERmTAq
    Download the course slides: bit.ly/3AVNHwS
    Check out all our courses: bit.ly/3TTc2KA
    Subscribe to The Batch, our weekly newsletter: bit.ly/3TZUzju
    Follow us:
    Facebook: / deeplearningaihq
    LinkedIn: / deeplearningai
    Twitter: / deeplearningai_

Comments • 16

  • @emmanuelteitelbaum · 4 months ago +6

    My favorite linear regression video of all time!

  • @soap4890 · 6 months ago +7

    🎯 Key Takeaways for quick navigation:
    [00:01](czcams.com/video/peNRqkfukYY/video.html) 🧠 Understanding the cost function:
    - The section builds intuition about the role and behavior of the cost function in linear regression.
    - A recap emphasizes the objective of finding parameter values w and b that minimize the cost function J(w, b), indicating optimal model performance.
    - Simplifying the linear regression model to f(x) = w · x makes it easier to visualize the cost function's relationship with the parameter w.
    [04:05](czcams.com/video/peNRqkfukYY/video.html) 📉 Analyzing the cost function graphically:
    - Graphs of both the linear model f(x) and the cost function J(w) are examined.
    - Different values of the parameter w correspond to different straight-line fits to the training data, and hence to different values of the cost function.
    - The relationship between the parameter w, the fit of the line, and the resulting value of J(w) is shown through graphical analysis.
    [13:47](czcams.com/video/peNRqkfukYY/video.html) 📊 Choosing optimal parameter values:
    - The optimal parameter value w is the one that minimizes the cost function J(w), giving the best fit of the linear model to the training data.
    - Graphical analysis demonstrates how the choice of w affects both the fit of the model and the resulting cost.
    - The ultimate goal in linear regression is to find parameter values that make the cost function as small as possible, indicating an accurate model fit.
    Made with HARPA AI
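
    The takeaways above can be checked numerically. A minimal sketch, assuming the video's toy training set (1, 1), (2, 2), (3, 3), where the true slope is w = 1 (the specific data points are an assumption, not quoted from the video):

    ```python
    def cost_j(w, xs, ys):
        """Squared-error cost J(w) for the simplified model f(x) = w * x."""
        m = len(xs)
        return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

    # Toy training set where y = x, so w = 1 fits the data perfectly.
    xs, ys = [1, 2, 3], [1, 2, 3]

    print(cost_j(1.0, xs, ys))  # 0.0 -- the line passes through every point
    print(cost_j(0.5, xs, ys))  # nonzero -- the line underestimates every y
    ```

    Evaluating J at several values of w and plotting the results reproduces the bowl-shaped curve discussed in the video, with its minimum at w = 1.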

  • @tamaramartinovic · a year ago +8

    So well explained, thank you very much!

  • @leogodofnone2706 · 5 months ago +4

    Thank you 🙏🏻

  • @ooxxx · a year ago +2

    thank you!

  • @mithun442 · 8 months ago +1

    Understood sir

  • @nozarlahooti1789 · 6 months ago

    For the second example it would be 4.25/6 ≈ 0.7, right?

  • @karthik_0180 · 11 months ago

    Yes

  • @quietcorner5989 · 10 months ago +1

    Why did you calculate the J function at the beginning with the same slope, w = 1, and then calculate J for two different slopes, w = 1 and w = 0.5? In the first case you assumed the actual function and the model's prediction had the same slope w = 1, so of course the error was 0. Can you please clarify this? I didn't get it. Did you assume at the beginning that the model prediction was 100% correct and precise and matched the actual function, so the error was 0?

    • @Lostverseplays · 9 months ago

      Yes, at the start the model prediction was 100% correct, which is why the cost function J gave 0 as output: there was no error in the prediction. When the slope became 0.5, the model f(x) (the function used to predict prices) deviated from the actual values, and that error is what the J function measures.

  • @YashrajVerma-ni4jn · a year ago +1

    😊

  • @TeacherAzkaShehzadi-gf6vb · 5 months ago

    date 16

  • @gnull · a month ago

    i smell calculus

  • @MikeM-uy6qp · 4 months ago

    Huh?