Linear Regression Algorithm In Python From Scratch [Machine Learning Tutorial]

  • Published 29. 06. 2024
  • We'll build a linear regression model from scratch, including the theory and math. Linear regression is one of the most popular machine learning algorithms, and implementing it in Python will help you understand how it works.
    First, we'll cover the theory and the equation for calculating the coefficients. Then we'll implement the equation in Python. We'll end by calculating the r-squared value to see how well our regression fits the data.
    We'll be using data from the Olympics to implement our algorithm. We'll try to predict how many medals a country will earn based on how many athletes it enters into the Olympics.
    You can find the code and data at github.com/dataquestio/projec... .
    Chapters
    00:00 Intro
    00:20 Theory and equation
    14:25 Python implementation
    20:02 r-squared calculation
    ---------------------------------
    Join 1M+ Dataquest learners today!
    Master data skills and change your life.
    Sign up for free: bit.ly/3O8MDef
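
For readers who want the core of the method at a glance, here is a minimal sketch of what the video builds; the file and column names ("teams.csv", "athletes", "medals") are assumptions based on the description above, and the actual code is in the linked GitHub repo.

```python
# A sketch of the video's approach, not the exact notebook code.
# Assumed input: a CSV with "athletes" and "medals" columns.
import numpy as np
import pandas as pd

teams = pd.read_csv("teams.csv")

# Design matrix: a column of 1s for the intercept, then the predictor.
X = np.column_stack([np.ones(len(teams)),
                     teams["athletes"].to_numpy(dtype=float)])
y = teams["medals"].to_numpy(dtype=float)

# Normal equation: B = (X^T X)^-1 X^T y
B = np.linalg.inv(X.T @ X) @ X.T @ y
predictions = X @ B

# r-squared = 1 - SSE/SST
sse = ((y - predictions) ** 2).sum()
sst = ((y - y.mean()) ** 2).sum()
print("coefficients:", B, "r-squared:", 1 - sse / sst)
```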

Comments • 47

  • @ninobach7456
    @ninobach7456 4 months ago +4

    I recommend this video for those who understand the general concept of linear regression, but want to know what happens 'under the hood'

  • @namrata_roy
    @namrata_roy 1 year ago +4

    Amazing tutorial. Difficult concepts were explained with such ease. Kudos team Dataquest!

  • @sulaimansalisu5833
    @sulaimansalisu5833 1 year ago +3

    Very explicit. You are a wonderful teacher. Thanks so much

  • @dataprofessor_
    @dataprofessor_ 1 year ago +4

    This is a great tutorial. Beautifully explained.

  • @Mara51029
    @Mara51029 3 months ago +1

    This is an absolutely amazing video. I can’t wait to see more great work.

  • @BTStechnicalchannel
    @BTStechnicalchannel 1 year ago +2

    Very well explained!!!

  • @learn-with-lee
    @learn-with-lee 1 year ago +1

    Thank you. It was well explained.

  • @anfedoro
    @anfedoro 1 year ago

    Great and very clear explanation. The only thing missing at the end is a visualization of the regression 😉. It would be nice to have both the initial data and the fitted line plotted, as sketched below.
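
For anyone who wants that plot, a quick sketch, assuming the 1-D arrays x (athletes), y (medals), and predictions from the video's fitted model are in scope:

```python
# Sketch of the suggested visualization; x, y, and predictions are
# assumed to come from the model fitted in the video.
import numpy as np
import matplotlib.pyplot as plt

plt.scatter(x, y, alpha=0.5, label="Observed")
order = np.argsort(x)                  # sort so the line draws cleanly
plt.plot(x[order], predictions[order], color="red", label="Fitted line")
plt.xlabel("Athletes")
plt.ylabel("Medals")
plt.legend()
plt.show()
```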

  • @zheshipeng
    @zheshipeng 1 year ago +1

    Thanks so much. Better than any e-book 🙂

  • @HIEUHUYNHUC
    @HIEUHUYNHUC 6 months ago

    Today you will be my teacher. I'm from Vietnam. Thank you so much.

  • @abidson690
    @abidson690 1 year ago

    Thanks so much for the video

  • @fassstar
    @fassstar 1 month ago +1

    One correction, not relevant to the actual regression, but it should be said nonetheless. The number of medals one athlete can win is not limited to one; rather, it is limited to the number of events the athlete competes in (a maximum of one per event). In fact, numerous athletes have won multiple medals in one Olympics. Just wanted to clarify that. Of course, past a certain number of athletes, it becomes impossible for a smaller team to compete in as many events as a larger team, making it more likely that the larger team wins more medals.

  • @JohnJustus
    @JohnJustus 3 months ago +1

    Perfect, thanks a lot

  • @dembobademboba6924
    @dembobademboba6924 5 months ago

    great job

  • @user-ql7de7ud6q
    @user-ql7de7ud6q 4 months ago

    THANKS A LOT 🤯

  • @AndresIniestaLujain
    @AndresIniestaLujain 1 year ago

    Would the solution for B be considered a least squares solution? Also, if we wanted to construct, say, a 95% confidence interval for each coefficient, would we take B for the intercept, athletes, and prev_medals (-1.96, 0.07, 0.73) and multiply them by their respective standard errors and t-scores? Would the formula be as follows: B(k) * t(n-k-1, alpha = 0.05/2) * SE(B(k)), or does this require more linear algebra? Great tutorial btw, thanks for the help.
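
On the first question: yes, the normal-equation B is exactly the ordinary least-squares solution. On the second: the usual interval is B(k) ± t·SE(B(k)), with the t critical value multiplying only the standard error. A sketch of the standard OLS computation, assuming X (including its intercept column of 1s), y, and B from the video are in scope:

```python
# Sketch of standard OLS 95% confidence intervals; assumes X, y, B
# from the normal-equation fit are already defined.
import numpy as np
from scipy import stats

n, p = X.shape
residuals = y - X @ B
sigma2 = (residuals @ residuals) / (n - p)   # residual variance estimate
cov_B = sigma2 * np.linalg.inv(X.T @ X)      # covariance matrix of B
se = np.sqrt(np.diag(cov_B))                 # standard error per coefficient

t_crit = stats.t.ppf(0.975, df=n - p)        # two-sided 95% critical value
lower, upper = B - t_crit * se, B + t_crit * se
```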

  • @guilhermesaraiva3846
    @guilhermesaraiva3846 10 months ago

    Thanks for the lesson, but just a question: during the modeling, the split into X/y train and X/y test sets wasn't made. Why would it not be necessary, and if it is necessary, how would it be done?
    Thanks

  • @oluwamuyiwaakerele4287

    Hi, this is a wonderful explanation. Great job putting this together. The only thing that really confuses me is how you factor in previous medals in the predictive model. What would that look like in the linear equation at 1:54?

    • @Dataquestio
      @Dataquestio  1 year ago +2

      You would add a second term b2x2, so the full equation would be b0 + b1x1 + b2x2. x1 would be athletes, x2 is previous medals. Then you'd have separate coefficients (b1 and b2) for each.
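
In code, that amounts to one more column in the design matrix. A sketch, assuming the video's "teams" DataFrame with "athletes", "prev_medals", and "medals" columns:

```python
# Two-feature version of the model described in the reply above.
import numpy as np

X = teams[["athletes", "prev_medals"]].to_numpy(dtype=float)
X = np.column_stack([np.ones(len(X)), X])   # each row is [1, x1, x2]
y = teams["medals"].to_numpy(dtype=float)

B = np.linalg.inv(X.T @ X) @ X.T @ y        # B = [b0, b1, b2]
predictions = X @ B                         # b0 + b1*athletes + b2*prev_medals
```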

  • @television80
    @television80 7 months ago

    Hi Vikas, which is better for GLM models in Python: the sklearn or statsmodels package?

  • @jeanb2682
    @jeanb2682 9 months ago

    Hey, that is a great, beautiful demonstration of linear regression. Thank you. But I didn't understand where prev_medals comes in when building the X matrix at the beginning.
    Can someone explain to me how those values appear inside the X matrix?

  • @cclementson1986
    @cclementson1986 5 months ago

    Is there a reason you chose to implement the normal equation over gradient descent? I'm quite curious as I am more familiar with gradient descent.

  • @joshwallenberg337
    @joshwallenberg337 8 months ago

    Do you have an example like this with multiple x-values or features?

  • @im4485
    @im4485 8 months ago

    This guy is old, young, sleepy and awake all at the same time.

  • @hameedhhameed1996
    @hameedhhameed1996 1 year ago

    It is such a fantastic explanation of Linear Regression. My question is, is there any possibility that we can't obtain the inverse of matrix X?

    • @Dataquestio
      @Dataquestio  1 year ago +2

      Hi Hameed - yes, some matrices are singular, and cannot be inverted. This happens when columns or rows are linear combinations of each other. In those cases, ridge regression is a good alternative. Here is a ridge regression explanation - czcams.com/video/mpuKSovz9xM/video.html .
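
For the curious, ridge regression is a one-line change to the normal equation: adding a penalty to the diagonal makes the matrix invertible even when columns are linearly dependent. A sketch, assuming X and y as in the video, with alpha as a tunable penalty strength:

```python
# Ridge sketch: X^T X + alpha*I is invertible even if X^T X is singular.
import numpy as np

alpha = 1.0                       # penalty strength (tune via validation)
p = X.shape[1]
B_ridge = np.linalg.inv(X.T @ X + alpha * np.eye(p)) @ X.T @ y
```

(In practice the intercept column is usually left unpenalized; this sketch penalizes everything for brevity.)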

  • @josuecurtonavarro8979

    Hi guys! Very interesting indeed! There is one thing I don't understand though. The identity matrix, as you mentioned, behaves like one in matrix multiplication when you multiply it with a matrix of the same size. But in this precise case (around 13:08) the matrix B doesn't have the same size. So how come you can eliminate the identity matrix from the equation here? Thanks!

    • @Dataquestio
      @Dataquestio  1 year ago

      Hi Josué - I shouldn't have said "of the same size". Multiplying the identity matrix by another matrix behaves like normal matrix multiplication. So if the identity matrix (I) is 2x2, and you multiply by a 2x1 matrix B, you end up with a 2x1 matrix (equal to B).
      The number of columns in the first matrix you multiply has to match the number of rows in the second matrix. And the final matrix has the same row count as the first matrix, and the same column count as the second matrix.
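
A quick numpy check of those shapes:

```python
# A 2x2 identity times a 2x1 matrix returns the 2x1 matrix unchanged.
import numpy as np

I = np.eye(2)                    # 2x2 identity matrix
B = np.array([[3.0], [5.0]])     # 2x1 matrix
print(I @ B)                     # [[3.] [5.]] -- equal to B
```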

  • @bomidilakshmimadhavan9501

    Can you please make a video demonstrating the multivariate regression analysis with the following information taken into consideration?
    Performs multiple linear regression trend analysis of an arbitrary time series. OPTIONAL: error analysis for regression coefficients (uses standard multivariate noise model).
    Form of general regression trend model used in this procedure (t = time index = 0,1,2,3,...,N-1):
    T(t)=ALPHA(t) + BETA(t)*t + GAMMA(t)*QBO(t) + DELTA(t)*SOLAR(t) + EPS1(t)*EXTRA1(t) + EPS2(t)*EXTRA2(t) + RESIDUAL_FIT(t),
    where ALPHA represents the 12-month seasonal fit, BETA is the 12-month seasonal trend coefficient, RESIDUAL_FIT(t) represents the error time series, and GAMMA, DELTA, EPS1, and EPS2 are 12-month coefficients corresponding to the ozone driving quantities QBO (quasi-biennial oscillation), SOLAR (solar-UV proxy), and proxies EXTRA1 and EXTRA2 (for example, these latter two might be ENSO, vorticity, geopotential heights, or temperature), respectively.
    The general model above assumes simple linear relationships between T(t) and the surrogates, which is hopefully valid as a first approximation. Note that for total ozone trends based on chemical species, such as those involving chlorine, the trend term BETA(t)*t could be replaced (ignored by setting m2=0 in the procedure call) with EPS1(t)*EXTRA1(t), where EXTRA1(t) is the chemical proxy time series.
    This procedure assumes the following form for the coefficients (ALPHA, BETA, GAMMA, ...) in an effort to approximate realistic seasonal dependence of the sensitivity between T(t) and each surrogate.
    The expansion shown below is for ALPHA(t) - similar expansions for BETA(t), GAMMA(t), DELTA(t), EPS1(t), and EPS2(t):
    ALPHA(t) = A0
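
Dropping the 12-month seasonal expansion, the constant-coefficient core of that model is an ordinary multiple regression. A minimal sketch, assuming equal-length numpy arrays T, qbo, solar, extra1, and extra2 for the time series and its proxies:

```python
# Constant-coefficient simplification of the trend model above
# (no seasonal expansion of ALPHA, BETA, etc.).
import numpy as np

t = np.arange(len(T), dtype=float)    # time index 0, 1, ..., N-1
X = np.column_stack([np.ones_like(t), t, qbo, solar, extra1, extra2])

# Least squares gives [ALPHA, BETA, GAMMA, DELTA, EPS1, EPS2].
coef, *_ = np.linalg.lstsq(X, T, rcond=None)
residual_fit = T - X @ coef
```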

  • @iamgarriTech
    @iamgarriTech 7 months ago

    Why do we need to add those "1"s when solving the matrix?

  • @borutamena8207
    @borutamena8207 1 year ago

    Thanks, sir

  • @sunilnavadia6347
    @sunilnavadia6347 1 year ago

    Hi Team... Very well explained Linear Regression from scratch... Do you have any video for Ridge Regression from Scratch using Python?

    • @Dataquestio
      @Dataquestio  1 year ago +4

      Hi Sunil - we don't. I'll look into doing ridge regression in a future video! -Vik

    • @sunilnavadia8203
      @sunilnavadia8203 1 year ago

      @@Dataquestio Thank you

    • @manyes7577
      @manyes7577 1 year ago

      @@Dataquestio thanks you are awesome

    • @abidson690
      @abidson690 1 year ago

      @@Dataquestio thanks

  • @yousif533
    @yousif533 1 year ago +1

    Thank you for this video. Could you please share the ppt slides of this lesson?

    • @Dataquestio
      @Dataquestio  1 year ago

      Hi Yousif - this was done using video animations, so there aren't any PowerPoint slides, unfortunately. -Vik

  • @sunilnavadia8203
    @sunilnavadia8203 1 year ago

    In the predictions we got values like 0.24, -1.6, -1.39, so can you explain whether -1.6 medals is valid? Or do I need to use another dataset for regression, like house price prediction? Can you suggest a dataset to which I can apply ridge regression?

    • @Dataquestio
      @Dataquestio  1 year ago

      Hi Sunil - with the way linear regression works, you can get numbers that don't make sense with the dataset. The best thing to do is to truncate the range (anything below 0 gets set to 0). Other algorithms that don't make assumptions about linearity can avoid this problem (like decision trees, k-nn, etc).
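
In code, that truncation is one line, assuming a numpy array of predictions:

```python
# Floor negative medal predictions at zero, as the reply describes.
import numpy as np

predictions = np.clip(predictions, 0, None)
```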

    • @sunilnavadia8203
      @sunilnavadia8203 1 year ago

      @@Dataquestio Thank you for your message. Since the predictions for this data are decimals (and medal counts are whole numbers), do you have any suggestions for another dataset where I could make predictions that make sense using ridge regression?

  • @gabijakielaite3179
    @gabijakielaite3179 1 year ago

    I am wondering, is it okay to have a model which predicts that a country will receive a negative number of medals? Isn't that just impossible?

    • @Dataquestio
      @Dataquestio  1 year ago +1

      This is one of the weaknesses of linear regression. Due to the y-intercept term, you can get predictions that don't make sense in the real world. An easy solution is to replace negative predictions with 0.

  • @adityakakade9172
    @adityakakade9172 6 months ago

    I'd rather use statsmodels than this method, which makes things complex.

  • @oluwamuyiwaakerele4287

    I guess another question I have is how to invert a matrix

    • @Dataquestio
      @Dataquestio  1 year ago +1

      Hi Oluwamuyiwa - there are a few ways to invert a matrix. The easiest to do by hand is Gaussian elimination - en.wikipedia.org/wiki/Gaussian_elimination . That said, there isn't a lot of benefit to knowing how to invert a matrix by hand, so I wouldn't worry too much about it.
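
In practice you would let numpy do it; and for the regression itself, solving the linear system is more numerically stable than forming the inverse explicitly. A sketch:

```python
# Inverting with numpy (raises LinAlgError if the matrix is singular).
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
A_inv = np.linalg.inv(A)

# For the regression coefficients, prefer solving the system directly:
# B = np.linalg.solve(X.T @ X, X.T @ y)
```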

  • @HIEUHUYNHUC
    @HIEUHUYNHUC 6 months ago

    Sorry, teacher. I guess you confused SSR with SSE: R2 = 1 - (SSE/SST) = SSR/SST.
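
The clash comes from two naming conventions: some texts use SSR for the residual sum of squares, others for the regression (explained) sum of squares, which is what the commenter means. Under the latter convention, a sketch assuming y and predictions from the video are in scope:

```python
# SSE = error (residual) sum of squares, SSR = regression (explained)
# sum of squares, SST = total. With an intercept term, SST = SSR + SSE.
import numpy as np

sse = ((y - predictions) ** 2).sum()
ssr = ((predictions - y.mean()) ** 2).sum()
sst = ((y - y.mean()) ** 2).sum()

r_squared = 1 - sse / sst      # equivalently ssr / sst
```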