Calculators were frowned upon in my school days; nice to see you using the right tool for the job!
I have my 4th year civil engineering exam 30 mins from now and you just saved my fucking semester!
Very interesting video. Funnily enough, I actually had the same calculator in high school, even though my school is half a world away (Germany compared to Australia).
Globalisation really brings us together.
We still use this same calculator in Pakistan :)
Excellent as always
This was really helpful
Does anybody know the name of the blonde girl in the bottom right corner
creep
@@xMdb ah I see, it was your mom
👍🏻
Hi Eddie, can we work together? I am also a math teacher
How would I do this by hand, without a calculator?
There's a set of formulas for it, but it would take a good couple of hours if you have a lot of data points.
You minimize the Sum of Squared Residuals (often also referred to as the Sum of Squared Errors).
The underlying idea is that every data point has some distance to the line of best fit, and that distance should be minimized. We square the residuals so that negative and positive deviations (between the data point and the prediction of the line of best fit) are treated equally.
The SSR (sometimes SE or SSE) is simply the sum of all those squared residuals. You can minimize it by hand by taking the first-order derivatives, setting them to zero, and solving for a and b.
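Setting those derivatives to zero gives the usual closed-form formulas for the slope and intercept. Here's a minimal sketch in plain Python (the function name and variables are my own, just for illustration), fitting a line y = a + b·x:

```python
# Closed-form least-squares fit, derived by setting the partial
# derivatives of the SSR with respect to a and b to zero:
#   b = sum((x - x_mean) * (y - y_mean)) / sum((x - x_mean)**2)
#   a = y_mean - b * x_mean

def fit_line(xs, ys):
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x
    b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
        / sum((x - x_mean) ** 2 for x in xs)
    # Intercept: the line must pass through the point of means
    a = y_mean - b * x_mean
    return a, b

# Example: points lying exactly on y = 1 + 2x
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)  # 1.0 2.0
```

This is exactly what the calculator's linear-regression mode computes behind the scenes, which is why doing it by hand with many data points takes so long.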
Just search "linear regression" on YouTube. It's also used in machine learning!
What year level is this for?
Looking at the syllabus, it seems to be under year 12 "Statistical analysis 2" in Mathematics Advanced.
MA-S2 Descriptive Statistics and Bivariate Data Analysis (page 21/75 of the mathematics advanced syllabus)