At 7:23, the line (linear function) represents the prediction. A point on the line should be labelled "guess", whereas the original label of every example should be labelled "y". [Slip of the marker, maybe.] Nice video though. Desperately waiting for the Layers API video. :)
Ah, shoot, you are absolutely right. Pinning this comment!
Idea: visualise that whiteboard at 7:23-ish as a coding challenge, using p5 to animate whiteboard drawings as an interactive data visual.
#bounty :)
that 'dan from the future' correction was done in good taste, and was very useful (plus entertaining)
very very good way of making sure you cover topics properly :)
these videos are incredible
This is a million times better than commercial courses like Udacity's AI "nanodegree" where they drive around in a self-driving car trying to show you how amazing and successful they are.
I love this channel because it has time travel. Love it when he does that.
There is this pretty cool syntax for defining a variable in tensorflow.js. Instead of wrapping tf.variable around tf.scalar (13:00), we can actually use chaining. For example: tf.variable(tf.scalar(random(1))) is the same as tf.scalar(random(1)).variable(). [As I was reading the documentation while watching this video, I thought I should share this "syntactic sugar".]
Oh, awesome, I did not know this!
Your videos keep getting better. I generally use C++ and C#, but your videos are great for finding fun and useful ideas. Keep it up, man!
SGD changes m and b in small steps in the direction of the gradient. The step size is determined by the learning rate. It’s not random, but the direction of the gradient can change as it learns, so it needs to be recalculated after each time m and b are updated.
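(Not from the video — a minimal plain-JavaScript sketch of the update described above, with made-up data and a hypothetical sgdStep helper. TensorFlow.js computes the gradients automatically; here they are written out by hand for mean squared error, just to show why the gradient must be recalculated after each update.)

```javascript
// Gradient descent for y = m*x + b on mean squared error.
// One step: compute dLoss/dm and dLoss/db, then move m and b
// a small amount (learningRate) against the gradient.
function sgdStep(xs, ys, m, b, learningRate) {
  let dm = 0, db = 0;
  const n = xs.length;
  for (let i = 0; i < n; i++) {
    const error = (m * xs[i] + b) - ys[i]; // guess minus label
    dm += (2 / n) * error * xs[i];         // d(MSE)/dm
    db += (2 / n) * error;                 // d(MSE)/db
  }
  return { m: m - learningRate * dm, b: b - learningRate * db };
}

// Fit points lying exactly on y = 2x + 1.
const xs = [0, 0.25, 0.5, 0.75, 1];
const ys = xs.map(x => 2 * x + 1);
let params = { m: Math.random(), b: Math.random() };
for (let i = 0; i < 5000; i++) {
  // The gradient depends on the current m and b, so it is
  // recalculated on every step, as described above.
  params = sgdStep(xs, ys, params.m, params.b, 0.1);
}
console.log(params.m.toFixed(2), params.b.toFixed(2)); // approaches 2 and 1
```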
I've done many ML courses, but your way of explaining is the best! Congratulations!!!
Hooray ! A coding challenge with tensorflow !
Before even watching the video, just the fact that it's by Daniel makes me feel that I'll finally unlock this whole topic, and in JavaScript. Thanks Daniel, for me the best teacher!
This is out of my range!!!
Wow - thank you so much! Finally found a great introduction to TensorFlow.js and machine learning that I understand and that is fun! I added the loss value as an additional line on the graph :-)
I love you sir. You have my respects. I never took interest in machine learning but after watching the previous videos and this one, I am surely gonna dive more into it. Thank you again.
Finally, a real example with TensorFlow.js.
coool
Yayyy! Keep up the good work Daniel
the future thing was awesome :D
I just realized that TensorFlow is possible with JS. It's very, very amazing!!
omg didn't notice it lasted 44 minutes! It was a blast, good video!
Thanks for your video tutorial, they are great.
great tutorial bro
Excellent! Now we're coding with Power!
THE FUTURE THING WAS JUST AWESOME!!!!!!!!!!!!!!!!!!!
2:25 In fact, that is not a linear relation in reality, because mass grows with volume, and volume grows with the third power of length.
the future dan was awesome lol. i felt like i was really watching a movie. :P
Pro, your coding ability is amazing. You should let the virtual come out into reality: program an Arduino and create an incredible genetic algorithm / neural net evolution robot project.
awesome project :)
Thanks!!
At minute 18, what is the sub() function for? And could this linear regression be done in other ways?
I don't know anything about tensor flow, but are you not able to just get the trained m and b values and draw the line with those?
Do you have a tutorial on how you create your coding environment?
Do you ever get bored of coding? I'm 17, and I think you have as many years of experience as I am old. I'm taking my first baby steps in coding and I love it, but I can't do it for long. How do you keep yourself motivated and curious?
You should do a tutorial on how the future you breaks in in the middle :) That was some Hollywood material :) Like it :))
but a height vs weight function isn't typically linear, and shouldn't be represented with a line, since weight scales with the cube of height.
Great feedback, thank you!
I don't really understand the point of storing the x and y positions of points as numbers from 0 to 1 if we're going to convert them back to their initial form. As in, what is the point of mapping mouseX to 0-1 if we later just convert it back to 0-width?
5:28 when my teacher asks me a question
Unable to draw the line... after writing the point-plotting function.
What are the only five Tensors left with tf.memory().numTensors ? (there's m,b and ...)
I'm entertained.
Why are there 5 tensors instead of just 2 at the end?
Daniel, code a time machine...
tf.variable( obj ) also works as obj.variable()
Thanks for the tip!
Where is the code he used in this video? In which GitHub repo?
Why are you mapping the width and height of the canvas to 0 to 1? How does that make it better? What if I just use the total width and height of the canvas?
WhatTheHACK! I think the minimize function may expect a normalized (between 0 and 1) number, or something like that. I'm not exactly sure why they normalize the canvas numbers, but I think it has to do with how the answer of a neural network is usually a boolean (yes or no) between 0 and 1 (0 being no and 1 being yes... and in between being how sure it is of the answer).
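(Adding a sketch here, not from the video: one practical reason for normalizing is that keeping the data in a fixed 0-1 range means the optimizer's learning rate behaves the same no matter how big the canvas is. The normalize/denormalize helper names below are made up; p5's map() does the same job.)

```javascript
// Hypothetical sketch: normalize mouse coordinates to 0..1 for training,
// then map model output back to pixels for drawing.
const width = 400, height = 400;

const normalize = (v, max) => v / max;    // pixels -> 0..1
const denormalize = (v, max) => v * max;  // 0..1 -> pixels

const mouseX = 300, mouseY = 100;
const x = normalize(mouseX, width);       // 0.75, fed to the model
const y = normalize(mouseY, height);      // 0.25, fed to the model

// A prediction in 0..1 gets converted back before drawing:
const predictedY = 0.5;                   // example model output
console.log(denormalize(predictedY, height)); // 200 (pixel coordinate)
```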
Is it wrong to take the distance from the point to the line as the perpendicular distance? Will it affect the solution in any way? Please answer :)
It is not wrong; it is actually a better approximation, but the math and coding are just harder.
can anyone send me the complete code
optimizer.minimize(() => loss(predict(x_vals), ys));
What does this code actually do? I don't see how it relates to the code below, since const ys is re-declared at the bottom of the draw function. Thx.
it minimizes the loss function using the optimizer's algorithm
With "rmsprop" optimizer, I got 2 lines at the same time...
Why do you create m and b with a random(1) tensor?
like in any project, you need to start somewhere, so he uses some random variable as a starting point
Where is the github page for this? Or am I just blind and it is right there in the description?
I need to do a better job of being more clear where to find the code.
thecodingtrain.com/CodingChallenges/104-linear-regression-tfjs.html
github.com/CodingTrain/website/tree/master/CodingChallenges/CC_104_tf_linear_regression
Thanks Daniel!
I am still confused about why it needs to be squared. Can someone explain it to me?
Are you referring to the squared error? If so, in most math applications you use that to avoid polarity, i.e. you just want the difference between two numbers and you don't care about their sign.
For example, take the two numbers 2 and 5. The difference between them is 3. However, depending on which comes first, your subtraction may give you different results. That is:
2 - 5 = -3 while
5 - 2 = 3
In linear regression, you are only concerned with the 3 (which would be your error for that data point). You are not concerned about the negative or positive sign. So to avoid having to worry about negative values, you usually square them; both -3 and 3 become 9. Eventually you would still be reducing the error that way. Hope that explains it.
@@blasttrash Thanks for the explanation. So even though squaring makes the value 9, it doesn't affect the output that we want? Like, it should be 3, not -3 or 9?
@@rained23JMTi True, I guess it does affect the output a bit, but it only affects the speed at which we find the answer, I guess. I mean, if we took the error as 3, we would probably reduce it to 0 (or close to 0) within, say, 2 iterations. On the other hand, if we took it as 9, it might take 5 iterations. But in the end it doesn't matter, since we will be reducing the error down to zero in either case.
Now I am pretty sure there are some other advantages of squared error (although idk what they are). If you check this wiki link, it talks about bias and variance; I guess that's where the answer lies. However, I do not fully understand those concepts, so I will wait until someone more knowledgeable answers.
en.wikipedia.org/wiki/Mean_squared_error
@blasttrash haha... look at 5 - 2 = 3 and 9 - 12 = -3: if we add them we get 0, but 2 is not a good approximation of 5, nor is 12 of 9. That's just one reason. The second is that the square function is smooth, and when we want to find a (local) min or max, it's easier to work with differentiable functions. And the third reason is that the square function is also convex, which makes optimization much, much easier: the local optimum is global!
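(A tiny runnable check of the cancellation point above — the number pairs 2/5 and 9/12 are taken from this thread; the variable names are my own.)

```javascript
// Errors from the examples above: label 5 vs guess 2 (error 3)
// and label 9 vs guess 12 (error -3).
const errors = [5 - 2, 9 - 12]; // [3, -3]

// Summing raw errors lets opposite signs cancel, hiding a bad fit:
const rawSum = errors.reduce((acc, e) => acc + e, 0);         // 0

// Squaring removes the sign, so every miss counts toward the loss:
const squaredSum = errors.reduce((acc, e) => acc + e * e, 0); // 18

console.log(rawSum, squaredSum);
```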
lol do you have a green laptop on your left? =) 12:20
Yep he does, he uses it to manage his stream, the chat and so on I believe :)
yes, this is correct!
There's something I don't really understand.
I've made a program similar to this, but in Fortran. When I was taught how to do it, all they said was "you just throw in two vectors with your data, and make the program return the two coefficients that define the line". How is this any faster or more efficient than just making the computer operate on arrays? I don't understand why you specifically need to make them tensors (I assume it's because tf functions only accept tensors as arguments). Does implementing the optimizer as a native function make the execution any faster?
0:39 Have you liked your own video ? :D
always 😂
that reminds me of Snape's always from Harry Potter. lol
im confused af
Thank you for your great work. It really helps. I just wanted to share an implementation done with React.js and SVG. github.com/gpietro/reactflow-linear-regression
Nice work! You can submit a link to the coding train website if you like!
github.com/CodingTrain/website/wiki/Community-Contributions-Guide
Are you wearing braces? Your speech changed... you sound like you're wearing braces or have a lisp. lol