NLP Demystified 10: Neural Networks From Scratch
- Added: Aug 2, 2024
- Course playlist: • Natural Language Proce...
Neural networks have led to incredible breakthroughs in all things AI, but at their core, they're pretty simple. In this video, we'll learn how neural networks work and how they "learn". By the end, you'll have a clear understanding of what's happening under the hood.
We'll take a bottom-up approach: starting with simple functions, we'll move on to individual nodes, then see how to compose nodes into full neural networks. We'll then cover how neural networks get better at a task through feedback and a process called backpropagation.
And to really ground our understanding, we'll then build a neural network from scratch in the demo.
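As a taste of the from-scratch approach the demo takes, here's a minimal sketch (illustrative only, not the notebook's actual code) of a single unit trained with gradient descent on a toy OR task, using only NumPy. The task, learning rate, and iteration count are all assumptions for the example:

```python
import numpy as np

def sigmoid(z):
    # Smooth, differentiable non-linearity used by the unit.
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: learn logical OR over two binary inputs (hypothetical example).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights
b = 0.0                  # bias

lr = 1.0
for _ in range(2000):
    z = X @ w + b                 # weighted sum plus bias
    p = sigmoid(z)                # unit's activation (prediction)
    grad_z = (p - y) / len(y)     # gradient of mean cross-entropy loss w.r.t. z
    w -= lr * (X.T @ grad_z)      # feedback flows back into the weights
    b -= lr * grad_z.sum()        # ... and into the bias

print(np.round(sigmoid(X @ w + b)))  # → [0. 1. 1. 1.]
```

Because OR is linearly separable, a single unit suffices here; the full video builds up to multi-unit networks where backpropagation chains these gradients through layers.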
Colab notebook: colab.research.google.com/git...
Timestamps
00:00:00 Neural Networks I
00:00:39 Neural networks learn a function
00:03:34 Why we need a bias
00:04:49 Why we need a non-linearity
00:05:55 The main building block of neural networks
00:09:17 Combining units into neural networks
00:11:08 Neural networks as matrix operations
00:13:51 Neural network setups and loss functions
00:23:45 Backpropagation: Learning to get better
00:33:45 Neural networks search for transformations
00:34:55 DEMO: Building neural networks from scratch
01:09:29 Neural Networks I recap
This video is part of Natural Language Processing Demystified, a free, accessible course on NLP.
Visit www.nlpdemystified.org/ to learn more.
This is a fantastic explainer, and man you've got a great set of pipes. I'm bookmarking this video for the next time someone asks me how neural networks work.
great video so far!
Wow! What a great way of explaining. Truly awesome!
Great explanations!
omggg, kudos to your efforts!!!!! I really wish you had more subscribers
Thanks for your hard work putting out this series of videos
You have taught me more about AI than two semesters of AI courses did... Simplified a lot
Very helpful set of videos. However, it's unclear how the weights determined for one set of input values X1 and the corresponding expected output value Y1 will hold for any other set of input values X2 and their corresponding output value Y2. In your example, the weights computed for inputs x1=2, x2=3 and expected output y=0 may be different for other inputs and expected outputs.
6:17 or a perceptron? I remember my uni teacher did not like the term "neural networks" because it implies that's how biological brains work, but in reality the two have little to do with each other
Yep, I also like getting away from biology-inspired terminology when I can, which is why I prefer "unit" or "node". Regarding "perceptron": at 6:48, I explain the difference between units as we use them today and the classic perceptron from the 1950s (the latter doesn't have a differentiable activation function, which is why I didn't include the term).
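To make the distinction concrete, here's a small sketch (illustrative, not from the video) contrasting the classic perceptron's hard-threshold activation with the smooth sigmoid used in modern units; the example weights and inputs are arbitrary:

```python
import math

def perceptron_unit(x, w, b):
    # Classic 1950s perceptron: hard threshold. Not differentiable at 0,
    # and zero gradient everywhere else, so gradients can't flow.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

def sigmoid_unit(x, w, b):
    # Modern unit: smooth sigmoid activation, differentiable everywhere,
    # which is what makes backpropagation possible.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

w, b = [0.5, -0.4], 0.1
print(perceptron_unit([1.0, 1.0], w, b))  # hard 0/1 output: 1
print(sigmoid_unit([1.0, 1.0], w, b))     # smooth value in (0, 1), ~0.55
```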
Can you send me the slides?