NLP Demystified 10: Neural Networks From Scratch

  • Uploaded Aug 2, 2024
  • Course playlist: • Natural Language Proce...
    Neural Networks have led to incredible breakthroughs in all things AI, but at the core, they're pretty simple. In this video, we'll learn how neural networks work and how they "learn". By the end, you'll have a clear understanding of how neural networks work under the hood.
    We'll take a bottom-up approach starting with simple functions, move on to individual nodes, then how to compose nodes into full neural networks. We'll then cover how neural networks get better at a task through feedback and a process called backpropagation.
    And to really ground our understanding, we'll then build a neural network from scratch in the demo.
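As a taste of the bottom-up approach described above, the video's basic building block — a unit that computes a weighted sum of its inputs, adds a bias, and applies a non-linearity — can be sketched in a few lines. This is an illustrative sketch, not the notebook's code; the sigmoid activation and the sample values are assumptions:

```python
import math

def unit(inputs, weights, bias):
    """One unit: weighted sum of inputs, plus a bias, through a non-linearity (sigmoid here)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# With all-zero weights and bias, z = 0 and sigmoid(0) = 0.5.
print(unit([2.0, 3.0], [0.0, 0.0], 0.0))  # 0.5
```

The bias lets the unit shift its output even when all inputs are zero, and the non-linearity is what lets stacked units model more than straight lines — the two points covered at 03:34 and 04:49.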
    Colab notebook: colab.research.google.com/git...
    Timestamps
    00:00:00 Neural Networks I
    00:00:39 Neural networks learn a function
    00:03:34 Why we need a bias
    00:04:49 Why we need a non-linearity
    00:05:55 The main building block of neural networks
    00:09:17 Combining units into neural networks
    00:11:08 Neural networks as matrix operations
    00:13:51 Neural network setups and loss functions
    00:23:45 Backpropagation: Learning to get better
    00:33:45 Neural networks search for transformations
    00:34:55 DEMO: Building neural networks from scratch
    01:09:29 Neural Networks I recap
    This video is part of Natural Language Processing Demystified, a free, accessible course on NLP.
    Visit www.nlpdemystified.org/ to learn more.
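To make the "matrix operations" and "backpropagation" topics from the timestamps concrete, here is a minimal from-scratch sketch: a hypothetical 2-4-1 sigmoid network trained on XOR with gradient descent. This is not the notebook's actual demo — the architecture, dataset, learning rate, and step count are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic task a single linear layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2 inputs -> 4 hidden units -> 1 output, expressed as matrices.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: each layer is a matrix multiply, a bias add, a non-linearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Mean squared error loss over the four examples.
    loss = np.mean((out - y) ** 2)

    # Backward pass: chain rule applied layer by layer (backpropagation).
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0, keepdims=True)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0, keepdims=True)

    # Gradient descent: nudge each parameter against its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
```

The loop is the feedback process the video describes: compute predictions, measure the error with a loss function, push the error backwards to get per-weight gradients, and take a small step that reduces the loss.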

Comments • 13

  • @uncannyrobot
    @uncannyrobot 2 years ago +11

    This is a fantastic explainer, and man you've got a great set of pipes. I'm bookmarking this video for the next time someone asks me how neural networks work.

  • @joshw3485
    @joshw3485 2 years ago +3

    great video so far!

  • @yogendrashinde473
    @yogendrashinde473 1 year ago +1

    Wow! What a great way of explaining. Truly awesome!

  • @computerscienceitconferenc7375

    Great explanations!

  • @lochanaemandi6405
    @lochanaemandi6405 7 months ago

    omggg, kudos to your efforts!!!!! I really wish you had more subscribers

  • @ts-yr8yz
    @ts-yr8yz 1 year ago

    Thanks for your hard work putting out this series of videos

  • @pipi_delina
    @pipi_delina 1 year ago +1

    You have taught me more about AI than 2 semesters of AI courses... Simplified a lot

  • @SatyaRao-fh4ny
    @SatyaRao-fh4ny 7 months ago

    Very helpful set of videos. However, it is unclear how the weights determined for one set of input values X1 and the corresponding expected output value Y1 will hold for any other set of input values X2 and their corresponding output value Y2. In your example, the weights computed for inputs x1=2, x2=3 and expected output y=0 may be different for any other inputs and expected output.

  • @Engineering_101_
    @Engineering_101_ 2 months ago

    11

  • @rohanofelvenpower5566
    @rohanofelvenpower5566 2 years ago

    6:17 Or a perceptron? I remember my uni teacher did not like the term "neural networks" because it implies that's how biological brains work, but in reality the two have little to do with each other.

    • @futuremojo
      @futuremojo  2 years ago +2

      Yep, I also like getting away from biology-inspired terminology when I can which is why I prefer "unit" or "node". Regarding "perceptron": at 6:48, I explain the difference between units as we use them today and the classic perceptron from the 1950s (the latter doesn't have a differentiable activation function which is why I didn't include the term).

  • @ungminhhoai4510
    @ungminhhoai4510 Před rokem

    Can you send me the slides?