12a: Neural Nets

  • Added September 10, 2024
  • *NOTE: These videos were recorded in Fall 2015 to update the Neural Nets portion of the class.
    MIT 6.034 Artificial Intelligence, Fall 2010
    View the complete course: ocw.mit.edu/6-0...
    Instructor: Patrick Winston
    In this video, Prof. Winston introduces neural nets and back propagation.
    License: Creative Commons BY-NC-SA
    More information at ocw.mit.edu/terms
    More courses at ocw.mit.edu

Comments • 279

  • @tommytan8571 • 4 years ago +232

    Rest in peace, professor. He died in 2019; let us remember him by watching this again and again.

  • @OttoFazzl • 8 years ago +283

    This professor is amazing! His explanation of SVMs was one of the best and clearest I could find on the Internet.

    • @gaurav63105 • 7 years ago +24

      I also started with SVMs and then decided to see his other lectures; he's so crisp.

    • @alexm5914 • 7 years ago +3

      I'm watching SVMs right now, and I think I might do that too...

    • @binoruv • 7 years ago +2

      Me too!!!

    • @magnumalba • 3 years ago +6

      It is not "This Professor". It is one of the fathers of AI.

    • @ankitasahoo668 • 3 years ago +1

      I too agree

  • @ahmedmoneim9964 • 7 years ago +167

    Thanks MIT for making these lectures publicly available; it is simply great!!

    • @vinayreddy8683 • 6 years ago +6

      Ahmed AbdelMounem, don't build a bomb on the basis of this lecture.

    • @StingBolt • 3 years ago +4

      @@vinayreddy8683 I wonder how idiots like you came here.

  • @NeuralxAi • 5 years ago +40

    I am from a village in Kashmir. We don't have teachers who can explain things at this level, and I totally depend on these great teachers at MIT. Lots of love, sir. I wish I could get you subscribers from my whole university. I can only say thank you so much for the quality education.

  • @muhammadhamzahm1204 • 5 years ago +69

    May you rest in peace, Professor Patrick! You're a giant in the field of machine learning. These lectures of yours are the biggest asset that beginners can use to climb.
    Thanks

  • @juliogodel • 8 years ago +32

    This is just great, MIT. How I wish you could upload all classes from Prof. Winston. I could keep watching them for days. Clear and straight to the point. Marvelous!

  • @dr.mikeybee • 7 years ago +9

    I really like this course. When a professor understands the material, it can be clearly explained, and Professor Winston really understands the material.

  • @psrajoria • 2 years ago +5

    "All great ideas are simple. How come there aren't more of them? Well, because frequently, that simplicity involves finding a couple of tricks and making a couple of observations.
    So usually, we humans hardly ever go beyond one trick or one observation. But if you cascade a few together, sometimes something miraculous falls out that looks in retrospect extremely simple." - Prof. Winston

  • @soulysouly7253 • 3 years ago +3

    Holy shit, everything is so clear.
    I also frickin love when he explains very simply why we use that one specific function, why we square this, why we divide that, where that coefficient comes from, etc... and it all makes so much more sense than the gibberish written on the slides that I have to decipher every lecture.

  • @kutilkol • 4 years ago +11

    8:55 disclaimer: there also exist neurons connected directly, without synaptic gaps, as proposed by Camillo Golgi. So both Cajal and Golgi were right.
    RIP Prof. Winston. Beautiful classes, thank you sir.

  • @willroman3595 • 2 years ago +1

    We live in such an awesome time that this information is available to everyone, free of charge.

  • @bradjones2071 • 3 years ago +3

    I agree. Everyone always assumed MIT professors would just lose you with their intelligence and not be able to connect with the average layperson, but that is an incorrect assumption. I can basically understand a lot of what he's talking about and am glad for the video.

  • @balllaktomas • 6 years ago +16

    It's sad that at our school we had a lecture on this and I was lost, but I think the teacher was too. And then this guy comes along with all elegance and no arrogance, providing this information and sharing it with people around the world. WELL PLAYED.

  • @sharifk9860 • 3 years ago +3

    What an amazing lecture! I have seen many neural network lectures. This one is by far the most comprehensive and easy to understand. I instantly fell in love with Prof. Winston. I hope he is now teaching God and his angels.

  • @EranM • 3 years ago +3

    Patrick writing on the blackboard is ASMR to my ears :>

  • @adityanarendra5886 • 2 years ago +2

    Prof. Winston, your explanations of AI have always fascinated me and inspired me into the field. Rest in peace, professor.

  • @RobBarter • 3 years ago +2

    Just happened upon this YouTube video and began watching it, as I have a passing interest in neural networks... then realised I recognised his name. Looked up and pulled down a book I bought back in 1992 (not opened in years): Artificial Intelligence by Patrick Henry Winston. Sorry to hear we've lost him.

  • @danielfernandes1010 • 1 month ago

    Oh my, that ending! That's the most beautiful thing I've heard today.

  • @OhhBabyATriple • 8 years ago +50

    Winston is the best AI lecturer

  • @ryanalopez • 2 years ago +2

    Good, in-depth mathematical explanation of neural net components. If new to learning about neural nets, I'd recommend first watching a few other videos covering the overall design goals of neural nets, how they work at a high level, and the outputs they are trying to achieve before jumping into the mathematical models used to describe errors and performance.

  • @jvwdigital • 7 years ago +5

    2 years later and this is still a great lecture. Amazing instructor. I actually watched the whole thing. Simple ideas only take a quarter century to find. We humans need to make more observations, put them together, and see what shakes out.

  • @bohusb.6879 • 3 years ago +5

    This professor is amazing. His lectures are so clear, and at the same time he goes really deep. Very well structured lectures.

  • @xXxBladeStormxXx • 8 years ago +31

    To think that as recently as 2010 they thought neural nets weren't worth spending much time on, and that now the instructor, I'm guessing, felt compelled to update even the OCW playlist to include these videos, should give everyone an idea of how good a time it is to be studying these topics.
    In the course of just a few years, deep neural nets have become extremely relevant again. It's indeed a great time to be studying artificial neural networks.

    • @limitless1692 • 7 years ago +1

      We are at the start of the AI age.
      Being first here is an edge.

    • @user-ol2gx6of4g • 7 years ago +5

      We've been at the start of the AI age since the 1950s.

    • @MrAlipatik • 6 years ago

      Wake me up when they create tiny computers on a chip that can calculate simultaneously, and all hell breaks loose.

  • @jvanrs4928 • 3 years ago +2

    Thanks MIT, initiatives like this can truly spark innovation

  • @Nestorghh • 7 years ago +5

    World-class professor and lecture.

  • @bidhanmajhi • 5 years ago +12

    He explained it very well. Sadly he's no more. RIP.

  • @maffixwilliam5471 • 2 years ago

    Thanks MIT for making this lecture public. The lecturer explained the concepts in a way that makes them crystal clear. Thanks. Btw, RIP to the lecturer; he did an honorable thing for the world, and I am benefiting from his work. Thanks again to him and MIT. Keep up the great work, please.

  • @irazt • 4 years ago +3

    I wish I could have taken these courses in person. Thank you for sharing your knowledge with the world, professor.

  • @Ludiusvox • 5 years ago +2

    Right now I am studying a Lexus ES350 air conditioning system, and neural networks are part of the A/C controls. Since I couldn't find any resources on it at school, this lecture is very useful. I might add, the MATLAB Deep Learning Toolbox is useful also.

  • @redaelhail9877 • 4 years ago +2

    Thank you, Professor Patrick! You had an extraordinarily simple explanation for complex principles!
    Thank you MIT for sharing this incredible content.

  • @OnionKnight541 • 1 year ago

    That was fantastic. At the end, he says this miracle was a consequence of two tricks plus an observation. And all great ideas are simple and easy to overlook.

  • @JG_1998 • 1 year ago +1

    Rest in Power Dr. Winston.

  • @thevirginmarty9738 • 8 years ago +400

    Awesome course. Someday I will use this to build a robot girlfriend. Thank you!

    • @robl4836 • 8 years ago +25

      You need a Robot first before you can build it a Girlfriend ;)

    • @23Ather • 7 years ago +44

      You need both the robot and the girlfriend to find the minimum of the cost function. (robot - girlfriend)^2 ;)

    • @koushik7604 • 7 years ago +2

      :)

    • @vijayd8634 • 7 years ago +8

      Funny, the cost will be half of it!

    • @peterkay7458 • 7 years ago +9

      When you get it working please make the CAD files available online. PLEAZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ jk

  • @backpropalgo • 8 months ago

    Amazing content. I miss real blackboards like this. I have to admit the prof looked to be struggling a bit. I heard he passed away, so I would just say thank you for a really great session that I have shared with everyone in my own circle who had questions about how the foundations of modern AI work.

  • @nikre • 3 years ago +2

    What a privilege to be a student in this class.

  • @jonelya • 2 years ago

    29:26 The best ever explanation of the chain rule. Thank you so much.

  • @ragy1986 • 11 months ago

    It's the best video on NNs on YouTube, bar none!

  • @qzorn4440 • 8 years ago +3

    A very relaxing lecture; this makes me think of deep learning programs. Thanks.

  • @tuha3524 • 2 years ago

    Yes, yes, absolutely agree with the professor: we "hardly ever go beyond one trick or one observation."

  • @AlwaniAkber • 6 years ago +1

    Though I am not good at math, a few of the explanations really made sense. Great professor and video.

  • @yusuferoglu9287 • 5 years ago +4

    RIP Sir!

  • @mikeschmit6474 • 7 years ago +5

    Just a minor correction at 4 minutes:
    that is a ring-tailed lemur, not a Madagascar cat.

  • @radsimu • 8 years ago +3

    This nicely explains some of the mathematical decisions behind NN models. Really good stuff!

    • @AdrianVrabie • 7 years ago

      Hey Radu! I dunno what you are referring to when you say "mathematical decisions", but I agree that it's awesome stuff! Btw, you've also done some nice stuff with NLP in Romanian! :) You should contact me and give me the code in Java; maybe I can continue in my free time and do some stuff too! Kudos to you in advance! :) (what a small world!)

    • @radsimu • 7 years ago +1

      Haha :). Will upload it all on GitHub some day. Need to make it more tidy first. Will keep you posted.

    • @AdrianVrabie • 7 years ago

      Radu Simionescu, please add me on Facebook, I can't find you. Adrian Vrabie

  • @rustycherkas8229 • 2 years ago

    Great lecture! Lucid with moments of humour and humanity.
    Thanks MIT.

  • @SharathPunreddy • 5 years ago +2

    Loved it, thank you very much for making complex things so simple.

  • @montserratcano2389 • 6 years ago +3

    Thanks for sharing MIT! Excellent teacher!

  • @maoqiutong • 6 years ago +2

    Between 46:00 and 49:00: dynamic programming also uses a similar concept to avoid exponential blowup. Maybe backpropagation is also a kind of dynamic programming.
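
    The analogy is apt. Below is a minimal sketch of that idea (hypothetical names, assuming sigmoid units and the lecture's performance function P = -1/2 (d - z)^2): backprop caches each layer's delta and reuses it one layer upstream, exactly the memoization move of dynamic programming, instead of summing over every weight-to-output path.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def forward(weights, x):
    """Forward pass; keep every activation for reuse during backprop."""
    activations = [x]
    for W in weights:
        activations.append(sigmoid(W @ activations[-1]))
    return activations

def backprop_deltas(weights, activations, d):
    """Per-layer deltas, back to front. Naively, dP/dw for a weight k
    layers from the output sums over every path to the output
    (exponential in depth); caching each layer's delta -- the dynamic
    programming step -- makes the whole sweep linear in depth."""
    z = activations[-1]
    deltas = [(d - z) * z * (1.0 - z)]                     # output layer
    for W, a in zip(reversed(weights[1:]), reversed(activations[1:-1])):
        deltas.append((W.T @ deltas[-1]) * a * (1.0 - a))  # reuse downstream delta
    return list(reversed(deltas))
```

    The gradient for `weights[l]` is then `deltas[l] @ activations[l].T`: each delta is computed once and used many times, which is the exponential-to-quadratic observation made at the end of the lecture.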

  • @mdtowhidurrahman8406 • 3 years ago +4

    I am not sure if it's just me or if others feel the same after the pandemic: I feel disturbed and lose focus as soon as the students start coughing in the background. The pandemic left us with a mental phobia.

    • @MICKEYISLOWD • 3 years ago

      Go look at climate change if you really want mental phobias! It's shocking. The acceleration of change is scary as fuck. Just 10 yrs from now and economies will begin falling.

  • @nitinsiwach1989 • 7 years ago +1

    At 41:00: starting off with the weights being the same would not necessarily mean they remain the same. It would if they were in the same layer, but here the neurons are not. Am I missing something?
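
    That reading matches the usual symmetry argument. A small numeric check (an entirely hypothetical setup, not from the lecture: one input, a hidden layer of two units, every weight started at the same value) shows that units in the same layer stay clones of each other under backprop, while weights in different layers drift apart freely:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# One input, two hidden units, one output; every weight starts at 0.5.
W1 = np.full((2, 1), 0.5)            # hidden layer (two units, equal weights)
W2 = np.full((1, 2), 0.5)            # output layer
x = np.array([[1.0]]); d = np.array([[1.0]]); rate = 0.5

for _ in range(100):                 # hill-climb on P = -1/2 (d - z)^2
    h = sigmoid(W1 @ x)              # identical hidden activations
    z = sigmoid(W2 @ h)
    delta_out = (d - z) * z * (1 - z)
    delta_hid = (W2.T @ delta_out) * h * (1 - h)
    W2 += rate * delta_out @ h.T
    W1 += rate * delta_hid @ x.T

print(W1.ravel())   # the two hidden weights are still identical to each other,
print(W2.ravel())   # while W1 and W2 (different layers) have moved apart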

  • @strings1984 • 4 years ago +1

    Seems like in the biological model the hill climbing is done by the physical architecture and the pull on the axon path by the surrounding associated stimuli; the added advantage of this pull is that it lets us know where to head when the solution isn't fitting the question.

  • @mathhack8647 • 2 years ago

    @26:05, I like this philosophy. RIP dear Winston. Your courses are still used by students and perpetual learners, like me, all over the world. May God have mercy on you and reward you as much as you benefited your students and all of humanity.

  • @daniyalali6016 • 7 years ago +1

    Learned a lot about neural nets from this video course.

  • @shatandv • 8 years ago +2

    I'm loving this course

  • @hassananwer3674 • 4 years ago +5

    50:02 "All great ideas are simple"

    • @Yomama4536 • 3 years ago +1

      But not all simple ideas are great...

  • @alv2648 • 7 years ago +1

    At 4:10 it seems he misspoke about the misclassified examples from Geoffrey Hinton's U Toronto NN. It appears the right answers (aka labels) are shaded red (the second choice for the first two photos). Labels are set by the researcher for the training set, so they chose cherry instead of dalmatian in picture #3.

  • @KaiyuZheng • 7 years ago +5

    I don't quite get the last point: the computation with respect to width is w^2 (width squared). Can someone explain?

    • @Dennis4Videos • 5 years ago +1

      1 year late, but to whom it may concern: it is because you can cross-link the neurons, hence w^2.
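
      Concretely (my reading of the lecture's claim, not a quote): with $w$ neurons in one layer each feeding $w$ neurons in the next, the connections, and therefore the weights and the work, in that one stage number

      $$w \times w = w^2,$$

      so the cost of a forward or backward sweep grows linearly with depth but quadratically with width.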

  • @benjaminhardisty66 • 8 years ago +8

    Sweet lecture! This stuff finally makes some good intuitive sense ;)

  • @drewlaino • 5 years ago +1

    That P at 16:35 was amazing...

  • @tsvisabo731 • 1 year ago

    What an awesome teacher

  • @chetjuall2269 • 7 years ago +2

    Great ending, beginning at 50:00.

  • @SuperMaDBrothers • 1 year ago

    Amazing lecture; good points at the end on simplicity.

  • @devonallary5251 • 7 years ago +2

    @24:30, shouldn't the weight for w0 be 1 instead of -1? Then, as long as the sum of the other inputs is greater than 0, they will always pass the threshold, since w0 + SUM(rest) >= T => SUM(rest) >= T - w0 => SUM(rest) >= 0.

    • @andrii5054 • 4 years ago

      I agree, thought the same thing
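
      For readers puzzling over the sign: what the trick needs is an extra product $w_0 x_0$ that contributes $-T$, so that comparing against the threshold becomes comparing against zero. With the extra input clamped to $x_0 = -1$ and $w_0 = T$ (the lecture's convention, as I read it),

      $$\sum_{i \ge 1} w_i x_i \ge T \iff \sum_{i \ge 1} w_i x_i + w_0 x_0 \ge 0,$$

      and choosing $(w_0, x_0) = (-T, 1)$ works just as well; the disagreement in this thread is only over that convention.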

  • @ibadurrahman5954 • 5 years ago +2

    Thanks for this lecture; it was amazing.

  • @dostoguven • 8 years ago +3

    Amazing teacher.

  • @michaelredenti2054 • 5 years ago +1

    The fact that the derivative of the sigmoid can be written exclusively in terms of the sigmoid itself is not that surprising, since the sigmoid is built from the exponential function, whose derivative is itself.
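
    For reference, the identity behind this observation takes two lines. With $\sigma(x) = 1/(1 + e^{-x})$:

    $$\sigma'(x) = \frac{e^{-x}}{(1+e^{-x})^2} = \frac{1}{1+e^{-x}} \cdot \frac{e^{-x}}{1+e^{-x}} = \sigma(x)\,\bigl(1 - \sigma(x)\bigr),$$

    which is why, in the lecture, every derivative along the chain can be written from values the forward pass has already computed.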

  • @fraollemecha • 2 years ago

    Awesome course. Someday I will use this to build a program that writes programs.

  • @TheZudork • 6 years ago +1

    Thank you for this amazing class!

  • @gianluke • 6 years ago +11

    Some clarifications:
    1) It's not true that, prior to the 2012 ImageNet success, neural nets had not been used in practice. As an example, LeNet-5 was deployed in the late 90s to recognize ZIP codes.
    2) The 2012 ImageNet ConvNet paper is authored (in order) by two students of Hinton, Krizhevsky and Sutskever, and Hinton himself. It was Alex Krizhevsky who implemented and trained the network (in his room). Maybe we should stop attributing every credit to the famous professors of the case.
    3) The problem with the step function is not the non-differentiability at 0. That's practically irrelevant. Indeed, even the most common activation function of today (the rectifier, aka ReLU) is non-differentiable at 0. The problem with step functions is that the derivative is equal to 0 everywhere (except at 0, where it's not differentiable). So gradient descent cannot be used.
    4) Nobody was getting rid of the thresholds; it's just rewriting the same function in a different form. In modern terms, the threshold is now called "bias". And the so-called "bias trick" to "hide" the bias inside the matrix multiplication is just a notational convenience. The point here is just replacing the step activation function with another one that is (still) differentiable almost everywhere AND has non-zero derivatives in some parts of the domain.
    (Edited after a comment pointed out a mistake)

    • @asdfasdfuhf • 6 years ago

      Wtf, this lecture is based on a lie

    • @An-wd9kk • 5 years ago +1

      Umm, just one point in your argument: the ReLU IS continuous but NOT differentiable at one point, while the step function IS BOTH discontinuous and non-differentiable at the same point.

    • @gianluke • 5 years ago

      @@An-wd9kk Right. I will update the comment. Thank you :)

    • @Briefklammer1 • 5 years ago

      Hi sboby, you seem pretty familiar with neural nets. I have a question about backprop. I understand that we want to minimize our error function, and therefore we calculate the partial derivatives with respect to the weights W_1, ..., W_n. My question is: how do we use stochastic gradient descent to find the best weights? Is it like you explained at 21:23?
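
      Since the question is left hanging in the thread, here is a minimal sketch of the stochastic part (hypothetical names, not from the lecture: a single sigmoid neuron, gradient ascent on the lecture's P = -1/2 (d - z)^2, one randomly chosen training sample per step):

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def grad(w, x, d):
    """dP/dw_i for one sigmoid neuron: (d - z) z (1 - z) x_i."""
    z = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    delta = (d - z) * z * (1 - z)
    return [delta * xi for xi in x]

def sgd(w, samples, rate=0.5, steps=5000):
    """Each step estimates the gradient from ONE random sample --
    that is the 'stochastic' part -- and climbs the hill a little."""
    for _ in range(steps):
        x, d = random.choice(samples)
        w = [wi + rate * gi for wi, gi in zip(w, grad(w, x, d))]
    return w

# Toy use: learn OR, with a bias input clamped to -1 (the threshold trick).
data = [((-1, 0, 0), 0), ((-1, 0, 1), 1), ((-1, 1, 0), 1), ((-1, 1, 1), 1)]
weights = sgd([0.0, 0.0, 0.0], data)
```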

  • @tuha3524 • 2 years ago

    I love this course so, so much. Excellent!!

  • @stumbling • 6 years ago +3

    Is it just me or is the sound low on this?

  • @perrydeng6960 • 5 years ago +1

    Backpropagation starts at 26:25

  • @chuvaca189 • 3 years ago

    Thanks MIT, with the collaboration of Finis Terrae :D

  • @5hawnK3lly • 3 years ago +1

    Really impressive drawing skills, I must say.

  • @avawinters6184 • 3 years ago

    OK, amazing lesson and all, but where do I get one of these chalkboards?

  • @heri_prieto • 7 years ago +2

    This was beautiful.

  • @i890ola • 3 years ago

    Thanks from Syria 🇸🇾

  • @prateek5069 • 4 years ago +1

    At 22:28, which function is not continuous?

    • @andrii5054 • 4 years ago +1

      f(x,w,t) because of a step activation function (on the left bottom board), and thus the cost function P, because P(w,t) = ||d - f(x, w, t)||

  • @myroseaccount • 3 years ago

    This wasn't overlooked; it was buried by Marvin Minsky's 1969 book Perceptrons.

  • @Anand_Agrawal • 1 year ago

    This is art

  • @tthtlc • 8 years ago +4

    You mentioned 2010 as the year when NNs were nearly dumped. I took an AI course in 1990, and by the end of 1990 I had convinced myself that the whole idea was too probabilistic and unlikely to show much intelligence superiority, preferring the algorithmic approach instead, and I subsequently gave up the subject totally. Well, I was wrong. :-)!!!

    • @rustycherkas8229 • 2 years ago

      You think you've got problems?
      I was the SysAdm at the UofT during the late '80s who set up Geoffrey Hinton's terminal in his office, and, not knowing any better, turned and asked if he needed any 'training' on how to send/receive emails...
      How was I to know that he'd become the "grandfather of AI"???
      *sob*

  • @somaprasadsahoo2446 • 1 year ago

    How did the performance function become -1/2 (d-z)^2? 28:08
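
    Answering from the lecture's own conventions: the minus sign turns the squared error into a performance to be maximized (it peaks at 0 when $z = d$), and the $1/2$ is pure convenience; it cancels the 2 produced by the power rule:

    $$P = -\tfrac{1}{2}\,(d - z)^2 \quad\Longrightarrow\quad \frac{\partial P}{\partial z} = d - z.$$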

  • @dnyaneshwardarade6120 • 4 years ago +4

    I only dream of sitting there and watching the professor

  • @_bobbejaan • 7 years ago

    The problem I have is that if in = 0, then the weight on that input does not change, because its weight change depends on its input: (∂ sigmoid_in / ∂w) = in, where in = 0. I think weights should change if there is an error. But if out = 1 and in = 0, then w1 does not change.
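
    The behaviour described here falls straight out of the chain rule. For a weight $w_1$ on input $x_1$ of a sigmoid neuron with output $z$,

    $$\frac{\partial P}{\partial w_1} = \underbrace{(d - z)\,z\,(1 - z)}_{\text{error term}} \cdot x_1,$$

    so $x_1 = 0$ forces the update to zero no matter how large the error. That is one reason for an always-on bias input (clamped to $-1$ in the lecture's threshold trick, as I read it): its weight can still move even when every real input is zero.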

  • @zekeanthony • 6 years ago

    Superb, Prof. Winston.

  • @bendev6807 • 4 years ago

    Great lecture. Enjoyed it a lot. RIP Prof Winston.

  • @abhi1092 • 8 years ago +3

    Is this a Graduate level or Undergraduate level course?

    • @mitocw • 8 years ago +1

      +abhi1092 This is an undergraduate-level course. See the course on MIT OpenCourseWare for more information and materials at ocw.mit.edu/6-034F10

  • @keskinaytac • 7 years ago

    Thank you for the subtitles.

  • @aoweishen3496 • 8 years ago +12

    Can you please build a full playlist of this course? Because it's really good, but I don't know how to find the rest of the course. Thank you!

    • @mitocw • 8 years ago +35

      Here is the complete playlist: czcams.com/play/PLUl4u3cNGP63gFHB6xb-kVBiQHYe_4hSi.html

  • @kabal127 • 4 years ago

    Best course ever

  • @Jirayu.Kaewprateep • 3 years ago

    From his example, how much initial random value create BETTER results since too wide create time approx because approach algorithms or because time widely scope⁉️

  • @brambeer5591 • 6 years ago +3

    Cool guy, awesome lecture!

  • @cagmz • 8 years ago +6

    Does anyone know where the 1/2 comes from at 28:00?

    • @ThomasFauskanger • 8 years ago +12

      I think it's just to make the derivative nicer. He uses the derivative at 33:30, and it's just d-z and not 2(d-z) as it would've been otherwise.
      I think one of his points in other videos is that it's about mathematical convenience. The performance function is arbitrary and can be adjusted to "be nice".

    • @PullingEnterprises • 6 years ago +1

      It's however long you want your approximation step length to be. That is, if the optimization function is -1/2 then every step you'll reduce how far you were off (d-z) by half. If it was 1/3 then our approximation would be dividing the off distance by three and traveling just that far. The (d-z) term is how much you were off from the right result, and the -1/2 is just the step size to adjust (iteratively) until your gradient descent is within a threshold to give you the outputs you want while training your network.

    • @WhoForgot2Flush • 6 years ago +3

      Makes taking the derivative easier. You don't need it, you'll get the same result it just makes the math easier.

  • @michaelsu4253 • 1 year ago

    22:55 "Sadly in Harvard" in 1974 gave us the answers. This makes my day😂

  • @gauravstud • 6 years ago +1

    Can someone post the pre-reading and prerequisites for this course?

    • @mitocw • 6 years ago +1

      For course information and materials, see the course on MIT OpenCourseWare at: ocw.mit.edu/6-034F10.

  • @neurolife77 • 3 years ago

    13:45 As a neuroscience student, I confirm this statement ;)

  • @trevorjones2095 • 3 years ago

    Is Conway's Game of Life hard to do with neural nets?

  • @fulliculli • 7 years ago +2

    Awesome video content. Just make the sound louder please.

  • @josetzo5599 • 3 years ago

    Thanks Yanine :)

  • @IvandroidYT • 2 years ago

    Awesome! He explains really well.

  • @prinzrainerbuyo3234 • 7 years ago

    Here is the complete playlist: czcams.com/play/PLUl4u3cNGP63gFHB6xb-kVBiQHYe_4hSi.html