Machine learning - Introduction to Gaussian processes

  • added Feb 3, 2013
  • Introduction to Gaussian process regression.
    Slides available at: www.cs.ubc.ca/~...
    Course taught in 2013 at UBC by Nando de Freitas

Comments • 159

  • @maratkopytjuk3490 • 8 years ago +93

    Thank you, I tried to understand GPs via papers, but only you could help me build up an understanding of the idea. It's great that you took the time to explain the Gaussian distribution and the important operations! You're the best!

    • @MrEdnz • 2 years ago +5

      Learning a new subject from papers indeed isn't very helpful :) They expect you to already understand the basic principles of GPs. However, lectures like these, or books, start with the basic principles💪🏻

  • @augustasheimbirkeland4496 • 2 years ago +10

    5 minutes in and it's already better than all 3 hours of class earlier today!

  • @sourabmangrulkar9105 • 4 years ago +4

    The way you started from the basics and built on them to explain Gaussian Processes is very easy to understand. Thank you :)

  • @fuat7775 • a year ago +2

    This is absolutely the best explanation of the Gaussian!

  • @life99f • 2 years ago +1

    I feel so fortunate to have found this video. It's like walking in a fog and finally being able to see things clearly.

  • @daesoolee1083 • 2 years ago +1

    The best tutorial for GP among all the materials I've checked.

  • @SijinSheung • 5 years ago +10

    This lecture is so amazing! The hand-drawing part is really helpful for building up intuition regarding GPs. This is a life-saving video for my finals. Many thanks!

  • @erlendlangseth4672 • 6 years ago +9

    Thanks, this helped me a lot. By the time you got to the hour mark, you had covered sufficient ground for me to finally understand Gaussian processes!

  • @MattyHild • 5 years ago +12

    FYI, the notation at 22:05 is wrong: since he selected an x1 to condition on, he should be computing mu_{2|1}, but he is computing mu_{1|2}.

  • @user-oc5gk7yn6o • 4 years ago

    I've watched so many lectures to understand Gaussian processes. You are the only one so far who I think can make me understand it. Thanks a lot man

  • @ziangxu7751 • 3 years ago +2

    What an amazing lecture. It is much clearer than lectures taught in my university.

  • @sarnathk1946 • 6 years ago +8

    This is indeed an awesome lecture! I liked the way the complexity is slowly built up over the lecture. Thank you very much!

  • @HarpreetSingh-ke2zk • 2 years ago

    I started learning about multivariate Gaussian processes in 2011, but it's terrible that I just got to this video when 2021 is ending.
    He explained things in a way that even a layperson could grasp.
    He first explains the meaning of the concepts, followed by an example/data, and last the theoretical representation. Typically, mathematics presenters/writers avoid using data to provide examples.
    I'm always on the lookout for lectures like these, where the theoretical understanding is demonstrated through examples or data.
    Often the concepts themselves are not difficult to grasp, but the presenter/writer makes us go deep to unpack complex notation without providing any examples.

  • @akshayc113 • 9 years ago +34

    Thanks a lot Prof. Just a minor correction for the people following the lectures. You made a mistake while writing out the formulae at 22:10:
    you were writing out the mean and variance of P(X1|X2), whereas the diagram was for finding P(X2|X1). Since this is symmetric, you can get them by appropriate replacements, but just letting slightly confused people know.

    • @charlsmartel • 8 years ago +2

      +akshayc113 I think all that should change is the formula for the given graphs. It should read:
      mu_{2|1} = mu_2 + sigma_{21} sigma_{11}^{-1} (x_1 - mu_1). Everything else can stay the same.

    • @tobiaspahlberg1506 • 8 years ago +2

      I think he actually meant to draw x_1 where x_2 is in the diagram. This switch would agree with the KPM formulae on the next slide.
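      For reference, the partitioned-Gaussian conditioning identities being discussed in this thread (a standard result; the subscript labelling follows the thread, not necessarily the slide):

      $$
      \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix},\; \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \right)
      \;\Rightarrow\;
      p(x_2 \mid x_1) = \mathcal{N}\big(\mu_{2|1},\, \Sigma_{2|1}\big),
      $$
      $$
      \mu_{2|1} = \mu_2 + \Sigma_{21}\Sigma_{11}^{-1}(x_1 - \mu_1), \qquad
      \Sigma_{2|1} = \Sigma_{22} - \Sigma_{21}\Sigma_{11}^{-1}\Sigma_{12}.
      $$

      Swapping the roles of x_1 and x_2 gives the mu_{1|2} form that appears on the slide.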

  • @KhariSecario • 2 years ago +1

    Here I am in 2021, yet your explanation is the easiest one to understand from all the sources I gathered! Thank you very much 😍

    • @matej6418 • a year ago

      me in 2023, still the same

  • @francescocanonaco5988 • 5 years ago

    I tried to understand GPs via blog articles, papers and a lot of videos. Best video ever on GPs! Thank you!

  • @Ricky-Noll • 3 years ago

    One of the all-time best videos on YouTube

  • @marcyaudrey6608 • 10 months ago

    This lecture is amazing Professor. From the bottom of my heart, I say thank you.

  • @malharjajoo7393 • 4 years ago +1

    Basic summary of the lecture video:
    1) Recap of the multivariate Normal/Gaussian distribution (MVN), with some info on conditional probability.
    2) How sampling can be done from univariate/multivariate Gaussian distributions (see the sketch below).
    3) 39:00 - Introduction to Gaussian Processes (GP).
    It is important to note that a GP is considered a Bayesian non-parametric approach/model.
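    As a quick illustration of point 2 above, a minimal sketch of sampling from a multivariate Gaussian via its Cholesky factor (the mean and covariance below are made-up values for illustration only):

    import numpy as np

    np.random.seed(0)

    # Illustrative target distribution N(mu, Sigma)
    mu = np.array([0.0, 1.0])
    Sigma = np.array([[1.0, 0.8],
                      [0.8, 1.0]])

    # Sample z ~ N(0, I), then transform: x = mu + L z, where L L^T = Sigma
    L = np.linalg.cholesky(Sigma)
    z = np.random.randn(2, 1000)      # 1000 independent standard-normal draws
    x = mu[:, None] + L @ z           # each column is one sample from N(mu, Sigma)

    print(np.cov(x))                  # empirical covariance, close to Sigma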

  • @turkey343434 • 4 years ago +10

    Gaussian processes start at 1:01:15

  • @dennisdoerrich3743 • 6 years ago +2

    Wow, you saved my life with this genius lecture! GPs are a pretty abstract idea, and it's nice that you walk one through them from scratch!

  • @Gouda_travels • 2 years ago

    After one hour of smooth explanation, he says "and this brings us to Gaussian processes" :)

  • @LynN-he7he • 4 years ago

    Thank you, thank you, thank you!! I was stuck on a homework problem, still figuring out what it means to be a testing vs. training data set and how they play a role in the Gaussian kernel function. I had been stuck for the last 3 days, and your video from about the 45-minute to 1-hour mark made the lightbulb go off!

  • @bluestar2253 • 3 years ago

    One of the best teachers in ML out there!

  • @jx4864 • 2 years ago

    After 30 minutes, I am sure that he is a top-10 teacher in my life

  • @MB-pt8hi • 6 years ago +1

    Very good lecture, full of intuitive examples which deepen the understanding. Thanks a lot

  • @DistortedV12 • 5 years ago +1

    Finally! This is gold for beginners like me! Thank you Nando!! Saw you on the committee at the MIT defense, great questions!

  • @huitanmao5267 • 7 years ago +1

    Very clear lectures! Thanks for making them publicly available!

  • @pradeepprabakarravindran615

    Thank you! Your videos are so much more awesome than any ML lecture series I have seen so far! -- Grad Student from CMU

  • @bottomupengineering • 6 months ago

    Great explanation and pace. Very legit.

  • @DanielRodriguez-or7sk • 4 years ago +1

    Thank you so much Professor De Freitas. What a clear explanation of GP

  • @richardbrown2565 • 3 years ago +1

    Great explanation. I wish that the title mentioned that it was part one of two, so that I would have known it was going to take twice as long.

  • @jingjingjiang6403 • 6 years ago

    Thank you for sharing this wonderful lecture! Gaussian processes were so confusing when they were taught at my university. Now it is crystal clear!

  • @taygunkekec9616 • 10 years ago

    Very clearly explained. The dependencies for learning the framework are concisely and incrementally given, while details that make the framework harder to understand are carefully avoided (you will understand what I mean if you try to dig through Rasmussen's book on GPs).

  • @dwhdai • 4 years ago +3

    wow, this is probably the best lecture I've ever watched. on any topic.

  • @xingtongliu1636 • 5 years ago

    This becomes very easy to understand with your thorough explanation. Thank you very much!

  • @emrecck • 3 years ago

    That was a great lecture, Mr. de Freitas, thank you very very much!
    I watched it to study for my Computational Biology course, and it really helped.

  • @sanjanavijayshankar5508

    Brilliant lecture. One could not have taught GPs better.

  • @jinghuizhong • 9 years ago

    The lecture is quite clear and it inspires me about the key ideas of Gaussian processes.
    Many thanks!

  • @woo-jinchokim6441 • 7 years ago +1

    By far the best-structured lecture on Gaussian processes. Love it :D

  • @oliverxie9559 • 3 years ago

    Really great video for reading Gaussian Processes for Machine Learning!

  • @adrianaculebro9176 • 5 years ago

    Finally understood how this idea is explained and applied using mathematical language

  • @dieg3005 • 8 years ago +1

    Thank you very much Prof. de Freitas, excellent introduction

  • @austenscruggs8726 • 2 years ago

    This is an amazing video! Clear and digestible.

  • @sak02010 • 5 years ago +1

    Thanks a lot, Prof. Very clean and easy-to-understand explanation.

  • @Jacob011 • 10 years ago

    Absolutely superb lecture! Everything is clearly explained even with source code.

  • @saminebagheri4175 • 7 years ago +7

    amazing lecture.

  • @malharjajoo7393 • 4 years ago +2

    1:04:08 - It would be good to emphasize that the test set is actually used for generating the prior ... I had a hard time making sense of it because the test set is usually provided separately (but in this case we are generating it!!)

  • @heyjianjing • 3 years ago +1

    Around 56:00, I don't think we should omit the conditioning sign on mu*: it is conditioned on f, i.e. E(f*|f), not E(f*); otherwise the expected value of f* alone would just be zero.
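    For context, the noise-free conditional this comment refers to (the standard GP regression identities with a zero prior mean; notation assumed):

    $$
    \mathbb{E}[f_* \mid f] = K_*^{\top} K^{-1} f, \qquad
    \operatorname{cov}(f_* \mid f) = K_{**} - K_*^{\top} K^{-1} K_*,
    $$

    where K = k(X, X), K_* = k(X, X_*) and K_{**} = k(X_*, X_*). Unconditionally, E[f_*] = 0, which is exactly the commenter's point.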

  • @niqodea • 5 years ago +1

    BEAST MODE teaching

  • @kiliandervaux6675 • 3 years ago

    Thank you so much for this amazing lecture. I wanted to applaud at the end but I realised I was in front of my computer.

  • @quantum01010101 • 4 years ago

    That is clear and flows naturally. Thank you very much.

  • @darthyzhu5767 • 8 years ago +1

    really clear and comprehensive. thanks so much.

  • @pankayarajpathmanathan7009

    The best lecture on Gaussian processes

  • @sumantamukherjee1952 • 9 years ago

    Lucidly explained. Great video

  • @maudentable • 3 years ago

    a master doing his work

  • @pattiknuth4822 • 3 years ago

    Extremely good lecture. Well done.

  • @crestz1 • 4 months ago

    Amazing lecturer

  • @dracleirbag5838 • 2 years ago

    I like the way you teach

  • @tospines • 5 years ago +3

    I think I got the essence of GPs, but what I cannot understand is why we take the mean to be 0 when clearly it is not 0. I mean, if we suppose that f* is distributed as a Gaussian with mean 0, the expected value of f* must be 0. Could anyone explain this to me?

    • @oskarkeurulainen6414 • 5 years ago +1

      0 is only the mean for the prior for f*. When we know values of other variables that are correlated with f*, then we actually want to consider the mean when f* is conditioned on the other observed variables. Compare with the ellipse in the beginning with x1 and x2, both have mean 0 but if we observe one of them to be positive, the other one is also likely to be positive and thus has a positive conditional expectation.

  • @philwebb59 • 2 years ago

    1:05:58 Analog computers existed way before the first digital circuits. A WWII vintage electrical analog computer, for example, consisted of banks of op amps, configured as integrators and differentiators.

  • @bingtingwu8620 • a year ago

    Thanks!!! Easy to understand👍👍👍

  • @terrynichols-noaafederal9537

    For the noisy GP case, we assume the noise is sigma^2 * the identity matrix, which assumes i.i.d. noise. What if the noise is correlated? Can we incorporate the true covariance matrix?
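    A sketch of how that could work (my own note, not covered in the lecture): if the noise is Gaussian with a known covariance Sigma_n rather than sigma^2 I, it simply takes the place of sigma^2 I in the predictive equations:

    $$
    y = f + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, \Sigma_n)
    \;\Rightarrow\;
    \bar{f}_* = K_*^{\top} (K + \Sigma_n)^{-1} y, \qquad
    \operatorname{cov}(f_*) = K_{**} - K_*^{\top} (K + \Sigma_n)^{-1} K_*.
    $$

    The harder part in practice is knowing or estimating Sigma_n.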

  • @Raven-bi3xn • 3 years ago

    Am I correct to think that the "f" notation at 30:30 is not the same "f" as at 1:01:30? In the latter case, each f consists of all the 50 f distributions that are exemplified in the former case?
    If that understanding is correct, then in sampling from the GP, each sample is a 50-by-1 vector from the 50-D multivariate Gaussian distribution. This 50-by-1 vector is what Dr. Nando refers to as a "distribution over functions".
    In other words, given the definition of a stochastic process as "indexed random variables", each random variable of the GP is drawn from a multivariate Gaussian distribution. In that viewpoint, each "indexed" random variable is a function at 1:01:30.
    This lecture from 2013 is truly an amazing resource.
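    A minimal sketch of that reading (assuming a squared-exponential kernel and an arbitrary grid of 50 test points; each column of f_prior is one 50-by-1 draw, i.e. one sampled "function"):

    import numpy as np

    def sq_exp_kernel(a, b, length_scale=1.0):
        # Squared-exponential (RBF) covariance between two sets of 1-D inputs
        d2 = (a[:, None] - b[None, :]) ** 2
        return np.exp(-0.5 * d2 / length_scale**2)

    n = 50
    X_test = np.linspace(-5, 5, n)                     # the 50 index points
    K_ss = sq_exp_kernel(X_test, X_test)               # 50 x 50 prior covariance

    # Each draw f = L z with z ~ N(0, I) is one function evaluated at the 50 points
    L = np.linalg.cholesky(K_ss + 1e-6 * np.eye(n))    # jitter for numerical stability
    f_prior = L @ np.random.randn(n, 3)                # shape (50, 3): three sampled functions
    print(f_prior.shape)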

  • @jhn-nt • 2 years ago

    Great lecture!

  • @katerinapapadaki4810 • 5 years ago +1

    Thanks for the helpful lecture!
    The only thing I want to point out is that if you put labels on the axes of your plots, it would be more helpful for the listener to understand from the beginning what you are describing

  • @chenqu773 • a year ago

    It looks like the notation on the axis of the graph on the right side of the presentation, at around 20:39, is not correct. It should probably be x1 on the x-axis, i.e. it would make sense if μ_{1|2} referred to the mean of variable x1 rather than x2, judging from the equation shown on the next slide.

  • @EbrahimLPatel • 8 years ago

    Excellent introduction to the subject! Thank you :)

  • @JadtheProdigy • 5 years ago +1

    Can someone explain why f is distributed with mean 0?

  • @afish3356 • 4 years ago

    An extremely good lecture! Thank you for recording this :) :)

  • @kevinzhang4692 • 2 years ago

    Thank you! It is a wonderful lecture

  • @huuducdo143 • 7 months ago

    Hello Nando, thank you for your excellent course.
    Following the bell example, the mu_{1|2} and sigma_{1|2} you wrote should be for the case where we are given X2=x2 and try to find the distribution of X1 given X2=x2. Am I correct?
    Other understandings are welcome. Thanks a lot!

  • @rsilveira79 • 6 years ago

    Awesome lecture, very well explained!

  • @deephazarika2259 • 6 years ago +2

    When estimating 'f', why is each point treated as a separate dimension and not as different points in the same dimension?

    • @malekebadi9805 • 4 years ago

      As far as I understood, Gaussian process (regression) serves two purposes: refining the prior (and posterior) and predicting the response for new points. If you collect new observations for the same points, you are refining the posterior, and if you extend to a point in a new dimension, you're predicting. In the former case, the confidence interval between two points remains relatively fat. Querying points in new dimensions (given that practically you can do that) squeezes the confidence interval. Theoretically, it doesn't matter, I guess. Think of an experiment in which you keep x the same in every iteration but read different y's. Think of another experiment in which your x values change from one iteration to another and you receive y's. From the GP point of view, both are the same.

  • @JaysonSunshine • 7 years ago +2

    Correct me if I am wrong, but isn't the whole cluster of examples starting at 36:35 flawed? Nando shows three points in a single dimension: x1, x2, x3 and their corresponding f-values: f1, f2, f3. It seems these points are three samples from a univariate normal distribution with a scalar variance, rather than what he shows, i.e. a vector from R^3 with a 3x3 covariance matrix.

    • @JaysonSunshine • 7 years ago

      On further reflection, perhaps you're doing a non-parametric approach in which you assign a Gaussian per point...
      ...since the distribution you're forming is empirical, it seems it would be more precise to say the mean vector of the f-distribution is [f1, f2, f3], yes?

    • @DESYAAR • 6 years ago

      I agree. That took me a while as well.

  • @GiiWiiDii • 4 years ago +2

    23:56 That would be nice, thanks!

  • @user-ym7rp9pf6y • 3 years ago

    Awesome explanation. thanks

  • @swarnendusekharghosh9539

    Thank you, sir, for a clear explanation

  • @ratfuk9340 • 5 months ago

    Thank you for this

  • @ho4040 • 2 years ago

    Holy shit...what a good lecture

  • @xinking2644 • 2 years ago

    Is there a mistake at 21:58? It should be conditioned on x1 instead of x2?

  • @kambizrakhshan3248 • 3 years ago

    Thank you!

  • @yunlongsong7618 • 4 years ago

    Great lecture. Thanks.

  • @dhruv385 • 5 years ago

    Wow! Great Lecture!

  • @itai19 • 3 years ago

    Thanks for the lecture. I have a problem with the discussion around 11:00 - from my understanding, the spherical case does represent some correlation between X and Y, as X is a sub-component of the max-radius calculation, meaning a larger x leads to smaller possible values of y (or at least lower probability for higher values). In other words, the covariance can be approximated by something like E[x*sqrt(r^2-x^2)]. Are we saying that ends up being zero, i.e. that correlation is unable to express such a dependency?
    My intuition currently says a square would express 0 correlation.
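    A short way to see why it ends up being exactly zero (my own note, stated for any spherically symmetric density rather than just the Gaussian on the slide):

    $$
    p(x, y) = g(x^2 + y^2) \;\Rightarrow\;
    \mathbb{E}[XY] = \iint x\, y\, g(x^2 + y^2)\, dx\, dy = 0,
    $$

    because the integrand is odd in x for every fixed y. Since E[X] = E[Y] = 0 as well, cov(X, Y) = 0 even in cases (e.g. uniform on a disc) where X and Y are clearly dependent; correlation only captures linear dependence. For the spherical Gaussian in the lecture, X and Y are in fact independent.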

  • @redberries8039 • 3 years ago

    This was a good explanation.

  • @yousufhussain9530 • 8 years ago

    Amazing lecture!

  • @homtom2 • 9 years ago

    This helped me so much! Thanks!

  • @ahaaha8462 • 4 years ago

    amazing lecture, thanks a lot

  • @haunted2097 • 10 years ago

    well done! Very intuitive!

  • @user-nr3ej2ud5j • 2 months ago

    Isn't the right-side formula at 22:19 for x1|x2, not for x2|x1?

  • @gourv7ghoshal • 6 years ago

    Thank you for sharing this video, it was really helpful

  • @brianstampe7056 • 4 years ago

    Very helpful. Thanks!

  • @liamdavey8726 • 6 years ago

    Great Teacher! Thanks!

  • @AlqGo • 7 years ago +7

    39:55 Your function seems to be exponential, but the mean is assumed to be 0... that's a really confusing example, Prof.

    • @heyjianjing • 3 years ago

      My understanding is that at 39:55, the mean refers to the prior information about the mean of f. Without any information, before seeing any data, zero is not a bad prior for the mean. Once you see the data, the mean is updated per the equations at 56:00; there, the posterior mean is no longer zero.

  • @KristoferPettersson • 5 years ago +1

    If I run the example code I get an error stating that my K_ isn't a positive-definite matrix. What am I doing wrong?
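    A common cause and fix (assuming the example code calls np.linalg.cholesky on the kernel matrix K_): rounding error can make a theoretically positive-definite kernel matrix numerically semi-definite, and a tiny diagonal "jitter" usually restores positive definiteness. A sketch:

    import numpy as np

    def safe_cholesky(K, jitter=1e-6):
        # Add a tiny ridge to the diagonal so a numerically semi-definite
        # kernel matrix passes numpy's positive-definite check.
        return np.linalg.cholesky(K + jitter * np.eye(K.shape[0]))

    # Example: a squared-exponential kernel matrix that is PD in theory
    # but can be borderline in floating point
    X = np.linspace(-5, 5, 50)
    K_ = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
    L = safe_cholesky(K_)     # plain np.linalg.cholesky(K_) may raise LinAlgError here
    print(L.shape)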

  • @bertobertoberto3 • 9 years ago +3

    Round of Applause

  • @xesan555 • 8 years ago

    Nando, you are wonderful...

  • @nichiyanani • 6 years ago

    Great lecture

  • @SimoneIovane • 5 years ago

    Great lesson! Thank you!