Nando de Freitas
  • 69 videos
  • 4,071,464 views

Videos

Deep Learning Lecture 16: Reinforcement learning and neuro-dynamic programming
120K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford.
Deep Learning Lecture 14: Karol Gregor on Variational Autoencoders and Image Generation
49K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford. Guest Lecture by Karol Gregor of Google Deepmind.
Deep Learning Lecture 13: Alex Graves on Hallucination with RNNs
52K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford. Guest lecture by Alex Graves of Google Deepmind, who gives a new meaning to "the dreams in which I'm dying are the best I ever had"
Deep Learning Lecture 12: Recurrent Neural Nets and LSTMs
127K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford.
Deep Learning Lecture 11: Max-margin learning, transfer and memory networks
30K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford.
Deep Learning Lecture 10: Convolutional Neural Networks
154K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford.
Deep Learning Lecture 9: Neural networks and modular design in Torch
44K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford.
Deep Learning Lecture 7: Logistic regression, a Torch approach
38K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford.
Deep Learning Lecture 8: Modular back-propagation, logistic regression and Torch
39K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford.
Deep Learning Lecture 5: Regularization, model complexity and data complexity (part 2)
35K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford.
Deep Learning Lecture 6: Optimization
47K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford.
Deep Learning Lecture 4: Regularization, model complexity and data complexity (part 1)
47K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford.
Deep Learning Lecture 3: Maximum likelihood and information
79K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford.
Deep Learning Lecture 2: linear models
98K views · 9 years ago
Slides available at: www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/ Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford.
Deep Learning Lecture 1: Introduction
347K views · 9 years ago
Machine learning - Markov chain Monte Carlo (MCMC) II
35K views · 11 years ago
Machine learning - Importance sampling and MCMC I
86K views · 11 years ago
Machine learning - Deep learning II, the Google autoencoders and dropout
24K views · 11 years ago
Machine learning - Deep learning I
33K views · 11 years ago
Machine learning - Neural networks
31K views · 11 years ago
Machine learning - Logistic regression
39K views · 11 years ago
Machine learning - Unconstrained optimization
17K views · 11 years ago
Machine learning - Random forests applications
18K views · 11 years ago
Machine learning - Random forests
238K views · 11 years ago
Machine learning - Decision trees
221K views · 11 years ago
Machine learning - Bayesian optimization and multi-armed bandits
131K views · 11 years ago
Machine learning - Gaussian processes
92K views · 11 years ago
Machine learning - Introduction to Gaussian processes
296K views · 11 years ago
Machine learning - Bayesian learning part 2
24K views · 11 years ago

Comments

  • @AnilKumarnn · 12 days ago

    Best lecture on GPs. Complement it with examples in GPT or Claude.

  • @TheTacticalDood · 16 days ago

    This is amazing. Thanks so much!

  • @aa-vb3vh · 26 days ago

    59:55 Why does the prediction for new data x_* calculate P(y|x_*, D, σ) instead of (x_*)^T Θ?
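
    (For context, a textbook note rather than a transcription from the video: in Bayesian linear regression the prediction marginalizes over the posterior on Θ instead of plugging in a point estimate,

        p(y_* \mid x_*, D, \sigma) = \int p(y_* \mid x_*, \theta, \sigma)\, p(\theta \mid D, \sigma)\, d\theta

    Its mean agrees with (x_*)^T E[Θ|D], but the integral also carries the predictive variance, which a plug-in point estimate discards.)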

  • @user-nr3ej2ud5j · 3 months ago

    Isn't the right-side formula at 22:19 for x1|x2, not for x2|x1?
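
    (For reference, the standard Gaussian conditioning identity that several comments in this thread debate; this is the textbook result, not a transcription of the slide:

        x_1 \mid x_2 \sim \mathcal{N}\!\left( \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (x_2 - \mu_2),\; \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21} \right)

    Conditioning on x_2 updates the distribution of x_1, so the mean carrying the subscript 12 belongs to x_1 given x_2.)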

  • @Sheriff_Schlong · 4 months ago

    At 1:02:40 I knew this teacher was a legend. 11 years later and I'm still able to gain much valuable knowledge from these lectures!

  • @crestz1 · 5 months ago

    Beautifully linked the idea of maximising likelihood by illustrating the 'green line' @ 51:41.

  • @crestz1 · 5 months ago

    Amazing lecturer

  • @forughghadamyari8281 · 6 months ago

    Hi. Thanks for the wonderful videos. Please recommend a book to study for this course.

  • @ratfuk9340 · 6 months ago

    Thank you for this

  • @bottomupengineering · 7 months ago

    Great explanation and pace. Very legit.

  • @terrynichols-noaafederal9537

    For the noisy GP case, we assume the noise is sigma^2 times the identity matrix, which assumes iid noise. What if the noise is correlated? Can we incorporate the true covariance matrix?
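
    (A textbook note, not from the lecture: the noisy-GP marginal only requires a valid noise covariance, so a known correlated-noise matrix Σ_n can stand in for the iid term:

        y \sim \mathcal{N}\!\left(0,\; K + \Sigma_n\right), \qquad \Sigma_n = \sigma^2 I \ \text{recovering the iid case}

    The usual posterior formulas then go through with K + Σ_n in place of K + σ²I.)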

  • @m0tivati0n71 · 7 months ago

    Still great in 2023

  • @huuducdo143 · 7 months ago

    Hello Nando, thank you for your excellent course. Following the bell example, the mu12 and sigma12 you wrote should be for the case where we are given X2=x2 and try to find the distribution of X1 given X2=x2. Am I correct? Other understandings are welcome. Thanks a lot!

  • @newbie8051 · 8 months ago

    It amazes me that people were discussing these topics when I was studying the water cycle lol.

  • @ScieLab · 9 months ago

    Hi Nando, is it possible to access the code that you mentioned in the lecture?

  • @S25plus · 9 months ago

    Thanks Prof. de Freitas, this is extremely helpful.

  • @TheDeatheater3 · 10 months ago

    super good

  • @marcyaudrey6608 · 11 months ago

    This lecture is amazing Professor. From the bottom of my heart, I say thank you.

  • @bodwiser100 · 1 year ago

    One thing that remained confusing for me for a long time, and which I don't think he clarified in the video, was that N and the summation from i = 1 to i = N do not refer to the number of data points in our dataset but to the number of times we run the Monte Carlo simulation.
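
    (A minimal sketch of the distinction, with hypothetical names; here N is the number of Monte Carlo draws, independent of any dataset size:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 10_000                      # number of Monte Carlo samples, not data points
        samples = rng.normal(size=N)    # draw x_i ~ N(0, 1)
        estimate = np.mean(samples**2)  # (1/N) * sum_i f(x_i), with f(x) = x^2
        print(estimate)                 # approximates E[X^2] = 1

    Increasing N shrinks the Monte Carlo error at a 1/sqrt(N) rate, regardless of how much data the underlying model was fit on.)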

  • @truongdang8790 · 1 year ago

    Amazing example!

  • @guliyevshahriyar · 1 year ago

    Thank you very much.

  • @bingtingwu8620 · 1 year ago

    Thanks!!! Easy to understand👍👍👍

  • @subtlethingsinlife · 1 year ago

    He is a hidden gem. I have gone through a lot of his videos; they are great in terms of removing jargon and bringing clarity.

  • @fuat7775 · 1 year ago

    This is absolutely the best explanation of the Gaussian!

  • @nikolamarkovic9906 · 1 year ago

    49:40, p. 46

  • @el-ostada5849 · 1 year ago

    Thank you for everything you have given to us.

  • @charlescoult · 1 year ago

    This was an excellent lecture. Thank you.

  • @learner-long-life · 1 year ago

    Great lecture, abrupt ending. I believe this is the short (but dense) book on decision forests by Criminisi that was mentioned: www.microsoft.com/en-us/research/wp-content/uploads/2016/02/CriminisiForests_FoundTrends_2011.pdf

  • @chenqu773 · 1 year ago

    It looks like the axis notation in the graph on the right side of the presentation, at around 20:39, is not correct. It should probably be x1 on the x-axis; i.e., it would make sense if μ12 referred to the mean of variable x1 rather than x2, judging from the equation shown on the next slide.

  • @kianbehdad · 2 years ago

    You can only "die" once. That is how I remember that "die" is singular :D

  • @hohinng8644 · 2 years ago

    The use of notation at 23:00 is confusing to me.

  • @rikki146 · 2 years ago

    Learning advanced ML concepts for free! What a time to be alive. Thanks a lot for the vid!

  • @marouanbelhaj7881 · 2 years ago

    To this day, I keep coming back to your videos to refresh ML concepts. Your courses are a Masterpiece!

  • @emmanuelonyekaezeoba6346

    Very elaborate and simple presentation. Thank you.

  • @Gouda_travels · 2 years ago

    This is when it got really interesting, 22:02: typically, I'm given points and I am trying to learn the mus and the sigmas.

  • @MrStudent1978 · 2 years ago

    1:12:24 What is mu(x)? Is that different from mu?

  • @ahmed_mohammed_1 · 2 years ago

    I wish I had discovered your courses a bit earlier.

  • @adamtran5747 · 2 years ago

    Love the content. <3

  • @michaelcao9483 · 2 years ago

    Thank you! Really great explanation!!!

  • @augustasheimbirkeland4496

    5 minutes in and it's already better than all 3 hours in class earlier today!

  • @truptimohanty9386 · 2 years ago

    This is the best video for understanding Bayesian optimization. It would be a great help if you could post a video on multi-objective Bayesian optimization, specifically on expected hypervolume improvement. Thank you!

  • @htetnaing007 · 2 years ago

    Don't stop sharing this knowledge, for it is vital to the progress of humankind!

  • @jeffreycliff922 · 2 years ago

    Access to the source code to do this would be useful.

  • @gottlobfreige1075 · 2 years ago

    So, basically, it's partial derivatives?

  • @gottlobfreige1075 · 2 years ago

    I don't understand; it's basically a lot of derivatives within the layers, correct?
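
    (Roughly, yes: backpropagation is the chain rule for partial derivatives applied layer by layer. A minimal sketch with a hypothetical two-layer scalar network, not code from the course:

        import numpy as np

        # Forward pass: y = w2 * tanh(w1 * x)
        x, w1, w2 = 0.5, 1.2, -0.7
        h = np.tanh(w1 * x)             # hidden activation
        y = w2 * h                      # output

        # Backward pass: each gradient is a product of local partial derivatives
        dy_dw2 = h                      # dy/dw2
        dh_dw1 = (1 - h**2) * x         # dh/dw1, using tanh'(z) = 1 - tanh(z)^2
        dy_dw1 = w2 * dh_dw1            # dy/dw1 = dy/dh * dh/dw1 (chain rule)

        print(dy_dw1, dy_dw2)

    Deep learning frameworks automate exactly this bookkeeping across many layers.)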

  • @jx4864 · 2 years ago

    After 30 minutes, I am sure that he is a top-10 teacher in my life.

  • @cicik57 · 2 years ago

    The best way to explain the gamma function is that it is a continuous factorial. You should point out that the P(theta) you write is a probability DENSITY function here.
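
    (The identity the comment appeals to, stated precisely; a standard definition, not from the lecture:

        \Gamma(x) = \int_0^\infty t^{x-1} e^{-t}\, dt, \qquad \Gamma(n) = (n-1)! \ \text{for integers } n \ge 1

    So Γ interpolates the factorial continuously, which is why it appears in the normalizing constants of Beta and Dirichlet densities.)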

  • @jhn-nt · 2 years ago

    Great lecture!

  • @gottlobfreige1075 · 2 years ago

    How do you understand the math part in depth? Anyone? Help me!

  • @xinking2644 · 2 years ago

    Is there a mistake at 21:58? Should it be conditioned on x1 instead of x2?