Marc Deisenroth

Videos

4 some extensions
288 views · a year ago

Iterative State Estimation in Non-linear Dynamical Systems Using Approximate Expectation Propagation
494 views · 2 years ago
Video summarizing the TMLR paper, available at openreview.net/forum?id=xyt4wfdo4J

Jackie Kay: Fairness for Unobserved Characteristics
318 views · 3 years ago
Recent advances in algorithmic fairness have largely omitted sexual orientation and gender identity. We explore the concerns of the queer community in privacy, censorship, language, online safety, health, and employment to study the positive and negative effects of artificial intelligence on queer communities. These issues highlight a multiplicity of considerations for fairness research, such a...

Inference in Time Series
1.4K views · 3 years ago

Numerical Integration
986 views · 3 years ago

Gaussian Processes - Part 2
3.8K views · 3 years ago

Bayesian Optimization
17K views · 3 years ago

Gaussian Processes - Part 1
12K views · 3 years ago

Monte Carlo Integration
972 views · 3 years ago

Normalizing Flows
960 views · 3 years ago

Introduction to Integration in Machine Learning
1.6K views · 3 years ago

12 Stochastic Gradient Estimators
1.2K views · 3 years ago
Slides and more information: mml-book.github.io/slopes-expectations.html

05 Normalizing flows
4.5K views · 3 years ago
Slides and more information: mml-book.github.io/slopes-expectations.html

06 Inference in Time Series
801 views · 3 years ago
Slides and more information: mml-book.github.io/slopes-expectations.html

04 Monte Carlo Integration
1.5K views · 3 years ago

03 Numerical Integration
1.5K views · 3 years ago

02 Introduction to Integration
1.5K views · 3 years ago

01 Welcome
2.5K views · 3 years ago

Bayesian Optimization (backup)
567 views · 3 years ago

Projections (video 1): Motivation
927 views · 5 years ago

Inner Products (video 2): Dot Product
1.2K views · 5 years ago

Inner products (video 3): Definition
936 views · 5 years ago

Inner products (video 8): Outro
403 views · 5 years ago

Introduction to the Course
7K views · 5 years ago

Statistics (video 7): Outro
699 views · 5 years ago

Projections (video 5): Example N-dimensional Projections
515 views · 5 years ago

Outro Course
231 views · 5 years ago

Inner Products (video 4): Lengths and Distances, Part 1/2
811 views · 5 years ago

Statistics (video 6): Linear Transformations, Part 2/2
965 views · 5 years ago

Comments

  • @_jojo11 · 12 days ago

    Part 1 and Part 2 are the most comprehensive, well-explained tutorials on GPs I have found. I was struggling to read through the Rasmussen and Williams book, but this has provided me with a lot more ammunition to go ahead and tackle it once again. Thank you for uploading.

  • @user-sm1re8xm5p · a month ago

    Thanks for the explanation. I have one question though: the dimensions of the multivariate normal are orthonormal and hence do not have an ordering, but suddenly we have a 2D graph where one point (= a sample from a specific dimension?) is next to only 2 others, and close points influence each other more than far-away ones. Any help is greatly appreciated...
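In a GP the dimensions are indexed by the real-valued input location x, and the kernel deliberately correlates them, with covariance decaying as the inputs move apart; that ordering is what the 2-D graph shows. A minimal sketch of drawing such correlated samples, assuming an RBF kernel with illustrative parameters (not taken from the video):

```python
import numpy as np
import matplotlib.pyplot as plt

# The "variable index" is just a real-valued input location x.
x = np.linspace(0.0, 5.0, 100)

# RBF kernel: covariance decays with distance between inputs,
# so neighbouring indices are strongly correlated.
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)

# Draw three samples from the 100-dimensional Gaussian N(0, K).
L = np.linalg.cholesky(K + 1e-6 * np.eye(len(x)))
samples = L @ np.random.randn(len(x), 3)

plt.plot(x, samples)  # each curve is one sample, plotted against x
plt.show()
```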

  • @nzambabignoumba445 · a month ago

    Amazing!!

  • @Sheriff_Schlong · 4 months ago

    Very good video. Understandable, not too extensive, and your posing of important conceptual questions gives viewers a chance to pause and try to answer as well! You sound similar to Elon Musk... I'm wondering if maybe you ARE him lol

  • @rajanalexander4949 · 5 months ago

    Incredible lecture; thank you

  • @violetzzzz · 5 months ago

    This is interesting! May I know where I can find the slides?

  • @EmmanuelAyobami-ni2ck · 6 months ago

    Thank you for your insightful explanation of projection in your YouTube video! Your clear and concise approach made understanding the concept so much easier. Grateful for your teaching!

  • @vearchen7939 · 8 months ago

    Thank you! This video is really helpful for me!!

  • @kianacademy7853 · 9 months ago

    The rational quadratic kernel has a |x1-x2|^2 term, not |x1-x2|.
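For reference, the rational quadratic kernel as usually defined (cf. Rasmussen and Williams, Ch. 4) indeed uses the squared distance; the signal-variance prefactor is optional:

```latex
k_{\mathrm{RQ}}(x_1, x_2)
  = \sigma^2 \left(1 + \frac{\lVert x_1 - x_2 \rVert^2}{2\alpha\ell^2}\right)^{-\alpha}
```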

  • @appliedstatistics2043 · 9 months ago

    Does anyone know where to download the slides?

  • @lyian5595 · 10 months ago

    Thanks for the math. Very understandable!

  • @zitafang7888 · 10 months ago

    Thanks for your explanation. May I ask where I can download the slides?

  • @matej6418 · 11 months ago

    elite content

  • @matej6418 · 11 months ago

    Elite content, I've never seen a better explanation of time-series modelling using GPs.

  • @StratosFair · 11 months ago

    Excellent lecture, thank you.

  • @blup737 · 11 months ago

    3:09 How do you get root 12?

  • @forheuristiclifeksh7836

    43:04

  • @forheuristiclifeksh7836

    52:33

  • @fikusznumerikusz5816

    At 58:10, f1, ..., fk and x1, ..., xk should have stars, I guess. At 1:00:59, what is cov(f(x), f(x*))? Maybe it is a (k+N)×(k+N) covariance matrix there, defined via kernels.
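That guess matches the textbook GP-regression construction (not necessarily the slide's exact notation): with N training inputs X and k test inputs X_*, the joint prior over f = f(X) and f_* = f(X_*) is one Gaussian whose covariance blocks are all built from the kernel,

```latex
\begin{bmatrix} \mathbf{f} \\ \mathbf{f}_* \end{bmatrix}
\sim \mathcal{N}\!\left(\mathbf{0},\;
\begin{bmatrix}
K(X, X) & K(X, X_*) \\
K(X_*, X) & K(X_*, X_*)
\end{bmatrix}\right),
```

so cov(f(X), f(X_*)) = K(X, X_*) is the N×k off-diagonal block of an (N+k)×(N+k) covariance matrix.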

  • @CppExpedition · a year ago

    WOOOOOOOOOOOOOOOW you blow my mind! 🤯

  • @sakcee · a year ago

    Excellent!!! Very clear explanation.

  • @DogDoggieDog · a year ago

    I've watched a handful of other videos on Bayesian optimisation but after this video I actually understood it. Many thanks for this!

  • @yashwanths-dz2gp · a year ago

    Still wondering why you didn't get enough views for this awesome content.

  • @Vikram-wx4hg · a year ago

    Super tutorial! Only one wish: I wish I could see what Richard is pointing to when he is discussing a slide.

  • @GauravJoshi-te6fc · a year ago

    Woah! Amazing explanation.

  • @leiqin5756 · a year ago

    Hi Marc, I found the lecture very interesting. Could you please share the slides with me?

  • @samirelzein1095 · a year ago

    Need a version with Python code and a live graph.
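In that spirit, a minimal self-contained sketch of a Bayesian-optimisation loop with a plot at the end; it assumes an RBF kernel and the expected-improvement acquisition, and the toy objective, length-scale, and other parameters below are illustrative choices, not taken from the lecture:

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

def rbf(a, b, ell=0.3):
    # Squared-exponential kernel matrix between 1-D input arrays a and b.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard zero-mean GP regression equations (Cholesky version).
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mu = Ks.T @ alpha
    sd = np.sqrt(np.maximum(np.diag(Kss) - (v ** 2).sum(axis=0), 1e-12))
    return mu, sd

def expected_improvement(mu, sd, best):
    # EI acquisition for minimisation of the objective.
    z = (best - mu) / sd
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

f = lambda x: np.sin(3.0 * x) + x ** 2 - 0.7 * x   # toy objective
X = np.array([-0.9, 0.4])                          # initial evaluations
y = f(X)
Xs = np.linspace(-1.0, 2.0, 300)                   # dense grid for plotting

for _ in range(6):
    mu, sd = gp_posterior(X, y, Xs)
    x_next = Xs[np.argmax(expected_improvement(mu, sd, y.min()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

mu, sd = gp_posterior(X, y, Xs)                    # final fit for the plot
plt.plot(Xs, f(Xs), "k--", label="objective")
plt.plot(Xs, mu, label="GP mean")
plt.fill_between(Xs, mu - 2 * sd, mu + 2 * sd, alpha=0.2)
plt.scatter(X, y, color="r", zorder=3, label="evaluations")
plt.legend()
plt.show()
```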

  • @jasonhe6947 · a year ago

    Excellent explanation. Thank you so much for making this video.

  • @Raven-bi3xn · a year ago

    At 3:45, you mention that if we have the x observations and the z0 distribution, then with good parameters phi we can map z0 to x. Is the path to finding the parameters phi related to diffusion models?

  • @zakreynolds5472 · a year ago

    Thanks, this presentation has been really useful, but I am a little stuck and have a question. In the first portion of the presentation, the covariance function is used to show correlation between random variables (x-axis = variable index), but from there on it seems to be used to compare values within the same variable (from bold X on the axis to lower-case x). I appreciate that this is the difference between multivariate and univariate (I think?), but could you please elaborate?

  • @heyjianjing · a year ago

    By far the best introduction to GP, thank you Prof. Turner!

  • @thepresistence5935 · a year ago

    At last I found gold :)

  • @nanjiang2738 · a year ago

    Really appreciate your high-quality explanation!

  • @findoc9282 · 2 years ago

    Sir, may I ask why on slide 40 the posterior shows several different functions, unlike on the previous slides, where there was only one function?

  • @findoc9282 · 2 years ago

    The last 5 minutes, on the process of adding observations, were so insightful and helped me a lot. Thanks, professor!

  • @yeshuip · 2 years ago

    I understood that the variable index corresponds to the variable and we are plotting its values, but then somehow the variable index can take real values, and I lost track of the distances. I didn't understand this concept. Can anyone explain it to me?

  • @yeshuip · 2 years ago

    Hello, can anyone provide the code, please?

  • @danielliu3039 · 2 years ago

    The discussion in the video is restricted to inference, but how would one incorporate EP into learning in a latent time series? Regrettably, EP does not specify a variational lower bound on the log-likelihood (in contrast to variational, EM-like methods), and learning non-linear state-transition functions in a time series is non-trivial.

  • @vi5hnupradeep · 2 years ago

    Thank you so much! This series is very helpful.

  • @katateo328 · 2 years ago

    hahaha, generally, lazy is goooooood.

  • @katateo328 · 2 years ago

    yeah, Gaussian quadrature is exactly the boosting concept.

  • @katateo328 · 2 years ago

    Shifting and rotating the nodes a little would create other weak estimators.

  • @katateo328 · 2 years ago

    Don't you think we could improve the performance of the trapezoidal method by using the boosting concept? The trapezoidal rule is a weak estimator; boosting weak estimators might perform as well as Simpson's method.
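There is a classical version of exactly this idea: Richardson extrapolation combines two trapezoid estimates (step sizes h and h/2) into an estimate that coincides with composite Simpson's rule, and iterating the idea gives Romberg integration. A minimal sketch with a toy integrand:

```python
import numpy as np

def trapezoid(f, a, b, n):
    # Composite trapezoid rule with n equal subintervals.
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

f, a, b = np.sin, 0.0, np.pi          # exact integral is 2
t_h = trapezoid(f, a, b, 8)           # step h
t_h2 = trapezoid(f, a, b, 16)         # step h/2
simpson = (4 * t_h2 - t_h) / 3        # Richardson-extrapolated combination
print(t_h, t_h2, simpson)             # simpson is far more accurate
```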

  • @katateo328 · 2 years ago

    wowowo, super cool!! you have a wide vision.

  • @norkamal7697 · 2 years ago

    The best GP explanation evaaa

  • @maddoo23 · 2 years ago

    At 45:30, the covariance of Brownian motion is cov(B_s, B_t) = min(s, t), right? And not what's given on the slide...

    • @ret2666 · 2 years ago

      See here for the sense in which this is Brownian motion: en.wikipedia.org/wiki/Ornstein-Uhlenbeck_process
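For reference (not from the slide itself), the two covariance functions being contrasted are

```latex
\operatorname{cov}(B_s, B_t) = \min(s, t) \quad \text{(standard Brownian motion)},
\qquad
k_{\mathrm{OU}}(s, t) = \sigma^2 e^{-\theta \lvert s - t \rvert} \quad \text{(stationary Ornstein--Uhlenbeck process)}.
```

The OU process can be viewed as Brownian motion with a mean-reverting drift, and its stationary covariance is the exponential kernel.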

  • @airindutta1094 · 2 years ago

    Best GP visualization and explanation I have ever seen.

  • @marc_deisenroth · 2 years ago

    Slides are available at deisenroth.cc/teaching/2019-20/linear-regression-aims/lecture_bayesian_optimization.pdf