
An introduction to the Random Walk Metropolis algorithm

  • Uploaded 14 May 2018
  • This video is part of a lecture course which closely follows the material covered in the book, "A Student's Guide to Bayesian Statistics", published by Sage, which is available to order on Amazon here: www.amazon.co....
    For more information on all things Bayesian, have a look at: ben-lambert.co.... The playlist for the lecture course is here: • A Student's Guide to B...

Comments • 32

  • @collincherubim2698 · 4 years ago +19

    Finally, visuals for MCMC! Highly illuminating, thank you.

  • @jiachengchen7828 · 3 years ago +6

    This video is 10000x better than the equations in the class.

    • @distrologic2925 · 2 years ago +2

      Right? I don't understand how actual lecturers can be THAT terrible at conveying knowledge. It's like they don't want people to understand it.

  • @AmrutaOfAllTrades · 4 years ago +3

    Finally found out why it's called Monte Carlo. This is the best explanation of the algorithm I have ever seen. Thanks for this.

  • @vman049 · 5 years ago +9

    Best explanation of MH on YouTube. Thank you!

  • @johnedwardhills4529 · 3 years ago +3

    Thanks Ben. This is a really clear visual representation of what the algorithm is doing and how it works in principle. Excellent stuff!

  • @bradh2649 · 1 month ago

    Beautifully explained

  • @MisterCactus777 · 2 years ago

    I used this for my Bachelor's thesis to simulate ultracold fermions in a harmonic trap, which was a replication of real experiments! Thank you for explaining; I had forgotten what it did...

  • @lemyul · 5 years ago +4

    thank God there's a video about this

  • @tergl.s · 5 months ago

    the simulation is so helpful! thanks

  • @user-mn8th3ie1t · 5 months ago

    Very insightful.

  • @mikolajwojnicki2169 · 3 years ago

    Great video. Way easier to understand than my uni lectures.

  • @mikotokitahara9923 · 3 years ago

    Best one on YouTube, thanks a lot.

  • @karimaelouahmani7078 · 2 years ago

    Brilliant honestly.

  • @DJRaagaMuffin · 4 years ago

    Great explanation. Thank you

  • @FluxProGaming · 4 years ago

    Subscribed. Good voice, good explanations !!

  • @distrologic2925 · 2 years ago

    Don't gaps in the true distribution skew the samples to the borders of these gaps because the random walk is less likely to cross the gap, especially with a low sigma in the jumping distribution?

  • @darcycordell7156 · 3 years ago +1

    Maybe a dumb question, but at 5:46 aren't you using the unknown distribution to calculate r? Isn't the black line on the graph the unknown distribution you are trying to estimate?

    • @user-lx7jn9gy6q · 3 years ago

      No, that's a good question. The way I understand it is that all you need is the ratio of the numerators of Bayes' rule (likelihood times prior), evaluated at the two parameter values. You run through lots of different possible values of the parameter (theta), and the walk spends more time wherever the numerator, and hence the true posterior, is high. Most of the proposals land in low-density regions and get rejected, which is why the animation at around 8 minutes shows many more rejected proposals than accepted ones.
      The numerator of Bayes' rule dictates the shape of the posterior; the denominator is just a normalizing constant that sets its overall height, and it cancels when you take the ratio. So for your question about r: all we know is the numerator, not the denominator.
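      The acceptance logic described in this reply can be sketched in a few lines. This is my own minimal illustration, not the video's code; it assumes a user-supplied unnormalized log-posterior (log-likelihood plus log-prior), which is all the algorithm ever needs:

      ```python
      import math
      import random

      def rwm_samples(log_unnorm_post, theta0, sigma, n):
          """Random Walk Metropolis: only the unnormalized posterior
          (likelihood x prior) is needed; the marginal likelihood would
          cancel in the acceptance ratio r anyway."""
          theta = theta0
          samples = []
          for _ in range(n):
              proposal = theta + random.gauss(0.0, sigma)  # symmetric jump
              # log r = log p*(proposal) - log p*(theta); the normalizer cancels
              log_r = log_unnorm_post(proposal) - log_unnorm_post(theta)
              if math.log(random.random()) < log_r:
                  theta = proposal  # accept; otherwise stay put
              samples.append(theta)
          return samples

      # Example: unnormalized standard-normal posterior (constant dropped)
      log_post = lambda t: -0.5 * t * t
      draws = rwm_samples(log_post, theta0=0.0, sigma=1.0, n=20000)
      ```

      Note that when log r >= 0 the proposal is always accepted, and when log r < 0 it is accepted with probability r, which is exactly the rule the video animates.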

  • @mojiheydari · 3 years ago

    awesome

  • @Penrodyn · 5 years ago

    Are the Mathematica programs you used in the video available? I particularly liked the last one where you showed a more complicated surface. I also just ordered your book; are they available with that?

  • @ahmedjunaidkhalid3929 · 5 years ago

    I have a question. Suppose rather than having just one value theta, I have multiple values [A,B,C] in my state. Each variable can only have four values [0,1,2,3]. How would I choose a new state from the previous one? Would I calculate a new value for each variable and call it a proposed state and then calculate the value for the complete system?

    • @jelmerdevries7827 · 4 years ago +3

      Probably too late, but for a multi-variable model you want to use a Gibbs sampler.
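      For the setup in the question above (a state [A, B, C] where each variable takes values in {0, 1, 2, 3}), one common approach is Metropolis-within-Gibbs: update one variable at a time instead of proposing a whole new state. This is my own hedged sketch, with a hypothetical `log_target` standing in for whatever unnormalized log-density the model defines:

      ```python
      import math
      import random

      def mwg_step(state, values, log_target):
          """One Metropolis-within-Gibbs sweep over a discrete state:
          propose a change to each coordinate in turn and accept or
          reject it with the usual Metropolis ratio."""
          state = list(state)
          for i in range(len(state)):
              proposal = state[:]
              proposal[i] = random.choice(values)  # uniform, hence symmetric, proposal
              log_r = log_target(proposal) - log_target(state)
              if math.log(random.random()) < log_r:
                  state = proposal  # accept the single-coordinate change
          return state
      ```

      Sweeping one coordinate at a time keeps acceptance rates reasonable as the number of variables grows; a true Gibbs sampler goes further and draws each variable exactly from its full conditional, when that conditional is tractable.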

  • @wunderjahr · 3 years ago

    👏👏👏

  • @siarez · 6 years ago +2

    I don't get where the likelihood term and the prior term come from. Here we assume they exist. What is an example of a practical application where we have these two terms but don't have the posterior?

    • @GeoffRuddock · 5 years ago +1

      @Siarez the difficult part in calculating the posterior is usually the denominator (marginal distribution). This algorithm uses the ratio of unnormalized posteriors, so the cumbersome marginal distribution cancels out.

    • @AP-rs5wz · 5 years ago

      Yes, the marginal can be quite costly to compute, as you have to integrate out so many (potentially) unknown dimensions.

    • @payam-bagheri · 3 years ago

      I agree with you. There's a disconnect in the explanation in the video. The video mentions that the prior can be anything (just to have a starting point) but doesn't explain where the likelihood comes from.

    • @jimbocho660 · 3 years ago +1

      @@payam-bagheri The likelihood is computed in the usual way from your data and proposed data generation model. This video is about how to sample from your unnormalized posterior once you obtain an expression for it using the likelihood x prior.
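      To make the "likelihood x prior" point concrete, here is my own illustrative sketch (not from the video) of building an unnormalized log-posterior from data, assuming a normal data model with known noise standard deviation and a normal prior on the unknown mean; constants that cancel in the acceptance ratio are dropped:

      ```python
      def make_log_unnorm_post(data, prior_mu=0.0, prior_sd=10.0, noise_sd=1.0):
          """Unnormalized log-posterior for the mean of a normal model:
          log(likelihood) + log(prior), omitting additive constants."""
          def log_post(mu):
              log_lik = sum(-0.5 * ((x - mu) / noise_sd) ** 2 for x in data)
              log_prior = -0.5 * ((mu - prior_mu) / prior_sd) ** 2
              return log_lik + log_prior
          return log_post

      # Hypothetical data; the resulting function can be handed to any
      # Metropolis sampler, which never needs the normalizing constant.
      log_post = make_log_unnorm_post([4.8, 5.1, 5.3])
      ```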

  • @ujjwaltyagi3030 · 5 months ago

    Where can I find the code for this?

  • @amenaalhassan2807 · 2 years ago

    I wish I had money to celebrate Tet.

  • @johannesluttmer1285 · 3 years ago

    Metropolis 1927 illumination= moloch
    Conspiracy of dunges = owl of minerva =satan