MSR Course - 07 Particle Filter (Stachniss)

  • Added 15 Aug 2024
  • MSR Course - 07 Particle Filter (Cyrill Stachniss)
    Mobile Sensing and Robotics / PhoRS Course at Bonn University
    By Nived Chebrolu and Cyrill Stachniss

Comments • 21

  • @michaelajeni7105
    @michaelajeni7105 2 years ago +3

    Thanks for this video. It gave me EXACTLY what I needed.

  • @darianagomeza
    @darianagomeza 4 months ago

    Hi Dr., this is amazing. THANK YOU!

  • @Ahmedhashi548
    @Ahmedhashi548 3 years ago +1

    Your explanation is really great. I was struggling to understand the particle filter, but you made it easy. Thank you very much 👏

  • @vivekgr3001
    @vivekgr3001 4 years ago +3

    Amazing Explanation!!!!

  • @jamesbuchanan8570
    @jamesbuchanan8570 2 years ago

    Very clear presentation!

  • @hobby_coding
    @hobby_coding 4 years ago

    I was reading Peter Corke's book and he mentioned your tutorial in the further-reading section. I downloaded the paper and did a YouTube search to find a video resource, and wow, I found this. I'm looking forward to watching the video. Thanks a lot!

  • @mojiheydari
    @mojiheydari 4 years ago

    Such a great video!

  • @hanchen2355
    @hanchen2355 7 months ago +1

    A little too slow but great tutorial!

  • @terrynichols-noaafederal9537

    Another method for obtaining random samples from an arbitrary distribution is CDF inversion: numerically generate the CDF of the target distribution and then sample that CDF with a uniform [0,1] random variable. The approach can be a bit unstable near the tails of the distribution (one way around this is to bound the uniform sampling to, say, [0.01, 0.99]), but it works pretty well overall. I am not sure whether it makes sense to apply this approach in this case, but I wanted to throw it out there for comment. Additionally, should we consider whether the noise is correlated or uncorrelated? Both would have the same PDF, but very different properties in terms of determining algorithm performance. Thanks.
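The CDF-inversion idea described in this comment can be sketched as follows. This is a minimal illustration, not part of the lecture: the bimodal example pdf, the grid resolution, and the [0.01, 0.99] clipping bounds are all assumptions made for the demo.

```python
import numpy as np

def sample_inverse_cdf(target_pdf, support, n_samples, rng=None,
                       u_lo=0.01, u_hi=0.99):
    """Draw samples from an arbitrary 1-D pdf via numerical CDF inversion."""
    rng = np.random.default_rng() if rng is None else rng
    xs = np.linspace(support[0], support[1], 10_000)  # discretise the support
    pdf = target_pdf(xs)                              # pdf may be unnormalised
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                                    # normalise CDF to [0, 1]
    # Bound the uniform draws away from the tails, as suggested in the comment
    u = rng.uniform(u_lo, u_hi, size=n_samples)
    # Invert the CDF numerically by interpolating u back onto the x-grid
    return np.interp(u, cdf, xs)

# Example: sample from a bimodal mixture of two Gaussians (illustrative target)
pdf = lambda x: np.exp(-0.5 * (x + 2)**2) + 0.5 * np.exp(-0.5 * (x - 3)**2)
samples = sample_inverse_cdf(pdf, (-8.0, 8.0), 5000)
```

Because the inversion only needs point-wise evaluations of the pdf, this works for any 1-D distribution you can evaluate on a grid; extending it to higher dimensions is where it becomes impractical compared to particle-style sampling.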

  • @mobinashafaati9612
    @mobinashafaati9612 3 years ago

    Thank you for the great lecture! I appreciate it if you could also recommend a textbook and/or papers for further study. Thanks!

    • @MSha-Techy
      @MSha-Techy a year ago

      Hi Mobina,
      The reference for this course is mainly "Probabilistic Robotics". You can find the particle filter in chapter four of the book.

  • @salmaabdelmonem7482
    @salmaabdelmonem7482 3 years ago

    thank you

  • @MSha-Techy
    @MSha-Techy a year ago

    Dear Professor,
    Thanks for your great lecture; you simplify rather complex ideas so that we feel comfortable understanding them.
    But unfortunately I cannot fully grasp the intuition behind importance sampling. Firstly, if we know the target distribution, what is the point of approximating it? And if we do not know it, how can we compute the importance weights?
    Secondly, instead of introducing a complex proposal distribution, why not simply draw samples uniformly from the target distribution and then resample the samples with higher weight?

    • @CyrillStachniss
      @CyrillStachniss  8 months ago +1

      1: Often we cannot represent it but can only evaluate it point-wise (e.g. in SLAM).
      2: The closer the proposal is to the target, the more effective the sampling is (because you are more likely to generate samples in areas of high likelihood).

    • @MSha-Techy
      @MSha-Techy 8 months ago

      It makes sense, I should go over it once again though. Thanks for your reply, I really appreciate it. @@CyrillStachniss
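The two points in the reply above can be made concrete with a minimal importance-sampling sketch. Everything here is illustrative, not from the lecture: the standard-normal target (evaluated only point-wise, up to a normalising constant), the wider Gaussian proposal, and the choice of E[x²] as the test quantity are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target p(x): standard normal, which we pretend we can only
# evaluate point-wise (an unnormalised pdf is fine here).
target = lambda x: np.exp(-0.5 * x**2)

# Proposal q(x): a wider Gaussian that we CAN sample from.
sigma_q = 2.0
samples = rng.normal(0.0, sigma_q, size=10_000)
proposal = lambda x: np.exp(-0.5 * (x / sigma_q)**2) / sigma_q

# Importance weights w ∝ p(x)/q(x), then self-normalise.
w = target(samples) / proposal(samples)
w /= w.sum()

# Weighted estimate of E[x^2] under the target (true value: 1).
estimate = np.sum(w * samples**2)
```

This shows both answers at once: the target is never sampled directly, only evaluated point-wise inside the weight computation, and if `sigma_q` were pushed far from 1 the weights would degenerate (a few samples carrying almost all the mass), which is why a proposal close to the target makes sampling more effective.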

  • @shrinivasiyengar5799
    @shrinivasiyengar5799 4 years ago

    At about 1:15:00 he gives an example of selection with replacement of 2 samples.
    He says the sample with weight 0.666 has a probability of 2/3 of being selected, so it is selected both times.
    Now, please help me understand: did he mean that, when selecting with replacement, we just record that that state, say "X1", with weight 0.666 occurred two times? Then we throw away the previous data and say that our new particles are two samples, both representing state "X1", with equal weights?
    Now, talking of 'n' samples, we do the selection with replacement 'n' times. But in those 'n' times, we're more likely to select the ones with higher weight. Am I getting this correctly?
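That reading matches the standard resampling step: draw n indices with replacement in proportion to the weights, keep the (possibly duplicated) selected states, and reset all weights to 1/n. A minimal sketch of the 2-particle example from the comment (the labels "X1"/"X2" and the RNG seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical particle states with the weights from the example
particles = np.array(["X1", "X2"])
weights = np.array([2/3, 1/3])

n = len(particles)
# Draw n indices WITH replacement, proportional to the weights;
# high-weight particles are likely to appear multiple times.
idx = rng.choice(n, size=n, p=weights)
new_particles = particles[idx]

# After resampling, every surviving particle carries equal weight 1/n
new_weights = np.full(n, 1.0 / n)
```

So with weights (2/3, 1/3) the draw can indeed yield "X1" twice; higher-weight particles are selected more often on average, but any particle with nonzero weight can still survive.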

  • @kishangupta4770
    @kishangupta4770 3 years ago +1

    Awesome series. Is there any probability of getting lecture slides? That will be really helpful.

    • @CyrillStachniss
      @CyrillStachniss  3 years ago +2

      Kishan Gupta Yes, I plan to release all slides as soon as I have finished the lecture series, which will probably be in one month from now

    • @kishangupta4770
      @kishangupta4770 3 years ago +1

      @@CyrillStachniss Thanks for the reply. Will be eagerly waiting for it.

    • @gauravgupta266
      @gauravgupta266 3 years ago

      @@CyrillStachniss Hi, wondering if the slides for this course are available. I see the slides for www.ipb.uni-bonn.de/msr2-2020/ are available, but not for part 1 of the course.