Eric J. Ma - An Attempt At Demystifying Bayesian Deep Learning

  • Published 29. 08. 2024

Comments • 14

  • @user-nk8ry3xs5u · 10 months ago · +2

    Great video for developing a simple mental model of neural networks. Bonus: frequentist vs. Bayesian made simple! Great work Eric!

  • @harshraj22_ · 2 years ago · +11

    1:00 Intro to Linear, Logistic regression, Neural Nets
    9:40 Going Bayesian
    14:32 Implementation Using PyMC3
    24:27 Q&A

  • @mherkhachatryan666 · 2 years ago · +5

    Love the charisma and enthusiasm put into this talk. Well done!

  • @cnaccio · 2 years ago

    Huge win for my personal understanding of this topic. I wish every talk were given in this format. Thanks!

  • @suzystar3 · 9 months ago

    Thank you so much! This helped a lot with my project and really helped me understand both deep learning and Bayesian deep learning much better. I really appreciate it!

  • @sdsa007 · a year ago

    Great energy, and a nice philosophical wrap-up!

  • @HeduAI · 11 months ago

    Excellent talk! Thank you!

  • @BigDudeSuperstar · 2 years ago · +1

    Incredible talk, well done!

  • @cherubin7th · 2 years ago · +1

    Great explanation!

  • @bracodescanner · 5 months ago

    I understand the benefit of modelling aleatoric uncertainty, e.g. being able to deal with heteroscedastic noise.
    However, why do we need to model epistemic uncertainty? The best prediction, after all, lies in the middle of the final distribution, and if you sample from the distribution, you lose accuracy.
    So is uncertainty only useful for certain applications, to choose different behaviour when uncertainty is high? For example: if uncertainty is high, drive slower?
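
One way to read the question above: epistemic uncertainty is the part of the predictive spread that comes from not knowing the model's weights. It grows away from the training data and shrinks as data accumulates, so it flags inputs where even the middle of the distribution should not be trusted (the "drive slower" case). Below is a minimal sketch of the distinction, assuming PyMC3 as used in the talk; the toy linear model and variable names are illustrative, not from the talk.

    import numpy as np
    import pymc3 as pm

    # Toy data: few points, so the model itself is uncertain far from them.
    rng = np.random.default_rng(42)
    x = rng.uniform(-1.0, 1.0, size=20)
    y = 2.0 * x + rng.normal(0.0, 0.3, size=20)

    with pm.Model():
        # Priors on the weights: this is the epistemic (model) uncertainty.
        w = pm.Normal("w", mu=0.0, sigma=1.0)
        b = pm.Normal("b", mu=0.0, sigma=1.0)
        # Observation noise: this is the aleatoric (data) uncertainty.
        noise = pm.HalfNormal("noise", sigma=1.0)
        pm.Normal("obs", mu=w * x + b, sigma=noise, observed=y)
        trace = pm.sample(1000, tune=1000, return_inferencedata=False)

    # Far from the training data, posterior draws of the mean fan out:
    # that spread is the epistemic part, on top of which the fitted
    # noise adds the aleatoric part.
    x_new = 5.0
    mu_draws = trace["w"] * x_new + trace["b"]
    print("epistemic std at x=5:", mu_draws.std())
    print("aleatoric std (noise):", trace["noise"].mean())

With only twenty points near the origin, the spread of the posterior mean at x = 5 exceeds the fitted noise level; collecting more data in that region would shrink the former but not the latter.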

  • @catchenal · 2 years ago · +2

    The other presentation Eric mentions is that of Nicole Carlson:
    Turning PyMC3 into scikit-learn
    czcams.com/video/zGRnirbHWJ8/video.html

  • @vtrandal · a year ago

    Point #1 is wrong. You left out activations.

    • @bonob0123 · 4 months ago

      The tanh and ReLU nonlinearities are the activations. He is not wrong. You are wrong. Learn to be humble.
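
For readers following this exchange: a feed-forward layer is a linear transform followed by a nonlinearity such as tanh or ReLU, and those nonlinearities are the activations the thread is debating. Here is a minimal NumPy sketch of that structure; the function and variable names are illustrative, not from the talk.

    import numpy as np

    def relu(z):
        # ReLU activation: zero out negative pre-activations.
        return np.maximum(0.0, z)

    def dense(x, W, b, activation):
        # One feed-forward layer: a linear map, then the activation.
        return activation(x @ W + b)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 4))                    # one example, 4 features
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)  # hidden layer weights
    W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)  # output layer weights

    h = dense(x, W1, b1, relu)       # hidden layer with ReLU
    out = dense(h, W2, b2, np.tanh)  # output layer with tanh
    print(out)

Dropping the activations would collapse the two layers into a single linear map, which is why the nonlinearities are essential to the layer description.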

  • @MiKenning · a year ago

    Was he referring to TensorFlow when he denigrated an unnamed company for its non-Pythonic API? The new TensorFlow is much better!