ML Interpretability: SHAP/LIME

  • Added 5 Jun 2024
  • First in our series on ML interpretability, working through Christoph Molnar's interpretability book.
    When we apply machine learning models, we often want to understand what is really going on in the world; we don't just want a prediction.
    Sometimes we want an intuitive understanding of how the overall model works. But often we want to explain an individual prediction: Maybe your application for a credit card was denied and you want to know why. Maybe you want to understand the uncertainty associated with your prediction. Maybe you're going to make a real-world decision based on your model.
    That's where Shapley values come in! (A minimal code sketch follows the references below.)
    With Connor Tann and Dr. Tim Scarfe
    References:
    Whimsical canvas we were using:
    whimsical.com/12th-march-chri...
    We were using Christoph's book as a guide:
    christophm.github.io/interpre...
    christophm.github.io/interpre...
    christophm.github.io/interpre...
    SHAPLEY VALUES
    Shapley, Lloyd S. "A value for n-person games." Contributions to the Theory of Games 2.28 (1953): 307-317.
    www.rand.org/content/dam/rand...
    SHAP
    Lundberg, Scott M., and Su-In Lee. "A unified approach to interpreting model predictions." Advances in Neural Information Processing Systems. 2017.
    papers.nips.cc/paper/2017/has...
    LIME
    Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "Why should I trust you?: Explaining the predictions of any classifier." Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. ACM (2016).
    arxiv.org/abs/1602.04938
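    As a concrete illustration (ours, not code from the episode), here is a minimal sketch of explaining a single prediction with the shap library. It assumes the shap and scikit-learn packages are installed, and uses scikit-learn's California housing data purely as a stand-in dataset.

    ```python
    # Minimal sketch: Shapley-value explanation of one prediction (assumed setup).
    import shap
    from sklearn.datasets import fetch_california_housing
    from sklearn.ensemble import RandomForestRegressor

    # Stand-in data and model; any fitted model and dataset would do.
    X, y = fetch_california_housing(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)        # exact Shapley values for tree models
    sv = explainer.shap_values(X.iloc[:1])[0]    # per-feature contributions, first row

    # Positive values pushed this prediction up, negative values pushed it down.
    for name, value in zip(X.columns, sv):
        print(f"{name:>12}: {value:+.3f}")
    ```

    Each attribution says how much a feature pushed this particular prediction above or below the average model output, which is exactly the "explain an individual prediction" use case described above.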

Comments • 25

  • @michaelallen1966 • 3 years ago • +4

    That was by far the best introduction to Shapley values and SHAP I have seen. Thank you all!!! Perfect timing as well, as we're just looking to use Shapley values and SHAP.

  • @EtienneTremblay • 3 years ago • +3

    You should do a podcast series about interpretable ML techniques.

  • @juliocardenas4485 • a year ago

    I really like this format (just discovered this channel); it goes deep but is not intractable.
    Thank you for editing the conversation =)

  • @aniruddhaghosh9823 • a year ago

    Really awesome explanation!!

  • @jasdeepsinghgrover2470 • 3 years ago • +1

    I'm in for every interpretability method!!!

  • @AICoffeeBreak • 3 years ago • +9

    Haha, when Tim says "bite-sized" he means 40 minutes. When Ms. Coffee Bean says bite-sized, it's about 10x less. 🤣
    Now, fun aside: I really appreciate the shorter format.

    • @machinelearningdojowithtim2898 • 3 years ago • +1

      Thanks Letitia! 40 mins is "bite-sized" for me! Haha, I need to get good at making shorter videos.

    • @afafssaf925 • 3 years ago • +1

      @@machinelearningdojowithtim2898 Don't make them too short!

  • @scottmiller2591 • 3 years ago • +1

    I suspect the 0.75 isn't empirical or arbitrary, but is the 3/4 scaling of the Epanechnikov kernel - the optimal (in the sense of requiring the fewest samples for a given accuracy) kernel for nonparametric density estimation: en.wikipedia.org/wiki/Kernel_%28statistics%29
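    For reference, the Epanechnikov kernel mentioned there is

    ```latex
    K(u) = \tfrac{3}{4}\,(1 - u^{2})\,\mathbf{1}\{\lvert u \rvert \le 1\}
    ```

    where the 3/4 is precisely the normalizing constant that makes the kernel integrate to one over [-1, 1].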

  • @thomaskurian9025 • 2 years ago

    This feels like therapy

  • @williammcnulty8408 • 3 years ago • +2

    Incredible work by Scott Lundberg and Su-In Lee. Why are they not sourced?

  • @francescolucantoni3243 • 3 years ago • +1

    First! But I'll watch tomorrow! Love from Italy

    • @francescolucantoni3243 • 3 years ago • +1

      Great video! At 7:30 I think you meant that the house price went *down* due to the RM value; in fact, blue bars are negative contributions while red bars are positive, correct? Thanks

  • @sreevidyaswathi4069 • 4 months ago

    Does the sum of SHAP values always equal the difference between the model prediction and the explanation's mean (expected) value?
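    By the "local accuracy" (efficiency) property in the Lundberg & Lee paper linked above, exact SHAP explanations do sum to the prediction minus the base (expected) value, up to floating-point error. A quick check, reusing the same hypothetical setup as the sketch in the description:

    ```python
    # Verifying local accuracy: SHAP values sum to (prediction - expected value).
    import numpy as np
    import shap
    from sklearn.datasets import fetch_california_housing
    from sklearn.ensemble import RandomForestRegressor

    X, y = fetch_california_housing(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    explainer = shap.TreeExplainer(model)

    sv = explainer.shap_values(X.iloc[:1])[0]             # SHAP values, first row
    pred = model.predict(X.iloc[:1])[0]                   # model prediction
    base = float(np.ravel(explainer.expected_value)[0])   # mean model output

    print(np.isclose(sv.sum(), pred - base))              # expected: True
    ```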

  • @scottmiller2591 • 3 years ago

    I find it interesting that Rob Tibshirani, co-author with Jerome Friedman on many of the LASSO papers, says LASSO as la-so, while Ryan Friedman (Jerome Friedman's son) says la-su, as is said here. I'm going to continue to say it the correct way, which you will simply have to guess.

  • @jerbear97 • a year ago

    that intro go hard tho

  • @araldjean-charles3924 • 10 months ago

    Does this result hold for non-linear models?

  • @satishvavilapalli24 • 2 years ago

    Can we say SHAP value = sum of squared residuals?

  • @MachineLearningStreetTalk

    First 😎😎

  • @flaskapp9885 • 3 years ago

    second