Explainable AI explained! | #4 SHAP

  • Published 17 Jun 2024
  • ▬▬ Resources ▬▬▬▬▬▬▬▬▬▬▬▬
    Interpretable ML Book: christophm.github.io/interpre...
    Github Project: github.com/deepfindr/xai-series
    Paper: arxiv.org/abs/1705.07874
    ▬▬ Timestamps ▬▬▬▬▬▬▬▬▬▬▬
    00:00 Introduction / Example
    03:09 The paper
    03:50 Calculation of Shapley values
    09:34 Code examples
    14:30 Plots / Visualizations
    ▬▬ Support me if you like 🌟
    ►Link to this channel: bit.ly/3zEqL1W
    ►Support me on Patreon: bit.ly/2Wed242
    ►Buy me a coffee on Ko-Fi: bit.ly/3kJYEdl
  • Science & Technology

Comments • 47

  • @SinAndrewKim
    @SinAndrewKim 2 years ago +30

    There is an error in your formula for Shapley values (compared to christophm's book and wikipedia). Most write the weight as (M choose 1, |S|, M-|S|-1) where S is some subset WITHOUT feature i. However, you are summing over subsets z' WITH feature i. Thus, the weight should be (M choose 1, |z'| - 1, M-|z'|).

    • @DeepFindr
      @DeepFindr  2 years ago +11

      Thanks for pointing that out, you are absolutely right! I'll pin this comment so that everyone can see it.
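
For reference, here is a sketch of the two ways of writing the weight that are compared in this thread (my own summary, using M for the total number of features; the two forms are equivalent via the substitution |z'| = |S| + 1):

```latex
% Classic Shapley value: sum over subsets S that do NOT contain feature i
\phi_i \;=\; \sum_{S \subseteq F \setminus \{i\}}
    \frac{|S|!\,\bigl(M - |S| - 1\bigr)!}{M!}
    \Bigl[\, f\bigl(S \cup \{i\}\bigr) - f(S) \,\Bigr]

% Equivalent form: sum over coalitions z' that DO contain feature i,
% with the weight proposed in the comment above
\phi_i \;=\; \sum_{\substack{z' \subseteq F \\ i \in z'}}
    \frac{\bigl(|z'| - 1\bigr)!\,\bigl(M - |z'|\bigr)!}{M!}
    \Bigl[\, f(z') - f\bigl(z' \setminus \{i\}\bigr) \,\Bigr]
```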

  • @utkarshkulshrestha2026
    @utkarshkulshrestha2026 2 years ago +3

    Really insightful. Thank you for the video. It was a great explanation and demonstration to begin with.

  • @nidhisingh1325
    @nidhisingh1325 2 years ago +1

    Great explanation, would love more videos from you!

  • @techjack6307
    @techjack6307 2 years ago

    Thank you very much for explaining the concept so clearly.

  • @mohammadnafie3327
    @mohammadnafie3327 1 year ago

    Amazing explanation, and to the point. Thank you!

  • @nkorochinaechetam2516

    Straight to the point and a detailed explanation.

  • @Florentinorico
    @Florentinorico 2 years ago

    Great example with the competition!

  • @aprampalsingh8381
    @aprampalsingh8381 9 months ago

    Best explanation on the internet, you should do more videos!

  • @hprshayan
    @hprshayan 2 years ago +1

    Thank you for your excellent explanation.

  • @orkhanmd
    @orkhanmd 2 years ago +1

    Great explanation. Thanks!

  • @WildanPutraAldi
    @WildanPutraAldi 1 year ago

    Excellent video, thanks for sharing!

  • @kevinkpakpo3215
    @kevinkpakpo3215 2 years ago

    Amazing tutorial. Thanks a lot!

  • @muratkonuklar3910
    @muratkonuklar3910 2 years ago

    Great presentation, thanks!

  • @Diego0wnz
    @Diego0wnz 3 years ago +2

    thanks for the video!

  • @nikolai228
    @nikolai228 3 months ago

    Great video, thanks!

  • @davidlearnforus
    @davidlearnforus 1 year ago

    All your videos are great. Thanks a lot!

  • @muzaffarnissar1978
    @muzaffarnissar1978 9 months ago

    Thanks, an exceptional explanation! Looking forward to more videos!
    Can you share any links on using explainable AI on audio data?

  • @Gustavo-nn7zc
    @Gustavo-nn7zc 13 days ago

    Hi, great video, thanks! Is there a way to use SHAP for ARIMA/SARIMA?

  • @Bill0102
    @Bill0102 6 months ago

    This is a tour de force. A book I read with like-minded themes was also magnificent. "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell

  • @ramalaccX
    @ramalaccX 2 years ago

    Amazing video again bro. It helped a lot! One question: Do you know the reference where I can find the proof of the 2nd theorem in the SHAP paper? I can't find it :(

    • @DeepFindr
      @DeepFindr  2 years ago +3

      Hey :) thanks!
      The supplement can be downloaded here: papers.nips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html
      Plus there is another link in a discussion on Github, which might be helpful as well: github.com/slundberg/shap/issues/1054
      Hope this helps :)

    • @ramalaccX
      @ramalaccX 2 years ago

      @@DeepFindr There's nothing else I can say because you're the boss! ❤

  • @codewithyouml8994
    @codewithyouml8994 2 years ago

    Great video. I have one question: can I use SHAP for graph classification, to see how much each node contributes and get a Grad-CAM-like effect? If you have any resources on this, please share. Thank you.

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi! I have a video called "how to explain graph neural networks" that exactly addresses this question :)

  • @catcatyoucatmedie1161

    Hi, may I know which dataset you used for the demo?

  • @zahrahsharif8431
    @zahrahsharif8431 2 years ago +1

    When you use the force_plot to get an individual point's contribution, is the prediction shown in log-odds? If so, how do I show the actual probability?

    • @ea2187
      @ea2187 2 years ago

      I have the same issue... did you find anything out?

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi! Sorry, somehow I didn't see that comment.
      Did you have a look at this post: github.com/slundberg/shap/issues/963?
      According to that, TreeExplainer has an option to output probabilities.
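
For anyone who lands here with the same question, a minimal sketch of that probability option in TreeExplainer might look like this (my own toy example with an XGBoost model, not the code from the video):

```python
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Toy binary classification problem, just for illustration
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

# By default TreeExplainer explains the raw margin, i.e. log-odds for a
# binary:logistic model. With model_output="probability" the SHAP values
# instead add up to the predicted probability; this requires the
# "interventional" perturbation mode and a background dataset.
explainer = shap.TreeExplainer(
    model,
    data=X.sample(100, random_state=0),      # background data
    model_output="probability",
    feature_perturbation="interventional",
)

sv = explainer.shap_values(X.iloc[:1])

# expected_value + sum of the SHAP values now equals the predicted probability
print(explainer.expected_value + sv[0].sum())
print(model.predict_proba(X.iloc[:1])[0, 1])
```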

  • @yashnarendra
    @yashnarendra 3 years ago +1

    Might be a very stupid question, but why is z' written with bars around it? If it represents the number of features in the subset, it should always be positive, right?

    • @DeepFindr
      @DeepFindr  3 years ago

      Hi! Do you mean the bars wrapped around z'? That notation comes from set theory and stands for cardinality, i.e. the number of elements in the set. It doesn't mean the "abs" function.
      Is that what you were referring to? :)

    • @yashnarendra
      @yashnarendra 3 years ago

      @@DeepFindr Yes, thank you for clarifying. One more doubt: if I set |z'| = M, then (M - |z'| - 1)! becomes (-1)!, which is not defined. What am I missing here?

    • @DeepFindr
      @DeepFindr  3 years ago +2

      @@yashnarendra Hi, good catch!
      In the original formula for Shapley values this can never happen, because the sum is over subsets without feature i, so |F| is always greater than |S| (in the notation of the original Shapley formula in the paper).
      But you are right, this is not really reflected in the SHAP formula. However, the paper states that the cases |z'| = 0 and |z'| = M are excluded, as both are not defined.

    • @yashnarendra
      @yashnarendra 3 years ago +1

      @@DeepFindr Thanks a lot, really appreciate your efforts in replying to my queries.
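
Tying this back to the pinned comment at the top: with the corrected weight (|z'| - 1)! (M - |z'|)! / M! for coalitions z' that contain feature i, the case |z'| = M is well defined, and |z'| = 0 cannot occur since z' must contain i. A quick check of the full coalition:

```latex
% Weight of the full coalition |z'| = M under the corrected weight
\frac{(M-1)!\,(M-M)!}{M!} \;=\; \frac{(M-1)!\cdot 0!}{M!} \;=\; \frac{1}{M}
```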

  • @shaz-z506
    @shaz-z506 1 year ago

    Can we use SHAP for multiclass classification? Are there any resources you can suggest?

    • @DeepFindr
      @DeepFindr  1 year ago

      Hi! Have a look at this discussion: github.com/slundberg/shap/issues/367
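
As a rough sketch of what that looks like in code (my own toy example, not taken from the video): for a multiclass model, the explainer returns one set of SHAP values per class, which you can then inspect class by class.

```python
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Three-class toy problem
X, y = load_iris(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Older shap versions return a list with one (n_samples, n_features) array
# per class; newer versions return a single (n_samples, n_features, n_classes)
# array. Pick out the values for class 0 accordingly:
class0_values = shap_values[0] if isinstance(shap_values, list) else shap_values[:, :, 0]

# Summary plot of feature contributions for class 0 only
shap.summary_plot(class0_values, X)
```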

  • @PrabhjotSingh-mn2ku
    @PrabhjotSingh-mn2ku 2 years ago

    Does the classification threshold have an effect on Shapley values? The default threshold in binary classification is 0.5; what if one changes it to 0.7? How can this be incorporated in the shap library?

    • @DeepFindr
      @DeepFindr  2 years ago

      Hi! This discussion might be what you are looking for :)

    • @PrabhjotSingh-mn2ku
      @PrabhjotSingh-mn2ku 2 years ago

      @@DeepFindr did you mean to add a link to the discussion?

    • @DeepFindr
      @DeepFindr  2 years ago +1

      Yep, sorry here: github.com/slundberg/shap/issues/257

  • @minhaoling3056
    @minhaoling3056 2 years ago +1

    Does SHAP work on small datasets?

    • @DeepFindr
      @DeepFindr  2 years ago

      LIME is independent of the size of the dataset. The only question is whether the (black-box) model works on the dataset. Can you maybe share some more details about what makes you raise this question? :)

    • @minhaoling3056
      @minhaoling3056 2 years ago

      I have a very small dataset that surprisingly does well at predicting 15 different classes of identical species. In my black-box model, I use three layers of feature extraction methods and finish with one random forest model. I am not sure whether I can implement LIME in this situation because my black box consists mostly of feature extraction rather than an ensemble of models.

    • @DeepFindr
      @DeepFindr  2 years ago

      Which feature extraction layers are you using?
      Is it trained end-to-end with the RF?
      It doesn't really matter what is happening inside your model. LIME can explain the input-output relation in a local area around a single prediction :) (see the sketch at the end of this thread)

    • @minhaoling3056
      @minhaoling3056 2 years ago

      @@DeepFindr I see, thanks! I will try this in my project soon.

    • @DeepFindr
      @DeepFindr  2 years ago +1

      OK good luck! If you have any problems let me know :)
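
To make the "LIME only looks at inputs and outputs" point above concrete, here is a minimal sketch with a scikit-learn pipeline standing in for the feature-extraction-plus-random-forest setup (my own illustrative example, not the commenter's actual code):

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Stand-in for "feature extraction + random forest": PCA followed by an RF
data = load_wine()
X, y = data.data, data.target
model = make_pipeline(PCA(n_components=5), RandomForestClassifier(random_state=0))
model.fit(X, y)

# LIME only needs the raw inputs and a predict_proba function; whatever
# happens inside the pipeline is irrelevant to it.
explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=[str(c) for c in np.unique(y)],
    mode="classification",
)

# Explain a single prediction locally (by default LIME explains label 1)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```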