SHAP values for beginners | What they mean and their applications

  • added 22 Aug 2024

Comments • 43

  • @adataodyssey
    @adataodyssey  6 months ago

    NOTE: SHAP course is no longer free but you will still get the XAI course for free :)
    SHAP course: adataodyssey.com/courses/shap-with-python/
    XAI course: adataodyssey.com/courses/xai-with-python/
    Newsletter signup: mailchi.mp/40909011987b/signup

  • @lakshman587
    @lakshman587 9 months ago +1

    This is a very clear video about SHAP!!

  • @fouried96
    @fouried96 5 months ago

    Love to see a fellow South African in this line of work!

    • @adataodyssey
      @adataodyssey  5 months ago

      Howzit! Will keep the videos coming :)

    • @fouried96
      @fouried96 5 months ago

      @adataodyssey Sweet! I followed you on LinkedIn for any other posts outside of CZcams. I was just curious: how does Ireland's grading system work for a master's? I see you have a 1.1. I have no idea what that means, having only studied in SA lol :P

    • @adataodyssey
      @adataodyssey  5 months ago +1

      @@fouried96 that's 75% or above. They don't distinguish beyond that. The Irish are not so big on grading :D

    • @fouried96
      @fouried96 5 months ago

      @adataodyssey Congrats! I am busy following this SHAP series. I'm looking to find the best features for this Kaggle comp, a multiclass classification problem where I'm using XGBoost. I was wondering, are you on Kaggle?

  • @innocentjoseph9084
    @innocentjoseph9084 4 months ago

    Excellent explanation, just what I needed. Thank you.

    • @adataodyssey
      @adataodyssey  4 months ago

      I’m glad you found it useful, Innocent :)

  • @aakritiiacharya
    @aakritiiacharya 8 months ago +2

    Hey, amazing explanation! I wanted to know more about the interpretation of the SHAP summary plot in terms of survival analysis

    • @adataodyssey
      @adataodyssey  8 months ago

      Thanks Aakriti! I don't know anything about survival analysis, I'm afraid... If you are building models using well-known packages (e.g. sklearn, XGBoost) then you should be able to use SHAP. I have this video on the more technical coding details. Let me know if that helps!
      czcams.com/video/L8_sVRhBDLU/video.html

  • @RHONSON100
    @RHONSON100 9 months ago +1

    wonderful explanation.

  • @hasnainayub2369
    @hasnainayub2369 6 months ago

    Very well explained! I have a question regarding SHAP dependence plots. On the right Y-axis, SHAP selects a particular interacting feature by default, and I know we can manually change the interacting feature. Does the default selection by the SHAP explainer tell us that that particular feature interacts with the main feature the MOST compared to other features? In other words, can we say that the main feature depends on (or interacts with) the default interacting feature when making predictions?

    • @adataodyssey
      @adataodyssey  6 months ago

      Yes, I wasn't aware of this but it seems like it is true:
      shap-lrjball.readthedocs.io/en/latest/example_notebooks/plots/dependence_plot.html
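
      For anyone curious, here is a minimal numpy sketch of the idea behind the "auto" default: pick the candidate feature whose values most strongly track the main feature's SHAP values. This is a rough approximation only, not the library's actual algorithm (the library exposes a related utility, `shap.approximate_interactions`), and `pick_interaction` is a hypothetical helper name:

      ```python
      import numpy as np

      def pick_interaction(main_idx, shap_values, X):
          """Pick the feature most likely to interact with `main_idx`,
          scored by |correlation| between the main feature's SHAP values
          and each candidate feature's raw values. (Illustrative sketch,
          not the library's exact algorithm.)"""
          main_shap = shap_values[:, main_idx]
          best, best_score = None, -1.0
          for j in range(X.shape[1]):
              if j == main_idx:
                  continue
              score = abs(np.corrcoef(main_shap, X[:, j])[0, 1])
              if score > best_score:
                  best, best_score = j, score
          return best

      # Synthetic check: feature 0's SHAP values are driven by feature 1,
      # so feature 1 should be selected as the interacting feature.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 3))
      sv = np.zeros((200, 3))
      sv[:, 0] = X[:, 1] + 0.1 * rng.normal(size=200)
      chosen = pick_interaction(0, sv, X)  # 1
      ```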

  • @dantedt3931
    @dantedt3931 4 months ago

    This is awesome!

  • @satk4211
    @satk4211 5 months ago

    Excellent video ❤❤❤❤❤❤

    • @adataodyssey
      @adataodyssey  5 months ago

      Thank you ☺️ I’m glad it could help

  • @statistikochspss-hjalpen8335
    @statistikochspss-hjalpen8335 10 months ago +1

    Does it have to be about prediction?
    I just want to understand which features/independent variables are most important when my independent variables are highly correlated. I've heard people talking about "contribution".

    • @adataodyssey
      @adataodyssey  10 months ago

      No, you can also interpret a model used for analysis. In ML, when we say "prediction" we mean the output of the model. We use this term even if we are not trying to predict the future.

  • @keenanosullivan305
    @keenanosullivan305 a year ago +1

    Shap means something a little different in South Africa. Love the content though👍🏼

  • @youtubeuser4878
    @youtubeuser4878 4 months ago

    Hello. Thanks for the tutorial. Regarding your XAI and SHAP courses, is there an order in which we should take them? Should we take XAI before SHAP, or vice versa? Thanks

    • @adataodyssey
      @adataodyssey  4 months ago

      No problem! It is better to take XAI first, then SHAP. XAI covers more of the basics in the field and other useful model-agnostic methods. But the SHAP course still gives some basics, so it is not necessary to do the entire XAI course (or even any of it) if all you care about is learning SHAP :)

    • @youtubeuser4878
      @youtubeuser4878 4 months ago

      @adataodyssey Awesome. Thank you.

  • @mahdihabibi6382
    @mahdihabibi6382 7 months ago

    How can we determine which interpretable models are appropriate for our deep learning models? For example, I have a CNN model for Malaria prediction, however, I am unsure whether LIME or SHAP is a better tool for interpreting my model. Could you please guide me through this situation?

    • @adataodyssey
      @adataodyssey  7 months ago +1

      For deep learning, you might want to look into a model-specific method such as Grad-CAM. Are you using images or tabular data?
      If you are using tabular data, I would change the model to XGBoost or random forest. Then use both LIME and SHAP. There are also other methods like ALEs, PDPs, ICE plots and Friedman's H-statistic. It is a good idea to use multiple methods.

    • @mahdihabibi6382
      @mahdihabibi6382 7 months ago

      @adataodyssey Thank you for your reply.

  • @user-nf2zo3yt8j
    @user-nf2zo3yt8j 8 months ago

    If I have one-hot encoded the categorical values, how should I know which main features are contributing?

    • @adataodyssey
      @adataodyssey  8 months ago

      This is a great question! You have two options (see the articles below). Either you can add up the SHAP values for the individual one-hot encodings or use CatBoost. I also go over these concepts in more detail in my course.
      towardsdatascience.com/shap-for-categorical-features-7c63e6a554ea?sk=2eca9ff9d28d1c8bfde82f6784bdba19
      towardsdatascience.com/shap-for-categorical-features-with-catboost-8315e14dac1?sk=ef720159150a19b111d8740ab0bbac6d
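
      To make the first option concrete, here is a small numpy sketch of adding up SHAP values across one-hot columns. SHAP values are additive, so the per-encoding values can simply be summed; the column names and grouping below are made up for illustration:

      ```python
      import numpy as np

      def group_shap(shap_values, columns, groups):
          """Sum the SHAP values of one-hot columns back into a single value
          per categorical feature. `shap_values` is (n_samples, n_features),
          aligned with `columns`; `groups` maps a category name to its
          one-hot column names."""
          combined = {}
          for name, onehot_cols in groups.items():
              idx = [columns.index(c) for c in onehot_cols]
              combined[name] = shap_values[:, idx].sum(axis=1)
          return combined

      # Illustrative example: "colour" was one-hot encoded into two columns.
      columns = ["age", "colour_red", "colour_blue"]
      sv = np.array([[0.5, 0.2, -0.1]])
      combined = group_shap(sv, columns, {"colour": ["colour_red", "colour_blue"]})
      # combined["colour"] -> array([0.1])
      ```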

  • @shubhanshisinghms7745
    @shubhanshisinghms7745 3 months ago

    Can you make a video on how recruitment decisions are made?

    • @adataodyssey
      @adataodyssey  3 months ago

      Do you mean how automated decisions are made or decisions for data scientists in general?

  • @teguhprasetyo7505
    @teguhprasetyo7505 7 months ago

    Can this method be applied to multilabel classification?

    • @adataodyssey
      @adataodyssey  7 months ago

      Yes! I have a video on this exact topic: czcams.com/video/2xlgOu22YgE/video.html&lc=UgwSqpAiiG_ho6hDqDd4AaABAg

  • @keivansamani3437
    @keivansamani3437 7 months ago

    I want to be able to understand how the features affect the predictions along a 2D curve where the points are sequential, but it seems SHAP is only useful when there's a single prediction, not a curve :(

    • @adataodyssey
      @adataodyssey  7 months ago

      You could try using PDPs or ICE plots for this. Or aggregate SHAP values using a dependence plot.
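
      If it helps, here is a minimal numpy sketch of what PDPs and ICE plots compute, with a toy `predict` function standing in for a fitted model (the function and data are illustrative, not from any particular library):

      ```python
      import numpy as np

      def ice_curves(predict, X, feature, grid):
          """Compute one ICE curve per instance: sweep `feature` over `grid`
          while holding every other feature fixed, and record the model's
          prediction at each grid point. Averaging the curves gives the PDP."""
          curves = np.empty((X.shape[0], len(grid)))
          for gi, value in enumerate(grid):
              Xg = X.copy()
              Xg[:, feature] = value
              curves[:, gi] = predict(Xg)
          return curves

      # Toy model standing in for a real fitted model.
      predict = lambda M: M[:, 0] * M[:, 1]
      X = np.array([[1.0, 2.0],
                    [1.0, 3.0]])
      curves = ice_curves(predict, X, feature=0, grid=np.array([0.0, 2.0]))
      # curves          -> [[0., 4.], [0., 6.]]  (one curve per instance)
      # curves.mean(0)  -> [0., 5.]              (the PDP)
      ```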

  • @weii321
    @weii321 11 months ago

    Can SHAP values be used for feature selection?

    • @adataodyssey
      @adataodyssey  11 months ago

      Yes! You can use the mean SHAP plot. I discuss it in this video: czcams.com/video/L8_sVRhBDLU/video.html

    • @weii321
      @weii321 11 months ago

      @adataodyssey Thank you for your answer. I have another question: what is the difference between using SHAP values and using feature importance for feature selection? Does using SHAP values improve the model's performance more?
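
      As a concrete sketch of the mean-SHAP idea: rank features by mean absolute SHAP value and keep the top k. The feature names and cutoff below are illustrative; one practical difference from a tree's built-in (gain-based) importance is that mean |SHAP| is measured in the units of the model output:

      ```python
      import numpy as np

      def top_features(shap_values, feature_names, k=2):
          """Rank features by mean absolute SHAP value (the quantity shown
          in the mean SHAP bar plot) and return the top k names."""
          importance = np.abs(shap_values).mean(axis=0)
          order = np.argsort(importance)[::-1]
          return [feature_names[i] for i in order[:k]]

      # Toy SHAP matrix: rows are instances, columns are features.
      sv = np.array([[ 1.0, -0.1, 0.3],
                     [-2.0,  0.2, 0.1]])
      top = top_features(sv, ["age", "income", "tenure"])  # ['age', 'tenure']
      ```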

  • @aneesha123able
    @aneesha123able a year ago

    👏

  • @fupopanda
    @fupopanda 2 months ago

    Jumping between what you are explaining and yourself is distracting