The Science Behind InterpretML: SHAP

  • Added 15 May 2020
  • Learn more about the research that powers InterpretML from SHAP creator, Scott Lundberg from Microsoft Research
    Learn More:
    Azure Blog aka.ms/AiShow/AzureBlog
    Responsible ML aka.ms/AiShow/ResponsibleML
    Azure ML aka.ms/AiShow/AzureMLResponsi...
    The AI Show's Favorite links:
    Don't miss new episodes, subscribe to the AI Show aka.ms/aishowsubscribe
    Create a Free account (Azure) aka.ms/aishow-seth-azurefree
  • Science & Technology

Comments • 22

  • @robertoooooooooo • 1 year ago +1

    Insightful, thank you

  • @chandrimad5776 • 2 years ago +5

    Like SHAP, there is also LIME for model interpretability. It would be great if you posted a video comparing the two. I built a credit risk model using a LightGBM regressor and used SHAP to present its interpretability; I would also like to use LIME for the same purpose. If you are interested, I will post my Kaggle link to get your feedback on my work.
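
A minimal sketch of the comparison suggested above, using synthetic stand-in data rather than the commenter's credit risk model; it assumes the shap, lightgbm, and lime packages are installed:

```python
# Sketch: explaining the same LightGBM regressor with both SHAP and LIME.
# Data, names, and hyperparameters here are illustrative, not the
# commenter's actual model.
import numpy as np
import shap
import lightgbm as lgb
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=1000)

model = lgb.LGBMRegressor(n_estimators=100).fit(X, y)

# SHAP: exact Shapley values for tree ensembles via TreeExplainer.
shap_values = shap.TreeExplainer(model).shap_values(X)
print("SHAP attributions for row 0:", shap_values[0])

# LIME: a local linear surrogate fitted around a single instance.
lime_explainer = LimeTabularExplainer(X, mode="regression")
lime_exp = lime_explainer.explain_instance(X[0], model.predict)
print("LIME attributions for row 0:", lime_exp.as_list())
```

Broadly, SHAP's attributions are additive and sum exactly to the difference between the prediction and the expected output, while LIME's come from a locally fitted surrogate and carry no such guarantee.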

  • @user-xs9rm6hd5c • 3 years ago +2

    Great introduction to SHAP.

  • @juanete69 • 1 year ago +1

    In a linear regression, what is the difference in interpretation between the SHAP value and the partial R^2?

    • @Lucky10279 • 1 year ago +1

      In linear regression, R^2 is literally just the proportion of the variance of the dependent variable that can be predicted/explained by the model. Or, in other words, if I understand correctly, it's the ratio of the variance of the predicted variable to the variance of the actual variable we're attempting to model. So if y is what we're trying to predict and the model is y_hat = mx + b, then
      R² = var(y_hat)/var(y), which for a least-squares fit equals 1 - var(y - y_hat)/var(y). It's a measure of how well the model matches the actual relationship between the independent and dependent variables.
      SHAP values, on the other hand, if I'm understanding the video correctly, don't necessarily say anything in themselves about how well the model fits the actual data overall -- instead they tell us how much each independent variable is affecting the predictions/classifications the model spits out.
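
A small sketch of the distinction drawn above, on synthetic data: R^2 is a single number about overall fit, while SHAP values attribute each individual prediction to its features. shap.LinearExplainer is SHAP's explainer for linear models; the remaining names are illustrative.

```python
# Sketch: R^2 (global goodness of fit) vs. SHAP values (per-sample,
# per-feature attributions) for a plain linear regression.
import numpy as np
import shap
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = LinearRegression().fit(X, y)
print("R^2 (global fit):", model.score(X, y))

# For a linear model, the SHAP value of feature j on sample x is
# coef_j * (x_j - E[x_j]), assuming independent features.
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X)
print("SHAP attributions for row 0:", shap_values[0])
```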

  • @caiyu538 • 1 year ago +1

    It looks like SHAP is a brute-force search over all features, considering all possible combinations. Is my understanding correct?
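
The intuition above matches the exact definition: Shapley values average a feature's marginal contribution over all 2^n feature subsets, which is exponential in the number of features; TreeSHAP's contribution is computing the same values in polynomial time for tree models. A naive sketch of the exact computation, using the common convention of filling "missing" features from a baseline row:

```python
# Sketch: exact Shapley values by enumerating every feature subset.
# Runs in O(2^n) model evaluations; TreeSHAP avoids this blow-up for trees.
import itertools
import math

def exact_shapley(f, x, baseline):
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for S in itertools.combinations(others, size):
                present = set(S)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = (math.factorial(len(S)) *
                          math.factorial(n - len(S) - 1) /
                          math.factorial(n))
                with_i = [x[j] if j in present or j == i else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in present else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy model with an interaction term so the split is non-trivial;
# the attributions sum to f(x) - f(baseline).
f = lambda v: v[0] + 2 * v[1] + v[0] * v[2]
print(exact_shapley(f, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0]))
```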

  • @muhammadusman7466 • 10 months ago

    Does Shapley work well with categorical features?
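
TreeSHAP attributes whatever representation the model was trained on. A minimal sketch with made-up data, assuming a LightGBM version that accepts pandas category columns natively:

```python
# Sketch: SHAP values for a tree model with a native categorical feature.
# The data frame and column names are invented for illustration.
import numpy as np
import pandas as pd
import shap
import lightgbm as lgb

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "income": rng.normal(50.0, 10.0, 1000),
    "region": pd.Categorical(rng.choice(["north", "south", "west"], 1000)),
})
y = 0.1 * df["income"] + 2.0 * (df["region"] == "south")

model = lgb.LGBMRegressor().fit(df, y)

# One attribution per column, the categorical column included.
shap_values = shap.TreeExplainer(model).shap_values(df)
print("Attributions for row 0:", shap_values[0])
```

If the category is one-hot encoded instead, the attribution is spread across the dummy columns and typically has to be summed back per original feature.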

  • @igoriakubovskii1958 • 3 years ago

    How can we make a decision based on SHAP when it isn't causal?

  • @manuelillanes1635 • 3 years ago +1

    Can SHAP values be used to interpret unsupervised models too?

    • @mananthakral9977 • 2 years ago

      No, I think we can't use it for unsupervised learning, since it requires looking at the output value after each feature is added to the model.

    • @cassidymentus7450 • 1 year ago

      For unsupervised learning there might not be a one-dimensional numeric output (e.g. a credit risk score). It still might be possible to make a useful one. Take PCA, for example: you can define the output as |x - x_mean| (where |·| is the Euclidean distance, aka the Pythagorean formula).
      SHAP will then tell you how much each principal component contributes to the distance from the mean... essentially the variance along that axis, depending on how you look at it.
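
A rough sketch of the construction in the reply above, using the model-agnostic shap.KernelExplainer on a scalar "output" defined as distance from the data mean; the data is synthetic and the setup is one possible choice, not a standard recipe:

```python
# Sketch: giving an unsupervised setting a scalar output (Euclidean
# distance from the mean) so KernelExplainer can attribute it to features.
import numpy as np
import shap

rng = np.random.default_rng(0)
# Axis-aligned data with very different variances per dimension.
X = rng.normal(size=(200, 4)) * np.array([3.0, 1.0, 0.5, 0.1])
mean = X.mean(axis=0)

def distance_from_mean(X_batch):
    # Scalar "model output" per row: |x - x_mean|.
    return np.linalg.norm(X_batch - mean, axis=1)

# A small background sample keeps KernelExplainer tractable.
explainer = shap.KernelExplainer(distance_from_mean, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:5])
print(shap_values)  # high-variance dimensions dominate the attributions
```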

  • @SandeepPawar1 • 3 years ago +2

    Scott - should the SHAP values for feature importance be based on training data or test data?

    • @lipei7704 • 3 years ago +4

      I believe the SHAP values for feature importance should be based on training data, because using test data would cause data leakage.

    • @claustrost614 • 3 years ago +1

      @lipei7704 I agree. It depends on what you want to do with it. If you want more or less global feature importance/feature selection, I would only use it on the training set.
      If by "feature importance" you mean the local Shapley values for one example from your test data, you can check whether that example would be an outlier or something like that compared to your training set.
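
A minimal sketch of the advice in this thread: compute global importance as the mean absolute SHAP value per feature on the training split only; the dataset and model are illustrative stand-ins.

```python
# Sketch: global feature importance from training-split SHAP values.
import numpy as np
import shap
import lightgbm as lgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = lgb.LGBMRegressor().fit(X_train, y_train)

# Mean absolute SHAP value per feature, training split only.
train_shap = shap.TreeExplainer(model).shap_values(X_train)
print("Global importance:", np.abs(train_shap).mean(axis=0))
```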

  • @berniethejet • 2 years ago +2

    Scott seems to conflate the term 'linear models' with 'models with no interactions'. Linear regression models can still have cross products of variables, e.g. y = a + b1 x1 + b2 x2 + b3 x1 x2.

    • @sebastienban6074 • 2 years ago

      I think he meant that they don't interact the same way trees do.

  • @Burnwarcrimetemple • 1 year ago

    Soldiers should be black.

  • @zhanli9121 • 1 year ago +1

    This is a most opaque presentation of SHAP, and the presentation style is also bad -- it's rare to see a presenter use formulas when the formulas aren't really needed, but this guy manages to be one of them.

    • @samirelzein1095 • 8 months ago

      After I wrote a similarly sharp comment above, I found yours. Actually, everyone here agrees. A good case study for Microsoft staffing.

  • @samirelzein1095 • 8 months ago

    The presenter managed to get many questions, but not many compliments, and I won't give one either. Don't make videos just to be uploaded for Microsoft; make videos when you feel the idea is complete, the intuition is clear, and you can distill hours of material into 10 minutes. This wasted my time.