ML Interpretability: SHAP/LIME
- Added 5 June 2024
- First in our series on ML interpretability and going through Christoph Molnar's interpretability book.
When we're applying machine learning models, we often want to understand what is really going on in the world; we don't just want to get a prediction.
Sometimes we want an intuitive understanding of how the overall model works. But often, we want to explain an individual prediction: Maybe your application for a credit card was denied and you want to know why. Maybe you want to understand the uncertainty associated with your prediction. Maybe you're going to take a real-world decision based on your model.
That's where Shapley values come in!
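To make the idea concrete, here is a minimal pure-Python sketch of exact Shapley values for a toy model (not code from the video; the model, instance, and baseline numbers are all hypothetical). Each feature's value is its marginal contribution to the prediction, averaged over all coalitions of the other features, with absent features replaced by their baseline values.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f over len(x) features.

    For each feature i, average f's marginal contribution of i over all
    subsets S of the remaining features. Features outside the coalition
    are replaced by their baseline values.
    """
    n = len(x)
    features = list(range(n))

    def value(coalition):
        # Evaluate f with coalition features taken from x, the rest from baseline.
        z = [x[i] if i in coalition else baseline[i] for i in features]
        return f(z)

    phi = [0.0] * n
    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy "house price" model: linear in 3 features (hypothetical numbers).
f = lambda z: 50 + 10 * z[0] + 5 * z[1] - 2 * z[2]
x = [3.0, 2.0, 1.0]          # instance to explain
baseline = [1.0, 1.0, 1.0]   # average feature values (the base value)

phi = shapley_values(f, x, baseline)
# Efficiency property: the contributions sum to f(x) - f(baseline).
print(phi, sum(phi), f(x) - f(baseline))
```

The exact computation is exponential in the number of features; SHAP's contribution is making this tractable with model-specific approximations.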
With Connor Tann and Dr. Tim Scarfe
References:
Whimsical canvas we were using:
whimsical.com/12th-march-chri...
We were using Christoph's book as a guide:
christophm.github.io/interpre...
christophm.github.io/interpre...
christophm.github.io/interpre...
SHAPLEY VALUES
Shapley, Lloyd S. "A value for n-person games." Contributions to the Theory of Games 2.28 (1953): 307-317.
www.rand.org/content/dam/rand...
SHAP
Lundberg, Scott M., and Su-In Lee. "A unified approach to interpreting model predictions." Advances in Neural Information Processing Systems. 2017.
papers.nips.cc/paper/2017/has...
LIME
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "Why should I trust you?: Explaining the predictions of any classifier." Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. ACM (2016).
arxiv.org/abs/1602.04938
That was by far the best introduction to Shapley values and SHAP I have seen. Thank you all !!! Perfect timing as well, as we're just looking to use Shapley values and SHAP.
You should do a podcast series about interpretable ML techniques.
I really like this format (just discovered this channel); it goes deeper but it is not intractable.
Thank you for editing the conversation =)
Really awesome explanation !!
In for every interpretability method!!!
Haha, when Tim says "bite-sized" he means 40 minutes. When Ms. Coffee Bean says bite-sized, it's about 10x less. 🤣
Now fun aside: I really appreciate the shorter format.
Thanks Letitia! 40 mins is "bite sized" for me! Haha, I need to get good at making shorter videos
@@machinelearningdojowithtim2898 Don't make them too short!
I suspect the 0.75 isn't empirical or arbitrary, but is the 3/4 scaling of the Epanechnikov kernel - the optimal (in the sense of requiring the fewest samples for a given accuracy) kernel for nonparametric density estimation: en.wikipedia.org/wiki/Kernel_%28statistics%29
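For reference, the kernel this comment points to can be sketched in a few lines (a minimal illustration of the Epanechnikov kernel itself, not LIME's actual weighting code). The 3/4 factor is exactly the normalization constant that makes the kernel integrate to 1 over [-1, 1].

```python
def epanechnikov(u):
    """Epanechnikov kernel: K(u) = 0.75 * (1 - u^2) for |u| <= 1, else 0.

    The 3/4 factor normalizes the kernel so it integrates to 1 on [-1, 1].
    """
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

# Sanity check the normalization with a simple Riemann sum over [-1, 1].
n = 100_000
total = sum(epanechnikov(-1 + 2 * k / n) * (2 / n) for k in range(n))
print(total)  # approximately 1.0
```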
This feels like therapy
Incredible work by Scott Lundberg and Su-In Lee. Why are they not sourced?
Good call out, I will add a link to their SHAP paper to the description!
First! But I'll watch tomorrow! Love from Italy
Great video! At 7:30 I think you meant that the house price went *down* due to the RM value in fact blue bars are negative contributions while red bars are positive, correct? Thanks
Does the sum of SHAP values always equal the difference between the model prediction and the explanation's mean (base) value?
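Yes — this is the "efficiency" (or local accuracy) property of Shapley values: the attributions always sum to the prediction minus the base value. A minimal sketch with a hypothetical linear model and independent features, where each Shapley value reduces to w_i * (x_i - mean_i):

```python
# Hypothetical linear model f(z) = b + sum(w_i * z_i); all numbers are made up.
w = [10.0, 5.0, -2.0]     # model weights
b = 50.0                  # intercept
x = [3.0, 2.0, 1.0]       # instance being explained
mean = [1.0, 1.0, 1.0]    # feature means, i.e. the base value

f = lambda z: b + sum(wi * zi for wi, zi in zip(w, z))

# For a linear model with independent features, the Shapley value of
# feature i is w_i * (x_i - mean_i).
phi = [wi * (xi - mi) for wi, xi, mi in zip(w, x, mean)]

# Efficiency / local accuracy: attributions sum to f(x) - f(mean).
assert abs(sum(phi) - (f(x) - f(mean))) < 1e-9
```

For approximate methods like KernelSHAP the property holds by construction; sampling-based estimates of Shapley values may only satisfy it up to estimation error.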
I find it interesting that Rob Tibshirani, co-author with Jerome Friedman on many of the LASSO papers, says LASSO as la-so, while Ryan Friedman (Jerome Friedman's son) says la-su, as is said here. I'm going to continue to say it the correct way, which you will simply have to guess.
Scott, you have been watching a lot of my videos today 😃😃
that intro go hard tho
Does this result hold for non-linear models?
can we say SHAP value = sum of squared residuals?
First 😎😎
Nuuuuu
haha
second