Shapley Values : Data Science Concepts

  • Published 31 May 2024
  • Interpret ANY machine learning model using this awesome method!
    Partial Dependence Plots : • Partial Dependence Plo...
    My Patreon : www.patreon.com/user?u=49277905

Comments • 116

  • @rbpict5282 · 2 years ago · +33

    I prefer the marker pen style. Here, my complete focus is on the paper in focus and not the surrounding region.

  • @adityanjsg99 · 2 years ago · +11

    No fancy tools, yet you are so effective!!
    You must know that you provide deeper insights that even the standard books do not.

  • @reginaphalange2563 · 2 years ago · +1

    Thank you for the drawing and the intuitive explanation, which really helped me understand Shapley values.

  • @whoopeedoopee251 · 2 years ago · +18

    Great explanation!! Love how you managed to explain the concept so simply! ❤️

  • @kokkoplamo · 2 years ago

    Wonderful explanation! You explained a very difficult concept simply and concisely! Thanks

  • @MatiasRojas-xc5ol · 2 years ago · +2

    Great video. The whiteboard is the better because of all the non-verbal communication: facial expressions, gestures,...

  • @niks4u93 · 2 years ago

    One of the easiest yet most thorough explanations, thank you!

  • @xxshogunflames · 2 years ago

    Awesome video! I don't have a preference on paper or whiteboard, just keep the vids coming! This is the first time I've learned about Shapley values, thank you for that.

  • @djonatandranka4690 · a year ago

    what a great video! such a simple and effective explanation. Thank you very much for that

  • @koftu · 2 years ago · +5

    How well do Shapley values align with the composition of various Principal Components? Is there a mathematical relationship between the two, or is it just wholly dependent on the features of the dataset?

  • @SESHUNITR · a year ago

    very crisp explanation. liked it

  • @lythien390 · 2 years ago

    Thank you for a very well-explained video on Shapley values :D. It helped me.

  • @PabloSanchez-ih2ko · 3 months ago

    Great explanation! Thanks a lot

  • @Mar10001 · a year ago

    This explanation was beautiful 🥲

  • @amrittiwary080689 · a year ago

    Hats off to you. Understood most of the explainability techniques.

  • @mahesh1234m · 2 years ago · +1

    Hi Ritvik, Really a nice video. Please cover advanced concepts like the fast gradient sign method. Your way of explaining those concepts would be really helpful for everyone.

  • @yulinliu850 · 2 years ago · +2

    Nicely explained. Thanks!

  • @ericafontana4020 · 11 months ago

    nice explanation! loved it!

  • @Aditya_Pareek · a year ago

    Great video, simple and easily comprehensible

  • @oliverlee2819 · 4 months ago

    This is a very clear explanation, better than most of the articles that I could find online, thanks! I have one question though: when getting the global Shapley value (average across all the instances), why do we sum up the absolute value of the Shapley value of all the instances? Is that how we keep the desirable properties of the Shapley value? Is there any meaning in summing up the plain value of the Shapley values (e.g. positive and negative will now cancel each other off)?
    Another question is, when you said the expected value of the difference, is it just an arithmetic average of all the differences from all those permutations? I remember seeing something that the Shapley value is actually the "weighted" average of the differences, which is related to the ordering of those features. Is step 1 already taking this into consideration, such that we only need the arithmetic average to get the final Shapley value for that instance?
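The absolute-value part of the question above can be illustrated with a small sketch; the per-instance SHAP values below are invented numbers, purely to show why plain averaging would be misleading:

```python
import numpy as np

# Hypothetical per-instance SHAP values for one feature (say Temp)
# across five predictions: positive values pushed a prediction up,
# negative values pushed it down.
shap_temp = np.array([200.0, -180.0, 150.0, -170.0, 10.0])

# Plain average: opposite-sign contributions cancel and suggest the
# feature barely matters, even though it moves every prediction a lot.
plain_mean = shap_temp.mean()                  # 2.0

# Mean absolute value: the usual global importance -- how much the
# feature moves predictions regardless of direction.
global_importance = np.abs(shap_temp).mean()   # 142.0
```

So the absolute value is not needed to preserve the Shapley properties; it is just a choice of summary that measures magnitude of influence rather than net direction.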

  • @kanakorn.h · a year ago

    Excellent explanation, thanks.

  • @shre.yas.n · a year ago

    Beautifully Explained!

  • @JorgeGomez-kt3oq · 3 months ago

    Most underrated channel ever

  • @juanete69 · a year ago

    Hello.
    In a linear regression model are SHAP values equivalent to the partial R^2 for a given variable?
    Don't they take into account the variance as the p-values do?

  • @000000000000479 · a year ago

    This format is great

  • @daunchoi8679 · 2 years ago

    Thank you very much for the intuitive and clear explanation! One question: are Steps 1-5 basically the classic Shapley value, and is Step 6 SHAP (SHapley Additive exPlanations)?

  • @florianhetzel9157 · 6 months ago

    Thank you for the video, really appreciate it!
    I have a question about Step3:
    Is it necessary to 'undo' the permutation after creating the Frankenstein Samples and before feeding them in the model, since the model expects Temp to be in the first position from the training?
    Thank you very much for clarification

  • @nature_through_my_lens · 2 years ago · +1

    Beautiful Explanation.

  • @niknoor4044 · 2 years ago

    Definitely the marker pen style!

  • @cgmiguel · 2 years ago

    I enjoy both!

  • @johanrodriguez241 · a year ago

    Great. How do you think we can apply it to stacking, where we create a stacked network of multiple layers with multiple models, and to big-data problems, since this approach is based on Monte Carlo to "approximate" the Shapley values?

  • @Ali-ts6po · a year ago

    Simply awesome!

  • @KetchupWithAI · 13 days ago

    13:59 I did not fully understand how the values in the chart give you the contribution of variables to the difference between the given and average prediction. I think what you were doing all along was taking the difference in predictions between two vectors (x1 and x2) you generated from an OG vector and a randomly chosen vector from the data. How does this give you the difference in prediction between the OG vector and the mean cones sold (which is what you started with)?

  • @chakib2378 · a year ago

    Thank you for your explanation, but with the SHAP library, one only gives the trained model without the training set. How can the sampling from the original dataset be done with only the trained model?

  • @kancherlapruthvi · 2 years ago

    amazing video

  • @JK-co3du · a year ago

    The SHAP function explainer expects a data set input called "background data". Is this the data set used to create the "Frankenstein" Vectors explained in the video?

  • @jacobmoore8734 · a year ago

    So, if you had x features, say 50, instead of 4, would you randomly subset 15 (half) of them and create x1...x25? And in each of these x1...25, the differences will be that feature 1:i will be conditioned on the random vector whereas feature[i+n] will not be conditioned on the random vector? Trying to visualize what happens when more than 4 features are available.

  • @tamar767 · 2 years ago

    Yes, this is the best !

  • @beautyisinmind2163 · 2 years ago

    What is the difference between the work done by the Shapley value and feature selection techniques (filter, wrapper, and embedded methods)? Aren't both of them trying to find the best features?

  • @anmolchandrasingh2179 · 2 years ago · +2

    Hey Ritvikmath, great video as always. I have a doubt: in step 5 the contributions of each of the features add up to the difference between the actual and predicted values. Will they always add up perfectly?

    • @Yantrakaar · 2 years ago

      I have the same question!
      I don't think they do. We are randomly creating the Frankenstein samples and taking the difference in their outputs, then doing this many many times and finding the average difference. This gives the Shapley value of just one feature for that sample. Because of the random nature of this process, and because this is done for each feature separately from the other features, I don't think the sum of the Shapley values for each feature necessarily add up to the difference between the expected and the sample output.

    • @juanorozco5139 · 2 years ago

      Please note that this method approximates the Shapley values, so I'd not expect the efficiency property to hold. If you were to compute exactly the Shapley values, their sum would certainly amount to the difference between the predicted value and the average response. However, the exact computation involves powersets (which increase exponentially w.r.t. the number of features), so we have to settle with approximations.
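For very few features the exact computation the reply mentions is feasible; a toy sketch (model, data, and instance invented here) shows the efficiency property holding exactly under the classic powerset formula:

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                        # toy background data
f = lambda v: 2 * v[..., 0] + v[..., 1] * v[..., 2]  # toy model

x = np.array([1.0, 2.0, 3.0])                        # instance to explain
n = 3
baseline = f(X).mean()                               # average prediction

def value(S):
    """Expected prediction with the features in S fixed to x's values
    and the remaining features sampled from the background data."""
    Xs = X.copy()
    for j in S:
        Xs[:, j] = x[j]
    return f(Xs).mean()

phi = np.zeros(n)
for j in range(n):
    others = [k for k in range(n) if k != j]
    for r in range(n):
        for S in itertools.combinations(others, r):
            # Classic Shapley weight |S|! (n - |S| - 1)! / n!
            weight = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                      / math.factorial(n))
            phi[j] += weight * (value(S + (j,)) - value(S))

# Efficiency: the exact Shapley values sum to the gap between this
# prediction and the average prediction (no approximation error).
assert np.isclose(phi.sum(), f(x) - baseline)
```

With this exact formula the sum telescopes to value(all features) minus value(empty set), which is why efficiency holds with no error; Monte Carlo sampling only approximates these expectations, so the sum is merely close.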

  • @sachinrathi7814 · 4 months ago

    Thank you for the great explanation, but I have one doubt here: how do we get 200 there for temperature? You said it is the expected difference, so say we run the sample 100 times and each time we get some difference; how did that number 200 come out of those 100 differences? Did we take an average, or what math was applied there?
    Any response on this would be highly appreciated.

  • @nikhilnanda5922 · 2 years ago

    Can anyone recommend any good books for Data science in general and for such concepts and beyond? Thanks in advance!

  • @alphar85 · 2 years ago

    Hey Ritvikmath, grateful for your content. Wanted to ask you how many data science / machine learning methods someone needs to know to start a career in data science ? I know the more the better lol

  • @saratbhargavachinni5544

    In the Idea-1 slide: aren't we getting more of a composite effect instead of an isolated effect? As the feature is correlated, the second-order interactions with other features are also lost by randomly sampling on this dimension.

  • @sawmill035 · 2 years ago

    Excellent explanation! The only question I have is that, sure, in practice you can (and probably should) calculate all these through random sampling of feature interactions (random permutations from step 1), because as the number of features increases, you would have an exponentially increasing number of feature interactions to handle, rendering random sampling of features the only viable method. My question is: wouldn't you have to iterate through all possible feature interactions and all data set points for each in order to calculate exact Shapley values? In other words, is the method you proposed just an approximation of the correct values?

    • @justfacts4523 · a year ago

      I know it's late, but this is my understanding of it in case someone else has the same question.
      Yes, we are getting an approximation of the correct values. But if the sample is large enough, and considering that we are taking the expected value, by the law of large numbers we can be pretty confident of getting an appropriate estimate.
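The law-of-large-numbers point can be made concrete with a small simulation; the "true" Shapley value of 200 and the noise level here are made-up numbers standing in for the Step-3 differences:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each draw is one "difference" from Step 3, with expected
# value 200 (the Shapley value we want) plus a lot of noise.
true_phi, noise_sd = 200.0, 500.0

def estimate(n_samples):
    diffs = rng.normal(true_phi, noise_sd, size=n_samples)
    return diffs.mean()

# Averaging more sampled permutations pulls the estimate toward the
# true value: the standard error shrinks like noise_sd / sqrt(n).
rough = estimate(100)       # typically off by tens
fine = estimate(100_000)    # typically off by only a unit or two
```

This is why more Monte Carlo iterations buy accuracy: the error of the averaged estimate falls with the square root of the number of sampled permutations.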

  • @preritchaudhary2587 · 2 years ago

    Could you create a video on Gain and Lift Charts. That would be really helpful.

  • @geoffreyanderson4719 · 2 years ago

    Shapley values were also taught in the AI for Medicine specialization online. There, it was intended for use with individual patients as opposed to groups or aggregates of patients. You would use Shapley to make individualized prognoses for patients, like what is the best course of treatment for this specific individual patient. Clearly valuable information, however it was super computationally expensive, requiring all permutations to have a different model trained. Therefore only the simplest of model was used, particularly linear regression. I have not yet watched Ritvikmath's video, and I'm curious how much different his material is from the AI for Medicine courses.

    • @geoffreyanderson4719 · 2 years ago

      In this video there was only one model trained. Inferencing (predicting) was re-run as many times as needed with different inputs to the same trained model. Very interesting. Much more efficient, but I'm wondering about the correctness and if it's solving a slightly different problem than in the AI for Med course --- not sure.

  • @DivijPawar · 2 years ago · +2

    Funny, I was part of a project which dealt with this exact thing!

  • @pravirsinha5012 · 2 years ago

    Very interesting video, Ritvik. Also very curious about your tattoo.

  • @mohitdwivedi4588 · 2 years ago

    We stored the differences in an array or list after step 3 (there must be many values). How can SHAP at T=80 be a single value (200) in your example? Did we take the average of those? How can this E(diff) value be a single value?

  • @songjiangliu · 7 months ago

    cool man!

  • @juanete69 · a year ago

    What does it mean in your example that SHAP is a "local" explanation?

  • @apargarg9914 · 2 years ago

    Hey Ritvik! May I know how to do this process for a multi-class classification problem? You have taken a regression problem as an example.

    • @thomassimancik1559 · 2 years ago

      I would assume that for a classification problem, the approach remains the same. The only thing that differs is that you would choose and observe the prediction for a single class value.

  • @mauriciotorob · 2 years ago

    Hi, great explanation. Can you please explain to me how Shapley values are calculated for classification problems?

    • @justfacts4523 · a year ago

      Hi, I know it's late for you, but I want to give my understanding in case someone else has the same question.
      Instead of considering the class as the output, we can use the exact same concept by taking the probabilities generated by the last softmax layer (in the case of a NN or any probabilistic-style model).
      Or alternatively, I think we can compute that probability by checking how many times that class has been "outputted".
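The probability-based idea in the reply above can be sketched with a toy hand-written logistic "classifier" (not the SHAP library's API; the weights, data, and instance are invented): the sampling procedure is unchanged, only the model output being differenced is now a class probability.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))   # stand-in background data

# Toy binary classifier: the model output we explain is the predicted
# probability of class 1 (what predict_proba would give), not the
# hard class label.
def predict_proba_class1(v):
    logits = 1.5 * v[..., 0] - v[..., 1] + 0.5 * v[..., 2]
    return 1.0 / (1.0 + np.exp(-logits))

x = np.array([0.8, -0.3, 1.2])  # instance to explain
j = 0                            # feature of interest

diffs = []
for _ in range(2000):
    z = X[rng.integers(len(X))]              # random background sample
    perm = rng.permutation(3)                # random feature ordering
    pos = np.where(perm == j)[0][0]
    # Features up to and including j (in the ordering) come from x,
    # the rest from z; the pair differs only in feature j.
    x1 = np.where(np.isin(np.arange(3), perm[:pos + 1]), x, z)
    x2 = x1.copy()
    x2[j] = z[j]
    diffs.append(predict_proba_class1(x1) - predict_proba_class1(x2))

# Contribution of feature j to the class-1 probability for this instance.
phi_j = float(np.mean(diffs))
```

For multi-class problems the same loop is run once per class probability, giving one SHAP value per feature per class.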

  • @junkbingo4482 · 2 years ago · +1

    I would say that this vid points out the fact that most ML tools are black boxes; but now, people want 'black boxes' to be explained! It's a problem you don't have when you use statistics and/or econometrics.
    As for me, it's rather curious to calculate an average value in models that are supposed to be non-linear; well, in ANNs there is the sensitivity (based on the gradient); it can be a good start of course, but one has to be cautious.

  • @dustuidea · 2 years ago

    What is the difference between adjusted R² and Shapley values?

  • @aelloro · a year ago

    Hello, Ritvik! Thank you for the video! The marker style works great! I'm curious, how do we deal with the situation when a feature can have great importance, but we lack observations? Following the ice-cream example, let's add a feature for the time of day (ToD). And let's assume, for some reason, that from 03:00AM-04:00AM there is a line of airport workers and passengers willing to buy. If we operate the shop at that time, we could sell 5000 cones in one hour regardless of the other features' values. But our observations cover only working hours (9AM-5PM), so the importance of this feature comes out quite low.
    It may sound like an imaginary problem, but in the medical field, for rare diseases, that's the case.

    • @justfacts4523 · a year ago · +1

      These are my two cents.
      You can't use data that are outside of your training data, mainly because the prediction would not be reliable, and as a consequence your explanation won't be reliable either.
      Let's remember that one of the assumptions of any machine learning model is that the production data must come from the same distribution as our training data. Hence using data for which you have no observations whatsoever would be dangerous.
      Different is the case in which you have very little data but still have something. In that case I think you can still solve the problem.

    • @aelloro · a year ago

      @@justfacts4523 Thank you very much! Your content is the best!

  • @ghostinshell100 · 2 years ago · +2

    Can you put out similar content for other interpretable techniques like PDP, ICE etc.

    • @ritvikmath · 2 years ago · +1

      Good suggestion! As a start, you can check out my PDP video linked in the description of this video!

  • @geoffreyanderson4719 · 2 years ago

    Question: Which of the following two questions is the shown algorithm really answering: "How much does Temp=80 contribute to the prediction FOR THIS PARTICULAR EXAMPLE vs mean prediction?" versus "How much does Temp=80 contribute to the prediction FOR ALL REALISTIC EXAMPLES vs mean prediction?" Is there a link to the source reference used by Ritvikmath here? Thanks!

  • @yesitisme3434 · a year ago

    Great video as always !
    Would prefer more pen style

  • @ghostinshell100 · 2 years ago · +1

    NICE!

  • @starkest · 2 years ago

    liked and subscribed

  • @juanete69 · a year ago

    I like both the whiteboard and the paper. But I think it's even better to use something like a Powerpoint because it lets you reveal only important information at that moment, hiding future information which can distract you.

  • @bal1916 · 2 years ago

    Thanks for the informative video.
    I just have one issue: I thought Shapley values measure the impact of feature absence. Is this correct? If so, how was this realized here?

    • @justfacts4523 · a year ago · +1

      Hi, I know it's late for you, but I want to give my understanding in case someone else has the same question.
      We are realizing this because we are taking different samples. Hence the feature of interest will be random, and hence it won't provide any meaningful information.
      I'm not 100% sure of this though.

    • @bal1916 · a year ago

      @@justfacts4523 thanks for your reply

  • @michellemichelle3557 · a year ago

    Hello, I guess it should be combinations instead of permutations, according to the coalitional game theory where the SHAP method originates.

  • @juanete69 · a year ago

    I haven't understood how you decide what variables to keep fixed and what to change.
    Imagine you get the permutation [F,T,D,H] or [F,H,D,T]

  • @simranshetye4694 · 2 years ago

    Hello Ritvik, I love your videos. I was wondering if there is a way to contact you. I had a couple questions about learning data science. Hope to hear from you soon, thank you.

  • @aaronzhang932 · 2 years ago · +1

    8:16 I don't get Step 2. It seems you're lucky to get H = 8. What if the second sample is [200, 5, 70, 7]?

    • @offchan · 2 years ago

      Why is H=8 a lucky thing? H can be anything. The original H is 4. The new H is 8. Just the fact that it changes is what's important.

    • @harshavardhanachyuta2055 · a year ago

      ​@@offchan so the H value for form vectors is from the random sample ??

    • @offchan · a year ago · +1

      @@harshavardhanachyuta2055 yes

  • @juanete69 · a year ago

    OK, SHAP is better than PDP but...
    What are the advantages of SHAP vs LIME (Local Interpretable Model Agnostic Explanation) and ALE (Accumulated Local Effects)?

  • @abrahamowos · a year ago

    I didn't get the part of how he got the 2000, ĉ

  • @lilrun7741 · 2 years ago · +2

    I prefer the marker pen style too!

    • @ritvikmath · 2 years ago

      Thanks for the feedback! Much appreciated

  • @baqirhusain5652 · 6 months ago

    I still do not understand how this would be applied to text

  • @kisholoymukherjee · a year ago

    Great video but I do prefer the whiteboard style

  • @hassanshahzad3922 · 2 years ago

    The white board is the best

  • @oliesting4921 · 2 years ago · +2

    Pen and paper is better. It would be awesome if you can share the notes. Thank you.

  • @tariqkhasawneh4536 · a year ago

    Monginis Cake Shop?

  • @offchan · 2 years ago

    Let me try to put it into my own words. In order to make it easy to understand, I have to simplify it by lying first. So here's a soft lie version: you have a sample with temperature 80, you replace it by a temperature from a random sample. So if the random sample has temperature of 70, then replace 80 by 70. Then you ask a question "If I convert this 70 back to 80, what will be the predicted difference?" If the difference is positive, it means the temperature of 80 is increasing prediction value. If it's negative, it's decreasing the prediction value. And this difference is called the SHAP value. We call a feature with large absolute SHAP value as important.
    Now let's fix the lie a little bit: instead of only replacing the temperature, we also replace a few other features from the random sample to the original sample. But we still only try to convert back the temperature. Then we average the SHAP value by doing many random sampling to reduce variance.
    Another thing to do even more is to calculate SHAP value for every sample, then you will have a global SHAP value instead of a local SHAP for a specific sample.
    So this is pretty much an intense iterative process.
    And that's it done.
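The procedure described in the comment above can be sketched in code; the background data, model, and numbers here are invented stand-ins (a linear model is used so the estimate can be checked against a known answer), not the video's actual ice-cream data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in background data and model: 4 features, linear, so the true
# Shapley value of feature j is w_j * (x_j - mean of feature j).
X = rng.normal(size=(500, 4))
w = np.array([3.0, 1.0, -2.0, 0.0])
f = lambda v: v @ w

x = np.array([1.0, -1.0, 0.5, 2.0])   # the sample we want to explain
j = 0                                  # feature whose SHAP value we want

diffs = []
for _ in range(5000):
    z = X[rng.integers(len(X))]        # random sample from the data
    perm = rng.permutation(4)          # random feature ordering
    pos = np.where(perm == j)[0][0]
    # "Frankenstein" pair: features up to and including j (in the
    # ordering) come from x, the rest from z; x1 keeps x_j, x2 swaps
    # in z_j, so the pair differs only in feature j.
    keep_from_x = np.isin(np.arange(4), perm[:pos + 1])
    x1 = np.where(keep_from_x, x, z)
    x2 = x1.copy()
    x2[j] = z[j]
    diffs.append(f(x1) - f(x2))

phi_j = float(np.mean(diffs))          # SHAP value of feature j for x
# For this linear model, phi_j should land near 3 * (1 - mean of X[:, 0]).
```

Repeating the loop for each feature gives the local explanation for x; averaging the absolute values across many samples gives the global importances.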

  • @rahulprasad2318 · 2 years ago · +5

    Pen and paper is better.

  • @taiwoowoseni9364 · 2 years ago

    Not Fahrenheit 😁

  • @sorsdeus · 2 years ago · +1

    Whiteboard better :)

  • @jawadmehmood6364 · 2 years ago

    Whiteboard

  • @dof0x88 · 2 years ago

    For noobs like me trying to learn about new things, your handwriting makes me miss lots of things; I'm not getting anything.

  • @vivekcp9582 · 2 years ago

    Marker- Pen style does help with focus. But the tattoo on your hand doesn't. :P
    I aborted the video mid-way and went on a google map hunt. :/

  • @a00954926 · 2 years ago · +1

    You made this so simple to understand, that I will get to Python and do this ASAP!! Thank you @ritvikmath