#047

  • Added 8 Sep 2024

Comments • 49

  • @yjmantilla
    @yjmantilla 3 years ago +4

    These interviews give me life, so many thanks for this.

  • @stalinsampras
    @stalinsampras 3 years ago +13

    This is a treat. I have been waiting to hear from Christoph Molnar ever since I came across his book on machine learning interpretability. Now this podcast has satisfied my hunger for it. Thanks, guys

  • @oncedidactic
    @oncedidactic 3 years ago +4

    The intros never disappoint, and this one takes the cake lately

  • @schubertludwig
    @schubertludwig 3 years ago +2

    I'm excited for the project, and Shapley values are a great start! Interesting and important conversations all around.
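
    For readers curious what the Shapley values mentioned above look like in practice, here is a minimal sketch using the shap library on a toy model; the dataset and model are illustrative assumptions, not anything from the episode.

    ```python
    # Sketch: Shapley-value feature attributions with the shap library.
    # The model and dataset here are arbitrary stand-ins for illustration.
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes Shapley values efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:100])

    # Each sample's attributions plus the base value sum to the model's
    # output for that sample -- the "efficiency" property of Shapley values.
    ```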

  • @machinelearningdojowithtim2898

    First! How many of you folks are interested in interpretability? Anyone here managed to make saliency maps do anything useful? 😂 We really loved this conversation! 😎

    • @stalinsampras
      @stalinsampras 3 years ago +1

      Right now I'm halfway through the book Interpretable Machine Learning by Christoph Molnar. So far the book is well written and I am really enjoying it.

  • @abby5493
    @abby5493 3 years ago +2

    Loving the graphics on this video!

  • @kirand.4122
    @kirand.4122 2 years ago

    Very good explanation of the interpretability of the human brain using the SDE example at 49:00 👍

  • @muhammadaliyu3076
    @muhammadaliyu3076 3 years ago

    I think I enjoy this channel more than the Lex Fridman podcast, simply because we have more people on the show and each has different ideas and opinions.

  • @NelsLindahl
    @NelsLindahl 3 years ago +1

    Ok. I'm really enjoying these videos... thank you!

  • @photorealm
    @photorealm 2 years ago

    In such a complex pursuit, the KISS method is really important, IMHO. You can tell a lot about how a person's mind works by looking at their approach to a solution and the code they write to implement it. We have all seen code that works well but is extremely painful to understand, and code that is so simple and elegant it is almost beautiful. It would be cool if there were an AI-assisted editor that could take angry, convoluted, working code and make it elegant. The KISS editor or converter.

  • @iamjameswong
    @iamjameswong 3 years ago

    Great discussion!

  • @afafssaf925
    @afafssaf925 3 years ago +4

    1:06:00 -> you are missing the whole shtick of Judea Pearl: you *cannot* infer causality from the data alone. You need to know the structure of the problem. If you just put all the variables in, there is no theoretically sound reason why the true causal model will give you the best performance. The opposite is often true. Worse yet, it's common for different causal structures to give identical performance.
    I would recommend you interview Richard McElreath about this. He can talk at length about Pearl, philosophy of science, and related topics, and he is also involved in STEM-type things.
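
    As a small illustration of the point about different causal structures giving identical performance, here is a hedged sketch on toy linear-Gaussian data (an assumption for illustration, not from the episode): regressing Y on X and X on Y fit the observational data equally well, even though only one direction is causal.

    ```python
    # Sketch: two Markov-equivalent causal stories, one observational fit.
    # Data are simulated so that X causes Y, yet the reverse regression
    # explains the data just as well (identical R^2), so fit alone cannot
    # reveal the causal direction.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)              # true cause
    y = 2.0 * x + rng.normal(size=10_000)    # true effect: X -> Y

    r2_fwd = LinearRegression().fit(x.reshape(-1, 1), y).score(x.reshape(-1, 1), y)
    r2_rev = LinearRegression().fit(y.reshape(-1, 1), x).score(y.reshape(-1, 1), x)
    print(f"R^2 of Y~X: {r2_fwd:.4f}, R^2 of X~Y: {r2_rev:.4f}")  # ~identical
    ```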

  • @francistembo650
    @francistembo650 3 years ago +4

    How about a podcast to help you with your dissertation? Don't mind me. SUBSCRIBED!!!

  • @scottmiller2591
    @scottmiller2591 3 years ago

    I wholeheartedly agree that the lack of statistics in the current version of AI/ML is shocking. You need confidence values, and distributions if they're not Gaussian. You also need verification, validation, rigorous design justification, and testing for critical design fields such as medical, defense, nuclear, transportation, etc., or you will have "Therac-25" events - and of course the original Therac-25 events happened even with some of these safeguards in place. Agile is fine for website design (but see the next sentence) but will not be enough for mission-critical software design that can kill people, and perhaps it is incompatible with the safety/trust requirement. Of course, if the AI safety guys are right, incorporating GPT-4 + RL + a hackable reward could result in the planet being converted to paper clips anyway, even if all the application does is marketing surveys.
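
    On the "confidence values, and distributions if they're not Gaussian" point, one distribution-free option is the bootstrap; here is a minimal sketch on simulated, deliberately non-Gaussian errors (an illustrative assumption, not the commenter's code).

    ```python
    # Sketch: bootstrap confidence interval for a model's mean error,
    # with no Gaussian assumption about the error distribution.
    import numpy as np

    rng = np.random.default_rng(0)
    errors = rng.lognormal(size=500)  # toy, heavily skewed "model errors"

    # Resample with replacement and recompute the statistic many times.
    boot_means = [rng.choice(errors, size=errors.size, replace=True).mean()
                  for _ in range(2000)]
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean error {errors.mean():.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
    ```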

  • @user-uc6cb6mh2n
    @user-uc6cb6mh2n 2 months ago

    Thanks

  • @sebastianfischer3380
    @sebastianfischer3380 3 years ago

    Nice one! About the argument made at 50:00: we can still ask the software engineer why he made a certain decision, so the argument is invalid, I think.

  • @saundersnecessary
    @saundersnecessary 2 years ago +1

    This is wonderful. What program did you use to make the snippets of the papers (around 10:30, for example)? That would be amazing for preparing some of my lectures.

  • @dennisestenson7820
    @dennisestenson7820 2 years ago

    I am not a data scientist, but I really enjoy being exposed to these discussions and the papers they reference. 👍👍

  • @PeterOtt
    @PeterOtt 3 years ago +2

    I'm only about 10% of the way into the video, but I like the way you take notes - would you be open to also including a link to those when you make new episodes? (I know it's just another thing to do, but I'd use them as quick references for the topics!)

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  3 years ago +1

      whimsical.com/12th-march-christoph-molnar-Uf4rpjDJqAiEv8FePJHg6j@7YNFXnKbYzrPxtQRexYbT Notes are semi-decent on his "brief history/challenges" paper

    • @PeterOtt
      @PeterOtt 3 years ago

      @@MachineLearningStreetTalk this is great, thanks!

  • @willd1mindmind639
    @willd1mindmind639 3 years ago

    The issue is that features of the real world are based on a logical identity function, i.e. the shape of a dog is a distinct logical identity: the shape of a dog is not equal to the shape of a cat. So the problem is how you encode a feature in a machine neural network and then transfer that feature to another neural network that "interprets" the features, and the relationships between features, as discrete entities with identity functions, just as in linear algebra. A is A, because A and not-A cannot be the same logically.

    The encoding of features (such as visual characteristics of the real world) has to be consistent. Green is green and not yellow, as a practical application of this idea: the encoding of green as a set of values and an "identity" has to be consistent in order for any other learning or intelligent behavior to take place. In brain terms, the feature of the color green corresponds to a set of neurons that fire when light waves representing green are received in the eye. Features have to be encoded consistently for the higher-order parts of the brain to learn and understand; green can't be encoded in random ways, because then no other learning via reinforcement can take place in other parts of the brain. Any equation assumes that each parameter has a discrete set of values as part of its identity, used for creating the output, and our ability to understand this comes from the higher-order parts of our brain. If your "artificial intelligence" cannot tell you that the real world is made up of shape, color, texture, and perspective, then it hasn't "learned" anything, because that is what our brain does from birth.

    In more abstract terms: for some particular problem domain, a chemical process is made up of A, B, and C, where each of those is a discrete thing relatable to features of the real world, just as color, temperature, or pressure are discrete features of a chemical process, or income and age are discrete features of a customer at an insurance company. Current machine learning models do not express this idea of features of the real world as discrete entities with logical relationships used for reasoning; they produce statistical values that have no discrete logical relationship outside the model and the data used in training. So if you have 5 different models, you will get 5 different statistical results for the same input, because there is no implicit understanding of an identity function at the parameter level.

    The other problem here - and of course I don't know the answer - is that in computers, logical operations and math are handled by higher-level languages, compilers, and machine instructions. Doing things like addition inside a neural network, or other kinds of logical mathematics, is not something current computer architectures are designed for; normally that is expressed in code and then compiled. Expressing parameters and higher-order relationships (such as linear algebra) using neural networks presents a whole different set of challenges, which is why the general-purpose machine learning frameworks and models currently work so well without having to address those deeper architectural issues.

  • @SimonJackson13
    @SimonJackson13 3 years ago

    Is the self-fitted model essential to understanding how to change the self?

  • @scottmiller2591
    @scottmiller2591 3 years ago

    Medical diagnoses require small decision trees - doctors will not cooperate with anything else. A model has to be small enough that they believe they understand it, and they will reject anything they don't understand. This usually means very shallow trees with few variables, and everything else driven to zero. I agree that models cannot always be explained this way, but that's what needs to be built, or doctors need to be replaced, which is currently impractical for a variety of technical, operational, and legal reasons.
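
    A hedged sketch of the kind of doctor-readable model described above: a hard-capped decision tree whose entire decision logic fits on a page (the dataset and complexity caps are illustrative assumptions).

    ```python
    # Sketch: a deliberately tiny decision tree that a clinician can audit.
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)

    # max_depth and min_samples_leaf cap complexity so the printed tree
    # stays shallow, with few variables and human-checkable thresholds.
    tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=50,
                                  random_state=0).fit(X, y)

    print(export_text(tree, feature_names=list(X.columns)))  # the whole model
    ```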

  • @scottmiller2591
    @scottmiller2591 3 years ago

    If the last year has taught us anything, it's that science and statistics will be tossed out the window in setting policy - the politicos will set the policy that feeds their grift, even if it means changing policy multiple times without any new data. Interpretability will fall to fairness in the same way, where fairness will mean the most power and money in my pocket.

  • @joeyvelez-ginorio8353
    @joeyvelez-ginorio8353 3 years ago

    Great video per usual, though what's the name of that BANGER playing in the background around 4:20?

  • @tenzin8131
    @tenzin8131 3 years ago

    Really cool channel. Do you all also have a Discord server? It would be a great place to chat with like-minded people. :)

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  3 years ago

      Yes, check out the about page! We just hang out in Yannic's Discord channel -- you can also find it linked from his channel (Yannic Kilcher). We have an amazing community.

  • @minma02262
    @minma02262 3 years ago

    How about explainable models, instead of interpretability?

  • @EloyVeit
    @EloyVeit 3 years ago

    First the plane was built - then the theory of aerodynamics was discovered. Maybe black boxes (wooden planes) have to crash so that we can create white boxes (rockets).

  • @scottmiller2591
    @scottmiller2591 3 years ago

    "What is the game 'Among Us' like?"
    "It's like 'Secret Hitler.'"
    "You just explained something I don't understand in terms of something else I don't understand."

  • @Chr0nalis
    @Chr0nalis 3 years ago

    I think that most problems are so complex that they don't admit useful interpretations in terms of features which are aligned with our intelligence.

  • @amykim5248
    @amykim5248 2 years ago

    Keith and others, if you think this field is important and inevitable (as you say towards the end), why make the intro misleading, as if it is not? Why not invite the many women researchers in the field whose papers you and Molnar cite and who are respected in the field (e.g., Finale Doshi-Velez, Been Kim, Cynthia Rudin)? There are so many wrong things said in this video about the field (not by Molnar but by others) - why not do your due diligence in researching the field properly before you do the interview (as you seem to have done with the interview with Bengio)?

  • @sugamtyagi101
    @sugamtyagi101 3 years ago +1

    I like your podcast, but your editing is horrible. You constantly distract your audience by focusing too much on yourself: adding motion, filters, and camera poses showing only yourself, etc., all focused on you.
    Stop making your edits so self-obsessed. It pulls the audience away from an otherwise very good show.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  3 years ago

      Ouch! We are noobs at editing, and we think it's getting better all the time -- thanks for the feedback!