[Own work] On Measuring Faithfulness or Self-consistency of Natural Language Explanations

SdĂ­let
VloĆŸit
  • čas pƙidĂĄn 7. 09. 2024
  • Excited to share my ACL 2024 presentation on my almost-last PhD paper about LLM self-explanations! 🎓📚
    Are you joining ACL 2024 in Bangkok? Ping me, let's chat!
    AI Coffee Break Merch! đŸ›ïž aicoffeebreak....
    📜 “On Measuring Faithfulness or Self-consistency of Natural Language Explanations”, L. Parcalabescu, A. Frank, arxiv.org/abs/...
    (follow-up paper for vision and language models):
    📜 “Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations?”, L. Parcalabescu, A. Frank, arxiv.org/abs/...
    Thanks to our Patrons who support us in Tier 2, 3, 4: 🙏
    Dres. Trost GbR, Siltax, Vignesh Valliappan, Michael, Sunny Dhiana, Andy Ma
    ▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
    đŸ”„ Optionally, pay us a coffee to help with our Coffee Bean production! ☕
    Patreon: / aicoffeebreak
    Ko-fi: ko-fi.com/aico...
    Join this channel to get access to perks:
    / @aicoffeebreak
    ▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
    🔗 Links:
    AICoffeeBreakQuiz: / aicoffeebreak
    Twitter: / aicoffeebreak
    Reddit: / aicoffeebreak
    YouTube: / aicoffeebreak
    #AICoffeeBreak #MsCoffeeBean #MachineLearning #AI #research​ #ACL2024NLP #PhDLife
    Video editing: Nils Trost
    Music đŸŽ” : Bella Bella Beat - Nana Kwabena

Comments • 20

  • @theosalmon · a month ago · +6

    Thank you Dr. Letitia.

  • @alexkubiesa9073 · a month ago · +3

    This sounds very useful! LLM users tend to assume that just because an LLM writes like a human, it can introspect and reason about its own thought processes, which is of course not a given. But it’s great to see progress on measuring this ability (or at least self-consistency) so that newer models can be more ergonomic.

  • @DerPylz · a month ago · +5

    Thanks for sharing your work! Always great to see what you're up to!

  • @MikeAirforce111 · a month ago · +4

    Congrats Doctor!! :-) Looking forward to your future work!

  • @Thomas-gk42 · a month ago · +6

    Congratulations on your doctorate 🖖

  • @beatrixcarroll8144 · a month ago · +6

    Congrats Dr. Letitia!!!! Wow, YOU ROCK!!!!!!! :-D :-) P.S. We missed you!!

  • @fingerstyledojo · a month ago · +5

    Yay, new video!
    Thanks for letting me pass yesterday lol

    • @AICoffeeBreak · a month ago · +1

      Wow, you have a channel! It's amazing, just checked it out! đŸ€©

  • @serta5727 · a month ago · +4

    Cool 😎 your explanation was very understandable

  • @nitinss3257 · a month ago · +5

    1 minute ago for non-members ... good to see ya

  • @MaxShawabkeh · a month ago · +3

    Congrats on the PhD! This is really valuable work! I'm currently trying to squeeze as much reasoning capability as I can out of small LLMs (7-15B) for my company's product, and I'd love a longer video or recorded talk going into the details of your findings, any patterns you've found that improve or reduce self-consistency, or any insights on which existing models or training corpora result in better self-consistency and reasoning capabilities. If you have any pointers, I'd appreciate it!

    • @AICoffeeBreak · a month ago · +2

      As far as we can see from this paper's experiments, RLHF helps improve self-consistency, but we don't yet have hints about what else has this effect. Model size might matter, but for the sizes we *could* test on our infrastructure we did not measure an effect; it might still be there, we just couldn't test far enough.

    • @MaxShawabkeh · a month ago

      @AICoffeeBreak Thanks!

  • @naromsky · a month ago · +4

    🎉

  • @Ben_D. · a month ago · +4

    No ASMR? 😟

    • @AICoffeeBreak · a month ago · +2

      That take turned into one big blooper. Next time for sure. 😅

  • @anluifb · a month ago · +1

    So you came up with a method, didn't have time to explain the method to us, and didn't show us that it works. Great.
    If you still have time before Bangkok I would suggest rerecording and focusing on the implementation and interpretation of results rather than the context and wordy descriptions.

    • @AICoffeeBreak · a month ago · +1

      Thanks for your feedback. The method is in the video, just not the small details:
      1. Interpret both the prediction and the explanation with SHAP. (Mentioned in the video.)
      2. Measure their alignment (mentioned), after:
      - normalisation, to bring the values into the same range (mentioned; we did not say that SHAP's properties make the values very different between output tokens with different probabilities);
      - aggregation, to collect the many values from the many output tokens (mentioned; we did not say that we use the mean for this).
      For the results, I synthesized in words what we observe and the main takeaways; for the lengthy tables, please check the paper and its appendix. I'm not sure what you mean by the video not showing that it works: I also show an individual example before the takeaways. The problem that there is no ground truth exists for us just as it did for previous work. But for the first time in the literature, we *compare* existing tests to each other, and our method to them.
      This is why the context is important: our paper's contribution is to evaluate and clarify the state of the field, and as a follow-up contribution, we propose a new method that addresses the shortcomings of existing tests.
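
For readers who want the two-step recipe from the reply above in concrete terms, here is a minimal Python sketch: SHAP values per output token are normalised, mean-aggregated, and the prediction-side and explanation-side results are compared. All names and numbers are hypothetical toy stand-ins for real SHAP values, and cosine similarity is assumed here as the alignment measure purely for illustration; see the paper for the exact formulation.

import numpy as np

def normalise(values):
    # Bring each output token's attributions into the same range, since
    # SHAP values differ a lot between output tokens with different
    # probabilities (the "normalisation" step above).
    total = np.abs(values).sum(axis=-1, keepdims=True)
    return values / np.where(total == 0, 1.0, total)

def aggregate(per_output_token):
    # Collapse the per-output-token attribution vectors into one vector
    # per input token, using the mean (the "aggregation" step above).
    return per_output_token.mean(axis=0)

def alignment(pred, expl):
    # Compare the aggregated input-token attributions for the prediction
    # vs. the explanation; cosine similarity is an assumed stand-in.
    return float(np.dot(pred, expl) /
                 (np.linalg.norm(pred) * np.linalg.norm(expl)))

# Toy SHAP values: rows = output tokens, columns = input tokens.
shap_prediction = np.array([[0.4, 0.1, 0.5], [0.2, 0.2, 0.6]])
shap_explanation = np.array([[0.3, 0.2, 0.5], [0.1, 0.3, 0.6]])

score = alignment(aggregate(normalise(shap_prediction)),
                  aggregate(normalise(shap_explanation)))
print(f"toy self-consistency score: {score:.3f}")  # near 1 = consistent

A score near 1 means the model leans on the input tokens similarly when predicting and when explaining; a low score flags an explanation that is not self-consistent with the prediction.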