Stanford Seminar - Human-Centered Explainable AI: From Algorithms to User Experiences

  • February 17, 2023
    Q. Vera Liao of Microsoft Research
    Artificial Intelligence technologies are increasingly used to aid human decisions and perform autonomous tasks in critical domains. The need to understand AI systems in order to improve them, contest their decisions, develop appropriate trust, and interact with them more effectively has spurred great academic and public interest in Explainable AI (XAI). The technical field of XAI has produced a vast collection of algorithms in recent years. However, explainability is an inherently human-centric property, and the field is starting to embrace human-centered approaches. Human-computer interaction (HCI) research and user experience (UX) design in this area are increasingly important, especially as practitioners begin to leverage XAI algorithms to build XAI applications. In this talk, I will draw on my own research and broader HCI work to highlight the central role that human-centered approaches should play in shaping XAI technologies, including driving technical choices by understanding users' explainability needs, uncovering pitfalls of existing XAI methods, and providing conceptual frameworks for human-compatible XAI.
    About the speaker:
    Q. Vera Liao is a Principal Researcher at Microsoft Research Montréal, where she is part of the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group. Her current research interests are in human-AI interaction, explainable AI, and responsible AI. Prior to joining MSR, she worked at IBM Research and studied at the University of Illinois at Urbana-Champaign and Tsinghua University. Her research has received multiple paper awards at ACM and AAAI venues. She currently serves as Co-Editor-in-Chief of the Springer HCI Book Series, on the Editorial Board of ACM Transactions on Interactive Intelligent Systems (TiiS), as an Editor for CSCW, and as an Area Chair for FAccT 2023.
    Chapters
    0:00 Introduction
    10:27 Explainability needs expressed as questions
    18:36 XAI for decision support
    22:13 A blind spot in XAI? Dual Cognitive Processes
    24:01 Model decision boundary vs. (human intuition about) task decision
    26:14 Understanding the human-XAI decision-making process
    27:13 Comparing feature-based vs. example-based explanations
    28:53 Pathways to override AI to have appropriate reliance
    28:58 An instantiation: augmenting explanation by relevance
    42:11 Selective explanation can signal model errors
    48:22 Conclusions: human-centered AI as bridging work