Do We Really Want Explainable AI? - Edward Ashford Lee (EECS, UC Berkeley)

  • Published 11 Sep 2022
  • Conference Website: saiconference.com/IntelliSys
    Abstract: "Rationality" is the principle that humans make decisions on the basis of step-by-step (algorithmic) reasoning using systematic rules of logic. An ideal "explanation" for a decision is a chronicle of the steps used to arrive at the decision. Herb Simon’s "bounded rationality" is the observation that the ability of a human brain to handle algorithmic complexity and data is limited. As a consequence, human decision making in complex cases mixes some rationality with a great deal of intuition, relying more on Daniel Kahneman's "System 1" than "System 2." A DNN-based AI, similarly, does not arrive at a decision through a rational process in this sense. An understanding of the mechanisms of the DNN yields little or no insight into any rational explanation for its decisions. The DNN is operating in a manner more like System 1 than System 2. Humans, however, are quite good at constructing post-facto rationalizations of their intuitive decisions. If we demand rational explanations for AI decisions, engineers will inevitably develop AIs that are very effective at constructing such post-facto rationalizations. With their ability to handle to handle vast amounts of data, the AIs will learn to build rationalizations using many more precedents than any human could, thereby constructing rationalizations for ANY decision that will become very hard to refute. The demand for explanations, therefore, could backfire, resulting in effectively ceding to the AIs much more power. In this talk, I will discuss similarities and differences between human and AI decision making and will speculate on how, as a society, we might be able to proceed to leverage AIs in ways that benefit humans.
    0:00: Introduction
    0:48: Deep Neural Networks (DNNs) as Realized on Today's Computers
    2:50: Explanations in Terms of Rational Thought
    6:07: Silver Bullets?
    8:27: Humans are Very Good at Synthesizing Explanations
    10:46: How to design such an Explanation Machine
    12:00: Possible (and Risky) Uses of Explanation Machines
    14:00: DARPA XAI Program Retrospective
    17:48: Explanation vs. Algorithm
    22:42: Reservoir Computing
    23:23: Provocative Conjecture
    23:50: Another Approach to Explanation: Architected DNNs
    26:18: Architected Compositions
    27:01: Conclusion
    Edward Ashford Lee has been working on software systems for 40 years. He currently divides his time between software systems research and studies of the philosophical and societal implications of technology. After education at Yale, MIT, and Bell Labs, he landed at Berkeley, where he is now Professor of the Graduate School in Electrical Engineering and Computer Sciences. His software research focuses on cyber-physical systems, which integrate computing with the physical world. He is the author of several textbooks and two general-audience books, The Coevolution: The Entwined Futures of Humans and Machines (2020) and Plato and the Nerd: The Creative Partnership of Humans and Technology (2017).
  • Science & Technology

Comments • 11

  • @jocelyndayrit6833
    @jocelyndayrit6833 1 year ago

    BROOO thank you so much, this really helped and the tutorial was really easy to use as well :)

  • @AnshulTyagiMusic
    @AnshulTyagiMusic 1 year ago

    I really appreciate your help with downloading this software

  • @DavidTateVA
    @DavidTateVA 1 year ago

    The hypothesized AI (around 11:20) to produce convincing ex post facto explanations of decisions was a plot feature of Douglas Adams's 1987 novel _Dirk Gently's Holistic Detective Agency_.

  • @bobafetish1
    @bobafetish1 8 months ago +2

    11:20 AlphaGo was trained using adversarial self-play, not a GAN counterfeit-detector setup. I also disagree that the explanation machine needs to be trained on a GAN to produce human-like explanations - it only needs to interface using natural human language in a consistent manner.

    • @martinkunev9911
      @martinkunev9911 8 months ago

      I don't think he meant that it **needs** to be done with a GAN; he proposed one way in which it could be done.

    • @bobafetish1
      @bobafetish1 8 months ago +1

      @martinkunev9911 Sure, I guess I misrepresented his point. I think what I wanted to say is: why would you do it with a GAN at all? The point is not to imitate human explanations, but to provide explanations understandable by humans. Anyway, this talk predates the 2023 foundation model revolution, so I suspect he has since made considerable revisions to these ideas.

  • @martinkunev9911
    @martinkunev9911 8 months ago +4

    You cannot simply state "the universality of algorithms is a myth" and not offer any arguments to support it.

    • @JohnSmith-he5xg
      @JohnSmith-he5xg 7 months ago +1

      Yeah, I was wondering if I missed something as well... Glad to see I'm not the only one

  • @distrologic2925
    @distrologic2925 1 year ago +1

    yes..? no?

    • @distrologic2925
      @distrologic2925 1 year ago

      I love this dissertation. I think DNNs are fundamentally unusable because humans can't understand the trained algorithm. We need a human readable description of an algorithm that is also trainable.

  • @GerardSans
    @GerardSans 1 month ago

    Why bring human cognition into the discussion? Stop anthropomorphising AI or making weak links between the brain and ML. It’s not serious.