Stanford CS25: V4 I Transformers that Transform Well Enough to Support Near-Shallow Architectures

  • Uploaded: May 22, 2024
  • May 2, 2024
    Speaker: Jake Williams, Drexel University
    Transformers that Transform Well Enough to Support Near-Shallow Architectures
    The talk will discuss various effectiveness-enhancing and cost-cutting augmentations to the language model (LM) learning process, including the derivation and application of non-random parameter initializations for specialized self-attention-based architectures. These are referred to as precision LMs (PLMs), in part for their capability to train both large and small LMs effectively and efficiently. Highlighting their hallmark capability of training with only very limited resources, an introduction to PLMs will be followed by a presentation of a developing application that localizes untrained PLMs on microprocessors to act as hardware-based controllers for small electronic devices. The talk will discuss their utility for training in air-gapped environments and for training progressively larger models on CPUs, and will provide detail on a fully developed control system and its user interface, including recent experiments on the Le Potato single-board computer, where effective inference of user directives occurred after only 20 minutes of lay interaction over a microphone and a light switch.
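    The abstract does not spell out how the non-random initializations are derived. As a loose illustration of the general idea only, and not the PLM construction from the talk, the sketch below seeds a single-head attention layer's query/key projections from corpus co-occurrence statistics via an SVD. The function names, the toy corpus, and the SVD-based derivation are all assumptions made for illustration.

```python
# Hypothetical sketch of a non-random, data-derived initialization for
# self-attention. This is NOT the PLM derivation from the talk; it only
# illustrates seeding attention weights from corpus statistics instead
# of random noise.
import numpy as np

def cooccurrence_matrix(corpus, vocab, window=2):
    """Count how often each vocabulary word appears within `window`
    positions of each other word."""
    idx = {w: i for i, w in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for sent in corpus:
        toks = [idx[w] for w in sent if w in idx]
        for i, wi in enumerate(toks):
            lo, hi = max(0, i - window), min(len(toks), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    C[wi, toks[j]] += 1.0
    return C

def init_attention_from_counts(C, d_model):
    """Derive query/key projections from the top singular vectors of the
    log-smoothed co-occurrence matrix, so that for one-hot token inputs
    the initial score matrix W_q @ W_k.T approximates log1p(C) rather
    than random noise."""
    U, S, Vt = np.linalg.svd(np.log1p(C))
    W_q = U[:, :d_model] * np.sqrt(S[:d_model])     # (V, d_model)
    W_k = Vt[:d_model, :].T * np.sqrt(S[:d_model])  # (V, d_model)
    return W_q, W_k

# Toy corpus echoing the talk's control-system setting (illustrative only).
corpus = [["the", "light", "switch", "turns", "the", "light", "on"],
          ["the", "microphone", "records", "the", "spoken", "command"]]
vocab = sorted({w for s in corpus for w in s})
W_q, W_k = init_attention_from_counts(cooccurrence_matrix(corpus, vocab), d_model=4)
print(W_q.shape, W_k.shape)  # one row of each projection per vocabulary word
```

    The only point of the sketch is that the starting parameters encode distributional information before any gradient step; the actual derivation presented in the talk is, per the abstract, specialized to the architecture.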
    About the speaker:
    Jake Ryland Williams is an Associate Professor of Information Science at Drexel University's College of Computing and Informatics in Philadelphia, Pennsylvania. Dr. Williams has a background in physics and mathematics, with degrees from the University of Vermont, and his research leverages a quantitative-linguistics perspective, applying mathematical and statistical methodology to analyze and improve linguistic learning systems, alongside others that share neural methodology. Following a one-year postdoctoral appointment at the University of California, Berkeley (Cal) studying large-scale machine learning, in 2015 Dr. Williams became a data science (DS) faculty member at Drexel, where he drove the founding of a DS MS program and develops and teaches DS coursework, including courses on natural language processing with deep learning.
    More about the course can be found here: web.stanford.edu/class/cs25/
    View the entire CS25 Transformers United playlist: • Stanford CS25 - Transf...

Comments • 6

  • @IOSALive • 2 months ago • +8

    Stanford Online, This is so fun! I'm happy I found your channel!

  • @chrisavila1969 • 1 month ago

    Watched this 3 times, really helps w/ understanding if you walk thru the math as well. Going to try to build this myself! 🎉

  • @shahriarabid191 • 2 months ago • +1

    Thank you so much. Grateful!

  • @mixshare • 2 months ago • +2

    Great Vid

  • @hamidmojarrad • 1 month ago

    Hello everyone. Is there a talk on the use of transformers in automatic speech recognition (ASR)? I need one that dives into ASR in detail.

  • @not_amanullah • 2 months ago • +2

    ❤🤍