Transformer Encoder vs LSTM Comparison for Simple Sequence (Protein) Classification Problem

  • Added 23. 06. 2024
  • The purpose of this video is to highlight results comparing a single Transformer Encoder layer to a single LSTM layer on a very simple problem. Several Natural Language Processing texts describe the power of the LSTM as well as the advanced sequence-processing capabilities of Self-Attention and the Transformer; this video offers simple experimental results in support of those notions. A minimal sketch of the two model setups appears below the credits.
    Previous Video:
    • A Very Simple Transfor...
    Code:
    github.com/BrandenKeck/pytorc...
    Interesting Post:
    ai.stackexchange.com/question...
    Music Credits:
    Breakfast in Paris by Alex-Productions | onsound.eu/
    Music promoted by www.free-stock-music.com
    Creative Commons / Attribution 3.0 Unported License (CC BY 3.0)
    creativecommons.org/licenses/...
    Small Town Girl by | e s c p | www.escp.space
    escp-music.bandcamp.com

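The linked repository holds the full experiment, and its URL is truncated above, so the following is only a minimal sketch, assuming a token-level classification setup with a small vocabulary (amino-acid-style tokens), of how a single nn.TransformerEncoderLayer and a single nn.LSTM layer can be wired into comparable PyTorch classifiers. The vocabulary size, embedding size, mean-pooling, and last-hidden-state choices are illustrative assumptions, not the video's actual settings.

```python
# Minimal, illustrative sketch (not the repo code): a single Transformer
# Encoder layer vs. a single LSTM layer, each with a linear classification
# head. All sizes below are assumptions chosen for illustration only.
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, NUM_CLASSES, SEQ_LEN, BATCH = 25, 64, 2, 100, 8

class TransformerClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.encoder = nn.TransformerEncoderLayer(
            d_model=EMBED_DIM, nhead=4, batch_first=True)
        self.head = nn.Linear(EMBED_DIM, NUM_CLASSES)

    def forward(self, x):                    # x: (batch, seq_len) token ids
        h = self.encoder(self.embed(x))      # (batch, seq_len, embed_dim)
        return self.head(h.mean(dim=1))      # mean-pool over the sequence

class LSTMClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, EMBED_DIM, batch_first=True)
        self.head = nn.Linear(EMBED_DIM, NUM_CLASSES)

    def forward(self, x):
        out, _ = self.lstm(self.embed(x))    # out: (batch, seq_len, embed_dim)
        return self.head(out[:, -1, :])      # classify from the last hidden state

# Quick shape check on random "protein-like" token sequences.
tokens = torch.randint(0, VOCAB_SIZE, (BATCH, SEQ_LEN))
print(TransformerClassifier()(tokens).shape)  # torch.Size([8, 2])
print(LSTMClassifier()(tokens).shape)         # torch.Size([8, 2])
```
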
Comments • 3

  • @Pancake-lj6wm, 9 days ago

    Zamm!

  • @LeoDaLionEdits, 9 days ago

    I never knew that transformers were that much more time efficient at large embedding sizes

    • @lets_learn_transformers, 9 days ago, +1

      Hey @LeoDaLionEdits - I'm very interested in ideas like these. I unfortunately lost my link to the paper, but there was an interesting arXiv article on why XGBoost still dominates Kaggle competitions compared to deep neural networks. Depending on the problem, I think an RNN / LSTM may often be competitive in the same way: the simpler, tried-and-true model winning out. On the performance side, this book covers the transformer's parallel-processing advantage in sections 10.1 (intro) and 10.1.4 (parallelizing self-attention): web.stanford.edu/~jurafsky/slp3/ed3book.pdf
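
To make the parallelism point concrete, here is a rough timing sketch (an illustration, not the video's benchmark code): it times forward passes through nn.LSTM and nn.TransformerEncoderLayer at a few embedding sizes. Absolute numbers depend entirely on hardware; the point is only that the LSTM's step-by-step recurrence tends to scale worse than a self-attention layer, which processes all positions at once.

```python
# Rough, machine-dependent timing sketch: the LSTM must walk the sequence one
# position at a time, while the Transformer encoder layer attends to all
# positions in parallel, so its cost grows more gracefully in practice.
import time
import torch
import torch.nn as nn

SEQ_LEN, BATCH = 256, 32

def time_forward(module, x, n_runs=20):
    """Average forward-pass wall time over n_runs (after one warm-up run)."""
    with torch.no_grad():
        module(x)                          # warm-up run
        start = time.perf_counter()
        for _ in range(n_runs):
            module(x)
        return (time.perf_counter() - start) / n_runs

for embed_dim in (32, 128, 512):
    x = torch.randn(BATCH, SEQ_LEN, embed_dim)
    lstm = nn.LSTM(embed_dim, embed_dim, batch_first=True)
    enc = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
    print(f"embed_dim={embed_dim}: "
          f"LSTM {time_forward(lstm, x) * 1e3:.1f} ms, "
          f"Transformer {time_forward(enc, x) * 1e3:.1f} ms")
```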