LambdaNetworks: Modeling long-range Interactions without Attention (Paper Explained)

  • Added 29. 08. 2024

Comments • 80

  • @lucidraisin
    @lucidraisin 3 years ago +51

    Lol, was not expecting a shoutout at the end :D Thanks for another great video!

    • @linminhtoo
      @linminhtoo 3 years ago +1

      good job! I really like the use of einsum.

    • @playfuladventurer
      @playfuladventurer 3 years ago +1

      thanks for the code! have you tried reproducing some of the results?

  • @charlesfoster6326
    @charlesfoster6326 3 years ago +85

    In a nutshell, my understanding is that the Lambda Layer works using a similar rearranging trick as in "Transformers are RNNs". Instead of doing attention over positions (i.e. NxN), it ends up doing attention over features (i.e. DxK). That's why it isn't O(N^2). (See the sketch at the end of this thread.)

    • @charlesfoster6326
      @charlesfoster6326 3 years ago +17

      This is also why you need to change the positional encoding strategy to use a separate path. Otherwise it will be difficult for the network to properly route info based on positional information.

    • @3145mimosa
      @3145mimosa 3 years ago

      This is an excellent insight. Thank you!
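
A minimal, single-head sketch of the rearrangement described in this thread (variable names and sizes are illustrative, not taken from the paper's code): the keys are normalized over the context positions and contracted with the values first, so the n x m attention map is never materialized.

```python
import torch

n, m, d, k, v = 64, 64, 32, 16, 32        # n queries, m context positions

x = torch.randn(n, d)                      # inputs
c = torch.randn(m, d)                      # context (often c = x)

Wq, Wk, Wv = torch.randn(d, k), torch.randn(d, k), torch.randn(d, v)

q = x @ Wq                                 # (n, k) queries
K = c @ Wk                                 # (m, k) keys
V = c @ Wv                                 # (m, v) values

# Standard attention would materialize an (n, m) map: softmax(q @ K.T) @ V.
# The lambda layer normalizes the keys over the m positions and contracts
# them with the values first, giving a (k, v) "content lambda" shared by
# all queries.
lam_c = torch.einsum('mk,mv->kv', K.softmax(dim=0), V)   # (k, v)

y = torch.einsum('nk,kv->nv', q, lam_c)                  # (n, v) outputs

# The positional lambdas add a per-query (k, v) term built from learned
# position embeddings; they are omitted in this sketch.
```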

  • @nikitadiaconunichita
    @nikitadiaconunichita 3 years ago +11

    "Attention mechanism extremely briefly, extremely briefly" 🤣😅🤣😅 I guess it's for the people watching your videos for the first time. Love your content

  • @scottmiller2591
    @scottmiller2591 3 years ago +1

    Thanks Yannic, you made my day.

  • @01FNG
    @01FNG 3 years ago +46

    It feels like the stage is set for a more general theory that can unify all of these ideas into one.

    • @AvastarBin
      @AvastarBin 3 years ago +5

      So much! It really feels like we're going around in circles, but we're getting closer to the right answer!

  • @maxkleinebrahm2174
    @maxkleinebrahm2174 3 years ago +8

    Great channel!!! It would be nice to see a review paper/video comparing all the longformers, sparse transformers, linear transformers, linformers, reformers, performers, lambdanetworks, ...

  • @yuangwang7772
    @yuangwang7772 3 years ago +7

    I was just thinking about the next Transformer Yannic would cover and here it comes!

  • @hitomihilbert5359
    @hitomihilbert5359 3 years ago +2

    I think the most important thing is in Appendix C : "LambdaNetworks can alternatively be viewed as an extension of HyperNetworks (Ha et al., 2016) that dynamically compute their computations based on the inputs contexts."
    It may be much easier to understand the paper from this perspective XD
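
A toy sketch of that hypernetwork reading (the names and sizes below are illustrative, not from the paper): a small generator maps a conditioning vector to the weights of a linear layer, and the generated weights are then applied to the input. The content lambda follows the same pattern, with the context playing the role of the conditioning signal and the keys/values contraction playing the role of the weight generator.

```python
import torch
import torch.nn as nn

d_in, d_out, d_cond = 32, 32, 16

# Generator network: produces the weights of a linear layer from a condition.
weight_generator = nn.Linear(d_cond, d_in * d_out)

def hyper_layer(x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
    W = weight_generator(cond).view(d_in, d_out)   # weights are computed, not stored
    return x @ W

x = torch.randn(8, d_in)
cond = torch.randn(d_cond)      # e.g. a summary of the context
y = hyper_layer(x, cond)        # (8, d_out)
```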

  • @Navhkrin
    @Navhkrin 3 years ago +3

    Another day, another great paper explanation.

  • @Cl0udn1n3
    @Cl0udn1n3 a year ago

    “This is Quick Ben’s game, O Elder. The bones are in his sweaty hands and they have been for some time. Now, if at his table you’ll find the Worm of Autumn, and the once Lord of Death, and Shadowthrone and Cotillion, not to mention the past players Anomander Rake and Dessembrae, and who knows who else, well - did you really believe a few thousand damned Nah’ruk could take him down? The thing about Adaephon Delat’s game is this: he cheats.” To give the turtles some prime reading material..

  • @drhilm
    @drhilm 3 years ago +5

    I was in the middle of asking Yannic to do this paper, and before I even finished - voilà!

  • @FREELEARNING
    @FREELEARNING 3 years ago +6

    Interesting explanation. Looking forward to Yannic Kilcher V2, i.e. a code explainer: maybe you could add 10 to 15 minutes at the end to explain the original paper's code. That would be very helpful. Thank you for the great effort.

    • @linminhtoo
      @linminhtoo 3 years ago +1

      this would be super helpful, especially to budding researchers!

  • @nicolascarrara4890
    @nicolascarrara4890 3 years ago +2

    Please keep up the nice work!

  • @nakshatrasingh8204
    @nakshatrasingh8204 3 years ago +10

    A video on an implementation of Performer, the linear-attention Transformer variant with the Fast Attention Via positive Orthogonal Random features (FAVOR+) approach????????????

  • @drukeri2
    @drukeri2 3 years ago +5

    23:30 - The authors fixed this mistake, probably thanks to you, Yannic (:

  • @granttao7504
    @granttao7504 3 years ago +1

    Sir, you are wrong: lambda is not the direct result of a matrix multiplication of K and V transpose. For each context element you get a k x d matrix (in your notation) by multiplying the transpose of a row of K with a row of V; adding those m matrices together, you get the lambda.
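
For what it's worth, the two descriptions coincide: summing the m outer products of (normalized) key rows with value rows is exactly the matrix product of the transposed keys with the values. A quick numerical check (shapes are illustrative):

```python
import torch

m, k, v = 10, 4, 6
K = torch.randn(m, k).softmax(dim=0)    # normalized keys, one row per context element
V = torch.randn(m, v)                   # values, one row per context element

sum_of_outer_products = sum(torch.outer(K[i], V[i]) for i in range(m))   # (k, v)
matrix_product = K.t() @ V                                               # (k, v)

assert torch.allclose(sum_of_outer_products, matrix_product, atol=1e-6)
```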

  • @kajalsinha2468
    @kajalsinha2468 3 years ago +6

    I am here after 40 seconds of upload

  • @anuragmalyala4863
    @anuragmalyala4863 3 years ago +5

    noob question: which app are you using for the paper annotations?

  • @TheNuttyNetterAlexLamb
    @TheNuttyNetterAlexLamb 3 years ago +6

    Why do you keep going on about the "double-blind reviewing" thing? In ML right now, double blind reviewing gives the author an opt-in ability to protect their identity. Moreover, the author is not clearly listed on the page, so a reviewer won't know it by default. Reviewers and readers still have the option to search for the name of the paper and find it if it's on arxiv, where authors have the right to post it.
    I think this system is a good compromise, since it gives a pretty good amount of anonymity, especially for those who want the anonymity, and it doesn't restrict the authors much.

  • @elipersky1591
    @elipersky1591 3 years ago +7

    I know you said to ignore it, but what does the intra-depth hyperparameter actually mean?

    • @user-ks5tx6wk5j
      @user-ks5tx6wk5j 3 years ago

      I think the intra-depth is an intermediate dimension in the query; perhaps it can be understood as the weight of each key relative to the value as it acts on the information of the context element.

  • @CristianGarcia
    @CristianGarcia 3 years ago +7

    This architecture has the downside of a fixed sequence length due to the learned positional embeddings; being independent of the sequence length is a nice property of MHA in the original Transformer. (See the sketch at the end of this thread.)

    • @Neural_Causality
      @Neural_Causality 3 years ago +2

      just in case:
      MHA = Multi-Headed Attention

    • @Atlantis357
      @Atlantis357 3 years ago

      As long as the pictures are the same resolution, that shouldn't be a big problem, right?

    • @CristianGarcia
      @CristianGarcia 3 years ago

      @@Atlantis357 Yeah, for images it shouldn't matter much since you can always resize. But for text you don't have that luxury.

    • @PrasenjeetRoyMPAI
      @PrasenjeetRoyMPAI 3 years ago

      @@CristianGarcia We can do padding in the case of text. Please correct me if I am wrong.
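
A small sketch of the constraint discussed in this thread (shapes are illustrative): the content lambda contracts over the context and therefore accepts any sequence length, while the positional lambdas rely on a learned embedding tensor whose shape is fixed when the model is built.

```python
import torch

k, v = 16, 32
n_train = 64                                    # sequence length fixed at init

E = torch.randn(n_train, n_train, k)            # learned position embeddings (n, m, k)

def positional_lambdas(values: torch.Tensor) -> torch.Tensor:
    # one (k, v) lambda per query position; values must have n_train rows
    return torch.einsum('nmk,mv->nkv', E, values)

lam_p = positional_lambdas(torch.randn(n_train, v))    # works: (n_train, k, v)
# positional_lambdas(torch.randn(2 * n_train, v))      # would fail: E is tied to n_train
```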

  • @pierregutierrez4332
    @pierregutierrez4332 3 years ago +2

    I may be mistaken, but the speed-accuracy chart they show is unclear. Are we talking about inference speed (I'm really unsure, the paper is not clear)? If so, how come the baseline ResNet and ResNet+SE seem better than EfficientNet (all appear on the top left of the curve, which contradicts the EfficientNet paper)? Could it be because of the use of a bag of tricks during training (e.g. data augmentation)? In that case, the performance cannot be claimed to come from the architecture used. It also seems to contradict Table 6, where the amount of FLOPs is only marginally reduced for comparable accuracy. As a result, the 4.5x speedup they claim seems a bit misleading.

  • @weizhu2230
    @weizhu2230 3 years ago +1

    this is pretty similar to diff pooling in gnn, where we just get an indicator matrix through some blackbox transformation.

  • @adizhol
    @adizhol 3 years ago +3

    So the lambda function is basically like the Embedding matrix E for text sequences? They learn embeddings of patches/pixels?

  • @gabby.suwichaya
    @gabby.suwichaya 3 years ago +1

    Hi, I am quite new to the basic transformer, and there seem to be many new transformers recently. Could anyone please share a fundamental video on attention? I am interested to see where it begins...

  • @SystemsMedicine
    @SystemsMedicine 3 years ago +1

    Hi Yannic. Loading a large RAM with a 40Kx40K matrix is certainly possible, even on modern pseudo-home PCs. I know it sounds ridiculous, but consider: a 4-stick DDR4 RAM kit totalling 512 gigabytes is about US$3500. A Supermicro motherboard may have 16 DDR4 memory slots. This means that for about US$14000, one may have 2 terabytes of RAM in a "home" computer. This is certainly enough to contain the matrices in question, and fast enough to do the operations. Some very special programming may be required, but if the task were very easy, it would have been done a long time ago. (Or wait 2 or 3 years, and the price will drop in half.)
    If you don't want to pay so much for memory, you might page the matrices in and out of disk. A 16 TB Red drive is about US$350. If you RAID 4 of these together, you will have something like 64 terabytes of disk space to work with. Your giant multiplies will take time, but so what?
    If you have access to a modern supercomputer, you may just be able to write something like Matlab code to do the job, though I don't know if Matlab is set up for such large matrices, or if you will have to get someone to recompile the Matlab kernel (a tough thing).
    In any event, it might be worth attempting to analyze some images with this direct, very brute-force method. Sometimes brute force is fun. And the sheer exercise of it might be valuable for the right student or researcher.
    Btw, a Fourier transform, or a Laplace transform, or some such thing, applied to critical parts of the algorithm may make the whole thing a LOT more tractable; although you would have to work for a while to show this. Cheers. Also: cool channel, dude.
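
As a rough back-of-the-envelope check of the sizes involved (float32, with head/layer counts that are assumptions for illustration, not numbers from the paper or the video): a single 40K x 40K attention map is already about 6.4 GB, and full attention needs one per head, per layer and per batch element, plus copies kept around for the backward pass.

```python
n = 40_000
bytes_per_float = 4                       # float32

one_map_gb = n * n * bytes_per_float / 1e9
print(f"one {n} x {n} attention map: {one_map_gb:.1f} GB")        # ~6.4 GB

heads, layers, batch = 8, 12, 1           # assumed, for illustration only
total_gb = one_map_gb * heads * layers * batch
print(f"x {heads} heads x {layers} layers: {total_gb:.0f} GB")    # ~614 GB, before the backward pass
```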

  • @that_guy4690
    @that_guy4690 3 years ago +1

    27:36 I guess there should be a matrix of shape k x v there, instead of a scalar.

  • @yaaank6725
    @yaaank6725 3 years ago

    A bunch of ideas jumped into my head for applying this in vision, since the dimension problem can be solved. Time to hoard up papers, bois!

  • @austinmw89
    @austinmw89 3 years ago +1

    Hey, what about the difference between local attention and deformable convolutions (DCN)?

  • @lone0017
    @lone0017 3 years ago

    Hey, I would like to know what app you are using to annotate the PDF. Thank you.

  • @AllTheFishAreDead
    @AllTheFishAreDead 3 years ago +3

    Seems a lot like it's replacing a pooling layer with a learnable fn

  • @deiviuds
    @deiviuds 3 years ago +1

    Which software do you use to annotate these PDFs?

  • @moustafa_shomer
    @moustafa_shomer 3 years ago +1

    Did you make a video about EfficientNets?
    I can't seem to find it.

  • @BrainSlugs83
    @BrainSlugs83 2 years ago

    RE: "You can't just wait longer and have more memory..."
    Uhhh... this is exactly what swap files were invented for... -- the issue is that the software wants the entire transformer network in memory all at once... which is just a silly limitation of the software.

  • @cycman98
    @cycman98 3 years ago +1

    What is this program that you're using to annotate pdfs?

  • @gangmuklim9308
    @gangmuklim9308 3 years ago +1

    Hello, Yannic. Thank you for your great review videos! I am currently doing a Master's degree and want to apply for a PhD at ETH Zurich, once everything goes fine. However, I have almost no information about what PhD life is like at ETH. Would you mind if I asked some questions about the PhD program at ETH via email? (Actually, I couldn't find your email address on the web.) Thanks.

  • @sohaibattaiki9579
    @sohaibattaiki9579 3 years ago

    Hi,
    Thank you for the great videos.
    I have a question: what is the name of the software you are using to read and annotate the paper?

    • @tresuvesdobles
      @tresuvesdobles 3 years ago +1

      It is OneNote, most likely

    • @Neural_Causality
      @Neural_Causality 3 years ago

      He did a video on the tools (including all the software) he uses, and another on how he reads papers, you can find those on his channel

    • @sohaibattaiki9579
      @sohaibattaiki9579 3 years ago

      Thank you for your responses. @daniel, do you know what the name of the video is, please?

    • @Neural_Causality
      @Neural_Causality 3 years ago

      @@sohaibattaiki9579 czcams.com/video/H3Bhlan0mE0/video.html&ab_channel=YannicKilcher
      czcams.com/video/Uumd2zOOz60/video.html&ab_channel=YannicKilcher

  • @Lazauya
    @Lazauya 2 years ago

    Why do so many papers not have these nice diagrams for how all their variables interact with each other, like you outlined here? It feels almost intentionally obtuse.

  • @herp_derpingson
    @herp_derpingson 3 years ago +1

    33:40 If the keys are "fixed", doesn't that make it equivalent to a convolutional kernel?
    Quite a large number of papers try to get rid of the quadratic attention, but I strongly believe that there is some no-free-lunch effect going on. You actually need the bandwidth of a quadratic attention so that enough information can be backpropagated.

    • @veedrac
      @veedrac 3 years ago +1

      Performers claim to be a provably accurate approximation, so idk about that.

    • @charlesfoster6326
      @charlesfoster6326 3 years ago +2

      Fair, but if the pattern you're looking for is relatively low-frequency (i.e. big), bandwidth may not be a problem, since you already need to throw out high-frequency details.

    • @YannicKilcher
      @YannicKilcher  3 years ago

      yea that's a reasonable claim, maybe not the convolution we know, but kind of

  • @MrJaggy123
    @MrJaggy123 3 years ago +3

    Tldr; I didn't completely hate this paper because I didn't completely understand it 😉

    • @MrJaggy123
      @MrJaggy123 3 years ago

      Before assuming I'm throwing shade, go to 01:48 in the video 😛

    • @VaclavKosar
      @VaclavKosar 3 years ago +1

      Here are the simplified equations from the paper: vaclavkosar.com/ml/Lamda-Networks-Transform-Self-Attention

  • @JurekOK
    @JurekOK 3 years ago

    nice :-)

  • @rgarthwood3881
    @rgarthwood3881 3 years ago

    Thanks for the videos. Yannic, can you play this back to yourself on high volume? You'll notice that your swallowing is **extremely** loud. This is because your mic is likely right next to your mouth - maybe back up a foot or two? Again, love your posts but they're really hard to listen to sometimes.

  • @yoloswaggins2161
    @yoloswaggins2161 3 years ago +1

    No offense to the first plot but who cares about latency in training

  • @dropoutjeep4193
    @dropoutjeep4193 3 years ago

    Can I join your discord?