Positional embeddings in transformers EXPLAINED | Demystifying positional encodings.

  • Published 29 Aug 2024

Comments • 219

  • @anonymousanon4822
    @anonymousanon4822 A year ago +30

    I found no explanation for this anywhere, and when reading the paper I missed the detail that each token's positional encoding consists of multiple values (calculated by different sine functions). Your explanation and visual representation finally made me understand! Fourier transforms are genius, and I'm amazed by how many different areas they show up in.

  • @yimingqu2403
    @yimingqu2403 3 years ago +11

    Love how the "Attention Is All You Need" paper appears with an epic-like BGM

    • @AICoffeeBreak
      @AICoffeeBreak  3 years ago +2

      It wasn't on purpose, but it is funny -- in hindsight 😅🤣

  • @444haluk
    @444haluk 3 years ago +13

    This video is a clear explanation of why you shouldn't add your positional encodings but concatenate them.

    • @AICoffeeBreak
      @AICoffeeBreak  3 years ago +6

      Extra dimensions dedicated exclusively to encoding position! Sure, but only if you have some extra to share. 😅

    • @444haluk
      @444haluk 3 years ago +2

      @@AICoffeeBreak This method relocates the embeddings in a specific direction in the embedding space, so that the new position in the relevant embedding cluster has "another" meaning relative to other words of the "same kind" (say there is another instance of the same word later). But that place should be reserved for other semantics, else the space is literally filled with "second position" coffee, "tenth position" me, "third position" good, etc. This can go wrong in soooo many ways. Don't get me wrong, I am a clear-cut "Chinese Room Experiment" guy; I don't think you can translate "he is a good doctor" before imagining an iconic low-resolution male doctor and recalling a memory of satisfaction and admiration of consummatory reward. But again, the "he" in "he did it again" and "man, he did it again" should literally have the same representation in the network to start discussing things.

    • @AICoffeeBreak
      @AICoffeeBreak  3 years ago +7

      You are entirely right. I was short in my comment because I commented on the same issue in Cristian Garcia's comment. But there is no way you would have seen it, so I will copy paste it here: 😅
      "Concatenating has the luxury of extra, exclusive dimensions dedicated to positional encoding with the upside of avoiding mixing up semantic and positional information. The downside is, you can afford those extra dimensions only if you have capacity to spare.
      So adding the positional embeddings to initial vector representations saves some capacity by using it for both semantic and positional information, but with the danger of mixing these up if there is no careful tuning on this (for tuning, think about the division by 10000 in the sine formula in "attention is all you need")."

    • @AICoffeeBreak
      @AICoffeeBreak  3 years ago +6

      And you correctly read between the lines, because this was not explicitly mentioned in the video. In the video I explained what a balancing act it is between semantic and positional information, but you identified the solution: if adding them up causes such trouble, then... let's don't! 😂

    • @blasttrash
      @blasttrash 3 months ago +1

      @@AICoffeeBreak New to AI, but what do you mean by the word "capacity"? Do you mean RAM? Do you mean that if we concat positional encodings to the original vector instead of adding them, it will take up more RAM/memory and therefore make the training process slow?
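To make the add-vs-concat discussion in this thread concrete, here is a minimal NumPy sketch (an editorial illustration, not code from the video; the toy sizes and random stand-in token embeddings are assumptions). It builds the sinusoidal encodings with the 10000 divisor mentioned above and then combines them with the token vectors both ways:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...)."""
    positions = np.arange(seq_len)[:, None]                 # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]                # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                            # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)                            # odd dimensions: cosine
    return pe

seq_len, d_model, d_pos = 6, 16, 4                          # hypothetical toy sizes
tokens = np.random.randn(seq_len, d_model)                  # stand-in for word embeddings

# Option 1: add -- position and semantics share the same d_model dimensions.
added = tokens + sinusoidal_positional_encoding(seq_len, d_model)           # (6, 16)

# Option 2: concatenate -- extra dimensions reserved exclusively for position.
concatenated = np.concatenate(
    [tokens, sinusoidal_positional_encoding(seq_len, d_pos)], axis=-1)      # (6, 20)

print(added.shape, concatenated.shape)
```

Adding keeps the vector size fixed but mixes positional and semantic information into the same dimensions; concatenating keeps them separate at the cost of a wider vector. That wider vector is the "capacity to spare" trade-off described above, so "capacity" in this thread is best read as embedding dimensions rather than RAM.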

  • @sqripter256
    @sqripter256 9 months ago +9

    This is the most intuitive explanation of positional encoding I have come across. Everyone out there explains how to do it, even with code, but not the why, which is more important.
    Keep this up. You have earned my subscription.

  • @deepk889
    @deepk889 3 years ago +5

    I had my morning coffee with this and will make it a habit!

  • @hannesstark5024
    @hannesstark5024 3 years ago +8

    +1 for a video on relative positional representations!

  • @yyyang_
    @yyyang_ A year ago +5

    I've read numerous articles explaining positional embeddings so far.. however, this is surely the greatest & clearest ever

  • @woddenhorse
    @woddenhorse 2 years ago +2

    Multi-Dimensional Spurious Correlation Identifying Beast 🔥🔥
    That's what I am calling transformers from now on

  • @googlable
    @googlable A year ago +2

    Bro
    Where have you been hiding all this time?
    This is next level explaining

  • @yusufani8
    @yusufani8 2 years ago +2

    Probably the clearest explanation of positional encoding :D

  • @adi331
    @adi331 3 years ago +20

    +1 for more vids on positional encodings.

  • @rahulchowdhury3722
    @rahulchowdhury3722 2 years ago +2

    You've got a solid understanding of the mathematics of signal processing

  • @ylazerson
    @ylazerson 2 years ago +2

    Just watched this again for a refresher; the best video out there on the subject!

  • @kryogenica4759
    @kryogenica4759 2 years ago +3

    Make Ms. Coffee Bean spill the beans on positional embeddings for images

  • @full-stackmachinelearning2385

    BEST AI channel on YouTube!!!!!

  • @jayjiyani6641
    @jayjiyani6641 2 years ago +1

    Very intuitive. I knew there was sine-cosine positional encoding, but it was only here that I actually got it.. 👍👍

  • @Phenix66
    @Phenix66 3 years ago +47

    Great stuff :) Would love to see more of that, especially for images or geometry!

  • @maxvell77
    @maxvell77 2 months ago +1

    Most insightful explanation I have found on this subject so far. I was looking for it for days... Thank you! Keep going, you rock!

    • @AICoffeeBreak
      @AICoffeeBreak  2 months ago

      Thanks a lot! Also for the Super Thanks!

  • @ausumnviper
    @ausumnviper 3 years ago +5

    Great explanation!! And yes, yes, yes.

  • @tanmaybhayani
    @tanmaybhayani 3 months ago +1

    Amazing! This is the best explanation of positional encodings, period. Subscribed!!

  • @DeepakKori-vn8zr
    @DeepakKori-vn8zr 2 months ago +1

    OMG, such an amazing video explaining positional embeddings....

  • @magnuspierrau2466
    @magnuspierrau2466 3 years ago +9

    Great explanation of the intuition of positional encodings used in the Transformer!

  • @garisonhayne668
    @garisonhayne668 3 years ago +5

    Dang it, I learned something and my morning coffee isn't even finished.
    It's going to be one of *those* days.

    • @AICoffeeBreak
      @AICoffeeBreak  3 years ago +1

      Sounds like a good day to me! 😅
      Wish you a fruitful day!

  • @ConsistentAsh
    @ConsistentAsh 3 years ago +6

    I was browsing through some channels after first stopping on Sean Cannell's, and I noticed your channel. You've got a great little channel building up here. I decided to drop by and show some support. Keep up the great content and I hope you keep posting :)

    • @AICoffeeBreak
      @AICoffeeBreak  3 years ago +3

      Thanks for passing by and for the comment! I appreciate it!

  • @SyntharaPrime
    @SyntharaPrime A year ago +2

    Great explanation - it might be the best. I think I finally figured it out. I highly appreciate it.

  • @sharepix
    @sharepix 3 years ago +4

    Letitia's Explanation Is All You Need!

  • @20Stephanus
    @20Stephanus 2 years ago +2

    "A multi-dimensional, spurious correlation identifying beast..." ... wow. Douglas Adams would be proud of that.

  • @tonoid117
    @tonoid117 3 years ago +9

    What a great video. I'm doing my Ph.D. in NLU, so this came in very handy. Thank you very much and greetings from Ensenada, Baja California, Mexico :D!

    • @AICoffeeBreak
      @AICoffeeBreak  3 years ago +3

      Thanks, thanks for visiting from so far away! Greetings from Heidelberg, Germany! 👋

  • @elinetshaaf75
    @elinetshaaf75 3 years ago +5

    Great explanation of positional embeddings. Just what I needed.

  • @exoticcoder5365
    @exoticcoder5365 A year ago +1

    The best explanation of how exactly position embeddings work!

  • @gauravchattree5273
    @gauravchattree5273 2 years ago +4

    Amazing content. After seeing this, all the articles and research papers make sense.

  • @mbrochh82
    @mbrochh82 A year ago +3

    This is probably the best explanation of this topic on YouTube! Great work!

  • @MaximoFernandezNunez
    @MaximoFernandezNunez A year ago +1

    I finally understand the positional encoding! Thanks

  • @raoufkeskes7965
    @raoufkeskes7965 6 months ago +2

    The most brilliant positional encoding explanation EVER. That was a GOD-level explanation.

  • @karimedx
    @karimedx 3 years ago +3

    Nice explanation

  • @huonglarne
    @huonglarne 2 years ago +2

    This explanation is incredible

  • @kevon217
    @kevon217 11 months ago +3

    Super intuitive explanation, nice!

  • @helenacots1221
    @helenacots1221 A year ago +6

    Amazing explanation!!! I have been looking for a clear explanation of how the positional encodings actually work and this really helped! Thank you :)

  • @deepshiftlabs
    @deepshiftlabs 2 years ago +2

    Brilliant video. This was the best explanation of positional encodings I have seen. It helped a TON!!!

    • @deepshiftlabs
      @deepshiftlabs 2 years ago

      I also make AI videos. I am more into the image side (convolutions and pooling), so it was great to see more AI educators.

  • @ashish_sinhrajput5173
    @ashish_sinhrajput5173 A year ago +2

    I watched a bunch of videos on positional embeddings, but this video gave me a very clear intuition behind them. Thank you very much for this great video 😊

  • @WhatsAI
    @WhatsAI 3 years ago +7

    Super clear and amazing (as always) explanation of sine and cosine positional embeddings! 🙌

  • @BaronSpartan
    @BaronSpartan 5 months ago +1

    I loved your simple and explicit explanation. You've earned a sub and like!

  • @harshkumaragarwal8326
    @harshkumaragarwal8326 3 years ago +3

    great explanation :)

  • @khursani8
    @khursani8 3 years ago +5

    Thanks for the explanation
    Interested to know about rotary position embeddings

  • @aterribleyoutuber9039
    @aterribleyoutuber9039 7 months ago +1

    This was very intuitive, thank you very much! Needed this, please keep making videos

  • @DerPylz
    @DerPylz 3 years ago +6

    Thanks, as always, for the great explanation!

  • @javiervargas6323
    @javiervargas6323 2 years ago +2

    Thank you. It is one thing to know the formula and apply it, and another thing to understand the intuition behind it. You made it very clear. All the best

    • @AICoffeeBreak
      @AICoffeeBreak  2 years ago +1

      Well said! -- Humbled to realize this was put in context with our video, thanks.
      Thanks for watching!

  • @shamimibneshahid706
    @shamimibneshahid706 3 years ago +5

    I feel lucky to have found your channel. Simply amazing ❤️

  • @aasthashukla7423
    @aasthashukla7423 9 months ago +1

    Thanks Letitia, great explanation

  • @markryan2475
    @markryan2475 3 years ago +5

    Great explanation - thanks very much for sharing this.

  • @ugurkap
    @ugurkap 3 years ago +6

    Explained really well, thank you 😊

  • @PenguinMaths
    @PenguinMaths 3 years ago +6

    This is a great video! Just found your channel and glad I did, instantly subscribed :)

  • @bartlomiejkubica1781
    @bartlomiejkubica1781 7 months ago +1

    Great! It took me forever before I found your videos, but finally I understand it. Thank you soooo much!

  • @oleschmitter55
    @oleschmitter55 10 months ago +2

    So helpful! Thank you a lot!

  • @clementmichaud724
    @clementmichaud724 A year ago +1

    Very well explained! Thank you so much!

  • @jfliu730
    @jfliu730 A year ago

    Best video about positional embeddings I have ever seen

  • @timoose3960
    @timoose3960 3 years ago +4

    This was so insightful!

  • @nicohambauer
    @nicohambauer 3 years ago +6

    Sooo good!

  • @matt96920
    @matt96920 A year ago +4

    Excellent! Great work!

  • @ColorfullHD
    @ColorfullHD 4 months ago +1

    Lifesaver! Thank you for the explanation.

  • @hedgehog1962
    @hedgehog1962 A year ago +2

    Really, thank you! Your video is just amazing!

  • @amirhosseinramazani757
    @amirhosseinramazani757 2 years ago +3

    Your explanation was great! I got everything I wanted to know about positional embeddings. Thank you :)

  • @subusrable
    @subusrable 23 days ago +1

    This video is a gem. Thanks!

  • @andyandurkar7814
    @andyandurkar7814 A year ago +2

    Just an amazing explanation ...

  • @saurabhramteke8511
    @saurabhramteke8511 2 years ago +2

    Hey, great explanation :). Would love to see more videos.

  • @jayktharwani9822
    @jayktharwani9822 A year ago +1

    Great explanation. Really loved it. Thank you

  • @Galinator9000
    @Galinator9000 2 years ago +2

    These videos are priceless, thank you!

  • @jayk253
    @jayk253 A year ago +1

    Amazing explanation! Thank you so much!

  • @bdennyw1
    @bdennyw1 3 years ago +5

    Nice explanation! I’d love to hear more about multidimensional and learned position encodings

  • @erikgoldman
    @erikgoldman A year ago +2

    this helped me so much!! thank you!!!

  • @adeepak7
    @adeepak7 6 months ago +1

    Very good explanation!! Thanks for this 🙏🙏

  • @CristianGarcia
    @CristianGarcia 3 years ago +9

    Thanks Letitia! A vid on relative positional embeddings would be nice 😃
    Implementations seem a bit involved, so I've never used them in my toy examples.

    • @CristianGarcia
      @CristianGarcia 3 years ago +2

      Regarding this topic, I've seen positional embeddings sometimes being added and sometimes being concatenated with no real justification for either 😐

    • @AICoffeeBreak
      @AICoffeeBreak  3 years ago +4

      Concatenating has the luxury of extra, exclusive dimensions dedicated to positional encoding with the upside of avoiding mixing up semantic and positional information. The downside is, you can have those extra dimensions only if you have capacity to spare.
      So adding the positional embeddings to initial vector representations saves some capacity by using it for both semantic and positional information with the danger of mixing these up if there is no careful tuning on this (for tuning, think about the division by 10000 in the sine formula in "attention is all you need").

  • @montgomerygole6703
    @montgomerygole6703 A year ago +1

    Wow, thanks so much! This is so well explained!!

  • @alphabetadministrator
    @alphabetadministrator 4 months ago +1

    Hi Letitia. Thank you so much for your wonderful video! Your explanations are more intuitive than almost anything else I've seen on the internet. Could you also do a video on how positional encoding works for images, specifically? I assume they are different from text because images do not have the sequential pattern text data have. Thanks!

    • @AICoffeeBreak
      @AICoffeeBreak  4 months ago +1

      Thanks for the suggestion. I do not think I will get to this in the next few months. But the idea of image position embeddings is that those representations are most often learned. The gist of it is to divide the image into patches, let's say 9, and then to number them from 1 to 9 (from the top-left to the bottom-right). Then let gradient descent learn better representations of these addresses.
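A minimal sketch of what this reply describes, assuming PyTorch and toy sizes (the patch extraction and projection are replaced by random stand-ins): one learnable embedding vector per patch index, added to the patch embeddings and trained with the rest of the model by gradient descent.

```python
import torch
import torch.nn as nn

num_patches, d_model = 9, 16                               # e.g. a 3x3 grid of patches
patch_embeddings = torch.randn(1, num_patches, d_model)    # stand-in for projected patches

# One learnable vector per patch index (0..8, top-left to bottom-right),
# updated by gradient descent together with the rest of the model.
position_embedding = nn.Embedding(num_patches, d_model)
positions = torch.arange(num_patches).unsqueeze(0)         # [[0, 1, ..., 8]]

x = patch_embeddings + position_embedding(positions)       # add, just as for text tokens
print(x.shape)                                             # torch.Size([1, 9, 16])
```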

  • @user-fg4pr4ct6g
    @user-fg4pr4ct6g A year ago +1

    Thanks, your videos helped the most

  • @ai_station_fa
    @ai_station_fa 2 years ago +3

    Awesome. Thank you for making this great explanation. I highly appreciate it.

  • @EpicGamer-ux1tu
    @EpicGamer-ux1tu 2 years ago +2

    Great video, many thanks!

  • @anirudhthatipelli8765
    @anirudhthatipelli8765 A year ago +1

    Thanks, this was so clear! Finally understood position embeddings!

  • @gemini_537
    @gemini_537 4 months ago +1

    Gemini: This video is about positional embeddings in transformers.
    The video starts with an explanation of why positional embeddings are important. Transformers are a type of neural network that has become very popular for machine learning tasks, especially when there is a lot of data to train on. However, transformers do not process information in the order that it is given. This can be a problem for tasks where the order of the data is important, such as language translation. Positional embeddings are a way of adding information about the order of the data to the transformer.
    The video then goes on to explain how positional embeddings work. Positional embeddings are vectors that are added to the input vectors of the transformer. These vectors encode the position of each element in the sequence. The way that positional embeddings are created is important. The embeddings need to be unique for each position, but they also need to be small enough that they do not overwhelm the signal from the original data.
    The video concludes by discussing some of the different ways that positional embeddings can be created. The most common way is to use sine and cosine functions. These functions can be used to create embeddings that are both unique and small. The video also mentions that there are other ways to create positional embeddings, and that these methods may be more appropriate for some types of data.
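For reference, the sine and cosine construction this summary points to is the one from "Attention Is All You Need" (the division by 10000 is the tuning knob discussed earlier in the thread):

```latex
PE_{(pos,\,2i)}   = \sin\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right), \qquad
PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right)
```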

  • @richbowering3350
    @richbowering3350 A year ago

    Best explanation I've seen - good work!

  • @aloksharma4611
    @aloksharma4611 A year ago +1

    Excellent explanation. Would certainly like to learn about other encodings in areas like image processing.

  • @nitinkumarmittal4369
    @nitinkumarmittal4369 7 months ago +1

    Loved your explanation, thank you for this video!

  • @sborkes
    @sborkes 3 years ago +3

    I really enjoy your videos 😄!
    I would like a video about using transformers with time-series data.

  • @yonahcitron226
    @yonahcitron226 A year ago +3

    Amazing stuff! So clear and intuitive, exactly what I was looking for :)

  • @antoniomajdandzic8462
    @antoniomajdandzic8462 2 years ago +2

    love your explanations !!!

  • @arishali9248
    @arishali9248 A year ago +1

    Beautiful explanation

  • @justinwhite2725
    @justinwhite2725 3 years ago +5

    In another video I've seen, apparently it doesn't matter whether positional embeddings are learned or static. It seems as though the rest of the model makes accurate deductions regardless.
    This is why I was not surprised that Fourier transforms seem to work nearly as well as self-attention.

    • @meechos
      @meechos 2 years ago

      Could you please elaborate, maybe using an example?

  • @user-gk3ue1he4d
    @user-gk3ue1he4d A year ago +1

    Great work! Clear and deep explanation!

  • @ravindrasharma85
    @ravindrasharma85 2 months ago +1

    Excellent explanation!

  • @bingochipspass08
    @bingochipspass08 6 months ago +1

    What a lovely explanation & video!.. Thank you!

    • @AICoffeeBreak
      @AICoffeeBreak  6 months ago +2

      Glad you enjoyed it! Thanks for the visit and leaving a comment.

    • @bingochipspass08
      @bingochipspass08 6 months ago +1

      @@AICoffeeBreak Thank you again!.. subscribed!!

    • @AICoffeeBreak
      @AICoffeeBreak  6 months ago +2

      @@bingochipspass08 Oh, great, then I'll see you on future videos as well.

  • @pypypy4228
    @pypypy4228 4 months ago +1

    This was awesome! I don't have a complete understanding yet, but it definitely pushed me closer to understanding. Did you make a video about relative positions?

    • @AICoffeeBreak
      @AICoffeeBreak  4 months ago +2

      Yes, I did! czcams.com/video/DwaBQbqh5aE/video.html

  • @conne637
    @conne637 3 years ago +2

    Great content! Can you do a video about Tabnet please? :)

  • @noorhassanwazir8133
    @noorhassanwazir8133 2 years ago +2

    Nice, madam... what a video!... Outstanding

  • @Cross-ai
    @Cross-ai 7 months ago +2

    This is the best and most intuitive explanation of positional embeddings. THANK YOU so much for this video. BTW: what software did you use to create these lovely animations?

    • @AICoffeeBreak
      @AICoffeeBreak  7 months ago +2

      Thanks, glad you like it! For everything but Ms. Coffee Bean, I use good old PowerPoint (morph and redraw functionality FTW). The rest is Adobe Premiere or kdenlive (video editing software).

  • @klammer75
    @klammer75 A year ago +1

    This is an amazing explanation! Tku!!!🤓🥳🤩

  • @gopikrish999
    @gopikrish999 3 years ago +4

    Thank you for the explanation! Can you please make a video on the positional information in Gated Positional Self-Attention in the ConViT paper?

  • @user-ru4nb8tk6f
    @user-ru4nb8tk6f 11 months ago +1

    so helpful, appreciate it!

  • @omniscienceisdead8837
    @omniscienceisdead8837 2 years ago +2

    you are a genius!!

  • @machinelearning5964
    @machinelearning5964 A year ago +1

    Cool explanation

  • @zhangkin7896
    @zhangkin7896 2 years ago +2

    Really great!