Vision Transformer Basics

  • Added Aug 29, 2024

Comments • 39

  • @rldp
    @rldp 8 months ago +34

    This is one of the best explanations of not just ViT, but transformers in general that I have watched. Excellent video

  • @whale27
    @whale27 8 months ago +21

    Unbelievable quality. Happy to be here before this channel blows up.

  • @srinjoy.bhuiya
    @srinjoy.bhuiya 2 months ago +4

    One of the greatest explanations of the concepts of transformers for a Computer Vision researcher.

  • @capsbr2100
    @capsbr2100 5 months ago +7

    Goodness, what a remarkable video. This is by far the best explanation video I have watched about vision transformers.

  • @thetechnocrack
    @thetechnocrack 6 months ago +5

    This is one of the cleanest explanations of ViTs I have come across. Amazing work, Samuel! Inspiring.

  • @continuallearning8366
    @continuallearning8366 9 months ago +5

    Excellent video! Honored to be here before it goes viral 🙏🏾

  • @user-iy6gq8yd3p
    @user-iy6gq8yd3p 8 months ago +5

    Thank you for making this wonderful video. So clear! Please continue your awesome video work!

  • @jesusalpaca7170
    @jesusalpaca7170 5 months ago +1

    For a beginner like me, I would say this is the introductory video we were waiting for :')

  • @ShravanKumar147
    @ShravanKumar147 1 month ago

    Beautifully put together. Keep it going @Sam

  • @PotatoKaboom
    @PotatoKaboom 9 months ago +4

    I've given guest lectures on the inner workings of transformers myself, but I still learned a bunch from this! Everything after 22:15 was very exciting to watch, very well presented and easy to understand! Very well done, I subscribed for more :)

  • @abhimanyuyadav2685
    @abhimanyuyadav2685 8 months ago +2

    Your weekly AI news was really useful.
    Please bring it back!

  • @piclkesthedrummer6439
    @piclkesthedrummer6439 3 months ago

    This is by far one of the most accurate yet understandable and intuitive explanations of such a hard concept; you did a better job of explaining it than the authors! Very impressive!

  • @thecheekychinaman6713
    @thecheekychinaman6713 6 months ago

    I was studying up on Transformers and ViTs half a year ago, and recently checked back to find this (to my surprise). Great, clear explanations; you can tell CAML is in great hands!

  • @aminkarimi1068
    @aminkarimi1068 3 months ago

    The best video to easily understand ViT.

  • @amoghjain
    @amoghjain 8 months ago +2

    Thank you so very much for sharing your insights and intuition behind soooo many concepts.

  • @mattsong6875
    @mattsong6875 9 months ago +2

    Thanks for such an informative and educational video.

  • @vil9386
    @vil9386 6 months ago

    Wow, this video helped me a lot in understanding attention and ViT. Packed with all the logic needed to design a solution using the latest techniques as of today.

  • @user-fv5oj4qk1l
    @user-fv5oj4qk1l 8 months ago +2

    🎯 Key Takeaways for quick navigation:
    00:00 🧠 *The Evolution of AI and Computer Vision*
    - General methods leveraging computation prove most effective in AI development.
    - Evolution from handcrafted features to Convolutional Neural Networks (CNNs) and then to Transformers, showcasing a reduction in inductive biases and an increase in data-driven approaches.
    01:09 🤖 *Neural Network Architectures*
    - Importance of network architecture in building intelligent machines.
    - Distinction between network architecture and network parameters, focusing on resource limitations and efficient design.
    02:32 💡 *Introduction to Transformers*
    - Transformers' dominance in AI, initially in Natural Language Processing (NLP) and then in Computer Vision.
    - Discussion on why Transformers took time to transition from NLP to Computer Vision.
    03:57 🌐 *Understanding Transformers: Encoder and Decoder*
    - Explanation of the Transformer architecture with its encoder and decoder components.
    - Different variants of Transformers: Encoder-only, Decoder-only, and Encoder-Decoder architectures.
    05:33 🔍 *Applying Transformers to Computer Vision*
    - Vision Transformers (ViT) process images by slicing them into patches, using position embeddings and Transformer encoders.
    - The methodology of transforming images into a sequence of embeddings for the Transformer encoder (a minimal code sketch of this pipeline follows this comment).
    07:08 🔗 *Multi-Head Attention in Transformers*
    - Detailed explanation of the multi-head attention mechanism in Transformers.
    - Role of queries, keys, and values in facilitating communication between different embeddings.
    09:12 🧩 *Transformer Encoder Blocks and Scaling*
    - The structure and function of Transformer encoder blocks, including multi-head attention and MLP.
    - Importance of residual connections and layer normalization in optimizing Transformer models.
    11:05 🚀 *Scaling and Hardware Influence in AI*
    - The impact of scaling and hardware advancements on Transformer model performance.
    - Discussion on the exponential increase in computational resources for training large models.
    13:50 🛠 *MLP and Optimization in Transformers*
    - Role of the multi-layer perceptron (MLP) in Transformer architecture for independent processing of embeddings.
    - Importance of non-linearities like ReLU and GELU in Transformer models.
    15:00 ⚙️ *Residual Connections and Layer Normalization*
    - Implementation and significance of residual connections and layer normalization in Transformers.
    - These components facilitate gradient flow and stable learning in deep network training.
    17:05 🌐 *Positional Embeddings in Transformers*
    - Explanation of positional embeddings in Transformers, necessary for maintaining spatial information in sequences.
    - Different methods of implementing positional embeddings in Transformer models.
    19:27 🔄 *Cross Attention and Causal Attention in Transformers*
    - Discussion of
    Made with HARPA AI
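
The summary above covers the full ViT pipeline: slice the image into patches, project each patch to an embedding, add position information, then run stacked attention + MLP blocks with residual connections and layer normalization. As a rough illustration only (this is not the presenter's code; every name and hyperparameter below is invented for the sketch), that pipeline can be written in a few lines of PyTorch:

    import torch
    import torch.nn as nn

    class MiniViT(nn.Module):
        """Minimal ViT-style classifier: patchify -> embed -> encoder blocks."""
        def __init__(self, img_size=224, patch=16, dim=192, depth=4, heads=3, n_classes=10):
            super().__init__()
            n_tokens = (img_size // patch) ** 2 + 1          # patches + [CLS]
            # A strided conv slices the image into patches and projects each
            # patch to a D-dimensional token in one operation.
            self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
            self.cls = nn.Parameter(torch.zeros(1, 1, dim))
            self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))  # learned positions
            layer = nn.TransformerEncoderLayer(
                d_model=dim, nhead=heads, dim_feedforward=4 * dim,
                activation="gelu", batch_first=True, norm_first=True)  # pre-LN, GELU MLP
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            self.head = nn.Linear(dim, n_classes)

        def forward(self, x):                                 # x: (B, 3, H, W)
            t = self.patchify(x).flatten(2).transpose(1, 2)   # (B, N, D) patch tokens
            t = torch.cat([self.cls.expand(t.shape[0], -1, -1), t], dim=1)
            t = t + self.pos                                  # add position information
            t = self.encoder(t)                               # attention + MLP + residuals
            return self.head(t[:, 0])                         # classify from [CLS]

    logits = MiniViT()(torch.randn(2, 3, 224, 224))           # -> shape (2, 10)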

  • @MdAkmolMasud
    @MdAkmolMasud 2 months ago

    The best explanation of ViT.

  • @rmmajor
    @rmmajor 4 months ago

    That is a masterpiece of a video! Many thanks for your work!

  • @sbdzdz
    @sbdzdz 8 months ago +2

    Very well presented!

  • @soylentpink7845
    @soylentpink7845 9 months ago +2

    Very good video - both its contents and its presentation!

  • @shyb8079
    @shyb8079 3 months ago

    Thank you for your content.

  • @gnorts_mr_alien
    @gnorts_mr_alien 4 months ago

    man, what a video. thank you!

  • @minute_machine_learning5362
    @minute_machine_learning5362 3 months ago

    great explanation

  • @EigenA
    @EigenA 5 months ago

    Great work!

  • @zainbaloch5541
    @zainbaloch5541 4 months ago

    Thank you so much!

  • @tomrichter9021
    @tomrichter9021 6 months ago

    Great video

  • @flamboyanta4993
    @flamboyanta4993 9 months ago +2

    Excellent and clearly communicated. Thanks.
    A question about 20:05, when discussing positional embeddings: the legend of the waves says dim 4, ..., dim 7. Here, does dim refer to the patch embedding length D? As in, we'll get as many sine waves as D dims?
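
A note on the question above: in the standard sinusoidal scheme from the original Transformer paper (assuming the video's plot follows it), "dim" indexes one channel of the D-dimensional embedding. Each channel is a sine or cosine wave with its own frequency, so a D-dimensional embedding does give D waves, and dim 4 ... dim 7 are just four of them. A minimal NumPy sketch:

    import numpy as np

    def sinusoidal_positions(n_pos, d_model):
        # Row p is the embedding of position p; column d ("dim d") is one
        # sine/cosine wave whose frequency is set by d, so a D-dimensional
        # embedding yields D waves, from fast (low dims) to slow (high dims).
        pos = np.arange(n_pos)[:, None]                  # (n_pos, 1)
        i = np.arange(d_model)[None, :]                  # (1, d_model)
        angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
        return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

    pe = sinusoidal_positions(n_pos=196, d_model=64)     # 196 = 14x14 patches
    print(pe[:5, 4:8])   # four of the 64 waves, the kind a legend labels dim 4..7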

  • @geomanisgod
    @geomanisgod 5 months ago

    A+++ quality from other planets.

  • @flamboyanta4993
    @flamboyanta4993 9 months ago +1

    Another question:
    At 30:00, discussing how early attention layers tend to focus on local features and deeper ones on more global features of the input, I didn't understand the significance of the x-axis (sorted attention head). Is this just a count of how many attention heads there are in the respective block? Which would suggest that in the large-data regime, even early attention blocks with 14+ heads will also tend to observe features globally? Is this correct?
    And thank you in advance!
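
A note on this question too: in this kind of figure the x-axis most likely shows the heads of each block sorted by their mean attention distance (the quantity the ViT paper uses for this analysis), not a head count. Assuming the video's figure works the same way, here is a rough sketch of how such a per-head number could be computed; all names are illustrative:

    import numpy as np

    def mean_attention_distance(attn, grid):
        # attn: (heads, N, N) softmax weights over N = grid*grid patches.
        # For each head, average the spatial distance between each query
        # patch and the patches it attends to, weighted by attention.
        ys, xs = np.divmod(np.arange(grid * grid), grid)
        coords = np.stack([ys, xs], axis=1).astype(float)              # (N, 2)
        dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
        return (attn * dist).sum(axis=(1, 2)) / attn.shape[1]          # (heads,)

    # Toy example: 12 heads of random attention over a 14x14 patch grid.
    rng = np.random.default_rng(0)
    a = rng.random((12, 196, 196))
    a /= a.sum(axis=-1, keepdims=True)        # rows sum to 1, like softmax
    print(np.sort(mean_attention_distance(a, grid=14)))   # x-axis ordering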

  • @capsbr2100
    @capsbr2100 5 months ago

    So for someone approaching this now, working on resource-constrained devices for both training and inference, does it make more sense to just stick to CNNs?

  • @miraclemaxicl
    @miraclemaxicl 5 months ago

    More Compute Is All You Need

  • @iez
    @iez 6 months ago

    any ViTs that are open source?

  • @felipesuarez5041
    @felipesuarez5041 23 days ago

    Crazy how transformers are beating all these classical architectures like CNNs that have been used since ancient Greek times.

  • @AKD-le2kb
    @AKD-le2kb 2 months ago

    w