Masked Autoencoders (MAE) Paper Explained

  • Added 17 Jul 2024
  • Paper link: arxiv.org/abs/2111.06377
    In this video, I explain how masked autoencoders work by borrowing ideas from the BERT paper and pretraining a vision transformer without requiring any additional labels.
    Table of Contents:
    00:00 Intro
    00:19 BERT idea
    02:09 Language and vision difference
    05:29 Proposed Architecture
    11:30 After pretraining
    14:03 Masking ratio
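
The core trick covered in the video is MAE's random masking: most image patches are hidden, and the encoder only ever sees the small visible subset. Below is a minimal NumPy sketch of that masking step, assuming a 224x224 image split into 16x16 patches (196 patches); the function name and shapes are illustrative, not from the paper's code.

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, seed=0):
    """Randomly mask patches in the spirit of MAE: keep a small visible subset.

    patches: (num_patches, dim) array of patch embeddings.
    Returns (visible_patches, keep_indices, mask), where mask[i] == 1
    means patch i is hidden from the encoder.
    """
    n, _ = patches.shape
    n_keep = int(n * (1 - mask_ratio))   # number of visible patches
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)            # random shuffle of patch indices
    keep = np.sort(perm[:n_keep])        # indices of the visible patches
    mask = np.ones(n, dtype=np.int64)
    mask[keep] = 0                       # 0 = visible, 1 = masked
    return patches[keep], keep, mask

# A 224x224 image with 16x16 patches gives 14*14 = 196 patches.
patches = np.random.randn(196, 768)
visible, keep, mask = random_masking(patches)
# With the paper's 75% mask ratio, only 49 patches reach the encoder.
```

Because the encoder processes only ~25% of the patches, pretraining is much cheaper than running a full ViT on every patch; the lightweight decoder later reconstructs the masked pixels from the encoded visible patches plus mask tokens.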
