DINO: Self-Supervised Vision Transformers

  • Uploaded 29. 08. 2024

Comments • 14

  • @marioparreno24
    @marioparreno24 2 months ago

    Thanks for the intuitions, FAQs, and clearly explained topics!

    • @soroushmehraban
      @soroushmehraban  2 months ago +1

      Glad you liked it, Mario 🙂

    • @marioparreno24
      @marioparreno24 2 months ago

      @@soroushmehraban Just one question: why is centering applied only to the teacher, while sharpening is applied to both the student and the teacher? Could we not apply centering to both?
      Maybe if we add both operations to both sides we play a zero-sum game and end up with the collapse problem again, I don't know 😅 Maybe we would then need to artificially create an imbalance.

    • @soroushmehraban
      @soroushmehraban  2 months ago +1

      @@marioparreno24 From my understanding, sharpening makes the model more confident that a sample belongs to a certain pseudo-class (an output label of the model for which we have no ground truth).
      We want the student to stay confident as well, so we sharpen its output too. The less certain the student is, the harder it is for it to differentiate samples from different images.
      For the teacher we apply both operations to prevent mode collapse.
      But this is just based on my intuition. Don't quote me on that lol.
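      For reference, the centering and sharpening steps discussed in this exchange can be sketched roughly as below. This is a simplified NumPy illustration, not the actual DINO implementation: the real method maintains the center as an exponential moving average over teacher outputs across batches, and the teacher network itself is a momentum copy of the student. The logits, batch size, and temperature values here are illustrative placeholders (DINO's paper uses teacher temperatures around 0.04–0.07 and a student temperature of 0.1).

      ```python
      import numpy as np

      def softmax(x, axis=-1):
          # Numerically stable softmax
          e = np.exp(x - x.max(axis=axis, keepdims=True))
          return e / e.sum(axis=axis, keepdims=True)

      # Hypothetical logits: batch of 4 samples over 8 output dimensions
      rng = np.random.default_rng(0)
      teacher_logits = rng.normal(size=(4, 8))
      student_logits = rng.normal(size=(4, 8))

      # Centering (teacher only): subtract the mean of teacher outputs.
      # Simplified here as a batch mean; DINO uses an EMA across batches.
      center = teacher_logits.mean(axis=0, keepdims=True)

      # Sharpening (both networks): divide by a temperature < 1.
      # The teacher uses a lower temperature, so its targets are sharper.
      t_teacher, t_student = 0.04, 0.1
      teacher_probs = softmax((teacher_logits - center) / t_teacher)
      student_probs = softmax(student_logits / t_student)

      # Cross-entropy between (sharpened, centered) teacher targets
      # and student predictions; gradients flow only through the student.
      loss = -(teacher_probs * np.log(student_probs + 1e-9)).sum(axis=-1).mean()
      ```

      The asymmetry is visible in the code: centering pushes the teacher's distribution toward uniform (fighting collapse onto one dimension), while the low teacher temperature pushes it toward a one-hot target (fighting collapse to the uniform distribution), and the two effects balance each other.
      
      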

  • @yiqian22
    @yiqian22 10 months ago

    As always, thank you very much for the clear explanation - I truly appreciate it! 👏

  • @ericsy78
    @ericsy78 11 months ago

    This is a great video. I really appreciate the dedication in each video you post; I learn a lot watching your videos, and they have always been helpful to me.

  • @AshishJain-iw5md
    @AshishJain-iw5md 11 months ago +1

    Very informative!!!

  • @alihadimoghadam8931
    @alihadimoghadam8931 11 months ago

    Great video, as always 🤘

  • @pulakgautam3536
    @pulakgautam3536 11 months ago

    I love your channel!

    • @soroushmehraban
      @soroushmehraban  11 months ago

      Thanks for the kind comment! This is really encouraging. I'll try my best to come up with more paper reviews in the future.