Pre-training of BERT-based Transformer architectures explained - language and vision!

  • added 29 Aug 2024

Comments • 11

  • @taehyeokjang6951
    @taehyeokjang6951 3 months ago +1

    Thanks, this is the most intuitively easy-to-understand video I've ever seen.
    Great explanation!

  • @thatipelli1
    @thatipelli1 3 years ago +4

    Thanks, this was the best explanation on the Internet. Top-class animation too!!

    • @AICoffeeBreak
      @AICoffeeBreak 3 years ago +4

      Thanks a lot for visiting and appreciating the content!

  • @stalinsampras
    @stalinsampras 3 years ago +6

    Hey, this video was very informative! Thanks for producing such good, high-quality content.

  • @huonglarne
    @huonglarne 2 years ago +2

    I love your content. Very detailed and easy-to-understand explanations! Keep up the good work and let us muggles benefit from it.

  • @rogi-player
    @rogi-player 3 years ago +2

    Thanks, Letitia! Very good explanation; it helped me a lot.

    • @AICoffeeBreak
      @AICoffeeBreak 3 years ago +1

      Glad it was helpful! Thanks for leaving this comment!

  • @mianzhipan3327
    @mianzhipan3327 3 years ago +2

    Great explanation!! Hope to see more videos about the latest progress in multimodal models.

  • @arigato39000
    @arigato39000 3 years ago +3

    Thank you!