Machine Learning 53: Skip-Gram

  • Published 28 Nov 2022
  • We present Skip-Gram, a method for representing words as vectors, as an alternative to Continuous Bag of Words (CBOW), and discuss the few differences between the two.
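
    A minimal sketch of this setup in NumPy (an illustration assuming the standard skip-gram formulation with an embedding matrix W, an output matrix W', and a full softmax; the variable names and toy sizes are illustrative, not taken from the video):

        import numpy as np

        rng = np.random.default_rng(0)

        vocab_size, embed_dim = 10, 4                                 # toy sizes for illustration
        W = rng.normal(scale=0.1, size=(vocab_size, embed_dim))       # input embeddings
        W_out = rng.normal(scale=0.1, size=(embed_dim, vocab_size))   # output weights ("W prime")

        def skip_gram_loss(center_id, context_ids):
            # The hidden layer is a lookup with identity activation: one row of W.
            h = W[center_id]
            # A single score vector is shared by every context position.
            scores = h @ W_out
            probs = np.exp(scores - scores.max())
            probs /= probs.sum()                                      # softmax over the vocabulary
            # The total loss sums the cross-entropy at each context position.
            return -sum(np.log(probs[c]) for c in context_ids)

        print(skip_gram_loss(center_id=3, context_ids=[1, 2, 4, 5]))  # center word 3, window of 4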

Comments • 8

  • @tjeaue • 9 months ago +1

    thank you so much, great content, seriously.

  • @user-me7mo1iw2x • 10 months ago

    Wow. This is actually golden. Keep it up!

  • @anikaroy8311 • 7 months ago

    amazing explanation!

  • @paninilal8322 • 1 year ago

    Great explanation sir

  • @noone-iv7tm • 3 months ago

    If in Skip-gram I multiply the output of the identity activation by W', would it not give the same vector for all four context words?
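
    For reference, the forward pass in question, written in common skip-gram notation (one-hot center word $x$, input embeddings $W$, output weights $W'$; the notation is assumed here, not quoted from the video):

        $h = W^\top x, \qquad u = W'^\top h, \qquad \hat{y}_c = \mathrm{softmax}(u) \quad \text{for every context position } c.$

    So in the standard formulation the predicted distribution is indeed identical across the four positions; only the target words differ, through the loss $L = -\sum_c \log \hat{y}_{w_c}$, where $w_c$ is the actual word at position $c$.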

  • @revathik9225 • 4 months ago

    But what if the gradient changes a lot after each step? Then using the sum of the losses or using each loss one after the other would give different results, right?

    • @kasperglarsen • 3 months ago

      Yes indeed. It is merely a practical heuristic that seems to work well and is inspired by this intuition.
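
      To make that concrete (a sketch in standard SGD notation, with parameters $\theta$, step size $\eta$, and per-position losses $L_c$ assumed here, not quoted from the video): a summed update takes one step $\theta \leftarrow \theta - \eta \sum_c \nabla L_c(\theta)$, while sequential updates take $\theta_1 = \theta - \eta \nabla L_1(\theta)$, then $\theta_2 = \theta_1 - \eta \nabla L_2(\theta_1)$, and so on. Since $\nabla L_2(\theta_1) = \nabla L_2(\theta) + O(\eta)$ for a small step size, the two procedures agree to first order in $\eta$ and diverge precisely when the gradient changes quickly between steps, as the question notes.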

  • @user-go9zc6xh2g • 7 months ago

    u almost killed me