Understanding Word2Vec

Comments • 62

  • @cherisykonstanz2807
    @cherisykonstanz2807 4 years ago +4

    orange sweater over orange polo - my man is rocking the full lobster swagger

    • @JordanBoydGraber
      @JordanBoydGraber  4 years ago +3

      It works well with my green screen. Plus, it is the school color for both Caltech and Princeton (so showing my school pride).

  • @exxzxxe
    @exxzxxe 3 years ago +4

    Exceptionally well done. Thank you!

  • @navneethegde5999
    @navneethegde5999 3 years ago +5

    Nice presentation, perfect blend of pace, voice quality and slide data.
    Information is not repeated unnecessarily.

  • @cu7695
    @cu7695 5 years ago +2

    Nice explanation of NLP terms. I would like to learn more about the probability distribution and its effect on some real data set.

  • @dipaco_
    @dipaco_ 4 months ago

    This is an amazing video. Very intuitive. Thank you.

  • @mahdiamrollahi8456
    @mahdiamrollahi8456 3 years ago +1

    Great explanation of W2V, especially NS...

  • @leliaglass1568
    @leliaglass1568 4 years ago +1

    thank you for the video! Very helpful!

  • @DebangaRajNeog
    @DebangaRajNeog 4 years ago +1

    Great explanation!

  • @hiepnguyen034
    @hiepnguyen034 5 years ago +5

    best word2vec explanation I have seen so far

  • @junmeizhong9526
    @junmeizhong9526 3 years ago +1

    For negative sampling, the negative examples are usually word pairs that keep the same focus word and pair it with a number of randomly sampled noise context words. But here it seems to be done the reverse way. Please let me know whether the two ways are the same or whether it is a mistake here.
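
    For reference, here is a minimal sketch (not taken from the video) of how the original word2vec paper constructs negatives: the focus word is kept fixed and noise words are drawn to stand in for the context word. The noise_weights parameter is a stand-in for whatever noise distribution is used; whether the "reversed" construction is equivalent is exactly the question being asked.

        import random

        def negative_samples(focus, vocab, noise_weights, k=5):
            # Keep the focus word fixed and draw k noise words to replace
            # the observed context word.
            noise_words = random.choices(vocab, weights=noise_weights, k=k)
            return [(focus, noise) for noise in noise_words]

        # Example: positive pair ("dog", "barks") plus k negative pairs ("dog", <noise word>).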

  • @mohammadsalah2307
    @mohammadsalah2307 3 years ago +1

    Best explanation I've ever watched; much better than the Stanford lecture in my opinion.

    • @JordanBoydGraber
      @JordanBoydGraber  3 years ago +2

      Thanks! That's high praise. Chris and Dan know much more than I do, but I like to think that my ignorance helps me sometimes explain things better, because I know what confuses people (from experience).

  • @hgkjhjhjkhjk7270
    @hgkjhjhjkhjk7270 4 years ago

    Upload more stuff, your videos are good.

  • @GoracyKanal
    @GoracyKanal 4 years ago

    great explanation

  • @coc2912
    @coc2912 a year ago

    Your video helps me a lot.

  • @BrunoCPunto
    @BrunoCPunto 3 years ago

    Great explanation

  • @user-qg3hv5ji1j
    @user-qg3hv5ji1j a year ago

    Nice explanation and thank you!

  • @ruizhenmai1194
    @ruizhenmai1194 5 years ago +6

    At 3:42 the similarities should be |V| x 1 if you multiply W v^T that way (see the shape check at the end of this thread).

    • @xruan6582
      @xruan6582 3 years ago

      I totally agree with you. We should avoid such casual expressions, which could be very misleading in a more complex scenario.

    • @navneethegde5999
      @navneethegde5999 3 years ago

      I think it can be represented in both ways, as a column or row vector. However, I think a row vector is more efficient to store in memory.
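
      A quick shape check (a sketch; the sizes are made up) shows where the |V| x 1 comes from, treating W as the |V| x d matrix of word vectors and v as a d x 1 focus column:

          import numpy as np

          V, d = 10000, 100            # vocabulary size and embedding dimension
          W = np.random.randn(V, d)    # one row per vocabulary word
          v = np.random.randn(d, 1)    # focus word embedding as a column vector

          similarities = W @ v         # shape (V, 1): one score per vocabulary word
          print(similarities.shape)    # (10000, 1)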

  • @alayshah1995
    @alayshah1995 4 years ago +1

    Richard Hendricks from Pied Piper? Yes!

  • @alecrobinson7124
    @alecrobinson7124 4 years ago +18

    Good god, it's nice to watch an informative video not done in the style of Siraj.

    • @JordanBoydGraber
      @JordanBoydGraber  4 years ago +7

      I've been making ML YouTube videos since long before Siraj ...

    • @alecrobinson7124
      @alecrobinson7124 4 years ago +2

      @@JordanBoydGraber Touché, very true. Siraj should have copied yours, then.

    • @wahabfiles6260
      @wahabfiles6260 3 years ago

      @@alecrobinson7124 Siraj just pretends! His videos are not informative

    • @trexmidnite
      @trexmidnite 3 years ago

      Those numbers are nothing but a particular vector.

  • @vinayreddy8683
    @vinayreddy8683 4 years ago

    I'm still confused about the n-gram model versus the skip-gram model.
    Did he make a mistake, or am I confused?
    Basically, n-gram models use the previous n-1 words to predict the nth word, so they are in some sense using context words to predict the target word. Here in this video he said skip-gram uses the target (focus) word to predict the context words. The two seem to contradict each other!!! Any expert's opinion on this is highly appreciated.
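
    To make the skip-gram direction concrete, here is a small sketch (the window size of 2 is arbitrary) that generates (focus, context) training pairs: the focus word is the input and each surrounding word is a separate prediction target, which is the opposite direction from an n-gram language model predicting the next word from its history.

        def skipgram_pairs(tokens, window=2):
            # Skip-gram training pairs: (focus word, one context word) for
            # every position within the window around each focus word.
            pairs = []
            for i, focus in enumerate(tokens):
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        pairs.append((focus, tokens[j]))
            return pairs

        print(skipgram_pairs("the dog barks at night".split()))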

  • @amarnathjagatap2339
    @amarnathjagatap2339 4 years ago +1

    Ultimate reeeeee baba

  • @mdazimulhaque
    @mdazimulhaque 4 years ago

    Thank you for the detailed explanation.

  • @taylorsmurphy
    @taylorsmurphy 4 years ago +23

    I can't believe I already watched all these videos somehow. Oh wait, there's a partial red bar on the bottom of most thumbnails for some reason. 😋

    • @JordanBoydGraber
      @JordanBoydGraber  4 years ago +2

      I know. YouTube added this feature after I adopted my Beamer template. And it's impossible to fix on old videos.

  • @zahrash7864
    @zahrash7864 a year ago

    What is the sigmoid sum on W·c used for? Don't we need just the softmax on every row of the C·W matrix?

    • @JordanBoydGraber
      @JordanBoydGraber  a year ago

      But a word has multiple words in its context, so we need to consider each word's effect.
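
      Written out (my transcription of the standard skip-gram negative-sampling objective, not a quote from the slide), with D the observed (word, context) pairs and N(w, c) the sampled negatives, each context word of a focus word contributes its own sigmoid term, which is why the terms are summed rather than a single per-row softmax being taken:

          \mathcal{L} = \sum_{(w,c) \in D} \Big[ \log \sigma(\vec{c} \cdot \vec{w})
              + \sum_{c' \in N(w,c)} \log \sigma(-\vec{c}\,' \cdot \vec{w}) \Big]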

  • @Han-ve8uh
    @Han-ve8uh 3 years ago +1

    At 11:00, what do "Features" and "Evidence" refer to? How is that formula similar to logistic regression? (I was expecting some e^()/(1+e^()) on the RHS.)
    In the same formula, what does c' refer to? Is it all the words that are NOT in the context of a particular word w?
    How did this formula become the 6 sigmoids at 12:00?

    • @JordanBoydGraber
      @JordanBoydGraber  3 years ago +2

      1) The sigma function encodes the exponential function that you're looking for
      2) The features and evidence are word and context vectors
      3) c' are the negative samples
      4) This is akin to the positive examples in logistic regression, while c' is like the negative examples (the six terms are written out in the sketch at the end of this thread)

    • @Han-ve8uh
      @Han-ve8uh 3 years ago

      @@JordanBoydGraber For 3) Aren't the negative samples the focus word, as shown at 12:30? I'm confused because sometimes the negative sample is a context word and sometimes a focus word. Does this depend on whether CBOW or skip-gram is used? (As in, negative sampling with CBOW means sampling negative focus words and negative sampling with skip-gram means sampling negative context words.)
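
      As a concrete reading of the "6 sigmoids" at 12:00 (a sketch; the random vectors and the choice of k = 5 negatives are mine, for illustration only), one positive term plus five negative terms gives the six sigmoids:

          import numpy as np

          def sigmoid(x):
              return 1.0 / (1.0 + np.exp(-x))

          d, k = 100, 5
          w = np.random.randn(d)         # focus word vector
          c_pos = np.random.randn(d)     # observed context vector (positive example)
          c_neg = np.random.randn(k, d)  # k sampled negative context vectors

          # 1 positive sigmoid + 5 negative sigmoids = 6 terms in total.
          objective = np.log(sigmoid(w @ c_pos)) + np.sum(np.log(sigmoid(-(c_neg @ w))))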

  • @ariwahyono4004
    @ariwahyono4004 4 years ago

    Hi, my name is Ari. I am from Indonesia.
    Can you help me by explaining the sent2vec (Unsupervised Learning of Sentence Embeddings
    using Compositional n-Gram Features) model, as you made a video about word2vec?

  • @gabrield801
    @gabrield801 4 years ago

    Ignoring the negative samples, why do we need to optimize by gradient descent of dot products rather than merely counting the occurrence of context words for each occurrence of each focus word in the training data? (and then normalizing)

    • @JordanBoydGraber
      @JordanBoydGraber  4 years ago +1

      That's a great question! What you're proposing is essentially PMI, which word2vec is an approximation of (projected into a lower dimension). word2vec is throwing some information away through this projection, but it seems to help.

    • @gabrield801
      @gabrield801 4 years ago

      @@JordanBoydGraber I see, it's a lower dimension because you simply initialize random vectors (of arbitrary, lower length) and consider dot products, rather than having a (# of words)-long vector for each word. Thanks a ton!
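
      A sketch of the counting approach proposed in this thread (pure co-occurrence counts normalized per focus word; it makes no claim to match the video's notation):

          from collections import Counter, defaultdict

          def context_distribution(tokens, window=2):
              # Count how often each context word occurs near each focus word,
              # then normalize to a probability distribution per focus word.
              counts = defaultdict(Counter)
              for i, focus in enumerate(tokens):
                  lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                  for j in range(lo, hi):
                      if j != i:
                          counts[focus][tokens[j]] += 1
              return {w: {c: n / sum(ctr.values()) for c, n in ctr.items()}
                      for w, ctr in counts.items()}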

  • @JordanBoydGraber
    @JordanBoydGraber  2 years ago +1

    On the slide numbered 16, the sum should be over f(w'), not f(w)
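
    Assuming the sum in question is the normalizer of the negative-sampling noise distribution from the original word2vec paper (an assumption on my part, since the slide itself is not shown here), the corrected form would read:

        P_n(w) = \frac{f(w)^{3/4}}{\sum_{w'} f(w')^{3/4}}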

  • @xruan6582
    @xruan6582 3 years ago +5

    At 10:13, should the first equation be p(c|w; θ) rather than log p(c|w; θ)?
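
    For reference, the two forms being compared (my transcription, not taken verbatim from the slide):

        p(c \mid w; \theta) = \frac{\exp(\vec{c} \cdot \vec{w})}{\sum_{c' \in C} \exp(\vec{c}\,' \cdot \vec{w})}
        \qquad
        \log p(c \mid w; \theta) = \vec{c} \cdot \vec{w} - \log \sum_{c' \in C} \exp(\vec{c}\,' \cdot \vec{w})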

  • @pardisranjbarnoiey6356
    @pardisranjbarnoiey6356 4 years ago +1

    Thank you! But please get rid of that red bar. The thumbnail gets confusing

    • @JordanBoydGraber
      @JordanBoydGraber  4 years ago +1

      Haha. I never thought about that odd interaction with YouTube. I don't want everyone to think they've watched 2/3 of all of my videos. :)

  • @JP-re3bc
    @JP-re3bc 5 years ago

    It would be helpful if at 9:56 you talked a bit about what exactly d means.

    • @JordanBoydGraber
      @JordanBoydGraber  5 years ago

      It's the length of the embedding. It really doesn't mean much other than the size of the representation that you're using. I.e., how complicated your model is going to be.

  • @compilationsmania451
    @compilationsmania451 4 years ago

    At 10:20 in the probability function, you're using exp(v_c · v_w). But didn't you say that the context and focus words have different vectors? Then why are we choosing the context and focus vectors from the same v?

    • @JordanBoydGraber
      @JordanBoydGraber  4 years ago

      @michael jo That's right! The "v" means that it's for the same word type (e.g., "dog") but from two different matrices.
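
      A sketch of the two-matrix point (the names W_focus and W_context and the indices are mine, not from the video): the same word type indexes a row in each of two different matrices, and the dot product inside exp(v_c · v_w) pairs a focus row with a context row.

          import numpy as np

          V, d = 10000, 100
          W_focus = np.random.randn(V, d)    # focus (input) embeddings
          W_context = np.random.randn(V, d)  # context (output) embeddings

          dog_id, cat_id = 42, 7             # hypothetical vocabulary indices
          score = W_focus[dog_id] @ W_context[cat_id]  # v_w . v_c with "dog" as the focus word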

  • @oleksandrboiko7261
    @oleksandrboiko7261 3 years ago

    The red line at the bottom of the thumbnail makes it look like you've already seen the video, so you skip it.

    • @JordanBoydGraber
      @JordanBoydGraber  3 years ago +1

      I know. I recorded the videos before YouTube started doing this ... my new videos won't have this.

  • @username-notfound9841
    @username-notfound9841 3 years ago

    I like the part where you almost said *Bit* correctly.
    7:24

  • @kevin-fs5ue
    @kevin-fs5ue 5 years ago

    10:07

  • @cyrilgarcia2485
    @cyrilgarcia2485 3 years ago

    Wait, did I miss how the words are vectorized?

    • @JordanBoydGraber
      @JordanBoydGraber  3 years ago +1

      Each word has a corresponding vector; it's initialized randomly and then updated, as discussed at 13:09
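
      A minimal sketch of that initialize-then-update idea (the learning rate, sizes, and function names are arbitrary; this is one gradient step on the log-sigmoid objective, not the video's full training loop):

          import numpy as np

          def sigmoid(x):
              return 1.0 / (1.0 + np.exp(-x))

          V, d, lr = 10000, 100, 0.025
          W_focus = 0.01 * np.random.randn(V, d)    # random initialization
          W_context = 0.01 * np.random.randn(V, d)

          def update(w_id, c_id, label):
              # label = 1 for an observed (focus, context) pair, 0 for a negative sample
              w, c = W_focus[w_id].copy(), W_context[c_id].copy()
              g = sigmoid(w @ c) - label            # gradient of the log-sigmoid loss
              W_focus[w_id] -= lr * g * c
              W_context[c_id] -= lr * g * w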

  • @isleofdeath
    @isleofdeath 3 years ago

    Apart from some errors (the theta parameter never occurs on the right side of your equations, and it is even incorrect, as the "probability" given by exp(...)/sum(exp(...)) IS basically the theta parameter), worse is that it looks like you copied most of the math from the Stanford lecture on NLP and did not even give them credit. BTW, the theta parameter is explained in that lecture...

    • @JordanBoydGraber
      @JordanBoydGraber  7 months ago

      I did draw on Yoav Goldberg's lectures (and credited him). I suspect the Stanford folks did the same, but the equations themselves come from the original word2vec paper. Using Theta as a general catchall for parameters of a model is quite common in ML.

  • @KoltPenny
    @KoltPenny 4 years ago

    Really cool videos... but I just can't get out of my head that you sound like the Jewish kid in Big Mouth.