What are Embedding Layers in Keras (11.5)

  • Added 26 Jul 2024
  • If you have a categorical (non-numeric) variable with high cardinality (many distinct values), an embedding layer can be an effective way to reduce its dimensionality compared with dummy variables. In this video I show how to both transfer and train embedding layers in Keras; a minimal code sketch follows below this description.
    Code for This Video:
    github.com/jeffheaton/t81_558...
    Course Homepage: sites.wustl.edu/jeffheaton/t8...
    Follow Me/Subscribe:
    / heatonresearch
    github.com/jeffheaton
    / jeffheaton
    Support Me on Patreon: / jeffheaton
  • Science & Technology
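  A minimal sketch of the idea from the description: replacing a high-cardinality categorical input with a small learned embedding instead of thousands of dummy columns (tf.keras; the sizes and data below are illustrative assumptions, not the video's code).

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.layers import Embedding, Flatten, Dense

    # A categorical feature with 10,000 distinct values would need 10,000
    # dummy columns; an embedding maps each value to just 8 learned numbers.
    vocab_size = 10_000
    embedding_dim = 8

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(1,), dtype='int32'),   # one integer-coded category per row
        Embedding(input_dim=vocab_size, output_dim=embedding_dim),
        Flatten(),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')

    x = np.array([[42], [7], [9001]])                # integer category codes
    y = np.array([0, 1, 0])
    model.fit(x, y, epochs=1, verbose=0)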

Comments • 43

  • @raulsaenz6177
    @raulsaenz6177 4 years ago +17

    Great video. After going through several explanations and videos, yours is the clearest and I finally understand the use of the Embedding layer. Thank you.

    • @giovannimeono8802
      @giovannimeono8802 1 year ago

      I agree with this comment. This video is the clearest explanation for embeddings I've been able to find.

  • @WisamMechano
    @WisamMechano 3 years ago +1

    This was a very helpful video; most videos focus on the use case rather than what the embedding actually is. You nailed it with a very elaborate explanation. Thank you.

  • @suryagaur7440
    @suryagaur7440 4 years ago +5

    Don't have words to explain how great this series is!! Speechless!!

  • @nitroflap
    @nitroflap 4 years ago +4

    The best explanation of embeddings in TensorFlow I've ever seen.

  • @SatyaBhambhani
    @SatyaBhambhani 2 years ago

    This was awesome! I am hunting down videos for multinomial text classification, and this helped shed light on when to use embeddings, why, and how, and also the production phase for the corpus. Exactly what I was looking for!

  • @amitraichowdhury8148
    @amitraichowdhury8148 3 years ago

    Amazing video, beautifully explained! This is exactly what I was looking for to understand the Embedding layer. Great work! Please keep uploading more videos :)

    • @HeatonResearch
      @HeatonResearch  3 years ago

      Awesome, thank you! Subscribe so you do not miss any :-)

  • @franklyvulgar1
    @franklyvulgar1 10 months ago

    Thank you very much. I'm working on a problem that involves sparse categorical data, and your explanation and practical examples were superb. I'll be frequenting your channel often (subscribed) :) Thanks Jeff.

  • @ashishpatil1716
    @ashishpatil1716 3 years ago

    Best explanation of embedding layers ever !

  • @netfission
    @netfission 4 years ago

    Professionally done! Good job!

  • @sambitmukherjee1713
    @sambitmukherjee1713 3 years ago

    Great explanation Jeff.

  • @alexanderk5835
    @alexanderk5835 2 years ago +1

    Really good video, very digestible. Thank you Jeff!

  • @AlexeyMatushevsky
    @AlexeyMatushevsky 3 years ago

    The discovery of the year! Thank you for your lectures!

  • @allaabdella4794
    @allaabdella4794 4 years ago +1

    Thank you.
    You mention a good point: how do we ensure that whatever encoding we used during training still encodes incoming data in real-time production to the same values before it reaches the embedding?
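    One common answer, sketched here as an assumption rather than the video's exact approach: fit the text/category encoder once on the training data, persist it next to the model, and load the very same object in production so identical inputs always map to identical indices.

      import pickle
      from tensorflow.keras.preprocessing.text import Tokenizer

      tokenizer = Tokenizer(num_words=5000, oov_token='<OOV>')
      tokenizer.fit_on_texts(train_texts)        # train_texts: placeholder training corpus

      with open('tokenizer.pkl', 'wb') as f:      # save alongside the model file
          pickle.dump(tokenizer, f)

      # --- later, in real-time production ---
      with open('tokenizer.pkl', 'rb') as f:
          tokenizer = pickle.load(f)
      sequences = tokenizer.texts_to_sequences(["new incoming text"])   # same word -> same index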

  • @FiveJungYetNoSmite
    @FiveJungYetNoSmite 1 year ago +1

    Good video. I would have liked to see a single sentence fed into the model at the end to show how to evaluate single inputs.
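    A minimal sketch of scoring one sentence (assumes a trained `model`, a fitted `tokenizer`, and the training sequence length `max_len`; all three names are placeholders).

      from tensorflow.keras.preprocessing.sequence import pad_sequences

      sentence = "this movie was surprisingly good"
      seq = tokenizer.texts_to_sequences([sentence])   # words -> integer ids
      padded = pad_sequences(seq, maxlen=max_len)      # pad to the length used in training
      print(model.predict(padded))                     # e.g. probability of the positive class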

  • @sanjaykrish8719
    @sanjaykrish8719 4 years ago

    Awesome.. Love it

  • @mohajeramir
    @mohajeramir 4 years ago

    This was very helpful. Thank you

  • @sebastian81ism
    @sebastian81ism 3 years ago

    Awesome explanation!

  • @davidporterrealestate
    @davidporterrealestate 2 years ago

    This was great, esp. the 2nd half

  • @guzu672
    @guzu672 3 years ago

    Finally! My struggle ended 😁👍

  • @mukherjisandeep
    @mukherjisandeep 2 years ago

    Thank you for the great explanation! Further, I wanted to understand: is there a way we can look up the embeddings for each word in the corpus?
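    One way to do that, sketched as an assumption (the Embedding layer is taken to be the model's first layer and `tokenizer` to hold the word-to-index mapping).

      embedding_matrix = model.layers[0].get_weights()[0]   # shape: (vocab_size, embedding_dim)

      word = "good"
      idx = tokenizer.word_index[word]                      # integer index of the word
      print(word, embedding_matrix[idx])                    # that word's learned vector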

  • @ankitmaheshwari7310
    @ankitmaheshwari7310 3 years ago

    Expecting more information

  • @RH-mk3rp
    @RH-mk3rp 1 year ago

    An explanation of gradient descent and how the loss gradients are propagated back to the embedding layer would be nice
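    The short version is that the embedding weights are ordinary trainable variables, so the loss gradient flows back into exactly the rows that were looked up in the batch. A minimal sketch of that idea (a toy loss, not the video's code).

      import tensorflow as tf

      emb = tf.keras.layers.Embedding(input_dim=10, output_dim=4)
      x = tf.constant([[1, 2]])                       # one sample containing ids 1 and 2

      with tf.GradientTape() as tape:
          out = emb(x)                                # shape (1, 2, 4)
          loss = tf.reduce_sum(out ** 2)              # toy loss for illustration

      grad = tape.gradient(loss, emb.trainable_variables[0])
      print(tf.convert_to_tensor(grad))               # only rows 1 and 2 are non-zero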

  • @stackexchange7353
    @stackexchange7353 4 years ago

    Question: How could you use model persistence for subtasks when using two different datasets? I created a copy of the original and substituted 3 labels in my target column with another label. For instance, I have an NLP multi-classification problem where I need to classify x into 4 different labels: 1, 2, 3, or 4. Labels 1, 2, and 3 are related and can be substituted with 5, so that it becomes a binary classification problem. Now I only need to differentiate between 4 and 5, but I'm still left with the classification between 1, 2, and 3, and I'm not sure how to use the initial (4 vs. 5 binary) classification to help the second model. I can't find any information on whether SKLearn allows this like Keras does. Thanks for any suggestions.

  • @beizhou2488
    @beizhou2488 5 years ago +2

    We already have the word2vec model that can map words to vectors. I am wondering why we need to build the word embedding layer ourselves, since an Embedding layer and a word2vec model do essentially the same thing, and word2vec models are already well trained.
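    Both options exist: pretrained word2vec/GloVe vectors can be loaded straight into an Embedding layer, while training from scratch helps when the vocabulary or domain is unusual. A minimal sketch of the pretrained route (`pretrained_matrix` is a placeholder (vocab_size, dim) NumPy array you would build from word2vec lookups for your own vocabulary).

      import tensorflow as tf
      from tensorflow.keras.layers import Embedding

      embedding = Embedding(
          input_dim=pretrained_matrix.shape[0],
          output_dim=pretrained_matrix.shape[1],
          embeddings_initializer=tf.keras.initializers.Constant(pretrained_matrix),
          trainable=False,   # keep the word2vec vectors frozen, or True to fine-tune them
      )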

  • @blasttrash
    @blasttrash 1 year ago

    Now, how do we do find_similar using those embedding layer weights?
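    A minimal sketch of a find_similar over the learned weights (assumes `model.layers[0]` is the Embedding layer and `tokenizer` holds the word index; both names are placeholders).

      import numpy as np

      emb = model.layers[0].get_weights()[0]            # (vocab_size, dim) lookup table
      unit = emb / np.maximum(np.linalg.norm(emb, axis=1, keepdims=True), 1e-9)

      def find_similar(word, top_n=5):
          idx = tokenizer.word_index[word]
          sims = unit @ unit[idx]                       # cosine similarity to every row
          best = np.argsort(-sims)[1:top_n + 1]         # skip the word itself
          index_word = {i: w for w, i in tokenizer.word_index.items()}
          return [(index_word.get(i, '?'), float(sims[i])) for i in best]

      print(find_similar("good"))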

  • @himanshutanwani_
    @himanshutanwani_ 4 years ago +3

    At 12:00, instead of one_hot, can we use tf.keras.preprocessing.text.Tokenizer and its fit_on_texts method? Please correct me if I am wrong.
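    Yes, that works, and unlike one_hot (which hashes words and can produce colliding indices) the Tokenizer keeps an explicit, reusable word index. A minimal sketch of the swap (an assumed stand-alone example, not the video's code).

      from tensorflow.keras.preprocessing.text import Tokenizer
      from tensorflow.keras.preprocessing.sequence import pad_sequences

      docs = ["nice work", "poor effort", "great job"]
      tokenizer = Tokenizer()
      tokenizer.fit_on_texts(docs)
      sequences = tokenizer.texts_to_sequences(docs)   # e.g. [[1, 2], [3, 4], [5, 6]]
      padded = pad_sequences(sequences, maxlen=2)      # ready for the Embedding layer
      vocab_size = len(tokenizer.word_index) + 1       # +1 because index 0 is reserved for padding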

  • @tonycardinal413
    @tonycardinal413 2 years ago

    Thank you so much. Washington U must be an awesome college. If you write model.add(Embedding(10, 4, input_length=2)), is the number of neurons in the embedding layer 10, 4, or 2? Also, is the embedding layer the same as the input layer? Thanks so much!
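    For Embedding(10, 4, input_length=2): 10 is the vocabulary size (input_dim, how many distinct integer ids can appear), 4 is the length of the vector each id maps to (output_dim, the closest thing to "neurons"), and 2 is how many ids each sample contains. The embedding layer sits directly after the input; it is not the input layer itself. A minimal sketch of how those numbers show up in the shapes (newer Keras infers the sample length from the data, so input_length is omitted here).

      import numpy as np
      from tensorflow.keras.layers import Embedding

      emb = Embedding(input_dim=10, output_dim=4)   # 10 possible ids, 4 numbers per id
      out = emb(np.array([[1, 2]]))                 # one sample containing 2 ids
      print(out.shape)                              # (1, 2, 4)
      print(emb.get_weights()[0].shape)             # (10, 4) lookup table = 40 parameters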

  • @beizhou2488
    @beizhou2488 5 years ago +1

    Hi, will we learn the attention model in the near future? Like LSTM and attention.

    • @HeatonResearch
      @HeatonResearch  5 years ago

      Attention, not currently, but I may do a related video on it outside the course.

    • @beizhou2488
      @beizhou2488 5 years ago

      @@HeatonResearch Great. Thank you so much. Look forward to that tutorial.

  • @suryagaur7440
    @suryagaur7440 4 years ago

    While creating the Embedding layer, input_dim is the number of unique words in the vocabulary, which looks like it should be 2 since input_data = np.array([1, 2]), so why do we set it to 10?

    • @sachink7955
      @sachink7955 4 years ago

      10 is the number of unique words we have.
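      Put differently, input_dim is the size of the whole id space, not the length of one sample; [1, 2] is just a single example, and other rows could contain any id up to 9. A minimal sketch of that point (an assumed stand-alone example).

        import numpy as np
        from tensorflow.keras.layers import Embedding

        emb = Embedding(input_dim=10, output_dim=4)   # rows 0..9 are available in the table
        print(emb(np.array([[1, 2]])).shape)          # (1, 2, 4): ids 1 and 2 select rows 1 and 2
        # Any id of 10 or larger would fall outside the table and is invalid here.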

  • @coobit
    @coobit 3 years ago

    I can't get it.
    At 6:33 the input vector is [1, 2] and the output is 2 rows of the lookup table, but no row is multiplied by 2... how is this possible?
    At 9:47, why is the input [[0, 1]] while the output is 2 rows of the lookup table? Why is the input like this? The dimensions of the input and the lookup matrix do not match, so the multiplication seems meaningless. Or am I missing something?
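    What resolves both questions is that the layer never multiplies by the input values: each integer is a row number, and looking a row up is exactly equivalent to multiplying a one-hot vector of length vocab_size by the table. A minimal sketch (an assumed stand-alone example).

      import numpy as np
      import tensorflow as tf

      emb = tf.keras.layers.Embedding(input_dim=10, output_dim=4)
      lookup = emb(np.array([[1, 2]])).numpy()[0]   # 1 and 2 are row numbers, not factors

      table = emb.get_weights()[0]                  # the (10, 4) lookup table
      one_hot = np.eye(10)[[1, 2]]                  # two one-hot rows of length 10
      product = one_hot @ table                     # (2, 10) x (10, 4) -> (2, 4)

      print(np.allclose(lookup, product))           # True: lookup == one-hot multiplication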

  • @apratimgholap2930
    @apratimgholap2930 4 years ago

    You mention it's dimension reduction, but then you point out that it's not exactly that. Can you elaborate?

  • @ramonolivier57
    @ramonolivier57 4 years ago

    Good video, and your simple coding examples are excellent (because I can replicate them and try them out). However, your explanation (narration) in the last 4 or so minutes gets compressed... you speak very fast and scroll very fast, including some scrolling that basically seems to happen off-screen. Thanks for the lesson!