What are Embedding Layers in Keras (11.5)
- Added 26. 07. 2024
- If you have a categorical (non-numeric) variable with high cardinality (many distinct values), an embedding layer can be an effective way to reduce this dimensionality compared with dummy variables. In this video I show how to both transfer and train embedding layers in Keras. (A short code sketch of the idea follows the links below.)
Code for This Video:
github.com/jeffheaton/t81_558...
Course Homepage: sites.wustl.edu/jeffheaton/t8...
Follow Me/Subscribe:
/ heatonresearch
github.com/jeffheaton
/ jeffheaton
Support Me on Patreon: / jeffheaton - Science & Technology
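A minimal sketch of the idea from the description above, assuming a made-up categorical column with 1,000 distinct values and a toy binary target (none of this is from the course notebook): the column is integer-encoded and fed through an Embedding layer instead of being expanded into dummy variables.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense

num_categories = 1000   # e.g. 1,000 distinct ZIP codes
embedding_dim = 8       # each category becomes a dense 8-value vector

model = Sequential([
    Embedding(input_dim=num_categories, output_dim=embedding_dim),
    Flatten(),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Integer-encoded category indices, one per row (random stand-in data)
x = np.random.randint(0, num_categories, size=(32, 1))
y = np.random.randint(0, 2, size=(32, 1))
model.fit(x, y, epochs=1, verbose=0)

print(model.layers[0].get_weights()[0].shape)  # (1000, 8) lookup table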
Great video. After going through several explanations and videos, yours is the clearest and I finally understand the use of the Embedding layer. Thank you.
I agree with this comment. This video is the clearest explanation for embeddings I've been able to find.
This was a very helpful video; most videos focus on the use case rather than what the embedding actually is. You nailed it with a very elaborate explanation. Thank you
I don't have words to explain how great this series is!! Speechless!!
The best explanation of Embeddings in TensorFlow I've ever seen.
This was awesome! I am hunting down videos for multinomial text classification, and this helped shed insight on when to use embeddings, why, and how — and also the production phase for the corpus. Exactly what I was looking for!
Amazing video, beautifully explained! This is exactly what I was looking for to understand the Embedding layer. Great work! Please keep uploading more videos :)
Awesome, thank you! Subscribe so you do not miss any :-)
thank you very much, i'm working on a problem that involves sparse categorical data and your explanation and practical examples were superb, will be frequenting your channel often (subscribed) :) thanks Jeff
Best explanation of embedding layers ever !
Professionally done! Good job!
Great explanation Jeff.
Really good video, very digestible. Thank you Jeff!
Thanks! Glad it was helpful.
The discovery of the year! Thank you for your lectures!
You're very welcome!
Thank you.
You mention a good point: how do we ensure that whatever encoding we used during training encodes incoming data in real-time production to the same values, so the embedding lookup still matches?
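One common way to handle this, sketched below under the assumption that the text was integer-encoded with a Keras Tokenizer (not necessarily what the course notebook does), is to persist the fitted tokenizer alongside the model so production text maps to the same indices the Embedding layer was trained on.

import pickle
from tensorflow.keras.preprocessing.text import Tokenizer

# Fit the word -> index mapping once, on the training corpus
tokenizer = Tokenizer(num_words=10)
tokenizer.fit_on_texts(["wow amazing", "terrible product"])

# Ship this file alongside the saved model
with open("tokenizer.pkl", "wb") as f:
    pickle.dump(tokenizer, f)

# Later, in production: reload and reuse the exact same mapping
with open("tokenizer.pkl", "rb") as f:
    tokenizer = pickle.load(f)
sequences = tokenizer.texts_to_sequences(["amazing product"])  # same indices as training
print(sequences)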
Good video. I would have liked to see a single sentence inputted into the model at the end to show how to evaluate single inputs
Awesome.. Love it
This was very helpful. Thank you
Glad it was helpful!
awesome Explanation!
Thanks!
This was great, esp. the 2nd half
Finally! My struggle ended 😁👍
Thank you for the great explanation! Further, I wanted to understand: is there a way we can look up the embeddings for each word in the corpus?
Expecting more information
An explanation of gradient descent and how the loss gradients are propagated back to the embedding layer would be nice
Question: how could you use model persistence for sub-tasks when using two different datasets? I created a copy of the original and substituted 3 labels in my target column with another label. For instance, I have an NLP multi-classification problem where I need to classify x as one of 4 different labels: 1, 2, 3, or 4. Labels 1, 2, and 3 are related, so they can be substituted with 5, making it a binary classification problem. Now I only need to differentiate between 4 and 5, but I'm still left with the classification between 1, 2, and 3, and I'm not sure how to use the initial (4 vs. 5 binary) classification to help the second model. I can't find any information on whether scikit-learn allows this the way Keras does. Thanks for any suggestions.
We already have the word2vec model that can map words to vectors. I am wondering why we need to build the word Embedding layer ourselves? The Embedding layer and the word2vec model do essentially the same thing, and word2vec models are already well trained.
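For the transfer case mentioned in the video description, one approach (a sketch only; pretrained below is a random stand-in for a matrix you would actually build from downloaded word2vec/GloVe vectors, one row per word in your own vocabulary's index order) is to initialize a Keras Embedding layer from those vectors and freeze it:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Embedding

vocab_size, dim = 10, 4
pretrained = np.random.rand(vocab_size, dim)  # stand-in for real pretrained vectors

layer = Embedding(vocab_size, dim,
                  embeddings_initializer=tf.keras.initializers.Constant(pretrained),
                  trainable=False)  # freeze: pure transfer, no further training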
Now how do you do a find_similar lookup using that embedding layer's weights?
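Keras itself has no find_similar; one way to approximate it, sketched here with a random stand-in for the trained weight matrix (in practice, model.layers[0].get_weights()[0]), is to rank the rows of the lookup table by cosine similarity:

import numpy as np

# Stand-in for a trained (vocab_size, embedding_dim) lookup table
weights = np.random.rand(10, 4)

def most_similar(index, top_n=3):
    """Return the indices of the rows closest (by cosine similarity) to row `index`."""
    v = weights[index]
    sims = weights @ v / (np.linalg.norm(weights, axis=1) * np.linalg.norm(v) + 1e-9)
    return [i for i in np.argsort(-sims) if i != index][:top_n]

print(most_similar(1))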
At 12:00, instead of one_hot, can we use tf.keras.preprocessing.text.Tokenizer and its fit_on_texts method? Please correct me if I am wrong.
my exact same thought
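A sketch of the difference being asked about, using the tf.keras.preprocessing API from the video with made-up review strings (note this API is deprecated in newer Keras in favor of the TextVectorization layer): Tokenizer builds an explicit, reproducible word-to-index mapping, while one_hot hashes words, so collisions between different words are possible. Either produces integer sequences an Embedding layer can consume.

from tensorflow.keras.preprocessing.text import Tokenizer, one_hot
from tensorflow.keras.preprocessing.sequence import pad_sequences

reviews = ["wow amazing", "terrible", "amazing product", "never again"]

# Tokenizer: explicit, reproducible word -> index mapping
tokenizer = Tokenizer()
tokenizer.fit_on_texts(reviews)
padded = pad_sequences(tokenizer.texts_to_sequences(reviews), maxlen=2)
print(tokenizer.word_index)
print(padded)

# one_hot: each word is hashed into an index, so collisions can occur
print([one_hot(r, 50) for r in reviews])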
Thank you so much. Washington U must be an awesome college. If you write model.add(Embedding(10, 4, input_length=2)), is the number of neurons in the embedding layer 10, or 4, or 2? Also, is the embedding layer the same as the input layer? Thanks so much!
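A sketch of what the three numbers control, assuming a TF 2.x-style Keras API (newer Keras drops the input_length argument, so it is omitted in the code): 10 is the vocabulary size (rows of the lookup table), 4 is the embedding dimension (the length of each output vector), and 2 is simply how many integer indices each input sample carries. The embedding layer is not the input layer; it is a trainable lookup table that sits after the integer input.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding

# Embedding(10, 4, input_length=2): 10 rows x 4 columns, 2 indices per sample
model = Sequential([Embedding(10, 4)])

x = np.array([[1, 2]])                          # one sample with 2 word indices
print(model.predict(x).shape)                   # (1, 2, 4): each index becomes a 4-vector
print(model.layers[0].get_weights()[0].shape)   # (10, 4) lookup table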
Hi, will we learn the attention model in the near future? Like LSTM and attention.
Attention, not currently, but I may do a related video on it outside the course.
@@HeatonResearch Great. Thank you so much. Look forward to that tutorial.
While creating the Embedding layer, input_dim is the number of unique words in the vocabulary, which looks like 2 since input_data = np.array([1,2]). So why do we set it to 10?
10 is the number of unique words we have.
I still can't get it...
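A small sketch of why input_dim has to cover the whole vocabulary rather than just the indices in one sample (the 10 matches the video's example; the rest is illustrative): the layer is a lookup table, and any index at or above input_dim has no row to look up.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding

model = Sequential([Embedding(input_dim=10, output_dim=4)])  # valid indices are 0..9
print(model.predict(np.array([[1, 2]])).shape)  # works: indices 1 and 2 are in range
# An index like 12 would be out of range: there is no row 12 in a 10-row table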
At 6:33 the input vector is [1,2] and the output is 2 rows of the lookup table, but no row is multiplied by 2... how is this possible?
At 9:47, why is the input [[0,1]] and the output 2 rows of the lookup table? Why is the input like this? The dimensions of the input and the lookup matrix don't match, so the multiplication seems meaningless. Or am I missing something?
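A sketch of what is happening at both timestamps: the layer does not multiply the input by anything. Each integer is used as a row index into the lookup table, so [[0,1]] just selects rows 0 and 1. Multiplying a one-hot vector by the table gives the same row, which is why the layer is often explained as an implicit one-hot multiplication.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding

model = Sequential([Embedding(10, 4)])
out = model.predict(np.array([[0, 1]]))              # shape (1, 2, 4)

table = model.layers[0].get_weights()[0]             # the (10, 4) lookup table
print(np.allclose(out[0, 0], table[0]))              # True: index 0 -> row 0
print(np.allclose(np.eye(10)[1] @ table, table[1]))  # one-hot x table gives the same row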
You mention it's dimension reduction, but then point out that it's not exactly that. Can you elaborate?
Good video, and your simple coding examples are excellent (because I can replicate them and try them out). However, your explanation (narration) in the last 4 or so minutes gets compressed... you speak very fast and scroll very fast, including some scrolling that basically happens off-screen. Thanks for the lesson!