Lecture 55 - Latent Factor Recommender System | Stanford University
- Published Jul 9, 2024
- 🔔 Stay Connected! Get the latest insights on Artificial Intelligence (AI) 🧠, Natural Language Processing (NLP) 📝, and Large Language Models (LLMs) 🤖. Follow ( / mtnayeem ) on Twitter 🐦 for real-time updates, news, and discussions in the field.
Check out the following interesting papers. Happy learning!
Paper Title: "On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction"
Paper: aclanthology.org/2023.finding...
Dataset: huggingface.co/datasets/tafse...
Paper Title: "Abstractive Unsupervised Multi-Document Summarization using Paraphrastic Sentence Fusion"
Paper: aclanthology.org/C18-1102/
Paper Title: "Extract with Order for Coherent Multi-Document Summarization"
Paper: aclanthology.org/W17-2407.pdf
Paper Title: "Paraphrastic Fusion for Abstractive Multi-Sentence Compression Generation"
Paper: dl.acm.org/doi/abs/10.1145/31...
Paper Title: "Neural Diverse Abstractive Sentence Compression Generation"
Paper: link.springer.com/chapter/10....
- Science & Technology
Very clear and straightforward, thank you for making this video!
I tried to understand this on multiple sites, and the best way turned out to be watching this entire video at full length. Don't try to rush it and you won't end up wasting a lot of time like I did :)
The notation for factoring R into Q and P can be made cleaner as follows:
Q ----> matrix of shape (k x m), where m = number of movies and k = latent vector dimension
P ----> matrix of shape (k x n), where n = number of users and k = latent vector dimension
R ≈ transpose(Q) * P
This gives a nice view: the latent vectors are all packed as columns in both P and Q.
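This column-packed layout can be sketched in a few lines of numpy (the matrix sizes here are made up purely for illustration):

```python
import numpy as np

# Hypothetical sizes, just for illustration.
m, n, k = 5, 7, 3  # m movies, n users, k latent dimensions

rng = np.random.default_rng(0)
Q = rng.standard_normal((k, m))  # one k-dim latent column per movie
P = rng.standard_normal((k, n))  # one k-dim latent column per user

# R ≈ transpose(Q) * P: entry (i, j) is the dot product of
# movie i's latent vector with user j's latent vector.
R_hat = Q.T @ P
print(R_hat.shape)  # (5, 7): one predicted rating per (movie, user) pair
```

Each predicted rating is then just the inner product of the corresponding columns of Q and P.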
Great explanation, helps to visualize the concepts so well. Thank you!
This is a really great explanation. I was struggling to grasp the concept from other sites and videos, but it became crystal clear with this one. The mapping of users and movies into a k-dimensional space was crucial for me in making sense of the method; it wasn't explicitly explained in other sources. The explanation of the modified version of SVD also goes straight to the point. This is coming from someone who didn't know the original SVD algorithm.
Thank you!
Full playlist for next video in the series from the same channel: czcams.com/play/PLLssT5z_DsK9JDLcT8T62VtzwyW9LNepV.html
Very good explanation.
If we have only the original User × Item matrix, we factor it into two smaller matrices, User × Factors and Item × Factors. How is the number of factors decided, and what are those factors?
Perfect!
Perfect, thanks for the easy explanation.
Is the equation on slide 22 correct? I mean, the SVD there is not the approximation, right? The optimal-error equation only makes sense when applied to the truncated SVD approximation; otherwise that subtraction would equal zero.
Da best !!!! Appreciate
Great video. Just one correction though: at 7:50 you basically say that users and movies that are closest together will get the best ratings (I know that's not exactly what you said, but that's what I understood, and I think many others will interpret it the same way), but this isn't true. We aren't taking a nearest-neighbour approach to predicting ratings, in which case it would have been true. We are taking the dot product of the user and the item in the latent space, which means a movie in the same direction as the user but with twice the vector length will get a higher rating than a movie with the same direction and the same vector length as the user, i.e., one that overlaps with the user.
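The dot-product-vs-distance point above can be checked numerically. A toy sketch with made-up 2-D latent vectors:

```python
import numpy as np

user = np.array([1.0, 0.0])   # user's latent vector
movie_same = user.copy()      # same direction, same length (overlaps the user)
movie_far = 2.0 * user        # same direction, twice the length

# Predicted rating is the dot product, not a distance.
print(user @ movie_same)  # 1.0
print(user @ movie_far)   # 2.0 -> higher rating despite being "farther" away

# A nearest-neighbour view would rank the two movies the other way round:
print(np.linalg.norm(user - movie_same))  # 0.0 (closest)
print(np.linalg.norm(user - movie_far))   # 1.0
```

So the longer vector in the same direction wins on rating even though the overlapping one wins on distance, exactly as the comment says.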
How do we find a good threshold for RMSE?
You would simply use it as a metric to compare models: the idea is that the model achieving the smallest RMSE is the best.
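Since RMSE is used relatively rather than against a fixed threshold, here is a quick sketch of comparing two models with it (all ratings and predictions are made-up numbers):

```python
import numpy as np

def rmse(actual, predicted):
    """Root-mean-square error over the observed ratings."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.sqrt(np.mean((actual - predicted) ** 2))

# Hypothetical held-out ratings and two models' predictions.
truth   = [4.0, 3.0, 5.0, 2.0]
model_a = [3.5, 3.0, 4.5, 2.5]
model_b = [5.0, 1.0, 5.0, 4.0]

print(rmse(truth, model_a))  # 0.4330... -> model A is preferred
print(rmse(truth, model_b))  # 1.5
```

The absolute values matter less than the comparison: pick the model with the smaller error on held-out ratings.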
nice