Lecture 55 - Latent Factor Recommender System | Stanford University

  • Added 9 Jul 2024
  • 🔔 Stay Connected! Get the latest insights on Artificial Intelligence (AI) 🧠, Natural Language Processing (NLP) 📝, and Large Language Models (LLMs) 🤖. Follow ( / mtnayeem ) on Twitter 🐦 for real-time updates, news, and discussions in the field.
    Check out the following interesting papers. Happy learning!
    Paper Title: "On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction"
    Paper: aclanthology.org/2023.finding...
    Dataset: huggingface.co/datasets/tafse...
    Paper Title: "Abstractive Unsupervised Multi-Document Summarization using Paraphrastic Sentence Fusion"
    Paper: aclanthology.org/C18-1102/
    Paper Title: "Extract with Order for Coherent Multi-Document Summarization"
    Paper: aclanthology.org/W17-2407.pdf
    Paper Title: "Paraphrastic Fusion for Abstractive Multi-Sentence Compression Generation"
    Paper: dl.acm.org/doi/abs/10.1145/31...
    Paper Title: "Neural Diverse Abstractive Sentence Compression Generation"
    Paper: link.springer.com/chapter/10....
  • Science & Technology

Comments • 16

  • @musashifanboy • 1 month ago

    Very clear and straightforward; thank you for making this video.

  • @rob5393 • 2 years ago • +4

    I tried to understand this on multiple sites, and the best approach turned out to be watching this entire video at full length. Don't try to rush and you won't end up wasting a lot of time like me :)

  • @patrick_bateman-ty7gp • 6 months ago

    The notation for factoring R into Q and P can be made cleaner as follows (see the sketch below):
    Q → matrix of shape (k × m), where m = number of movies and k = number of latent dimensions
    P → matrix of shape (k × n), where n = number of users and k = number of latent dimensions
    R ≈ transpose(Q) * P
    This gives a nice view, since the latent vectors are all packed as columns in both P and Q.
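
    A minimal NumPy sketch of this column-packed view (the sizes and names below are illustrative, not from the lecture):

    ```python
    import numpy as np

    # Illustrative (hypothetical) sizes: n users, m movies, k latent dimensions.
    n, m, k = 5, 4, 2

    rng = np.random.default_rng(0)
    P = rng.standard_normal((k, n))  # user latent vectors, packed as columns
    Q = rng.standard_normal((k, m))  # movie latent vectors, packed as columns

    # R ~ transpose(Q) * P: entry (i, j) is the dot product of
    # movie i's and user j's latent vectors.
    R_hat = Q.T @ P                  # shape (m, n)

    i, j = 1, 3
    print(R_hat.shape)                                 # (4, 5)
    print(np.isclose(R_hat[i, j], Q[:, i] @ P[:, j]))  # True
    ```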

  • @RufengXie • 4 years ago • +2

    Great explanation, helps to visualize the concepts so well. Thank you!

  • @lucianoinso • 1 year ago

    This is a really great explanation. I was struggling to grasp the concept from other sites and videos, but it became crystal clear with this one. The idea of mapping users and movies to a k-dimensional space was crucial for me in making sense of the method; it wasn't explicitly explained in other sources. The explanation of the modified version of SVD also goes straight to the point, and this is coming from someone who didn't know the original SVD algorithm.
    Thank you!

  • @BrunsterCoelho • 5 years ago • +4

    Full playlist (with the next videos in the series) from the same channel: czcams.com/play/PLLssT5z_DsK9JDLcT8T62VtzwyW9LNepV.html

  • @immidikalipradeep • 6 years ago • +1

    Very good explanation.

  • @parthsahu7702 • 4 years ago • +1

    If we only have the original User × Item matrix and we factor it into two smaller matrices, i.e., Users × Factors and Items × Factors, how is the number of factors decided, and what are those factors?

  • @mazenezzeddine8319 • 5 years ago

    Perfect!

  • @aramun7614 • 2 years ago

    Perfect, thanks for the easy explanation.

  • @miguelfsousa • 7 months ago

    Is the equation on slide 22 correct? I mean, the SVD there is not the approximation, right? The optimal-error equation only makes sense when applied to the truncated SVD approximation; otherwise, that subtraction would equal zero.
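
    To make the distinction concrete, here is a small NumPy sketch (with a hypothetical random matrix, not the one on the slide): the full SVD reconstructs R essentially exactly, while the rank-k truncation leaves the minimal possible Frobenius error, given by the discarded singular values (Eckart–Young).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    R = rng.standard_normal((6, 5))      # hypothetical ratings matrix

    U, s, Vt = np.linalg.svd(R, full_matrices=False)

    # Full SVD: the reconstruction error is (numerically) zero.
    full = U @ np.diag(s) @ Vt
    print(np.linalg.norm(R - full))      # effectively 0

    # Truncated rank-k SVD: the best rank-k approximation in Frobenius norm.
    k = 2
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    print(np.linalg.norm(R - approx))    # > 0
    print(np.sqrt(np.sum(s[k:] ** 2)))   # equals the line above (Eckart-Young)
    ```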

  • @ben7590 • 3 years ago

    Da best!!! Appreciate it.

  • @hardikagarwal6510 • 4 years ago • +1

    Great video. Just one correction, though: at 7:50 you basically say that users and movies that are closest together will get the best ratings (I know that's not exactly what you said, but that's what I understood, and I think many others will interpret it the same way), and this isn't true. We aren't taking a nearest-neighbour approach to predicting ratings, in which case it would have been true. We are taking the dot product of the user and the item in the latent space, which means a movie in the same direction as the user but with twice the user's vector length will get a higher predicted rating than a movie with the same direction and the same vector length as the user, i.e., one that overlaps the user exactly.
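
    A tiny NumPy sketch of this point (made-up vectors): scaling a movie vector in the same direction raises its dot-product score, even though a distance-based nearest-neighbour view would still prefer the movie that overlaps the user.

    ```python
    import numpy as np

    user = np.array([1.0, 1.0])      # hypothetical user vector in latent space
    movie_a = np.array([1.0, 1.0])   # same direction and length (overlaps user)
    movie_b = 2 * movie_a            # same direction, twice the length

    # Dot-product scoring, as in the latent factor model:
    print(user @ movie_a)            # 2.0
    print(user @ movie_b)            # 4.0 -> higher predicted rating

    # A nearest-neighbour view would instead prefer movie_a:
    print(np.linalg.norm(user - movie_a))  # 0.0
    print(np.linalg.norm(user - movie_b))  # ~1.41
    ```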

  • @vikrantdhawan4062 • 5 years ago • +1

    How do we find what a good threshold for RMSE is?

    • @Jack-lg9mq • 4 years ago • +3

      You would simply use this as a metric to compare models; the idea is that the model that achieves the smallest RMSE is the best.
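
      A minimal sketch of using RMSE this way (toy numbers and hypothetical models, purely illustrative): compute RMSE for each model on the same held-out ratings and prefer the smaller value.

      ```python
      import numpy as np

      def rmse(truth: np.ndarray, pred: np.ndarray) -> float:
          """Root-mean-square error over the given ratings."""
          return float(np.sqrt(np.mean((truth - pred) ** 2)))

      # Hypothetical held-out ratings and two models' predictions.
      truth   = np.array([4.0, 3.0, 5.0, 2.0])
      model_a = np.array([3.8, 3.2, 4.5, 2.5])
      model_b = np.array([3.0, 4.0, 4.0, 3.0])

      print(rmse(truth, model_a))  # ~0.38
      print(rmse(truth, model_b))  # 1.0 -> model A wins by this metric
      ```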

  • @rahul_bali • 6 years ago

    nice