Stanford CS25: V4 I Overview of Transformers

  • Added Apr 22, 2024
  • April 4, 2024
    Steven Feng, Stanford University [styfeng.github.io/]
    Div Garg, Stanford University [divyanshgarg.com/]
    Emily Bunnapradist, Stanford University [ / ebunnapradist ]
    Seonghee Lee, Stanford University [shljessie.github.io/]
    Brief intro and overview of the history of NLP, Transformers and how they work, and their impact. Discussion about recent trends, breakthroughs, applications, and remaining challenges/weaknesses. Also discussion about AI agents. Slides here: docs.google.com/presentation/...
    More about the course can be found here: web.stanford.edu/class/cs25/
    View the entire CS25 Transformers United playlist: • Stanford CS25 - Transf...

Comments • 33

  • @3ilm_yanfa3
    @3ilm_yanfa3 21 days ago +9

    Can't believe it ... just today we started the part about LSTMs and transformers in my ML course, and here it comes.
    Thank you guys!

  • @fatemehmousavi402
    @fatemehmousavi402 21 days ago +5

    Awesome, thank you Stanford Online for sharing this amazing video series.

  • @Drazcmd
    @Drazcmd 21 days ago +5

    Very cool! Thanks for posting this publicly, it's really awesome to be able to audit the course :)

  • @mjavadrajabi7401
    @mjavadrajabi7401 21 days ago +5

    Great!! Finally it's time for CS25 V4 🔥

  • @marcinkrupinski
    @marcinkrupinski 17 days ago +3

    Amazing stuff! Thank you for publishing this valuable material!

  • @benjaminy.
    @benjaminy. 16 days ago +2

    Hello Everyone! Thank you very much for uploading these materials. Cheers

  • @lebesguegilmar1
    @lebesguegilmar1 21 days ago +2

    Thanks for sharing this course and lecture, Stanford. Congratulations. Greetings from Brazil.

  • @JJGhostHunters
    @JJGhostHunters 15 days ago +1

    I recently started to explore using transformers for timeseries classification as opposed to NLP. Very excited about this content!

  • @styfeng
    @styfeng 21 days ago +12

    it's finally released! hope y'all enjoy(ed) the lecture 😁

    • @laalbujhakkar
      @laalbujhakkar 21 days ago

      Don't hold the mic so close bro. The lecture was really good though :)

    • @gemini22581
      @gemini22581 20 days ago

      What is a good course to learn NLP?

    • @siiilversurfffeeer
      @siiilversurfffeeer 18 days ago

      hi feng! will there be more cs25 v4 lectures uploaded to this channel?

    • @styfeng
      @styfeng 17 days ago +1

      @@siiilversurfffeeer yes! should be a new video out every week, approx. 2-3 weeks after each lecture :)

  • @liangqunlu1553
    @liangqunlu1553 15 days ago +2

    Very interesting summary.

  • @RishiKaura
    @RishiKaura 16 days ago +1

    Sincere and smart students.

  • @GerardSans
    @GerardSans 20 days ago +25

    Be careful using anthropomorphic language when talking about LLMs. Eg: thoughts, ideas, reasoning. Transformers don’t “reason” or have “thoughts” or even “knowledge”. They extract existing patterns in the training data and use stochastic distributions to generate outputs.
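    A minimal sketch of the "stochastic distributions" point above, assuming PyTorch: the model produces scores (logits) for each candidate next token, and generation samples from the resulting probability distribution rather than retrieving a stored thought. The logits and temperature here are made-up illustrative values, not anything from the lecture.

    ```python
    import torch

    # Hypothetical scores the model assigns to four candidate next tokens.
    logits = torch.tensor([2.0, 1.0, 0.2, -1.5])

    # Temperature rescales the logits; softmax turns them into probabilities.
    temperature = 0.8
    probs = torch.softmax(logits / temperature, dim=-1)

    # The next token is a stochastic draw from that distribution.
    next_token = torch.multinomial(probs, num_samples=1)
    print(probs, next_token)
    ```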

    • @ehza
      @ehza 19 days ago +2

      That's a pretty important observation imo

    • @junyuzheng5282
      @junyuzheng5282 18 days ago +3

      Then what is “reason” “thoughts” “knowledge”?

    • @DrakenRS78
      @DrakenRS78 9 days ago +1

      Do individual neurons have thoughts, reason, or knowledge, or is it once again the collective that we should be assessing?

    • @TheNewton
      @TheNewton 5 days ago

      This mis-anthropomorphism problem will only grow because each end of the field/industry is being sloppy with it, so calls for sanity will just get derided as time goes on.
      On the research side, academics title-bait, as they did with "attention", so papers get attention instead of just coining a new word/phrase like 'correlation network', 'word window', or 'hyper hyper-networks', or they overload existing terms like 'backtracking' and 'backpropagation'.
      And on the other end of the collective full-court press, corporations keep passing assistants (tools) off as human-like with names such as Cortana and Siri for the sake of branding and marketing.

    • @TheNewton
      @TheNewton 5 days ago

      @@junyuzheng5282 `Then what is “reason” “thoughts” “knowledge”?`
      Reason, thoughts, knowledge, etc. are more than what gets hallucinated in your linear algebra formulas.

  • @TV19933
    @TV19933 21 days ago

    future artificial intelligence
    i was into talk this
    probability challenge
    Gemini ai talking ability rapid talk i suppose so
    it's splendid

  • @GeorgeMonsour
    @GeorgeMonsour 14 days ago

    I want to know more about 'filters.' Are they human or computer processes, or mathematical models? The filters are a reflection I'd like to understand more about; I hope they are not an inflection, which would be an unconscious pathway.
    This is a really sweet dip into the currency of knowledge, and these students are to be commended; however, in the common world there is a tendency developing towards a 'tower of Babel'.
    Greed may have an influence that we must be wary of. I heard some warnings in the presentation that consider this tendency.
    I'm impressed by these students. I hope they aren't influenced by the silo system of capitalism and that they remain at the front of the generalization and commonality needed to keep bad actors off the playing field.

  • @IamPotato_007
    @IamPotato_007 19 days ago

    Where are the professors?

  • @hussienalsafi1149
    @hussienalsafi1149 21 days ago +1

    ☺️☺️☺️🥰🥰🥰

  • @riju1956
    @riju1956 21 days ago +6

    so they stand for 1 hour

    • @rockokechukwu3343
      @rockokechukwu3343 21 days ago

      Is it okay to cheat in an exam if you have the opportunity to do so?

  • @Anbu_Sampath
    @Anbu_Sampath 20 days ago

    It would be great if CS25: V4 got its own playlist on YouTube.

  • @ramsever5087
    @ramsever5087 13 days ago

    What is said at 13:47 is incorrect.
    Large language models like ChatGPT or other state-of-the-art language models do not only have a decoder in their architecture. They employ the standard transformer encoder-decoder architecture. The transformer architecture used in these large language models consists of two main components:
    The Encoder:
    This encodes the input sequence (prompt, instructions, etc.) into vector representations.
    It uses self-attention mechanisms to capture contextual information within the input sequence.
    The Decoder:
    This takes in the encoded representations from the encoder.
    It generates the output sequence (text) in an autoregressive manner, one token at a time.
    It uses self-attention over the already generated output, as well as cross-attention over the encoder's output, to predict the next token.
    So both the encoder and decoder are critical components. The encoder allows understanding and representing the input, while the decoder enables powerful sequence generation capabilities by predictively modeling one token at a time while attending to the encoder representations and past output.
    Having only a decoder without an encoder would mean the model can generate text but not condition on or understand any input instructions/prompts. This would severely limit its capabilities.
    The transformer's encoder-decoder design, with each component's self-attention and cross-attention, is what allows large language models to understand inputs flexibly and then generate relevant, coherent, and contextual outputs. Both components are indispensable for their impressive language abilities.
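    A minimal sketch, assuming PyTorch and made-up hyperparameters (model width, vocabulary size, sequence lengths), of the encoder-decoder attention pattern described above: encoder self-attention over the input, decoder self-attention under a causal mask, and cross-attention from the decoder to the encoder output. Positional encodings and training details are omitted for brevity; this illustrates the classic encoder-decoder transformer of "Attention Is All You Need", not the internals of any particular production model.

    ```python
    import torch
    import torch.nn as nn

    d_model, nhead, vocab_size = 512, 8, 32000

    embed = nn.Embedding(vocab_size, d_model)
    transformer = nn.Transformer(
        d_model=d_model, nhead=nhead,
        num_encoder_layers=6, num_decoder_layers=6,
        batch_first=True,
    )
    lm_head = nn.Linear(d_model, vocab_size)

    # Encoded input sequence (the prompt) and the tokens generated so far.
    src = embed(torch.randint(0, vocab_size, (1, 16)))
    tgt = embed(torch.randint(0, vocab_size, (1, 4)))

    # Causal mask: each target position may only attend to earlier positions,
    # which is the autoregressive, one-token-at-a-time property described above.
    tgt_mask = transformer.generate_square_subsequent_mask(tgt.size(1))

    # Encoder self-attention over src, decoder self-attention over tgt, and
    # decoder-to-encoder cross-attention all happen inside this call.
    hidden = transformer(src, tgt, tgt_mask=tgt_mask)

    # Project the last decoder state to a distribution over the next token.
    next_token_logits = lm_head(hidden[:, -1, :])
    ```

    Decoder-only models, by contrast, drop the encoder and cross-attention and run causal self-attention over the prompt and generated tokens as one growing sequence.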

  • @laalbujhakkar
    @laalbujhakkar 21 days ago +2

    Stanford's struggles with microphones continue.

    • @jeesantony5308
      @jeesantony5308 21 days ago +1

      it is cool to see some negative comments in between lots of pos... ✌🏼✌🏼

    • @laalbujhakkar
      @laalbujhakkar 21 days ago

      @@jeesantony5308 I love the content, which makes me h8 the lack of thought and preparation that went into the delivery of all that knowledge even more. Just trying to reduce the loss as it were.