Memory in LLM Applications

  • Published 5 Sep 2024

Comments • 13

  • @jaceyang3375 · 1 month ago

    Very nice introduction, good articulation. Thanks for the upload ❤

  • @alesanchezr_ · 10 months ago

    Such an important part of LLMs.

  • @anselm94 · 1 year ago

    Thank you, Harrison!

  • @Adrian_Galilea · 11 months ago

    Very good talk.

  • @fgfanta · 1 year ago +7

    Perhaps you could post a link to the mentioned paper?

  • @andresfelipehiguera785 · 2 months ago

    While LangChain's memory types seem primarily geared towards constructing optimal prompts, focusing on retrieving relevant information for the next step, I believe there's another avenue worth investigating. This involves modifying the model's internal world representation, potentially by adjusting weights or even its overall size.
    This approach could offer a means to constrain the large language model (LLM), potentially enhancing the believability of the simulation it generates. Do you have any references I could explore that delve into this concept further?
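For context on the prompt-construction side this comment describes, here is a minimal sketch (not from the talk) using LangChain's ConversationBufferMemory: past turns are stored as text and pasted into the next prompt, and the model's weights are never touched. The import path is from the classic langchain package and may differ in newer releases; the example turns are made up.

```python
# A minimal sketch of LangChain-style "memory as prompt construction":
# past turns are stored as text and prepended to the next prompt;
# the model's weights are never modified.
# Note: this import path is from the classic `langchain` package and may
# have moved in newer releases.
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

# Record two earlier turns of a (made-up) conversation.
memory.save_context({"input": "My name is Ada."},
                    {"output": "Nice to meet you, Ada!"})
memory.save_context({"input": "I work on compilers."},
                    {"output": "Compilers are a great area."})

# At the next step the stored history is loaded and pasted into the prompt,
# which is the only place the "memory" exists.
history = memory.load_memory_variables({})["history"]
prompt = f"{history}\nHuman: What do I work on?\nAI:"
print(prompt)
```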

  • @deeplearningpartnership

    Cool.

  • @ekkamailax · 8 months ago

    Would it be possible to save the entire conversation history as a text file and use that text file to fine-tune? (See the sketch below this thread.)

    • @rewindcat7927 · 8 months ago

      From what I understand, yes, it's possible, but at this point (Dec 2023) it is extremely slow and expensive. Have a look at the recent Fireship video about the Dolphin LLM.
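On the question of saving the conversation history for fine-tuning, here is a minimal sketch of just the data-preparation step: writing recorded turns to a JSONL file in the chat-style "messages" format many fine-tuning APIs accept. The file name, example turns, and exact schema are placeholders; check your provider's documentation for the format it actually requires.

```python
# A minimal sketch of turning a saved conversation into fine-tuning data:
# each line of the JSONL file holds one training example in a chat-style
# "messages" format. The schema, file name, and turns are placeholders;
# the format your fine-tuning provider expects may differ.
import json

conversation = [
    {"role": "user", "content": "How do I give my chatbot memory?"},
    {"role": "assistant", "content": "Store past turns and include them in the prompt."},
    {"role": "user", "content": "Does that scale to long conversations?"},
    {"role": "assistant", "content": "Usually you summarize old turns or retrieve only the relevant ones."},
]

record = {
    "messages": [{"role": "system", "content": "You are a helpful assistant."}] + conversation
}

# One JSON object per line; a real log would produce many such lines.
with open("chat_log.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```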

  • @samarammar1593 · 6 months ago

    Does this work with local LLMs?
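Prompt-based memory of this kind does not depend on which model receives the prompt, so local models should work as well. The sketch below pairs a LangChain conversation chain with a locally served model via Ollama; the class names come from older langchain / langchain_community releases and may have moved, and "llama3" is only an example model name.

```python
# A hedged sketch (not from the talk) of conversation memory with a locally
# served model. Assumes Ollama is running locally with a pulled model;
# "llama3" is just an example model name, and these class names are from
# older langchain / langchain_community releases.
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_community.llms import Ollama

llm = Ollama(model="llama3")  # talks to the local Ollama server

chain = ConversationChain(llm=llm, memory=ConversationBufferMemory())

# The memory mechanism is the same as with a hosted model: prior turns are
# stored and fed back in as part of each new prompt.
print(chain.predict(input="Hi, my name is Ada."))
print(chain.predict(input="What is my name?"))
```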