LlamaIndex Webinar: Long-Term, Self-Editing Memory with MemGPT

  • Published 30. 06. 2024
  • Long-term memory for LLMs is an unsolved problem, and doing naive retrieval from a vector database doesn’t work.
    The recent iteration of MemGPT (Packer et al.) takes a big step in this direction. Treating the LLM as an analog of an operating system, the authors propose “virtual context management” to manage memory both in the context window and in external storage. Recent advances in function calling allow these agents to read from and write to these data sources and to modify their own context (a minimal sketch of this pattern follows the timeline below).
    We're excited to host Charles Packer, lead author of MemGPT. Charles presents an excellent overview of the project, gives a demo, and we hold Q&A sessions in the middle and towards the end.
    Timeline:
    00:00 - 24:20 Presentation
    24:20 - 41:18 Demo
    41:18 Q&A
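
The sketch below illustrates the virtual-context-management idea described in the talk: a small in-context memory that the agent can edit via function calls, backed by an unbounded external store that facts are paged out to and retrieved from. All names here (MemoryAgent, core_memory_append, archival_memory_search, the slot-based budget) are illustrative assumptions, not the actual MemGPT API.

```python
# Minimal sketch of the "virtual context management" pattern, assuming a
# slot-based context budget and keyword retrieval; not the real MemGPT API.
from dataclasses import dataclass, field


@dataclass
class MemoryAgent:
    """Keeps a small in-context memory plus an unbounded external store."""
    core_memory: list[str] = field(default_factory=list)      # lives in the prompt
    archival_memory: list[str] = field(default_factory=list)  # external storage
    core_limit: int = 4                                        # context-window budget

    # --- functions the LLM would invoke via function calling --------------
    def core_memory_append(self, fact: str) -> None:
        """Add a fact to in-context memory, paging the oldest out if full."""
        self.core_memory.append(fact)
        while len(self.core_memory) > self.core_limit:
            evicted = self.core_memory.pop(0)
            self.archival_memory.append(evicted)  # move to external storage

    def archival_memory_insert(self, fact: str) -> None:
        """Write directly to external storage."""
        self.archival_memory.append(fact)

    def archival_memory_search(self, query: str, top_k: int = 3) -> list[str]:
        """Naive keyword retrieval; a real system would use a vector index."""
        hits = [f for f in self.archival_memory if query.lower() in f.lower()]
        return hits[:top_k]

    def build_prompt(self, user_message: str) -> str:
        """Assemble the context window the LLM actually sees."""
        memory_block = "\n".join(f"- {fact}" for fact in self.core_memory)
        return f"### Core memory\n{memory_block}\n\n### User\n{user_message}"


if __name__ == "__main__":
    agent = MemoryAgent()
    for fact in ["User's name is Ada", "User prefers Python",
                 "User lives in Berlin", "User has a dog named Rex",
                 "User is learning Rust"]:
        agent.core_memory_append(fact)

    print(agent.build_prompt("What do you remember about me?"))
    print(agent.archival_memory_search("name"))  # recall a paged-out fact
```

In the actual system the LLM decides when to call these memory functions itself, which is what makes the memory "self-editing" rather than a fixed retrieval pipeline.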

Comments • 4

  • @gustavstressemann7817 3 months ago

    Nice project. This is awesome. Greetings from old Europe.

  • @ScriptGurus1 3 months ago +1

    Pretty cool. Does anyone know where I can get the persona temp prompt that generates Perplexity-like responses, as shown in the demo?

  • @chrike45 2 months ago +1

    When will the developer portal be available?

  • @aadityamundhalia 1 month ago

    How do I use MemGPT with LlamaIndex as memory?