Augmented Language Models (LLM Bootcamp)

  • Date added: 27. 07. 2024
  • New course announcement ✨
    We're teaching an in-person LLM bootcamp in the SF Bay Area on November 14, 2023. Come join us if you want to see the most up-to-date materials on building LLM-powered products and learn in a hands-on environment.
    www.scale.bythebay.io/llm-wor...
    Hope to see some of you there!
    In this video, Josh walks through the patterns for augmenting language models with external context: retrieval augmentation, chaining, and tool use.
    Download slides from the bootcamp website here: fullstackdeeplearning.com/llm...
    Intro and outro music made with Riffusion: github.com/riffusion/riffusion
    Watch the rest of the LLM Bootcamp videos here: • LLM Bootcamp - Spring ...
    0:00:00 Why augmented LMs?
    0:06:40 Why retrieval augmentation?
    0:08:25 Traditional information retrieval
    0:13:05 Embeddings for retrieval
    0:22:15 Embedding relevance and indexes
    0:28:42 Embedding databases
    0:38:50 Beyond naive embeddings
    0:41:10 Patterns & case studies
    0:46:16 What are chains and why do we need them?
    0:52:50 LangChain
    0:54:55 Tool use
    0:59:17 Plugins
    1:02:38 Recommendations for tool use
    1:04:07 Recap & conclusions
  • Science & Technology

Comments • 17

  • @jeromeeusebius
    @jeromeeusebius 10 months ago

    Great resource for understanding RAG and the various ways to improve the reliability and accuracy of LLMs. Thanks for sharing.

  • @ayanghosh8226
    @ayanghosh8226 1 year ago +12

    Love the logical sequence of presenting the complications in advanced LLM applications. One of the best resources on the web, if one wants a solid mental map of how and when to augment LLMs.

  • @loic7572
    @loic7572 1 year ago +7

    This is the best bootcamp I've ever watched. I only wish I had known about the YouTube channel before.

  • @lukeliem9216
    @lukeliem9216 1 year ago

    This talk is very informative about building LLM-based apps with proprietary datasets.

  • @robertcormia7970
    @robertcormia7970 1 year ago

    Another fantastic video (webinar) helping to build on foundational knowledge of LLMs. Clear explanations of chains, tools, APIs, and "process". Can't wait to watch the next one (LLMOps).

  • @za_daleko
    @za_daleko 10 months ago

    Thanks for this knowledge. Greetings from Poland.

  • @RohanKumar-vx5sb
    @RohanKumar-vx5sb 1 year ago +6

    You're the best. This has been the single most useful and up-to-date analysis of LLM advancements.

  •  1 year ago +2

    Awesome! Thanks Josh for the presentation!

  • @saratbhargavachinni5544
    @saratbhargavachinni5544 1 year ago +1

    Great talk! Thanks for sharing

  • @fudanjx
    @fudanjx 1 year ago +3

    Quick Summary:
    Introduction:
    Language models are powerful but lack knowledge of the world. We can augment them by providing relevant context and data.
    Approaches:
    - Retrieval: Searching a corpus and providing relevant documents as context.
    - Chains: Using one language model to develop context for another.
    - Tools: Giving models access to APIs and external data.
    Details:
    Retrieval:
    - Simplest way is adding relevant facts to context window.
    - As corpus scales, treat it as an information retrieval problem.
    - Embeddings and vector databases can improve retrieval (see the sketch at the end of this summary).
    Chains:
    - Use one language model to develop context for another.
    - Can help encode complex reasoning and get around token limits.
    - Tools like LangChain provide examples of chain patterns.
    Tools:
    - Give models access to APIs and external data.
    - Chains involve manually designing tool use.
    - Plugins let models decide when to use tools.
    Key Takeaways:
    - Start with rules and heuristics to provide context.
    - As knowledge base scales, think about information retrieval.
    - Chains can help with complex reasoning and token limits.
    - Tools give models access to external knowledge.
    Conclusion:
    Augmenting language models with relevant context and data can significantly improve their capabilities. There are a variety of techniques to provide that augmentation, each with trade-offs around flexibility, reliability, and complexity.
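
    A minimal sketch of the retrieval step described above, purely for illustration: the embed() helper, the toy corpus, and the prompt template are placeholders rather than anything from the talk, and a real system would use an actual embedding model and a vector database.

    import numpy as np

    # Placeholder embedding: a real system would call an embedding model
    # (e.g. a sentence-transformer or an embeddings API) instead of hashing.
    def embed(text: str) -> np.ndarray:
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.normal(size=384)
        return v / np.linalg.norm(v)

    corpus = [
        "The bootcamp covers retrieval augmentation, chains, and tool use.",
        "Embeddings map text to vectors so similar texts land close together.",
        "Vector databases index embeddings for fast nearest-neighbour search.",
    ]
    doc_vectors = np.stack([embed(d) for d in corpus])

    def retrieve(query: str, k: int = 2) -> list[str]:
        # With unit-length vectors, cosine similarity is just a dot product.
        scores = doc_vectors @ embed(query)
        return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

    def build_prompt(query: str) -> str:
        # Stuff the top-k retrieved documents into the context window
        # ahead of the user's question.
        context = "\n".join(retrieve(query))
        return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

    print(build_prompt("How do vector databases help retrieval?"))

    As the corpus grows, the exhaustive dot product over all documents is replaced by an approximate nearest-neighbour index, which is what the embedding databases discussed in the talk provide.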

  • @deeplearningpartnership

    Cool

  • @user-cq5ue1cj3n
    @user-cq5ue1cj3n 1 year ago +1

    Great. I had to listen at 0.75 speed so as not to miss anything.

  • @domlahaix
    @domlahaix 1 year ago

    Crocodile, Ball.... unless you're working for Lacoste 😀

  • @SavanVyas91
    @SavanVyas91 1 year ago

    What's his name? Where can I find him?

  • @kennethcarvalho3684
    @kennethcarvalho3684 8 months ago

    Isn't this just search, like Google?