Augmented Language Models (LLM Bootcamp)
- Date added: 27 Jul 2024
- New course announcement ✨
We're teaching an in-person LLM bootcamp in the SF Bay Area on November 14, 2023. Come join us if you want to see the most up-to-date materials on building LLM-powered products and learn in a hands-on environment.
www.scale.bythebay.io/llm-wor...
Hope to see some of you there!
In this video, Josh walks through the patterns for augmenting language models with external context: retrieval augmentation, chaining, and tool use.
Download slides from the bootcamp website here: fullstackdeeplearning.com/llm...
Intro and outro music made with Riffusion: github.com/riffusion/riffusion
Watch the rest of the LLM Bootcamp videos here: • LLM Bootcamp - Spring ...
0:00:00 Why augmented LMs?
0:06:40 Why retrieval augmentation?
0:08:25 Traditional information retrieval
0:13:05 Embeddings for retrieval
0:22:15 Embedding relevance and indexes
0:28:42 Embedding databases
0:38:50 Beyond naive embeddings
0:41:10 Patterns & case studies
0:46:16 What are chains and why do we need them?
0:52:50 LangChain
0:54:55 Tool use
0:59:17 Plugins
1:02:38 Recommendations for tool use
1:04:07 Recap & conclusions
Great resource for understanding RAG and the various ways to improve the reliability and accuracy of LLMs. Thanks for sharing.
Love the logical sequence of presenting the complications in advanced LLM applications. One of the best resources on the web, if one wants a solid mental map of how and when to augment LLMs.
💯
This is the best bootcamp I've ever watched. I only wish I had known about the YouTube channel before.
Second that!
This talk is very informative about building LLM-based apps with proprietary datasets.
Another fantastic video (webinar) helping to build on foundational knowledge of LLMs. Clear explanations of chains, tools, APIs, and "process". Can't wait to watch the next one (LLMOps).
Thanx for this knowledge. Greetings from Poland.
You're the best. This has been the single most useful and up-to-date analysis of LLM advancements.
Awesome! Thanks Josh for the presentation!
Great talk! Thanks for sharing
Quick Summary:
Introduction:
Language models are powerful but lack knowledge of anything outside their training data. We can augment them by providing relevant context and data.
Approaches:
- Retrieval: Searching a corpus and providing relevant documents as context.
- Chains: Using one language model to develop context for another.
- Tools: Giving models access to APIs and external data.
Details:
Retrieval:
- The simplest approach is adding relevant facts to the context window.
- As the corpus scales, treat it as an information retrieval problem.
- Embeddings and vector databases can improve retrieval.
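The retrieval idea above can be sketched in a few lines: embed the query and each document, rank by cosine similarity, and stuff the top hits into the prompt. This is a toy sketch only — the bag-of-words "embedding" here stands in for the learned dense embeddings (and vector databases) a real system would use, and `retrieve` and `embed` are illustrative names, not from the talk.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. Real systems use
    # learned dense embeddings from a sentence-encoder model instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank every document by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

corpus = [
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
    "Paris is the capital of France.",
]
docs = retrieve("Where is the Eiffel Tower?", corpus, k=1)
# The retrieved documents become the "relevant facts" placed in the
# model's context window.
prompt = "Answer using this context:\n" + "\n".join(docs) + "\nQ: Where is the Eiffel Tower?"
```

At scale, the linear scan in `retrieve` is replaced by an approximate-nearest-neighbor index — the role the embedding databases in the talk play.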
Chains:
- Use one language model to develop context for another.
- Can help encode complex reasoning and get around token limits.
- Tools like Langchain provide examples of chain patterns.
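The chain pattern can be shown as two sequenced model calls: one condenses a long document, the next answers using that condensed context. `call_llm` below is a hypothetical stub standing in for a real API call — only the two-step structure reflects the talk.

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (an API request in practice); this
    # canned stub exists only so the chain's structure is runnable here.
    if prompt.startswith("Summarize: "):
        return "Short summary of the document."
    return "ANSWER using context -> " + prompt.splitlines()[0]

def chain(document: str, question: str) -> str:
    # Step 1: one model call condenses a document that would otherwise
    # exceed the second call's token limit.
    summary = call_llm("Summarize: " + document)
    # Step 2: a second call answers using only the condensed context.
    return call_llm(f"Context: {summary}\nQuestion: {question}")

result = chain("A very long document. " * 100, "What does it say?")
```

Frameworks like LangChain package exactly this kind of call sequencing, so the pattern is declared rather than hand-wired.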
Tools:
- Give models access to APIs and external data.
- Chains involve manually designing tool use.
- Plugins let models decide when to use tools.
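The tool-use pattern can be sketched as: the model emits an action string, and the application parses it and dispatches to the named tool. `fake_model`, `TOOLS`, and the `CALL tool: args` format are all illustrative assumptions — real systems parse a tool invocation out of actual LLM output, and a plugin-style setup lets the model itself decide when to emit one.

```python
import re

# Toy tool registry. eval is restricted here, but a real system would
# use a properly sandboxed calculator.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def fake_model(question: str) -> str:
    # Stand-in for a model that emits a tool call such as
    # 'CALL calculator: 17 * 3' when it needs one; real systems parse
    # this kind of action string out of the LLM's text output.
    m = re.search(r"\d[\d+\-*/ ().]*", question)
    return f"CALL calculator: {m.group().strip()}" if m else "no tool needed"

def run_with_tools(question: str) -> str:
    out = fake_model(question)
    if out.startswith("CALL "):
        # Parse the action string and dispatch to the chosen tool.
        name, _, arg = out[len("CALL "):].partition(": ")
        return TOOLS[name](arg)
    return out

answer = run_with_tools("What is 17 * 3?")
```

In the manually designed (chain) version, the programmer decides which tool runs at which step; in the plugin version, the dispatch loop above runs on whatever the model chooses to call.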
Key Takeaways:
- Start with rules and heuristics to provide context.
- As knowledge base scales, think about information retrieval.
- Chains can help with complex reasoning and token limits.
- Tools give models access to external knowledge.
Conclusion:
Augmenting language models with relevant context and data can significantly improve their capabilities. A variety of techniques provide that augmentation, each with trade-offs in flexibility, reliability, and complexity.
Cool
Great. I had to listen at 0.75 speed, not to miss anything.
Crocodile, Ball.... unless you're working for Lacoste 😀
What's his name? Where can I find him?
Isn't this just search, like Google?