Build your own Local Mixture of Agents using Llama Index Pack!!!
- Published 21. 08. 2024
- I built my own Mixture of Agents (MoA), an innovative model-grouping technique for combining multiple LLMs (by Wang et al.), using Ollama and the LlamaIndex pack.
This tutorial deals specifically with local models: we can leverage Llama 3, Mistral, or any other open model to build this MoA system.
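The layered proposer-and-aggregator flow described above can be sketched in plain Python. This is a minimal illustration of the idea from Wang et al., with stub functions standing in for local Ollama models (e.g. llama3, mistral); the function names and prompt formats here are illustrative assumptions, not the LlamaIndex pack's actual API.

```python
def propose(model_name, prompt):
    # Stub proposer: a real version would call a local model via
    # Ollama. Here we just tag the prompt with the model name.
    return f"[{model_name}] answer to: {prompt}"

def aggregate(responses, prompt):
    # Stub aggregator: a real aggregator LLM would synthesize the
    # collected reference responses into one improved answer.
    refs = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(responses))
    return f"Synthesized from {len(responses)} references:\n{refs}"

def mixture_of_agents(prompt, proposers, num_layers=3):
    # Round 1: every proposer answers the raw prompt.
    responses = [propose(m, prompt) for m in proposers]
    for _ in range(num_layers - 1):
        # Later rounds: proposers see the previous round's answers
        # appended to the original prompt (aggregate-and-synthesize).
        augmented = prompt + "\n\nReference responses:\n" + "\n".join(responses)
        responses = [propose(m, augmented) for m in proposers]
    # Final layer: a single aggregator produces the answer.
    return aggregate(responses, prompt)

print(mixture_of_agents("What is MoA?", ["llama3", "mistral"]))
```

In the real pack the proposers and the aggregator are separate LLM instances, which is why system prompts and memory usage per model matter when everything runs locally.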
🔗 Links 🔗
Local MoA - Notebook used in the code - github.com/amr...
MoA pack from Llama index - llamahub.ai/l/...
Ollama.ai - to download Ollama
Ollama Installation tutorial • Ollama on CPU and Priv...
❤️ If you want to support the channel ❤️
Support here:
Patreon - / 1littlecoder
Ko-Fi - ko-fi.com/1lit...
🧭 Follow me on 🧭
Twitter - / 1littlecoder
Linkedin - / amrrs
Thanks, this was in my thoughts way back in May; thankfully somebody implemented it as a next step for me :)
How does it manage memory when we use multiple LLMs?
Nice tutorial, thank you.
Fantastic, thanks so much. I have an issue: for some reason, at Round 3/3 while collecting reference responses, I'm getting this error:
An error occurred:
Thanks! What is the title of the previous video? I'm hoping the next video is on Mixture of Experts! :)
Could you elaborate? You mean something like Mixture of Experts with LlamaIndex?
Can we customize the system prompt for both the aggregator and the proposers?
Great video as always. I have a question.
I'm trying to build a RAG system that answers questions about a dataset I have, using create_pandas_dataframe_agent. I also have a long list of sample questions and answers that I want the RAG system to imitate but not copy exactly. These questions contain some domain knowledge, and I've also added some information about the columns at the end of these sample questions and answers.
I'm currently passing this as a prefix parameter, but I'm not sure that's the best way to do it.
The idea is to have this pandas agent also be able to answer questions that don't require pandas. What's the best way to build this? Thanks in advance!
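One common way to fold sample Q&A pairs and column descriptions into such an agent is to format them as a single few-shot prefix string, as the commenter describes. A minimal sketch of that formatting step, the `prefix` parameter name comes from the comment itself, while the helper name, sample data, and layout are illustrative assumptions:

```python
def build_prefix(samples, column_notes):
    # Assemble a few-shot prefix: instructions, example Q&A pairs
    # to imitate, then column descriptions for domain context.
    lines = [
        "You answer questions about the dataframe `df`.",
        "Imitate the style of these examples without copying them:",
    ]
    for question, answer in samples:
        lines.append(f"Q: {question}\nA: {answer}")
    lines.append("Column notes:")
    lines.extend(f"- {col}: {note}" for col, note in column_notes.items())
    return "\n".join(lines)

samples = [("How many rows are there?", "Use len(df) to count rows.")]
notes = {"price": "unit price in USD"}
print(build_prefix(samples, notes))
```

The resulting string would then be passed as the agent's prefix; for questions that don't need pandas at all, a common pattern is to route them to a plain LLM call instead of the dataframe agent.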
On which GPU did the models run? An integrated GPU?
Yep, integrated, but mostly on CPU.