Reliable, fully local RAG agents with LLaMA3

  • Published May 8, 2024
  • With the release of LLaMA3, we're seeing great interest in agents that can run reliably and locally (e.g., on your laptop). Here, we show how to build reliable local agents with LangGraph and LLaMA3-8b from scratch. We combine ideas from three advanced RAG papers (Adaptive RAG, Corrective RAG, and Self-RAG) into a single control flow. We run everything locally, with a local vectorstore courtesy of @nomic_ai & @trychroma, @tavilyai for web search, and LLaMA3-8b via @ollama. A sketch of the flow appears below the code link.
    Code:
    github.com/langchain-ai/langg...
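
    For orientation, a minimal sketch of that control flow in LangGraph, with stubbed-out nodes; the node names and routing logic here are illustrative assumptions, not the exact code from the linked repo.

```python
# Sketch only: node names and routing are assumptions, not the repo's code.
from typing import List, TypedDict

from langgraph.graph import END, StateGraph

class GraphState(TypedDict):
    question: str
    documents: List[str]
    generation: str

def retrieve(state: GraphState) -> dict:
    # Pull candidate chunks from the local (Nomic + Chroma) vectorstore.
    return {"documents": ["<retrieved chunk>"]}

def grade_documents(state: GraphState) -> dict:
    # Corrective RAG idea: keep only chunks a grader marks relevant.
    return {"documents": [d for d in state["documents"] if d]}

def web_search(state: GraphState) -> dict:
    # Fallback: augment with Tavily web-search results.
    return {"documents": state["documents"] + ["<web result>"]}

def generate(state: GraphState) -> dict:
    # Answer with LLaMA3-8b; Self-RAG adds grading of the generation.
    return {"generation": "<answer>"}

def decide_to_generate(state: GraphState) -> str:
    # Route to web search when no graded document survived.
    return "generate" if state["documents"] else "web_search"

workflow = StateGraph(GraphState)
workflow.add_node("retrieve", retrieve)
workflow.add_node("grade_documents", grade_documents)
workflow.add_node("web_search", web_search)
workflow.add_node("generate", generate)

workflow.set_entry_point("retrieve")
workflow.add_edge("retrieve", "grade_documents")
workflow.add_conditional_edges(
    "grade_documents",
    decide_to_generate,
    {"web_search": "web_search", "generate": "generate"},
)
workflow.add_edge("web_search", "generate")
workflow.add_edge("generate", END)

app = workflow.compile()
```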

Comments • 79

  • @rone3243
    @rone3243 19 days ago +1

    That's fast! Thanks Lance, your videos are always helpful to us ❤

  • @ronnitroyburman4165
    @ronnitroyburman4165 19 days ago +7

    This looks so crisp! Brilliant knowledge transfer! Thank you.

  • @wshobson
    @wshobson 19 days ago +4

    Brilliant! Straight to the point, like reading K&R. Thanks Lance.

  • @BedfordGibsons
    @BedfordGibsons 18 days ago +2

    Great: focused, to-the-point, and well-demonstrated delivery. Thank you.

  • @user-uu5vq8uh1p
    @user-uu5vq8uh1p 10 days ago

    I so appreciate your demonstration. It's really helpful.

  • @Trashpanda_404
    @Trashpanda_404 18 days ago

    Thanks for the video and all you do, brother! You'll definitely go down in history as a driving force!

  • @asetkn
    @asetkn 17 days ago +6

    Lance, thank you for the great value you provide for this community!

  • @chriskingston1981
    @chriskingston1981 11 days ago

    Wow, this is awesome. I'm very new to this, but I already had in mind that I wanted it prompted with data or web search, with some control over the flow. This is so cool, thank you for explaining it! ❤️❤️❤️

  • @jellz77
    @jellz77 18 days ago +8

    Really enjoying your videos, Lance! It'd be great if we could spin this up in Docker with a front-end :) I think the issue a lot of us have is maintaining package dependencies, depending on out-of-the-box solutions like open-webui/AnythingLLM, or deciding between LangChain, Haystack, and LlamaIndex. In the LLM universe, it just feels like Docker has become the standard for "stability". Again, love your work!

  • @aaronsteers
    @aaronsteers 18 days ago

    Great video, Lance!

  • @spencerfunk6697
    @spencerfunk6697 11 days ago

    Thank you for this; you answered all the questions I've had about the project I want to make, in one fell swoop.

  • @JuanRamirez-di9bl
    @JuanRamirez-di9bl 19 days ago +1

    Wow, this was great! Thank you!

  • @Arvolve
    @Arvolve 9 days ago

    Really awesome showcase!

  • @marcfruchtman9473
    @marcfruchtman9473 18 days ago

    Thank you for this helpful video.

  • @karost
    @karost 17 days ago

    Thanks! Well-documented materials, a live demo, and a step-by-step walkthrough of the process really help a beginner like me :D

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 18 days ago

    Really good presentation.

  • @duanesearsmith634
    @duanesearsmith634 19 days ago +1

    Wow, a most excellent video! I didn't know that Ollama had already added Llama3 to the mix. Now I want to replicate what you did using Clojure/Java (Langchain4j).

  • @JaroslavInsights
    @JaroslavInsights 10 days ago

    Super helpful, thanks for sharing. I take it the models can be swapped and varied at every stage, obviously provided the local system spec can handle such a load?

  • @moslehmahamud9574
    @moslehmahamud9574 19 days ago +18

    That was fast

  • @havenqi3261
    @havenqi3261 18 days ago

    Fast! Still digesting your from-scratch one 😂

  • @cosgravehill2740
    @cosgravehill2740 9 days ago

    Good video, thanks! Now if only my CPU could complete a generation in as little time as it took to describe one.

  • @justincrivelli5911
    @justincrivelli5911 18 days ago

    Could you provide advice on how to use LM Studio for the LLM instead of Ollama?
    Thanks for sharing your expertise!

  • @LuisCamiloJimenezAlvarez

    Hi, interesting video. I'm trying to understand the relation between the Adaptive RAG paper and routing: while the paper talks about different levels of complexity, the routing here chooses between two information sources, the vectorstore and the web, based on the content of the query.
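
    For concreteness, a rough sketch of that binary router; the prompt wording and the "datasource" key are illustrative assumptions, not necessarily the exact code from the video.

```python
# Sketch: route a question to the vectorstore or web search via json mode.
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import PromptTemplate

llm = ChatOllama(model="llama3", format="json", temperature=0)

prompt = PromptTemplate(
    template=(
        "You route user questions to a vectorstore or web search. "
        "Use the vectorstore for questions about agents, prompt engineering, "
        "and adversarial attacks; otherwise use web search. Return JSON with "
        'a single key "datasource" set to "vectorstore" or "web_search".\n'
        "Question: {question}"
    ),
    input_variables=["question"],
)

router = prompt | llm | JsonOutputParser()
print(router.invoke({"question": "Who was picked first in the NFL draft?"}))
# Expected shape: {'datasource': 'web_search'} (model output, not guaranteed)
```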

  • @paulinomooloo
    @paulinomooloo 18 days ago +1

    You could use dataclasses for the state objects; they look a bit nicer than TypedDicts.
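
    For comparison, a sketch of the same state expressed both ways; whether your installed LangGraph version accepts a dataclass schema directly is worth checking against its docs before swapping.

```python
# Sketch: equivalent state definitions as a TypedDict and as a dataclass.
from dataclasses import dataclass, field
from typing import List, TypedDict

class GraphStateTD(TypedDict):   # the style the video uses
    question: str
    documents: List[str]

@dataclass
class GraphStateDC:              # the commenter's suggestion
    question: str
    documents: List[str] = field(default_factory=list)
```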

  • @AFK_Quay
    @AFK_Quay 18 days ago +1

    So I'm a bit new to AI and agents; this looks great, and it solves a lot of the problems a framework like CrewAI has been giving me. But it's significantly more complex for a new Python programmer. Would you say it's worth learning LangGraph over CrewAI? If so, why, and vice versa?

  • @hammoudaelbez9797
    @hammoudaelbez9797 19 days ago +4

    One of the main issues I had using RAG and Llama is that when I try to make it talk in only one language, it starts mixing in English.

    • @somerset006
      @somerset006 19 days ago +4

      It says "English only" in the release

    • @desrucca
      @desrucca 18 days ago +2

      It was trained on multiple languages, but the amount of English data was significantly higher than the rest.
      It certainly understands non-English languages, but lacks the *stability* to generate non-English output.

  • @Hoxle-87
    @Hoxle-87 19 days ago +1

    Thanks for the videos! How do LangChain and Llama 3 perform at interpreting charts and plots?

  • @laalbujhakkar
    @laalbujhakkar 18 days ago

    Thanks for an excellent tutorial and an actual working notebook! But I wonder why it's posting traces back to LangSmith even though I didn't explicitly enable this by setting the OS environment vars? I only ran the example, so it's not an issue, but I wouldn't use this for sensitive/company-related stuff until I figure out how to turn that off. I'm new to LangChain (obviously) :)
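
    For what it's worth, LangSmith tracing is driven by environment variables, so if traces are showing up, those are likely set somewhere in your shell or notebook. A sketch of disabling them for the current process:

```python
# Sketch: make sure LangSmith tracing is off for this process.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "false"  # tracing flag LangChain reads
os.environ.pop("LANGCHAIN_API_KEY", None)     # and/or drop the API key
```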

  • @havenqi3261
    @havenqi3261 14 days ago +2

    My Mac M1 Pro ran into this error at the beginning:
    "RuntimeError: Unable to instantiate model: CPU does not support AVX" at the step embedding=GPT4AllEmbeddings(). All libs are upgraded. I switched to the Ollama embedding lib, but it almost killed the Mac, with the fan roaring.

  • @gauravpiyush7681
    @gauravpiyush7681 1 day ago

    Great video, Lance. It took me 10 minutes to run the complete flow locally. What strategies should we follow to use it in real time? How do we host agentic RAG in the cloud? I'd be eager to understand that.

  • @furek5
    @furek5 13 days ago

    Thank you Lance! For several days now I've been struggling to understand how to use, with llama3, the functions we normally use with OpenAI GPT3.5 or GPT4.5: a pydantic class converted to an OpenAI function and bound to a model. I'm curious what your opinion is on using functions with llama3: is the only option format="json" plus prompt engineering? I can't find any information about it. While I can imagine how to do prompt engineering with format="json", the solution of creating a pydantic skeleton and passing it as a function to the model is much more elegant :) Are you planning any updates in LangChain that will allow using pydantic as tools/functions, the way OpenAI functions work today? The current binding is also presented in a very friendly way in LangSmith; from what I see in the video, LangSmith does not interpret the functions as 'Functions & Tools' but as 'Human'. Looking forward to your opinion on this.
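
    A sketch of the json-mode approach the comment describes: ask llama3 for JSON matching a pydantic schema, then validate it after the fact (no native tool/function binding is assumed here).

```python
# Sketch: json mode plus post-hoc pydantic validation, not true tool binding.
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import PromptTemplate
from pydantic import BaseModel

class Grade(BaseModel):
    score: str  # expected to be "yes" or "no"

llm = ChatOllama(model="llama3", format="json", temperature=0)
prompt = PromptTemplate(
    template=('Return JSON with a single key "score" ("yes" or "no"): '
              "is this document relevant to the question?\n"
              "Document: {document}\nQuestion: {question}"),
    input_variables=["document", "question"],
)

chain = prompt | llm | JsonOutputParser()
raw = chain.invoke({"document": "LLM agent memory...",
                    "question": "What is agent memory?"})
grade = Grade(**raw)  # pydantic validates the shape of the model's JSON
```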

  • @OscarTheStrategist
    @OscarTheStrategist 18 days ago +1

    Thanks for the demo! Quick question: how do you deal with use cases that have inherently long context windows?
    Some context: I'm building in the medical space, where large amounts of text data are used and fidelity to the documentation is non-negotiable. I'm looking at testing Gemini, with its state-of-the-art context window, to see if it gives better results than what we're currently using (a mix of Claude/GPT4), and I'd love to include Llama 3 in our testing to see if it can fit into our workflow, not only to reduce token-processing costs but possibly to meet strict compliance requirements for other use cases.
    Anyway, thanks so much for doing these videos, cheers!

  • @MrIsaacbabsky
    @MrIsaacbabsky 19 days ago

    I was counting the minutes to this video... huge LangChain and Lance fan. BTW, Lance, what tool do you use to create those diagrams and graphs, and what app has that "V" symbol (it appears in the upper top bar that you use)? Thanks!

  • @zd676
    @zd676 5 days ago

    Great video! One question, though: if we have a (largely) deterministic control flow, do we really need this agent setup? After all, if at each step the agent is only doing one specific thing, without needing to decide which tools to use, wouldn't this just be a deterministic function call? I thought the reason we'd use agents is their dynamic capability for understanding, reasoning, planning, and executing.

  • @Aripb88
    @Aripb88 18 days ago

    Appreciate these great tutorials! Could you share what you use to make those flow diagrams?

  • @hcliu3
    @hcliu3 18 days ago

    How do you handle follow-up questions in your router? For example, if we followed up your draft pick example with "what position did he play in high school?"

  • @samisaacs4998
    @samisaacs4998 15 days ago

    Hi, thanks for the video! Could you explain "ollama pull llama3", please? I've tried running it in the terminal on my local machine and in a Colab terminal. Where's the correct place to store the local model?

  • @randomlooo
    @randomlooo 19 days ago

    Curious whether this can be used in tandem with something like Microsoft UFO, plus a bunch of documentation on how different applications work? Then we could suggest actions within any application locally and see if it can figure out how to do them with the documentation as a reference.

  • @cclementson1986
    @cclementson1986 16 days ago +1

    How would you deploy this on AWS? I've watched many, many tutorials, and they all focus on building some type of agent locally, but I'm struggling to find anything on deploying these agents to production. Do you install Ollama and Llama 3 on an EC2 instance and build a Flask web API to interact with them? I'm a bit lost at the deploy-to-production part.
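
    A minimal sketch of the EC2 approach the comment describes: run the Ollama server (with llama3 pulled) on the instance and expose the compiled graph behind a small Flask endpoint. Here app_graph and my_rag_graph are hypothetical names for the compiled LangGraph workflow and its module.

```python
# Sketch: a thin Flask wrapper around a compiled LangGraph workflow.
# `app_graph` / `my_rag_graph` are hypothetical; substitute your own graph.
from flask import Flask, jsonify, request

from my_rag_graph import app_graph  # hypothetical module holding the graph

api = Flask(__name__)

@api.route("/ask", methods=["POST"])
def ask():
    question = request.get_json()["question"]
    result = app_graph.invoke({"question": question})
    return jsonify({"answer": result["generation"]})

if __name__ == "__main__":
    api.run(host="0.0.0.0", port=8000)  # front with a real WSGI server in prod
```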

  • @Reality_Check_1984
    @Reality_Check_1984 4 days ago

    This is really interesting. I'm new to all of this, and I think I'm missing a step. When I try to use GPT4AllEmbeddings without internet access, it errors out, ultimately stating that it failed to connect to a GPT4All page. Do I need to do something in addition to installing GPT4All through pip to make this run locally?

  • @2005ziod
    @2005ziod 19 days ago

    What is the blog post about the AI agent at the beginning?

  • @coolmcdude
    @coolmcdude 14 days ago

    based

  • @lorenzehernandez2602
    @lorenzehernandez2602 18 days ago +1

    Can we see the Notion link?

  • @ClearMusicify
    @ClearMusicify 11 days ago

    Question: why do you have to use the special tokens as part of your prompt? Does this override what's in the Modelfile? Also, have you had any issues with llama 3 failing to respond after several attempts?

  • @GeandersonLenz
    @GeandersonLenz 18 days ago +1

    Off topic: what is this screen-recorder app?

  • @buggingbee1
    @buggingbee1 18 days ago +1

    I wonder if it could get the context from a local document first, before it decides it needs to do a web search.

    • @buggingbee1
      @buggingbee1 18 days ago

      The example shows that it uses several web pages as its content source. I wonder if that can be changed to reading several documents.

  • @mohamedkeddache4202
    @mohamedkeddache4202 19 days ago

    I'm a beginner.
    Can someone please tell me where in the code (which node) he provided memory to the agent, and the other pieces?
    Around minute 13:00 he says it has memory, it has state, it has planning, it has control flow.
    What are those?

  • @mohsenghafari7652
    @mohsenghafari7652 19 days ago +1

    Hi, does this method work with many PDFs in the Persian language? Thanks for your response.

  • @nayanshah4237
    @nayanshah4237 6 days ago +1

    Can you share that Notion doc?

  • @suhaib-tn7xd
    @suhaib-tn7xd 17 days ago

    Do I have to use a MacBook with M1/M2? I only have an Intel x86 MacBook Pro.

  • @kostonstyle
    @kostonstyle 19 days ago

    Is Llama 3 with 8B parameters powerful enough for building agents?

  • @mohsenghafari7652
    @mohsenghafari7652 19 days ago +1

    Hi, please help me: how do I create a custom model from many PDFs in the Persian language? Thank you.

  • @StephenRayner
    @StephenRayner 10 days ago

    Error! In the diamond 💎 box "any doc irrelevant", the Yes | No branches are the wrong way around.

  • @user-wm8hy8ce2o
    @user-wm8hy8ce2o 8 days ago

    I have a problem with an infinite loop using llama3 when generating an answer.
    Any help?

  • @MegaNightdude
    @MegaNightdude 17 days ago

    Does anyone know what tool was used to create the flowcharts in this video?

  • @station2040
    @station2040 16 days ago

    @langchain - Lance, is this safe to run locally?

  • @user-wm8hy8ce2o
    @user-wm8hy8ce2o 16 days ago

    How do I get the URL of the web search that the LLM used?

  • @shahprite
    @shahprite 18 days ago

    How do you evaluate it?

  • @hdhdushsvsyshshshs
    @hdhdushsvsyshshshs 9 days ago

    How can I host this on AWS?

  • @Yakibackk
    @Yakibackk 17 days ago

    I followed your guide; it doesn't work for me.

  • @RandommVideoShots
    @RandommVideoShots 18 days ago

    I hope someone makes useful software out of this.

    • @toadlguy
      @toadlguy 8 days ago

      ALL useful software will be made out of this 🤣

  • @AIvetmed
    @AIvetmed 16 days ago

    I'm getting an error: ConnectionError: HTTPConnectionPool(host='localhost', port=11434).
    Can anybody tell me what this error means?

    • @AIvetmed
      @AIvetmed 16 days ago

      Fixed it: run the Ollama app in the background and pull the desired model alongside it... (a quick sanity check is sketched below).
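
      That connection error just means nothing is listening on Ollama's default port. A quick check, assuming a default install:

```python
# Sketch: confirm the Ollama server is up before building any chains.
import requests

resp = requests.get("http://localhost:11434")
print(resp.text)  # prints "Ollama is running" when the server is up
```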

  • @ivanenev323
    @ivanenev323 13 days ago

    I'm still at the beginning of the video, but I noticed immediately that deviating from the paper and introducing the changes you suggest would significantly diminish the creativity and usefulness of the agents. The whole idea of AI agents is based on the interaction between them: the collaboration, brainstorming, elaboration, checking each other's work against the rules set for the task, and correcting each other if someone goes astray, in order to achieve the task in the shortest time and in the most creative way. It's not dissimilar from how human teams work. If you were to limit the agents, why use agents at all?

  • @joaquieroux
    @joaquieroux 16 days ago

    How much RAM and GPU memory would I need to use the 70B LLaMA3 model?

    • @station2040
      @station2040 16 days ago

      More than you have on a local machine.

    • @clarckkim
      @clarckkim 15 days ago

      Simply look up the model size and expect it to be fully loaded into VRAM; that's how you can estimate it (see the arithmetic sketched below).
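
      A back-of-the-envelope version of that estimate, counting weights only and ignoring KV cache and activation overhead:

```python
# Sketch: rough VRAM needed just to hold 70B weights at common precisions.
params = 70e9
for name, bytes_per_param in [("fp16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
    print(f"{name}: ~{params * bytes_per_param / 1e9:.0f} GB")
# fp16: ~140 GB, 8-bit: ~70 GB, 4-bit: ~35 GB
```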

  • @changtimwu
    @changtimwu 1 day ago

    Well, this is hardly "fully local". Thanks anyway.

  • @kamend
    @kamend 17 days ago

    Great video! Does anybody have any idea why the LLM goes into a feedback loop? I get an answer like this in the generation step (a possible fix is sketched after the quoted output):
    -----------------------
    According to the context, agent memory refers to a long-term memory module (external database) that records a comprehensive list of agents' experience in natural language. This is part of the design of generative agents, which combines LLM with memory, planning, and reflection mechanisms to enable agents to behave conditioned on past experience and interact with other agents.assistant
    I apologize for the mistake earlier. Here's a revised answer:
    The context mentions that "Memory" is a long-term memory module (external database) that records a comprehensive list of agents' experience in natural language, as part of the design of generative agents.assistant
    I'll make sure to provide concise answers going forward! Thank you for the feedback. Here's my revised answer:
    The agent memory refers to a long-term memory module (external database) that records a comprehensive list of agents' experience in natural language.assistant
    You're welcome! I'm glad I could improve my response. Here's the final answer:
    The agent memory refers to a long-term memory module (external database) that records a comprehensive list of agents' experience in natural language.assistant
    I've got it! Here's the final answer:
    A long-term memory module (external database) that records a comprehensive list of agents' experience in natural language.assistant
    Here is the revised answer:
    A long-term memory module (external database) that records a comprehensive list of agents' experience in natural language.assistant
    I'll keep it concise! Here's my final answer:
    A long-term memory module (external database).assistant
    ...
    Same here!assistant
    Have a great day!assistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistantassistant
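
    The repeated "assistant" runs above are the classic symptom of the model generating past LLaMA3's end-of-turn token. One hedged fix: pass the special tokens explicitly as stop sequences (some early llama3 builds on Ollama shipped without them in the Modelfile).

```python
# Sketch: stop generation at LLaMA3's special tokens to avoid the loop.
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(
    model="llama3",
    temperature=0,
    stop=["<|eot_id|>", "<|start_header_id|>"],  # LLaMA3 chat special tokens
)
```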

  • @franklyvulgar1
    @franklyvulgar1 19 days ago +1

    Do you anticipate an update soon to the Ollama LangChain modules in terms of speed? It looks like it's using a response call to the Ollama endpoint; my GPU never goes over 20-30%, and responses are way slower than using the command line or the ollama module directly. Is this an issue with how it works with Ollama?