LangGraph Crash Course with code examples

  • Added: 21 Jul 2024
  • Colab 01. Learning LangGraph Agent Executor: drp.li/vL1J9
    Colab 02. Learning LangGraph - Chat Executor: drp.li/HAz3o
    Colab 03. Learning LangGraph - Agent Supervisor: drp.li/xvEwd
    Interested in building LLM Agents? Fill out the form below
    Building LLM Agents Form: drp.li/dIMes
    Github:
    github.com/samwit/langchain-t... (updated)
    github.com/samwit/llm-tutorials
    Time Stamps:
    00:00 Intro
    00:19 What is LangGraph?
    00:26 LangGraph Blog
    01:38 StateGraph
    02:16 Nodes
    02:42 Edges
    03:48 Compiling the Graph
    05:23 Code Time
    05:34 Agent with new create_open_ai
    21:37 Chat Executor
    27:00 Agent Supervisor
  • Science & Technology

Comments • 87

  • @samwitteveenai
    @samwitteveenai  5 months ago +7

    If you are interested in building LLM Agents, fill out the form below with what type of agents you would like examples of.
    Building LLM Agents Form: drp.li/dIMes

  • @viktor4207
    @viktor4207 5 months ago +20

    I really like the idea of integrating graph theory into this. You can experiment with different agents and tools for certain types of tasks. Then you can start playing around with network measures and give edges weight based on the successful completion of types of tasks. The network will essentially end up balancing itself out as you start to direct traffic along your high-weight edges. You can run another network and experiment with different models for different tasks. It's like a simulation of a workplace where people end up going to the most productive people to accomplish tasks.

  • @kenchang3456
    @kenchang3456 5 months ago

    This video is timely, as I was ready to start exploring LangGraph to get a feel for what use cases can fit. A deeper-dive video would be much appreciated.

  • @avidlearner8117
    @avidlearner8117 5 months ago +5

    This is ABSOLUTELY FANTASTIC!!! I've been dealing with a manual "orchestrator" that felt so clunky before... This is a game changer! You effin deliver on your content, man!

  • @paulmiller591
    @paulmiller591 5 months ago +1

    Great video as usual. Yes, more videos and use cases on building agents with the updated version of LangChain would be great.

  • @joffreylemery6414
    @joffreylemery6414 5 months ago +2

    Awesome work once again!
    Very interested in LangGraph for more complex use cases! For us, building a team-augmentation platform with many agents (which are agents, or just chains), it allows us to have a big and powerful super agent, with a supervisor as in your third part. To be continued!

  • @tvaddict6491
    @tvaddict6491 a month ago

    Thank you for going through the notebooks line by line. Helps noobs like me follow along.

  • @touchthesun
    @touchthesun a month ago

    Thanks, this is great stuff. I've been teaching myself to build agents in langchain for some months and it is slow going. I think I need to step back and re-architect to use LangGraph instead. Looking forward to seeing more of your material on this stuff!

  • @luisguillermopardo7792
    @luisguillermopardo7792 5 months ago +1

    Sam, I watch all your videos from Colombia. They are awesome!! You explain really well.

  • @guanjwcn
    @guanjwcn 5 months ago

    Very insightful but heavy stuff to master. Thank you, Sam. ❤

  • @rupjitchakraborty8012
    @rupjitchakraborty8012 a month ago

    This is such a great intro, thank you so much for the effort.

  • @narutocole
    @narutocole 5 months ago

    This is sick Sam! Keep it up!

  • @user-ew8ld1cy4d
    @user-ew8ld1cy4d 4 months ago +1

    You. Are. Fantastic... Thank you Sam

  • @wuhaipeng
    @wuhaipeng 5 months ago

    Thank you so much for the course!

  • @shobhitagnihotri416
    @shobhitagnihotri416 5 months ago +5

    Sir, please make a full course using LangChain with OpenAI, Hugging Face, Llama, and fine-tuned models and chatbots. Keep it a little bit affordable, like $100; it would be really great. Lots of love from India

  • @HoldMyData
    @HoldMyData 5 months ago

    Thanks again, Sam!

  • @andrewandreas5795
    @andrewandreas5795 5 months ago

    Thanks for the very informative video. Do you know which OSS models support function calling?

  • @nattyzaddy6555
    @nattyzaddy6555 5 months ago +1

    At what timestamp is the demo where we can see it in use?

  • @ahmedennaifer3693
    @ahmedennaifer3693 5 months ago

    Thanks again for the awesome content; I've learned a lot from your videos, please keep doing what you do. I was also wondering if you plan on making videos about production-ready RAGs with the methods you talked about in your RAG series. Thanks a lot, and please keep enriching us with your content.

    • @samwitteveenai
      @samwitteveenai  5 months ago +1

      Yeah I will go back to the RAG stuff again.

  • @micbab-vg2mu
    @micbab-vg2mu 5 months ago

    Thank you for the great video:)

  • @AmarGupta-dz5wy
    @AmarGupta-dz5wy 4 months ago

    Interesting, Thank you!!

  • @user-ye6ks6xn8l
    @user-ye6ks6xn8l 5 months ago

    Beautiful......... Thank you so much.

  • @alizhadigerov9599
    @alizhadigerov9599 5 months ago +1

    How is it different from the agent executor?

  • @ShearyTan
    @ShearyTan 5 months ago

    How does this work with an open-source LLM instead of OpenAI?

  • @ankit85jain
    @ankit85jain 5 months ago

    Thanks Sam. For Colab 01, I tried inputs = {"input": "Give me a random number and then write in words", "chat_history": []}, but it is still calling the to_lower_case tool. Is that expected, or do we have to be more explicit in our input?

  • @fengshi9462
    @fengshi9462 4 months ago

    Thanks a lot. I want to know: how does the agent executor know to write the answer "4" as the capitalized "FOUR" and then send it to the lower_case tool? Is there another built-in LLM doing that?

  • @zhiyanliu7068
    @zhiyanliu7068 5 months ago

    Thanks for making this video. A question: IIUC, it would be perfect if the Coder node could be routed to by the supervisor and executed to generate the chart by leveraging the PythonREPLTool. Did you try removing the PythonREPLTool from the Lotto_Manager agent and only providing it in the Coder agent? Does that make sense?

  • @RADKIT
    @RADKIT 5 months ago

    You mentioned something on point to what I was wondering, Sam. With your experience, which of the open-source LLMs support function calling as of today? Which one would you try out first? And if you do, please make a video about LangGraph with an HF LLM and function calling maybe! ☺ Love your work btw!

  • @RaviRanjan1989
    @RaviRanjan1989 5 months ago

    Can we use a local LLM via HuggingFaceTextGenInference?

  • @PrashantSaikia
    @PrashantSaikia 4 months ago

    Great! Do you have any example notebook showing how to use LangGraph for code generation in an externally compiled language like C? That is, how do you replace the "exec" call (which is for Python code only) with something that can invoke the C compiler, run it against the generated (and saved) code file, collect the compiler errors, feed them back into the LangGraph flow at the relevant node, and so on?
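
One way to sketch that compile-and-collect-errors step in plain Python (not from the video's notebooks; `compile_c` is a hypothetical helper name, and a system C compiler such as gcc or cc is assumed to be available):

```python
import os
import shutil
import subprocess
import tempfile

def compile_c(source: str):
    """Hypothetical graph-node body: compile C source with the system
    compiler and return (success, compiler_messages), or None if no
    C compiler is available on this machine."""
    cc = shutil.which("gcc") or shutil.which("cc")
    if cc is None:
        return None
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "prog.c")
        with open(src, "w") as f:
            f.write(source)
        # stderr is the interesting part: it is what a LangGraph node
        # would write back into the state for the LLM to repair.
        result = subprocess.run(
            [cc, src, "-o", os.path.join(tmp, "prog")],
            capture_output=True, text=True,
        )
        return (result.returncode == 0, result.stderr)
```

A node could then call this on the generated source and, on failure, route the error text back to the code-writing node instead of ending the graph.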

  • @jhachirag7
    @jhachirag7 13 days ago

    How can you do this with Anthropic Claude? I tried it and created my own parser, but after that it gives the error: "A conversation must alternate between user and assistant roles".

  • @alivecoding4995
    @alivecoding4995 2 months ago

    What do you think about Microsoft's Semantic Kernel and PromptFlow?

  • @realCleanK
    @realCleanK a month ago

    Thank you! 🙏

  • @jessezwamborn6526
    @jessezwamborn6526 5 months ago

    I tried to follow along in the provided file, but in the third example, my supervisor tells the coder to run, then keeps telling it to run over and over. The supervisor keeps choosing "coder" as the next step. Any idea why this difference in result even though I haven't changed anything about the code and simply ran it as-is?

  • @Chicle777
    @Chicle777 5 months ago

    Well explained. Could you please show these examples using VS Code with production file structure?

  • @seththunder2077
    @seththunder2077 5 months ago

    At 9:30 you said you can have multiple agents with LangGraph, but isn't LangChain originally a single-agent framework, unlike AutoGen and CrewAI? I'm a bit confused.

  • @jatinnandwani6678
    @jatinnandwani6678 a month ago

    Thanks so much

  • @VibudhSingh
    @VibudhSingh 5 months ago +1

    Super useful. I would say this was explained in a better way than on the official LangChain channel.
    Next video: it would be cool to build Perplexity's Copilot feature. So, ask clarifying questions if needed with a human-in-the-loop feature, then give access to the internet to get the results.

  • @emanueleielo6660
    @emanueleielo6660 4 months ago

    Amazing! But it's not clear how the agent understands to repeat the function random_number() 10 times. Every time it finishes, does it call OpenAI again and ask if the task is accomplished? If it's like that, why don't we see it in LangSmith?

  • @8eck
    @8eck 5 months ago

    Wait, why do you need to explicitly define an array of tools and pass it into the agent creator if you have decorators there? What's the use of the decorators then? I'm confused.

  • @luisguillermopardo7792
    @luisguillermopardo7792 4 months ago

    Hey Sam, do you know if it is possible to integrate memory into a graph, and how to do it?

    • @samwitteveenai
      @samwitteveenai  4 months ago +1

      Yeah, you can save and load etc. and use the normal ways of managing it. I will make some more vids for LangGraph when I get a chance.
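
As a rough illustration of that save-and-load idea (plain Python, not the LangGraph API; `save_state` and `load_state` are made-up names): a graph's state is just a dict that nodes read and update, so a simple form of memory is serializing it between runs.

```python
import json
import os
import tempfile

# Illustrative only: persist the shared state dict between graph runs.
def save_state(state: dict, path: str) -> None:
    with open(path, "w") as f:
        json.dump(state, f)

def load_state(path: str, default: dict) -> dict:
    if not os.path.exists(path):
        return default
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "graph_state.json")
state = {"chat_history": [["human", "hi"], ["ai", "hello"]]}
save_state(state, path)
restored = load_state(path, default={"chat_history": []})
```

In practice LangGraph has since gained built-in checkpointer support, so that is worth checking before rolling your own.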

  • @john849ww
    @john849ww 5 months ago +1

    What open source models support function calling? I must admit, I don't know what it is about function calling that needs to be supported. Recently, I tried AutoGen function calling with a few different _local_ 7B parameter models without any luck.

    • @robxmccarthy
      @robxmccarthy 5 months ago +1

      There aren't many which excel. The ones that do are fine-tuned on the task. First was Gorilla; now we have Functionary. And apparently Qwen 1.5 (even the 0.5B model) can reach near-GPT-4 reliability (though not in my personal testing).
      As far as I know there aren't any great drag-and-drop solutions. You may also need to use multiple models (one for function calling and one for higher-level reasoning).

  • @HiteshGulati
    @HiteshGulati 5 months ago

    Hi Sam, your videos are always very insightful and have helped me keep up with the latest developments in the LLM space. I do have a question: when we pass a Python function as a tool into an LLM, how does the execution work? Let's say there is a very long function to be executed next. Is the whole function along with its parameters passed to the LLM (using precious tokens), with the LLM running the function on its server and returning the output? Or does the LLM just decide which function to run, with the function then running locally and providing the output to the LLM for its next action?
    Also, is the behaviour the same with Python REPL functions?

    • @pnhbs392
      @pnhbs392 2 months ago +1

      The functions are executed on the process that executes the runnable chain, not remotely on the LLM. The LLM only determines which function to run and what the parameters should be, then LangChain / LangGraph executes the code "locally."
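
That split can be mocked in a few lines of plain Python (the `fake_llm` below is a stand-in for the real model call, not a LangChain API), showing that only the tool's name and JSON-style arguments ever cross the model boundary:

```python
# Mock of the tool-calling loop: the "LLM" only sees/produces a tool
# name and arguments; the function body itself runs locally.
def to_lower_case(text: str) -> str:
    return text.lower()

TOOLS = {"to_lower_case": to_lower_case}

def fake_llm(messages):
    # A real LLM would decide based on the conversation; here we
    # hard-code the kind of structured "tool call" it returns.
    return {"tool": "to_lower_case", "args": {"text": "FOUR"}}

def run_one_step(messages):
    call = fake_llm(messages)                     # remote: pick tool + args
    result = TOOLS[call["tool"]](**call["args"])  # local: execute the function
    messages.append({"role": "tool", "content": result})
    return result
```

So a very long function costs no extra tokens; only its (usually short) schema and the chosen arguments are exchanged with the model.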

    • @HiteshGulati
      @HiteshGulati a month ago

      @@pnhbs392 Thanks, this was really helpful.

  • @stacy9698
    @stacy9698 3 months ago

    Can I ask what you used to draw the StateGraph slide? Looks cool

    • @samwitteveenai
      @samwitteveenai  3 months ago +1

      Excalidraw. It works very well for things like this.

  • @abdelkaioumbouaicha
    @abdelkaioumbouaicha 5 months ago

    📝 Summary of Key Points:
    📌 LangGraph is a graph-based system for building custom agents in the LangChain ecosystem. Nodes represent different components of an agent, and edges connect these nodes to enable decision-making and conditional routing within the agent.
    🧐 The video provides coding examples to demonstrate LangGraph's functionality. Examples include building an agent executor using custom tools, using a chat model and a list of messages for more complex conversations, and creating an agent supervisor to route user requests to different agents based on predefined conditions.
    💡 Additional Insights and Observations:
    💬 "LangGraph is a powerful tool for building custom agents with decision-making capabilities."
    📊 No specific data or statistics were mentioned in the video.
    🌐 The LangChain ecosystem and LangGraph provide a flexible framework for creating various types of agents.
    📣 Concluding Remarks:
    LangGraph is an innovative tool within the LangChain ecosystem that allows users to build custom agents with decision-making capabilities. The video showcases coding examples to demonstrate the functionality of LangGraph and encourages viewers to explore different use cases. LangGraph provides a flexible and powerful framework for creating agents, making it a valuable tool for developers.
    Generated using TalkBud

  • @manishmandal5240
    @manishmandal5240 4 months ago

    This is a really excellent tutorial. I want to develop a use case wherein I make an API call (e.g., Google Maps API) based on a location, then use the returned result to filter down the customers around that location (within 2 kilometers), where the customer and ordering information is stored in a relational data store (say SQLite, PostgreSQL, or MySQL). Can you provide any implementation suggestions? Just to clarify, user input could lead to 3 query scenarios: 1) API only, 2) API + RDBMS, 3) RDBMS only.

    • @samwitteveenai
      @samwitteveenai  4 months ago

      Put the effort into making it a tool, and then the agent just uses the tool with simple commands, e.g. put the heavy lifting on the tools side. I have a CrewAI tutorial coming out later this week that goes into this.
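
A sketch of that "heavy lifting inside the tool" pattern, using stdlib sqlite3 and a stubbed geocoder in place of the real Google Maps call (all names and data here are invented for illustration):

```python
import math
import sqlite3

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geocode(location: str):
    """Stand-in for the Google Maps API call (stubbed for this sketch)."""
    return {"Berlin": (52.52, 13.405)}[location]

def customers_near(conn, location: str, radius_km: float = 2.0):
    """The 'tool': API lookup + RDBMS filter in one place, so the agent
    only ever issues a simple command like customers_near('Berlin')."""
    lat, lon = geocode(location)
    rows = conn.execute("SELECT name, lat, lon FROM customers").fetchall()
    return [name for name, clat, clon in rows
            if haversine_km(lat, lon, clat, clon) <= radius_km]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, lat REAL, lon REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [("near", 52.521, 13.406), ("far", 52.60, 13.60)])
```

The three scenarios (API only, API + RDBMS, RDBMS only) can then just be three such tools; the routing between them stays in ordinary Python rather than in the prompt.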

  • @caiyu538
    @caiyu538 5 months ago

    Great. Great

  • @user-yy9bl3ds1p
    @user-yy9bl3ds1p 5 months ago

    Please do a deep dive on DSPy also.

  • @ramp2011
    @ramp2011 5 months ago

    I am curious how you compare this with CrewAI for setting up agents. I feel setting up an agent with LangGraph has too many steps....

    • @samwitteveenai
      @samwitteveenai  5 months ago +1

      CrewAI is much higher level, yes, but it is not as flexible as LangGraph. That said, both are LangChain, so they should be able to do much of the same stuff. I will make some vids soon about that.

  • @AdamTwardoch
    @AdamTwardoch 5 months ago

    Pretty much every LLM API has a large set of parameters: temperature, max output length, top P, [top K], frequency penalty, presence penalty.
    Shrink-wrapped UIs like ChatGPT don't give access to these. The defaults differ in some APIs: sometimes temperature is set to 1, sometimes 0.8.
    Some experiments I've done indicate that changing these parameters has serious impact on the results. But I've hardly ever seen benchmarks, papers, videos that discuss this. As far as I can tell, most LLM benchmarks only test the "default" settings.
    I'd love to see some more in-depth experiments that compare models and change these parameters.
    The community has been trying a lot of elaborate optimizations to get the most desired results out of LLMs. But my partial experiments suggest that there's a fair bit of untapped potential with the model parameters.

    • @AdamTwardoch
      @AdamTwardoch 5 months ago

      Another matter is the way the community discusses ChatGPT: late last year OpenAI added a new model to the ChatGPT app: GPT-4 Turbo. This is a model that's as different from GPT-4 as GPT-3.5 is from GPT-4. Or at least, it's different. It's smaller, distilled, simplified, dumber.
      Yet some discussion has shown that users didn't accept that as a fact: they thought GPT-4 Turbo is just some "faster version" of GPT-4. Magically :)
      But there's no magic. In the ChatGPT app, you can select the "ChatGPT" mode, which is GPT-4 Turbo with Vision, DALL-E and tool switching, or you can choose ChatGPT Classic mode, which is the real GPT-4 model. They're very different, and should be treated as separate models in comparisons.

    • @Ken129100
      @Ken129100 5 months ago

      How do you change to other LLMs? I tried but it was not successful.

    • @jessezwamborn6526
      @jessezwamborn6526 5 months ago

      @@Ken129100 You can simply import a new model when setting llm (for example, llm = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0, verbose=True, streaming=True)), or use Gemini, or claude-2. (Don't forget to include the API key at the top.)

  • @souvickdas5564
    @souvickdas5564 5 months ago

    I have been doing research on NLP and software engineering for 6 years. I have some good research publications as well, in IEEE Transactions on Software Engineering, the Journal of Systems and Software, and the Requirements Engineering conference. I have also developed skills in RAG and agent-based frameworks. Can I get a good job in the field of GenAI / LLM orchestration? If you'd like, please ask for my CV. Thanks in advance.

    • @samwitteveenai
      @samwitteveenai  5 months ago

      I would say yes. I have some research background with papers at EMNLP and NeurIPS workshops etc. and I see that as an advantage for a lot of the new skills. Understanding the basics of NLP and NLU really helps for a lot of these skills. That said you certainly need to update the skills etc.

  • @hasani511
    @hasani511 5 months ago

    This is a great video. It seems overly complicated, though, compared to AutoGen, which seems to hide a lot of the complexity. We built something similar using regular agents as tools (nodes), which then have their own tools. A more dynamic agent with multiple personalities can be built with this, but it would be hard to manage.

  • @lesptitsoiseaux
    @lesptitsoiseaux 4 days ago

    Your video is great. However, it presumes prior knowledge of the LangGraph ecosystem. For example, @11:23, the Trace page and the setup that was done there are not explained. Your Colab 01 as well: try running it incognito as a viewer; once you reach the 'prompt' cell, things start breaking apart. Overall, you are engaging and knowledgeable, but the video could use a warning at the beginning to inform the viewer of the requirements, like knowing the LangGraph ecosystem, etc. I'm subscribing nonetheless; I hope you see this as a constructive comment, Sam.

  • @SashaBaych
    @SashaBaych 3 months ago

    By the way, why use agents and agent executors?
    I have seen so many tutorials with just models with bound tools. What is the benefit/difference of using AgentExecutor?
    And what would I do about memory if I am using an agent executor? Create an agent executor with memory, or create memory that saves the state of the graph? How do multiple agents access the memory then?... omg, langchain...

    • @samwitteveenai
      @samwitteveenai  3 months ago

      The AgentExecutor was more the old way of doing agents, before LangGraph. Think of the graph as a big state machine that you just pass around; multiple agents can be different nodes on the graph. I am still thinking of some simple examples to show off the basics, but these are great questions and I will address them in a video.
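
That "big state machine" mental model can be shown in a toy, LangChain-free form (node and field names here are invented for the example):

```python
# Toy state machine in the LangGraph spirit: each "agent" is a node
# function that reads and returns the shared state; a supervisor routes
# to the next node until it decides to end.
END = "__end__"

def researcher(state):
    state["notes"] = "facts about " + state["input"]
    return state

def writer(state):
    state["output"] = "report: " + state["notes"]
    return state

def supervisor(state):
    # Conditional routing based purely on what is already in the state.
    if "notes" not in state:
        return "researcher"
    if "output" not in state:
        return "writer"
    return END

NODES = {"researcher": researcher, "writer": writer}

def run(state):
    while (nxt := supervisor(state)) != END:
        state = NODES[nxt](state)
    return state

result = run({"input": "llamas"})
```

Each node only touches the shared dict, which is why adding more agents is just adding more nodes plus routing rules.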

    • @SashaBaych
      @SashaBaych 3 months ago

      @@samwitteveenai Thank you so much for responding to the comments) Thank you for your attention. Keep up the great work, while I am integrating self-querying RAG for my startup based on your tutorial)

  • @jordenvanforeest8741
    @jordenvanforeest8741 5 months ago

    It would be a great idea to make an agent that makes other agents.

  • @SashaBaych
    @SashaBaych 3 months ago

    I really respect what Sam does. He is one of the few YouTubers who avoids just hyping up trendy things and simply makes very useful videos.
    But am I the only one who thinks that LangChain's syntax is just insane? Looking at the 3rd notebook, I find that I create an agent that has tools, then create an agent executor, passing the agent and the aforementioned tools (why again?). Then I create an agent node that invokes some kind of agent, then I pass the created agent executor as the agent argument to the node... How can anyone understand this Russian doll...

    • @samwitteveenai
      @samwitteveenai  3 months ago +1

      😀 Hey Sasha, I can totally relate to how you feel. It is very low level, and they have also changed some things, I think, since this video. Also, they are finally supporting function calling better across multiple models. I have been playing with some new notebooks for this and will make a video about it soon. LangGraph is good at a low level, but I agree it can be insanely frustrating at times. You can use something like CrewAI if you want to stay really high level, but I find that frustrating when it runs into issues as well. I promise I will try to get some new vids on this out soon, hopefully with some open-source models like the new Llama etc. as well.

  • @antwierasmus
    @antwierasmus 5 months ago +1

    Great job, sir. All the examples in the docs use OpenAI; can you please do a video where you use a different model, like Gemini, for this? Also, if I have a complex input like a list of objects containing messages from different users and I want to work on each of them, can you show us how to go about this? Maybe send a response message to each of the users in the list after reading their messages?

  • @khadiravanabv7417
    @khadiravanabv7417 5 months ago

    Is it just me, or does he sound very close to @3Blue1Brown?

  • @user-yq8yp3nk2d
    @user-yq8yp3nk2d 5 months ago

    ok

  • @IdPreferNot1
    @IdPreferNot1 5 months ago

    Nope…. Still lazy after a while. It takes time just to search the output code to see where they are summarizing etc. instead of giving full code. It really breaks the flow when you're actually working well together, then the program generates a NEW error and you've copied over good code with a bunch of fixed code interspersed with some random "put your stuff in here" sections. 😮

  • @bhargavchoithwani3961
    @bhargavchoithwani3961 4 months ago

    @samwitteveen very nicely explained

  • @thecooler69
    @thecooler69 3 months ago

    Thanks for the video + code examples, Sam. I have consistent trouble with early stopping; is there any way to prevent it? The should_continue function receives an AgentFinish message, but the output will look like this: 'agent_outcome': AgentFinish(return_values={'output': 'Please call the following function: {"function":{...
    So it knows it should keep calling functions, but fires a Finish anyway. I've tried to change the system prompt to make it not finish until all its functions are done, but it will still do this. Any suggestions?

    • @samwitteveenai
      @samwitteveenai  3 months ago +1

      Try adding in another self-check step: have another node check, and then if it thinks all is done it can trigger the agent END etc.

    • @thecooler69
      @thecooler69 3 months ago

      @@samwitteveenai Your response is much appreciated. If I understand you correctly, it would mean looking for isinstance of AgentFinish in the outcome returned by agent.invoke(), then ignoring that msg and generating an AgentAction manually inside of run_agent. I couldn't figure out how to create the AgentAction yet, but that almost feels like a hack to me; maybe the better solution is to split the tasks better among multiple agents, using your supervisor code (I will try this next). However, it feels like one agent should be able to handle a few tools each.
      Additional information on my setup: I consistently get the early-stop problem when trying to get the same agent to call the same tool twice (for different inputs). Presumably, the agent looks at the log, sees it has already called the tool, then gives up. I have tried altering the sys/user prompts to avoid that behavior, to no success.
      Let me know if there is an error in my comprehension.

    • @thecooler69
      @thecooler69 2 months ago

      @@samwitteveenai Looking at the other examples, I think I see what is happening. Instead of putting the function request in the additional_kwargs of the last message, the agent sometimes puts 'Please use function x' in the body of the response, which results in an AgentFinish firing.
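
Based on that diagnosis, one possible guard is a self-check in the router: treat a "finish" whose output text still asks for a function call as a continue. This is a sketch only; the `AgentFinish` below is a minimal stand-in for the LangChain class, and the text heuristic is an assumption, not the library's behavior.

```python
import re
from dataclasses import dataclass

@dataclass
class AgentFinish:
    """Minimal stand-in for langchain_core's AgentFinish."""
    return_values: dict

def should_continue(outcome) -> str:
    """Router guard: if the model wrapped a function request inside an
    AgentFinish's output text, keep looping instead of ending."""
    if isinstance(outcome, AgentFinish):
        text = outcome.return_values.get("output", "")
        # Heuristic check for a "finish" that still requests a function.
        if re.search(r'please (?:use|call) (?:the )?(?:following )?function',
                     text, re.I):
            return "continue"
        return "end"
    return "continue"
```

Wired into the graph as the conditional edge, this acts as the extra self-check node suggested above, without needing to hand-build an AgentAction.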

  • @amirbehbehani4844
    @amirbehbehani4844 4 months ago

    Hi @samwitteveenai I sent you a LI request :). Great video!

  • @FranckePeixoto
    @FranckePeixoto 5 months ago

    Great!