Fully local tool calling with Ollama

  • Published 11 Sep 2024
  • Tools are utilities (e.g., APIs or custom functions) that can be called by an LLM, giving the model new capabilities. However, an LLM needs to be able to 1) select the correct tool and 2) form the correct tool input. To date, both have been challenging with local LLMs. However, Ollama recently added function calling, and we've incorporated this into a new partner package. Here, we show how to use the new Ollama partner package to perform tool calling with the recent Groq fine-tune of Llama-3 8b. As an example, we create a simple tool-calling agent in LangGraph, with web search and vectorstore retrieval tools, that runs locally.
    Llama-3-Groq-Tool-Use via Ollama -
    ollama.com/lib...
    Blog post for Llama-3-Groq-Tool-Use -
    wow.groq.com/i...
    Code:
    github.com/lan...
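As a sketch of the flow the description outlines, the mock below shows the shape of a tool and of one tool-call entry. Everything here is hand-written for illustration and no Ollama server is contacted: validate_user is the stub used in the video, and tool_call mimics one entry of result.tool_calls as surfaced by bind_tools(...).invoke(...).

```python
# Illustrative mock of Ollama-style tool calling (no server contacted):
# the model's two jobs are 1) pick the tool name, 2) form the args dict.
from typing import List

def validate_user(user_id: int, addresses: List[str]) -> bool:
    """Stub tool from the video: validate a user via historical addresses."""
    return True

# Hand-written stand-in for one entry of result.tool_calls, as returned by
# ChatOllama(model="llama3-groq-tool-use").bind_tools([validate_user]).invoke(...)
tool_call = {
    "name": "validate_user",
    "args": {"user_id": 123, "addresses": ["123 Fake St, Boston MA"]},
}

# Dispatch the chosen call back to the matching Python function:
registry = {"validate_user": validate_user}
result = registry[tool_call["name"]](**tool_call["args"])
```

The two hard parts named in the description map directly onto the dict: tool selection is the "name" field, and tool-input formation is the "args" field.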

Comments • 39

  • @TheYoungSoul
    @TheYoungSoul 1 month ago +6

    Thank you for this example!! I just ran through this example using the llama3.1 8B model, and it worked flawlessly. llama3 does not work, but the 3.1 model did. I actually did not expect that.

  • @automatalearninglab
    @automatalearninglab 1 month ago +1

    Nice! Love it! I was looking for something like this today! So glad I decided to catch up on my LangChain videos! Hehe, cheers!

  • @kyudechama
    @kyudechama 1 month ago +1

    Somehow I only get one tool call in my list as an answer, even when I ask a question that would warrant multiple tool calls in the response. The Ollama API is able to return multiple tool calls, and so is OpenAI.
    I tried several models, including llama3.1, llama3, firefunctionv2, and the Groq versions.
    Could it be your system prompt that prevents returning multiple function calls?
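On the multiple-tool-call question above, here is a small mock of what a parallel tool-call response looks like, and why handlers should iterate over tool_calls rather than read only the first entry. Both tools and the calls list are hypothetical, hand-written values, not model output.

```python
# Two hypothetical tools that would be bound to the model; names are illustrative.
def get_weather(city: str) -> str:
    """Mock weather tool."""
    return f"Sunny in {city}"

def get_time(city: str) -> str:
    """Mock time tool."""
    return f"12:00 in {city}"

registry = {"get_weather": get_weather, "get_time": get_time}

# For a compound question like "What's the weather and the time in Paris?",
# a model that supports parallel calls emits one entry per call; many local
# models emit only the first. This list is hand-written, not model output.
tool_calls = [
    {"name": "get_weather", "args": {"city": "Paris"}},
    {"name": "get_time", "args": {"city": "Paris"}},
]

# Always loop over tool_calls: reading a single index silently drops calls.
results = [registry[c["name"]](**c["args"]) for c in tool_calls]
```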

  • @davesabra4320
    @davesabra4320 1 month ago

    Very, very clearly explained. Thanks.

  • @JDWilsonJr
    @JDWilsonJr 1 month ago

    Hello Lance. Great presentation. I've been looking everywhere for your Jupyter notebook. You introduce so many new concepts in your tutorials that it is almost impossible to reproduce them visually from the video. I see the version you used in the video remained untitled through the end. Will you be posting the notebook in the GitHub examples like you have in the past? Your work is amazing and valuable, and we are scrambling to catch up!

    • @LangChain
      @LangChain 1 month ago

      Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb

    • @JDWilsonJr
      @JDWilsonJr 1 month ago

      @@LangChain Sooo appreciate your response and the link. Keep up the great work.

  • @blanky0230
    @blanky0230 1 month ago

    Still killing it

  • @IdPreferNot1
    @IdPreferNot1 1 month ago +2

    This is THE content! Please take it to the top ---> source code link for longer script?

    • @JDWilsonJr
      @JDWilsonJr 1 month ago

      Hello @IdPreferNot1. Apologies as I do not understand. May I trouble you for specific instructions to see the link to the notebook. I am clearly missing something. Thank you for your help.

    • @IdPreferNot1
      @IdPreferNot1 1 month ago

      @@JDWilsonJr I'm saying it's great content... it'd be even better if he shared the source code he went through. :)

    • @LangChain
      @LangChain 1 month ago +4

      @@IdPreferNot1 Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb

  • @omni9796
    @omni9796 1 month ago

    Great video!
    Is this also available for Node?

  • @dimosdennis
    @dimosdennis 1 month ago

    It is good, but it is still not there. I did several tests where I give it two dummy tools to use, and it is able to distinguish quite effectively - however, it will always call the tools, even when asked not to. Tried different prompts, no good. Still, it is better than it was, and the package is nice :)

  • @jonasopina3529
    @jonasopina3529 1 month ago

    I'm trying to use it with Ollama, but from another computer on the same network, and I can't set the base_url.
    I'm trying to set it like llm = ChatOllama(model="modelName", base_url="http::11434"...), but it doesn't work.
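On the base_url question above: the value needs to be a full URL with scheme, host, and port (Ollama's default port is 11434). This is a hedged sketch; the LAN IP below is a placeholder for the machine running Ollama, and that machine may also need OLLAMA_HOST=0.0.0.0 so the server listens beyond localhost.

```python
from urllib.parse import urlparse

# "http::11434" lacks the "//" and a host; a full URL is required.
base_url = "http://192.168.1.50:11434"  # placeholder IP of the Ollama host

parts = urlparse(base_url)
assert parts.scheme == "http"
assert parts.hostname == "192.168.1.50"
assert parts.port == 11434

# With langchain_ollama installed and the server reachable (not run here):
# llm = ChatOllama(model="llama3-groq-tool-use", base_url=base_url)
```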

  • @Imran-Alii
    @Imran-Alii 1 month ago

    Awesome!!!

  • @alenjosesr3160
    @alenjosesr3160 1 month ago +2

    def bind_tools(
        self,
        tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]],
        **kwargs: Any,
    ) -> Runnable[LanguageModelInput, BaseMessage]:
        raise NotImplementedError()

    Ollama bind_tools says not implemented.

    • @utkucanaytac5417
      @utkucanaytac5417 1 month ago

      Use "from langchain_ollama import ChatOllama", not the ChatOllama from the community models package.

  • @kuruphaasan
    @kuruphaasan 1 month ago

    3:32 I get an empty array when I run the exact same code, can you help me here? My langchain-ollama package version is 0.1.1 and I have tried both llama3-groq fine-tuned model and llama3.1

    • @BushengZhang
      @BushengZhang 1 month ago

      Yes, I have encountered the same problem!!! I've been puzzled for half a day...

    • @kuruphaasan
      @kuruphaasan 1 month ago

      @@BushengZhang I am still not able to figure out the reason; I have checked the GitHub issues as well. Not sure if it's a bug or something else.

    • @BushengZhang
      @BushengZhang 1 month ago

      Oh, I have just found a solution: I switched to OllamaFunctions to structure the LLM's outputs, and it worked.

    • @kuruphaasan
      @kuruphaasan 1 month ago

      @@BushengZhang Oh, great. Can you please share the example code?

  • @ai-touch9
    @ai-touch9 1 month ago +1

    Excellent work, as usual :)
    Can you share the code link?

    • @LangChain
      @LangChain 1 month ago +1

      Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb

  • @user-mi8gf5ez5g
    @user-mi8gf5ez5g 1 month ago

    could you please share a notebook link? thanks for making these videos

    • @LangChain
      @LangChain 1 month ago

      Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb

  • @LyuboslavPetrov
    @LyuboslavPetrov 27 days ago

    It would be neat if NO proprietary/paid tools were used (e.g., for embedding or web search). But, of course, it's no big deal to do this ourselves. Thank you.

  • @eMotionAllDamAge_
    @eMotionAllDamAge_ 1 month ago

    Great content! Please, share the code 😃

    • @LangChain
      @LangChain 1 month ago +1

      Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb

    • @eMotionAllDamAge_
      @eMotionAllDamAge_ 1 month ago

      @@LangChain Thanks !

  • @bhaibhai-qe8tt
    @bhaibhai-qe8tt 28 days ago

    response = ChatOllama(
    ^^^^^^^^^^^
    TypeError: 'method' object is not subscriptable

  • @AnthonyGarland
    @AnthonyGarland 1 month ago

    This is the code at about 3:48:

    from typing import List
    from typing_extensions import TypedDict
    from langchain_ollama import ChatOllama

    def validate_user(user_id: int, addresses: List) -> bool:
        """Validate user using historical addresses.

        Args:
            user_id: (int) the user ID.
            addresses: Previous addresses.
        """
        return True

    llm = ChatOllama(
        model="llama3-groq-tool-use",
        temperature=0,
    )

    llm_with_tool = llm.bind_tools([validate_user])

    result = llm_with_tool.invoke(
        "Could you validate user 123? They previously lived at "
        "123 Fake St in Boston MA and 234 Pretend Boulevard in "
        "Houston TX."
    )
    result.tool_calls

  • @hor1zonLin
    @hor1zonLin 1 month ago

    Why do I get [], the empty list, when I use the same code?

    • @hor1zonLin
      @hor1zonLin 1 month ago

      from typing import List
      from langchain_ollama import ChatOllama
      from typing_extensions import TypedDict

      def validate_user(user_id: int, addresses: List) -> bool:
          """Validate user using historical addresses.

          Args:
              user_id: (int) the user ID.
              addresses: Previous addresses.
          """
          return True

      llm = ChatOllama(
          model="llama3-groq-tool-use",
          temperature=0,
      ).bind_tools([validate_user])

      result = llm.invoke(
          "Could you validate user 123? They previously lived at "
          "123 Fake St in Boston MA and 234 Pretend Boulevard in "
          "Houston TX."
      )
      result.tool_calls
      # output: []

    • @kuruphaasan
      @kuruphaasan 1 month ago

      @@hor1zonLin Were you able to figure out and fix the issue?

    • @BushengZhang
      @BushengZhang 1 month ago

      @@kuruphaasan @hor1zonLin I have the same problem!