How does OpenAI Function Calling work?
- Published 20 Apr 2024
- In this video, we're going to dig into OpenAI function calling. We'll explore what's happening under the hood before working through an example of how to use it to call a Weather API to see how warm (or not!) it is right now.
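The round trip described above starts with a tool definition that tells the model what function it may ask you to call. Here's a minimal sketch of that setup, assuming a hypothetical `get_current_weather` function that takes latitude/longitude (the real video calls a live Weather API; this stub just echoes its inputs):

```python
import json

# Tool definition sent to OpenAI via the "tools" parameter of a chat
# completion request. The model never runs this function itself - it only
# returns the name and arguments, and your code does the actual call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "latitude": {"type": "number"},
                    "longitude": {"type": "number"},
                },
                "required": ["latitude", "longitude"],
            },
        },
    }
]

def get_current_weather(latitude, longitude):
    # Stand-in for the real Weather API call made in the video.
    return json.dumps({
        "latitude": latitude,
        "longitude": longitude,
        "temperature_c": 21.0,
    })

print(get_current_weather(51.5, -0.12))
```

You would pass `tools=tools` in the chat completion request; when the model decides the function is needed, it responds with a tool call whose arguments you feed into your own function.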
#llms #openai #functioncalling
Code - github.com/mneedham/LearnData...
OpenAI Documentation - platform.openai.com/docs/guid... - Science & Technology
The best simple explanation I have found about Function Calling. Thanks for making it so easy to understand!
As a developer, this is great. Thanks for clarifying right at the beginning that it's us who have to make the function call, and 👍 to the sequence diagram.
Great, glad it was useful!
Very clear introduction to OpenAI function calls.
This video was super useful for me to understand function calls.
Great! Glad it was useful - let me know if there are any other topics in this area you'd like me to cover next.
Perfect explanation, all other videos were long winded
haha, thanks! I'm trying my best to explain things in under 5 minutes :)
Thanks
I have several functions implemented in my project, each responsible for a specific task. However, frequently, when the user requests the extraction of information that should trigger function A, other functions (like B or C) are called instead. Although function A is correctly triggered on some occasions, this does not happen consistently. Why is this happening? I am using OpenAI's GPT-4 model.
Thanks for the great explanation :)
I have one query: how many functions does it support? AWS Bedrock, for example, supports 5 APIs per agent.
If I have multiple functions, how does the "for tool_call in tool_calls" loop work with different functions? The function_response parameter has the latitude and longitude hardcoded as arguments. What if my other functions deal with other stuff and don't require lat and long, but take other arguments instead? I'm really confused by that bit. If I have more functions, do I just add a "match case" check to pass the correct arguments to each function?
Yeah, this is a bit hardcoded for a single function. The arguments are in that function_args variable, so you could pass them in using the kwargs syntax. Maybe I'll make another video to show how to do that.
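The kwargs approach mentioned above can be sketched like this (my assumption of how to generalise the video's loop, not code from the video): keep a registry of functions by name, parse each tool call's JSON arguments, and unpack them with `**`, so no function's arguments need to be hardcoded. The tool calls here are mocked with `SimpleNamespace` standing in for `response.choices[0].message.tool_calls`:

```python
import json
from types import SimpleNamespace

def get_current_weather(latitude, longitude):
    return f"weather at ({latitude}, {longitude})"

def get_stock_price(ticker):
    return f"price of {ticker}"

# Registry mapping the tool name the model returns to your local function.
FUNCTIONS = {
    "get_current_weather": get_current_weather,
    "get_stock_price": get_stock_price,
}

def dispatch(tool_calls):
    responses = []
    for tool_call in tool_calls:
        fn = FUNCTIONS[tool_call.function.name]
        args = json.loads(tool_call.function.arguments)  # dict of named args
        responses.append(fn(**args))  # unpack whatever args this function needs
    return responses

# Mocked tool calls with the same shape as the OpenAI response objects.
fake_calls = [
    SimpleNamespace(function=SimpleNamespace(
        name="get_current_weather",
        arguments='{"latitude": 48.85, "longitude": 2.35}')),
    SimpleNamespace(function=SimpleNamespace(
        name="get_stock_price",
        arguments='{"ticker": "AAPL"}')),
]
print(dispatch(fake_calls))
```

Because the model returns arguments as a JSON object keyed by parameter name, `fn(**args)` works for any function whose parameters match its tool schema, so no "match case" block is needed.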
How did you get the longitude and latitude to pass to the function ? Was it in response by the LLM ?
Yes - the LLM works out the lat/long based on the location that we used in our prompt.
Hi Mark. I have an array of about 60 objects. Each object has a length property with a non-null value and a text property with a null value. I've been trying for weeks after work to craft a prompt that will force OpenAI to generate a word - any English word for now - whose character count equals the length property. I have about an 80% error rate - rarely can it generate a word with a character count equal to the length value. The error rate goes down to about 60% when I say it doesn't have to be a real word. Do you feel that this function calling double API request is the only way to guarantee a 100% success rate? Thanks.
I wonder whether something like Instructor might be better for trying to get structured output. I haven't tried it with something like what you're trying, but I might play around with it over the weekend - github.com/jxnl/instructor
@@learndatawithmark Thanks for the reply, Mark. I should have been more clear: the structure is always intact; the issue is that LLMs don't seem to be able to count. Very ironic - the technology that will eventually "take over our planet" can't tell you how many characters are in this message, nor how many objects are in a given array, let alone create strings with lengths equal to a value *you* supply. Hopefully that issue gets solved before we hand over military systems to our AI bots ;)
@@chriswooohoo4518 I know, it's not intuitive at all. It's got me thinking whether the problem could be solved with Guidance, another tool that tries to control the output of LLMs github.com/guidance-ai/guidance
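Since models are unreliable at counting, one option (my suggestion, not from the video or the thread) is to verify the length constraint in ordinary code and retry on failure. `generate_word` below is a hypothetical stand-in for the LLM call; in practice you'd re-issue the API request inside the loop:

```python
import random
import string

def generate_word(length):
    # Stand-in for an LLM call that is *supposed* to return a word of
    # exactly `length` characters (this stub succeeds by construction).
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

def generate_with_retry(length, max_attempts=5):
    for _ in range(max_attempts):
        candidate = generate_word(length)
        if len(candidate) == length:  # verify in code instead of trusting the model
            return candidate
    raise RuntimeError(f"no word of length {length} after {max_attempts} tries")

print(generate_with_retry(7))
```

The key point is that `len()` does the counting deterministically, so a bad generation is caught and retried rather than silently accepted.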
Couldn't we do the same approach with LangChain's Agents?
Yes I think so. I don't know for sure, but I would think the agents might use this API if/when they make calls to OpenAI?
Then why LangChain agents?
You get more control by using OpenAI function calling directly.
I haven't played with LangChain agents yet. I assume they are more powerful than what we showed in this video.
@@learndatawithmark I am developing an app where users can choose a topic or upload a book/PDF for conversation, and create multiple personas to get responses. I am not using LangChain; I am using OpenAI function calling and high-level prompting.
Why is everybody on the web giving just this weather example?
Can't you be more creative and authentic?
What do I do if, for example, I need to translate an array of strings and always return a consistent, formatted result?
Hey - I thought it'd be easier to explain if I used the example that people are familiar with.
I did make another video a while back where I had it generate an array of objects and their sentiment. You can see that here - czcams.com/video/lJJkBaO15Po/video.html
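For the translation question above, one hedged approach (my assumption, not from the video) is to define a tool whose schema forces the model to return a JSON array of strings, then parse it deterministically. The `return_translations` name and the sample model output below are illustrative:

```python
import json

# Tool schema: the model must return its answer as structured arguments
# to this function, so the output shape is fixed by the JSON schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "return_translations",
            "description": "Return the translated strings in order",
            "parameters": {
                "type": "object",
                "properties": {
                    "translations": {
                        "type": "array",
                        "items": {"type": "string"},
                    }
                },
                "required": ["translations"],
            },
        },
    }
]

# The model would respond with a tool call whose arguments look like this:
raw_arguments = '{"translations": ["Bonjour", "Merci", "Au revoir"]}'
translations = json.loads(raw_arguments)["translations"]
print(translations)
```

Because the "output" arrives as tool-call arguments constrained by the schema, you always get a parseable array rather than free-form prose that needs cleanup.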