Colab Notebook: colab.research.google.com/drive/1uRbaePS1_YnxSRoHK5LKmpneodwEuqFL#scrollTo=4qkdslxC8e0K
My upcoming Build In Public project: fintrack.ai
0:00 Introduction
1:53 Fintrack.ai
2:20 Extracting a List of Related Companies
5:33 Pydantic Models and Responses
7:21 Extracting a List of Company Objects with Tickers and Commentary
9:37 Wrapping Up
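The "Pydantic Models and Responses" step at 5:33 comes down to defining models and validating the LLM's JSON output against them. A minimal sketch — the `Company` / `CompanyList` names and the sample JSON are illustrative assumptions, not taken from the notebook:

```python
from pydantic import BaseModel


class Company(BaseModel):
    name: str
    ticker: str
    commentary: str


class CompanyList(BaseModel):
    companies: list[Company]


# Validate JSON of the shape a structured-output call would return.
raw = '{"companies": [{"name": "NVIDIA", "ticker": "NVDA", "commentary": "Dominant GPU maker"}]}'
result = CompanyList.model_validate_json(raw)
print(result.companies[0].ticker)  # NVDA
```

With the OpenAI Python SDK, a Pydantic model like this can be passed as `response_format` to the beta `chat.completions.parse` helper; the point of Structured Outputs is that the returned JSON is guaranteed to validate against the model's schema.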
awesome video. thanks for sharing the notebook too
Great to see you adding colabs! ❤🎉
Good to see you are back. I hope you had a great time in Europe.
OpenAI turned ad hoc prompting into precise programming... if this reliably works, it is a significant boost for building AI applications. Bravo to those passionate and brilliant OpenAI engineers.
Love this kind of content!
Holy Fck! This possibly disrupts the market data monopolies
Ooooh, this is REALLY cool 🥰
Love your podcast and podscan :)
@@parttimelarry I appreciate that so much! Can't wait to see what you'll be building. If you ever run into issues, hit me up!
Thanks!
Finally a good explanation and demonstration, thanks
This is brilliant work! I'm excited to see if we can have it return high-level strings or objects that are then sent to a kernel or agent that calls functions based on that output. It's wild to have 100% schema accuracy with something like an LLM, and I can see a lot of wild use cases coming from this in the following months.
Thank you for the video
nicely done man, thanks
Why don't you focus more on FreqAI? It's easier to use and ready, and you can improve it. You have a lot of passion, and we are expecting a lot from you.
Hey, I may do a video on FreqAI at some point, but most of my work is focused on LLMs and equities at the moment. I try to cover a broad range of financial tech and just go with what I find interesting at the time. If I do anything with crypto, it would probably be about Polymarket / prediction markets.
@@parttimelarry I would love a video on Polymarket
welcome back!
Nice video. Glad to see you are using NVDA money for vacation :). Just joking!
cool idea
lmao he's back!!!!!!!!!!!
Just found your channel. Really great niche. Do you know of any other channels/sites that have a similar AI/Trading focus?
This kind of reminds me of function calling, given how you get back from the LLM the correct arguments to send to the function that you want to call. Is there a difference here from structured output?
Could anyone briefly explain to me what this fintrack project consists of? I haven't really understood it.
Wondering what's the difference between this and using function calling with GPT-4o. I feel like GPT-4o with function calling is already relatively stable; it's hard to come up with any error. In fact, it has just never happened so far.
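For anyone comparing the two approaches, the request payloads differ roughly like this. A stdlib-only sketch — the field names follow OpenAI's public API, but the schema itself is an invented example:

```python
# A made-up JSON Schema used by both payload shapes below.
company_schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "ticker": {"type": "string"}},
    "required": ["name", "ticker"],
    "additionalProperties": False,
}

# Function calling: the model may emit arguments intended for a tool you defined.
tool_payload = {
    "tools": [{
        "type": "function",
        "function": {"name": "save_company", "parameters": company_schema},
    }]
}

# Structured Outputs: strict=True constrains the *response itself* to the schema.
structured_payload = {
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "company", "strict": True, "schema": company_schema},
    }
}
```

Note that OpenAI's same announcement also added a `strict` option to function definitions, so the schema-conformance guarantee is available in both paths; the difference is mainly whether you want a tool invocation or the final message body to be constrained.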
Hey Larry,
this is actually not that much of a revolution... Take a look at the Instructor library and you will find that the achievable results are the same!
Also, latency is higher with OpenAI's strict outputs.
Hey, yes, I've been using Instructor for a long time now, so I'm aware (I left one of the first comments on the "Pydantic is all you need" video/talk that is on YouTube, since I quite liked it). I didn't say anything was a revolution or that anything is mind-blowing like people on Twitter; I just try things out and share :)
@@parttimelarry Right!
Thanks for sharing anyways!
By the way, in this context, I have a question for you... maybe you have some opinion?
We are building something with Instructor and those strict outputs. We are arguing internally about whether the "forceful" output structuring may actually hurt the LLM's ability to perform well in certain situations.
I mean, yes, we get perfect outputs which can be processed in code afterwards, but the output was generated forcefully, and in some cases (like categorizing data) the LLM might have had to hallucinate to make that strict output happen.
What do you think about that?
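One common mitigation for this forced-categorization concern is to give the schema an explicit escape hatch, so a constrained decode never pushes the model to invent a category. A hedged sketch with Pydantic — the `Sector` / `Classification` names are illustrative, not from any library:

```python
from enum import Enum
from typing import Optional

from pydantic import BaseModel


class Sector(str, Enum):
    tech = "tech"
    energy = "energy"
    unknown = "unknown"  # explicit escape hatch: the model can decline to categorize


class Classification(BaseModel):
    sector: Sector
    note: Optional[str]  # nullable, so uncertainty can be expressed instead of hidden


c = Classification.model_validate({"sector": "unknown", "note": "ambiguous filing"})
print(c.sector.value)  # unknown
```

The schema still validates 100% of the time, but "hallucinate a category" is no longer the only schema-conformant option.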
Another string parser in a language that is overhyped.
bro left us and came back, well done