3. OpenAI API Python - Earnings Call Summarization
- Added 13. 09. 2024
In this video, we get started with the OpenAI Python package. We write code to perform a simple task: summarizing an earnings call.
Like the video? Support my content by checking out Interactive Brokers using the link below:
www.interactivebrokers.com/mkt/?src=ptlPY1&url=%2Fen%2Findex.php%3Ff%3D1338
I will be starting a spinoff channel on AI in music, art, and gaming in 2023. Subscribe at: youtube.com/@parttimeai
Colab Notebook and Data:
colab.research.google.com/drive/15tr9FMCDuSO5Dahw8XMEkk2-p4CoR17s?usp=sharing
gist.github.com/hackingthemarkets/e664894b65b31cbe8993e02d25d26768
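For readers following along without the notebook, here is a minimal sketch of the kind of summarization call the video builds. The prompt wording, file name, and token limits are assumptions for illustration, not the notebook's exact code; the call uses the legacy pre-1.0 `openai` package API, matching the era of the video.

```python
# Minimal sketch of an earnings-call summarizer (legacy pre-1.0 openai API).
# Prompt wording, model, and file name are illustrative placeholders.

def build_prompt(chunk: str) -> str:
    """Wrap a transcript chunk in a simple summarization instruction."""
    return (
        "Summarize the following portion of an earnings call:\n\n"
        f"{chunk}\n\nSummary:"
    )

def summarize(chunk: str) -> str:
    """Send one chunk to the completions endpoint and return the summary.
    Requires OPENAI_API_KEY in the environment; the import is done lazily
    so build_prompt can be used without the openai package installed."""
    import openai
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=build_prompt(chunk),
        max_tokens=500,
    )
    return response["choices"][0]["text"].strip()

# Example usage (requires a transcript file and an API key):
#     transcript = open("earnings_call.txt").read()
#     print(summarize(transcript[:4000]))
```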
I will be 70 next year and this amazing content makes me feel younger. I applied this to an AAPL earnings call transcript from the Seeking Alpha archive and it worked a treat. I cannot exaggerate the interest this stimulates, so a big shout and a huge thanks to Larry. Keep it coming.
Hell yeah! About to drop another project that builds upon this, keep trying new things, this year is going to be crazy :)
I am addicted to Larry's OpenAI content now and can't wait for the next videos to come up. Please don't let us wait for too long, Larry. The ideas you talked about were so cool such as this one.
Transcribe the video to text, send it in batches, summarize the text, etc. Really cool idea. Please do an extended session in parts. Thank you so much, Larry. As always, I appreciate all the value you provide in knowledge sharing. Immensely thankful.
I am learning so much from your video series. Great work and thank you for sharing.
Wow, an amazing video. Keep up the great work.
Very easy to understand video!! Great ideas and an amazing way of explanation. Love this series. Waiting for more playlists on AI!!
Looking forward to the rest of the videos in this series - learning to build a web app would be amazing
Thank you Larry great learning lesson.
Can't wait to see your other ideas for ChatGPT.
Fantastic! Thank you PTL! You are the best. :)
Keep it up Larry, thanks for sharing your thoughts and knowledge!
Thank you!
Thanks for the Thanks! Cheers!
Excellent work 👍
Super inspiring video! So glad I discovered your channel a couple of weeks ago. Exactly the content I was looking for!
Great Video ! Thank you ! 🥰
I am a beginner with OpenAI and have learned a lot from this series, because you make this topic understandable for me. Keep going with this good job, and many thanks.
Thank you for making and sharing the (working) codes and video! Love your helpful videos and your explanations!!!
This is awesome bro, you are killing it! Keep up the great work! 💪🧠
I'm looking forward to your embeddings tutorial. It seems like a more accurate option than chunking text to fit the 4k token OpenAI API limitation. Thanks again for your killer explanations! 👏
Alright, who’s with me. Let’s all chant Larry, Larry, Larry, Larry 🥳
Or better yet… Full Time Larry, Full Time Larry, Full Time Larry!!!
Pretty cool!
I am not a programmer, thank you for pointing out the \n\n
Awesome stuff, it would be so nice to see you build a stock screener that takes all the stocks from the stock market and sorts them by % gain today
You could have ChatGPT generate this easily, or just filter it on TradingView/Robinhood?
Thanks! How did you get Whisper to identify the different speakers in the earnings call?
I wonder if Chat GPT could be used as a tech support reference while it uses all the companies stored tickets to draw answers from
Absolutely, I guarantee multiple startups are working on this right now. There are many huge opportunities opening up in real time.
Very cool! Why did you chunk up with arrays instead of just dividing the text into 6 pieces, based on 1/6 of the total length, for example?
When I just divided the string into 1/6 it would often end a chunk in the middle of a word. So I liked separating into buckets of words first. There are probably plenty of other ways to do it though.
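The word-bucket approach described in the reply above can be sketched like this. The chunk count is illustrative, and the function is an assumption about the technique, not the video's exact code:

```python
def chunk_by_words(text: str, n_chunks: int = 6) -> list[str]:
    """Split text into roughly n_chunks pieces on word boundaries,
    so no word is cut in half the way a raw character split would.
    Ceiling division means the last bucket may be shorter, and very
    short texts may yield fewer than n_chunks buckets."""
    words = text.split()
    if not words:
        return []
    bucket_size = len(words) // n_chunks + (1 if len(words) % n_chunks else 0)
    return [
        " ".join(words[i:i + bucket_size])
        for i in range(0, len(words), bucket_size)
    ]

chunks = chunk_by_words("one two three four five six seven eight", n_chunks=4)
# Every chunk boundary falls between whole words.
```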
Could you create a video explaining how to identify the best times to buy and sell the S&P 500 index using various factors such as the Federal Funds rate, inflation rate, M2 money supply, crude oil price, and Treasury yields in the monthly time frame? I've noticed that when the Federal Reserve pivots, the market often experiences a crash.
Great video, thanks! One question: how would you chunk up the data for embedding purposes? Like when you have a long text from many documents that exceeds the token limit, and you want to search those documents for similarity to a certain query?
this series gave me an idea to build a script that summarizes key ideas from a podcast. Not exactly sure how I can break down the transcript into chunks without the summaries overlapping. 🤔
Summarize the summaries?
If you break it into chunks, won't there be potential context missing? It's treating each part separately, not together, right? Meaning it's not able to understand or utilize context provided by earlier chunks. Unless I'm missing something.
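The "summarize the summaries" suggestion above is a two-pass (map-reduce) pattern. A sketch, with a stand-in summarizer in place of a real API call (the helper names are hypothetical, and the commenter's concern is real: each chunk is summarized in isolation, so some cross-chunk context is lost):

```python
def hierarchical_summary(chunks, summarize):
    """Two-pass summarization: summarize each chunk independently,
    then summarize the concatenation of those partial summaries.
    Cross-chunk context is only recovered in the second pass, so
    details that span a chunk boundary can still be lost."""
    partials = [summarize(c) for c in chunks]
    return summarize("\n\n".join(partials))

# Stand-in summarizer for illustration; in practice this would be
# an OpenAI completion call like the one shown in the video.
truncate_summarizer = lambda text: text[:60]
final = hierarchical_summary(["chunk one ...", "chunk two ..."], truncate_summarizer)
```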
Anchor!
Thank you for the good work! I tried using another txt file from Google Drive, but through the OpenAI API I got the HTML code instead of the contents of the txt file. Any suggestions for an easier way to upload a txt file for the API to use?
I tried to sign up, but it's not available in my country. Can you share the list of available countries?
The list of countries is here: beta.openai.com/docs/supported-countries
I guess the 4,000-token limit for text-davinci-003 is quite a big deal. If I want to ask the AI a question based on 12,000 tokens, I'd probably need to break that into 6 chunks of 2,000, ask the AI to summarize each into a 500-token chunk, combine them, and then ask the AI the question based on that. But this will inevitably lose some information along the way.
It's like having a super smart assistant with very little working memory. Is there any way to really get around this? Is it because it's a neutered free version, or is this an inherent limitation of GPT-3?
I guess it's a limitation of GPT in general, because the transformer architecture works by predicting which token comes next in a sequence with a fixed maximum number of tokens. The entire dataset is trained in batches; it would be too computationally heavy to feed the whole dataset in one go, since you would end up with one giant neural network with an enormous number of parameters.
The way the transformer network learns and recognizes these data patterns is by chunking up the data and feeding it to the GPU in parallel during training.
I did have a go with your model, but it didn't work for me. I was using a Tesla 2022 balance sheet PDF (you can find many on the internet), but it returned an error. Is there a specific type of data it can read?
I didn't understand the pricing, that $0.0004 per 1K tokens. Can you explain it, please?
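To answer the pricing question above: $0.0004 per 1,000 tokens means the cost scales linearly with the token count. The rate is the one quoted in the comment (the Ada embedding price of the time); current model prices differ:

```python
def completion_cost(n_tokens: int, price_per_1k: float = 0.0004) -> float:
    """Cost in dollars for n_tokens at a per-1,000-token rate.
    The default rate is the $0.0004/1K figure from the comment."""
    return n_tokens / 1000 * price_per_1k

# A 12,000-token transcript at $0.0004 per 1K tokens:
cost = completion_cost(12_000)  # 0.0048 dollars, i.e. about half a cent
```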
What should I change in the code to summarize a pdf document?
Got an error when copying your 4th code cell. Do you need to buy tokens in order to execute? I thought the free $18 credit should be fine to begin with for 'simple' tasks.
Did you enter your API key and make sure it was set properly? What is the error?
@@parttimelarry Problem solved! It was a weird kind of language-detecting error (it didn't recognize latin-001 or something). BTW, thank you for this vid, immediately subscribed. Now using your model for my own project.
BTW: how can I divide the imported text into different paragraphs, as you mentioned at 15:43?
@@pauldriessens715 How did you solve it? I have the same 001 problem.
"Jerome Powell isn't fucking around." 🤣🤣🤣💯
Hi guys, I'm new here. I am good at Python and would like to know how to earn income from AI and tutorials like this. If anyone has succeeded at making income, please help me through it. Thank you.
Holy shit,
Your channel is everything I've been looking for. Instant sub. I'm slowly teaching myself Python to incorporate into my trading.
THANK YOU SEÑOR LARRY 🥲🫡🤝🏼