OpenAI DevDay: First Tests and Impressions - GPT Vision API, GPT-4 Turbo 128K, Dall-E 3 API ++
- added July 21, 2024
👊 Become a member and get access to GitHub:
/ allaboutai
Get a FREE 45+ ChatGPT Prompts PDF here:
📧 Join the newsletter:
www.allabtai.com/newsletter/
🌐 My website:
www.allabtai.com
I test all the new features and APIs from the OpenAI DevDay. GPT Vision API, 128K Context, Dall-E 3, Assistant API and more.
00:00 OpenAI DevDay Intro
02:36 GPT-Vision API Test
04:46 Dall-E 3 API Test
06:35 TTS API Test
08:57 GPT-4 Turbo 128K Test
12:10 OpenAI Assistants Test - Science & Technology
Congrats on being so quick out of the blocks with some practical uses of the new developments. Looking forward to your upcoming tutorials
Looking forward to your upcoming tutorials building with the new GPT Vision API and combining it with the Dall-E 3 API!
Is the Assistant cheaper than a fine-tuned model? I don't understand the $0.20 per GB per assistant per day pricing logic.
You are spot on with the things I want to hear in an impressions video like this: what, how, and how much it costs. Great job!
Can you use an external API to send data to the assistant when the user requests it?
Hi, nice video. BTW, which version of the openai package did you use in your demo?
For your finance bot example, are you planning to pull a dataset of earnings call transcripts for the custom GPT feature? If that's the case, do you have a dataset that you recommend?
My question is: do we also have access to the 32k model with the API?
Always an excellent video
Do I get access to the new models by adding my credit card?
I'm getting a 500 server error when using the Vision API. Does anyone else have this error? I'm located in Europe; could that be the reason?
For the last two items of the TSLA quarterly review, different people's views may be totally different. How does ChatGPT make its judgment? The API call didn't provide any parameters for how the review should be done.
When does the ‘create your own gpt’ launch?
👍Useful information. Thanks.
Thanks for tuning in :)
Awesome stuff, thank you!
Awesome knowledge sharing, thanks. I have a question: people are creating AI chatbots through Botpress and Voiceflow using an OpenAI API key. Can you create a video on how to build an AI chatbot using the new GPT Vision API?
hey, reckon we can do a video on how to chat with an assistant via the command prompt and Python?
That's all so cool. I wish I had GPT-4 API access. Been waiting for it since March :(
That's not normal, you should have access by now. Or maybe it's because of your location?
Same for me. But I just read something about it being automatically opened to anyone who has spent at least $1. So I guess I'm going to get busy programming and run up a bill.
You have to pay to play. GPT-4 Turbo wasn't available until I topped up my account 💸
what does the 128k mean?
🎯 Key Takeaways for quick navigation:
00:00 🚀 Introduction to OpenAI DevDay highlights and new API features.
- Introduction to new GPT and DALL-E features,
- Testing of GPT-4 Vision API and DALL-E 3 API,
- Mention of a new 128k context window for GPT-4 Turbo.
02:02 🖼️ Testing the GPT-4 Vision API.
- Overview of the API's function and pricing,
- Demonstration of analyzing an image from the Wall Street Journal.
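A minimal sketch of what a Vision API call might look like with the v1 openai Python client: it builds the kwargs for `client.chat.completions.create(**payload)`. The model name matches the DevDay release, but the prompt and image URL here are placeholders.

```python
# Hypothetical sketch of a GPT-4 Vision request payload; the image URL
# and prompt are illustrative, not from the video.
def build_vision_request(prompt: str, image_url: str) -> dict:
    """Return kwargs for client.chat.completions.create(**payload)."""
    return {
        "model": "gpt-4-vision-preview",
        "max_tokens": 300,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vision_request(
    "Describe this chart.", "https://example.com/chart.png"
)
print(payload["model"])
```

The content list mixes text and image parts, which is what lets a single user message carry both a question and the image it refers to.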
04:45 🎨 Exploring the DALL-E 3 API.
- Testing image generation with DALL-E 3,
- Description of the API setup and cost,
- Experimentation with a '90s hacker setup image prompt.
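A rough sketch of the DALL-E 3 parameters, assuming the v1 openai client where this would be passed as `client.images.generate(**params)`; the prompt is paraphrased from the video's example.

```python
# Illustrative DALL-E 3 generation parameters (not an actual API call).
params = {
    "model": "dall-e-3",
    "prompt": "A 1990s hacker setup with CRT monitors and neon lighting",
    "size": "1024x1024",    # also 1024x1792 / 1792x1024
    "quality": "standard",  # "hd" costs more per image
    "n": 1,                 # dall-e-3 generates one image per request
}
print(params["size"])
```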
06:58 🔊 Introduction to the new Text to Speech API.
- Pricing and model options for the Text to Speech API,
- Demonstration of the API's usage and output.
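A sketch of a Text-to-Speech request, again assuming the v1 openai client (`client.audio.speech.create(**speech_request)`, then writing the returned audio bytes to an mp3 file); the input sentence is made up.

```python
# Illustrative TTS request parameters (no actual API call is made).
speech_request = {
    "model": "tts-1",   # "tts-1-hd" is the higher-quality option
    "voice": "alloy",   # one of several default voices
    "input": "Welcome to the OpenAI DevDay recap.",
}
print(speech_request["voice"])
```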
08:58 📈 Testing GPT-4 Turbo's 128k context window.
- Analysis of Tesla and Meta Q3 earnings call transcripts,
- Summary creation using GPT-4 Turbo with a large text input.
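Before pasting an earnings-call transcript into GPT-4 Turbo, it helps to sanity-check that it fits the 128k window. A rough sketch using the common ~4-characters-per-token heuristic (exact counts would need a tokenizer like tiktoken); the reserved-output figure is an assumption.

```python
# Rough token-budget check for GPT-4 Turbo's 128k context window.
CONTEXT_WINDOW = 128_000

def fits_in_context(text: str, reserved_for_output: int = 4_000) -> bool:
    """Estimate tokens at ~4 chars/token and leave room for the reply."""
    estimated_tokens = len(text) // 4
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

transcript = "word " * 50_000  # ~250k characters, ~62k estimated tokens
print(fits_in_context(transcript))
```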
12:13 🤖 Creating a Finance bot using the new Assistant feature.
- Setup of a finance analysis bot,
- Bot's performance in analyzing and reporting on Tesla's Q3 earnings call.
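A sketch of how such a finance bot could be specified with the Assistants API as it launched at DevDay (`client.beta.assistants.create(**assistant_spec)` in the v1 openai client); the name, instructions, and model choice here are assumptions, not the video's exact setup.

```python
# Illustrative Assistants API spec: the retrieval tool lets the bot
# search files (e.g. an uploaded earnings-call transcript).
assistant_spec = {
    "name": "Finance Analyst",
    "instructions": "Analyze quarterly earnings call transcripts.",
    "model": "gpt-4-1106-preview",
    "tools": [{"type": "retrieval"}],
}
print(assistant_spec["tools"][0]["type"])
```

After creating the assistant, the usual flow is to create a thread, add a user message, and start a run that produces the analysis.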
15:23 📊 Final thoughts on OpenAI DevDay and upcoming features.
- Overview of features not covered, like fine-tuning and new models,
- Anticipation for further exploration of the GPT-4 Vision API.
Made with HARPA AI
very interesting !😎
Amazing work as always! btw how many hours do you sleep daily? haha
Bro is AI generated 😂
10-12 hours
what is the topic coming up video?
Awesome!
impressed by 128k context window
Kris - now you have to update your course for beginners :)
I HAVE ALREADY WATCHED 10 CHANNELS ABOUT OPENAI DEVDAY.. better late than never?? 🤔🤔🤔
You're ahead of most of society 👍🏻
There are several default voices
The voices are better than ElevenLabs for sure, unless 11Labs has updated in the last few months. How much more real do you want?? Makes me chuckle how jaded people are. If something isn't perfect, it's just 'meh'. Remember rotary phones, 8-tracks, Pong, libraries, dialup? Whisper voices just 'good'? SMH, flip table.