GPT-4o: What They Didn't Say!

  • Published 13 May 2024
  • While yesterday's GPT-4o announcement has been covered in detail in lots of places, I want to not only cover that but also talk about some of the things they didn't say, and what the implications are for GPT-5
    openai.com/index/hello-gpt-4o/
    openai.com/index/spring-update/
    🕵️ Interested in building LLM Agents? Fill out the form below
    Building LLM Agents Form: drp.li/dIMes
    👨‍💻Github:
    github.com/samwit/langchain-t... (updated)
    github.com/samwit/llm-tutorials
  • Science & Technology

Comments • 70

  • @Aidev7876 (25 days ago) +49

    So what was it that they didn't tell us? This is the only reason I listened...

    • @JG27Korny (25 days ago) +12

      clickbait

    • @rikschoonbeek (25 days ago) +2

      I think from 8:00 you'll hear it

    • @pluto9000 (25 days ago) +5

      @rikschoonbeek I didn't hear it. But you saved me from watching the whole thing.

    • @Merlinvn82 (24 days ago) +2

      GPT-4o is not actually a fine-tune of GPT-4; it's a new model trained on the same GPT-4 datasets.

    • @fellowshipofthethings3236 (24 days ago) +2

      Congratulations on being baited...

  • @winsomehax (25 days ago) +38

    Why free? I think it's the same reason they removed the login: the more people using it, the more data they get to train on. They couldn't stay ahead of Google in the data game otherwise; Google has gigantic amounts of people's data. This also explains why Google has been so stingy with its bots: it doesn't need more of your data.

    • @ondrazposukie (25 days ago) +8

      I think they just want to be as open as possible to make many people use their AI.

    • @neoglacius (25 days ago)

      Exactly. Why are Facebook and Google free? Because YOU'RE THE PRODUCT, now including logistical and operational data from every company on the planet.

    • @rikschoonbeek (25 days ago) +5

      Can't say there is a single motive. But data seems extremely valuable, so that's probably a big motive

    • @kamu747 (25 days ago)

      That's not the reason. Well, not the main one; there might be something to it, but...
      1) It's a competition. Meta changed the pace when they decided to provide their AI for free. Others need to offer something better for free to stay in the game.
      2) It has always been part of their mission to provide free services. There are altruistic reasons behind the intentions of those involved. OpenAI isn't really a company as you know it; it's a movement. A changed world is their ultimate goal. If they don't do this, the global implications are catastrophic: AI risks creating an irreversibly massive divide between classes, because, believe it or not, a lot of people can't afford to pay $20.
      3) This is why OpenAI started as an NGO, but their mission was too expensive; they needed to placate some investors and monetise a little to stay sustainable while compute was expensive. Compute just got cheaper with Nvidia's new H200, which lets them afford to offer services to more people. However, there are certainly more advanced capabilities that paid users will benefit from later on.
      4) As for user data, they no longer need it as much as you think they do.

    • @4l3dx (25 days ago) +7

      Their actual product is the API; ChatGPT is like the playground

  • @kai_s1985 (25 days ago) +4

    It makes sense that this model is based on a different transformer (or tokenizer), because they were calling it gpt2 (gpt2-chatbot, or something like that).

  • @chrisconn5649 (25 days ago) +5

    I am not sure they were ready for launch: "Sorry, our systems are experiencing high volume". Shouldn't that have been expected?

  • @hemanthakrishnabharadwaj6127

    Great content as always, Sam! Excited by how this could be a teaser for GPT-5; totally agree with what you said.

  • @ChrisBrennanNJ (20 days ago)

    Great reviews. Love the channel! Voice IN! Voice OUT! Human doctors learning (better) bedside manners from machines! (Film @ 11).

  • @micbab-vg2mu (25 days ago) +6

    Great update. I am waiting for the audio version of GPT-4o; so far I use it for coding and analysing images.

  • @BiztechMichael (25 days ago) +2

    Re the 1.5 models and the new tokenizer but still GPT-4: I see this as comparable to the Intel "tick-tock" pattern of CPU upgrades. You've got a new process node; first you port the old CPU architecture to run on it (that's the tick), and once that's proved out, you get your new CPU architecture running on it (that's the tock). Then repeat. This let them split the challenge into two phases and gave them something good to release at each phase.

  • @seanmurphy6481 (25 days ago) +2

    To me it would seem OpenAI is using a multi-token prediction method with this new model, but I could be wrong. What do you think?

  • @MeinDeutschkurs (25 days ago) +1

    GPT-4o was able to refactor complex JavaScript code. I was impressed.

  • @Luxcium (25 days ago)

    This is very interesting and informative for anyone who has not seen the presentation, and I am also watching just because your voice is so calming and engaging… Thanks for bringing this to the AI community 🎉🎉🎉🎉

  • @SierraWater (25 days ago) +1

    Been playing with it all last night and this morning... this is a world changer.

  • @mickelodiansurname9578 (25 days ago) +2

    The default setting I saw in their API docs for GPT-4o was 2 FPS; however, it can be increased. I'm thinking there's a sweet spot, but I hope it's not 2 FPS! Also, the audio API controls are not integrated yet, and you have to use the old 'whisper' rigmarole of TTS and ASR.
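    A 2 FPS default means client code has to downsample video before sending frames. A hypothetical helper for picking which frames to keep (the function name and defaults are mine, not from the API docs):

```python
def sample_frame_indices(total_frames: int, src_fps: float, target_fps: float = 2.0) -> list[int]:
    """Indices of the frames to keep when downsampling src_fps video to target_fps."""
    step = src_fps / target_fps          # e.g. 30 fps -> 2 fps keeps every 15th frame
    count = int(total_frames / step)     # how many sampled frames fit in the clip
    return [round(i * step) for i in range(count)]

# A 3-second 30 fps clip (90 frames) sampled at 2 FPS yields 6 frames.
print(sample_frame_indices(90, 30.0))  # [0, 15, 30, 45, 60, 75]
```

    Raising `target_fps` trades more tokens (and cost) for finer temporal detail, which is presumably the "sweet spot" question.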

  • @willjohnston8216 (25 days ago)

    Wow, what an interesting time to be alive. I think it's an improvement in many ways, but only around the edges and not the core 'intelligence'. I'm seeing very similar answers to previous versions and other LLMs. Also, I see that it now does more web searching to include in the results, and it tells me it can store persistent information from our sessions, which seems like a big enhancement. I don't see the 10x improvement that 3.5 to 4 showed, and I suspect they are quite a ways from achieving that in a version 5, but I'd love to be proven wrong.

  • @lucianocontri239 (25 days ago) +1

    Don't forget that GPT depends on deep-learning advancements in the scientific field to deliver something better; it's not like a regular company.

  • @Emerson1 (24 days ago)

    Are you going to any fun events in the Bay Area?

  • @bennie_pie (21 days ago)

    I found it incredibly quick with a simple text completion, but it didn't actually read or do what I asked. It needed reminding to visit the URL I gave it (tool use), which I had to do several times, and it still seemed to prioritise its own out-of-date knowledge over the content it had just fetched. I need to try out all the features fully (I hit the limit after a few messages), but it came across as a bit too quick to churn out code without reading the initial prompt properly... it felt a bit lazy. Perhaps I just need to learn how to prompt it to get the best from it (as was the case with Claude).

  • @samvirtuel7583 (24 days ago)

    If the audio and image are integrated into the model and use the same neural network, how did they manage to dissociate them in the version currently available?

    • @samwitteveenai (23 days ago)

      The model will have different output heads, and they can just turn one off, etc.
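      The "different output heads" idea can be sketched as a toy (an entirely hypothetical structure; OpenAI has not published the architecture):

```python
class MultiHeadDecoder:
    """Toy model: one shared backbone, per-modality output heads that can be toggled."""
    def __init__(self):
        self.heads = {
            "text": lambda h: f"text-decode({h})",
            "audio": lambda h: f"audio-decode({h})",
            "image": lambda h: f"image-decode({h})",
        }
        self.enabled = {"text"}  # launch config: only the text head is switched on

    def decode(self, hidden: str) -> dict:
        # Only enabled heads produce output; the shared representation is unchanged.
        return {name: head(hidden) for name, head in self.heads.items()
                if name in self.enabled}

decoder = MultiHeadDecoder()
print(decoder.decode("h0"))      # only the text output
decoder.enabled.add("audio")     # a later rollout could switch the audio head on
print(decoder.decode("h0"))      # text + audio outputs
```

      Disabling a head this way changes nothing about the shared network, which is one plausible reading of how the modalities were "dissociated" at launch.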

  • @user-me7xe2ux5m (23 days ago)

    Something that I haven't seen widely discussed yet is the opportunity for __personalized tutoring__ that was demoed at OpenAI's GPT-4o announcement event.
    Imagine a world where every student struggling with a subject like math or physics has a personal tutor at hand to help them grasp a difficult subject. Not solving a homework problem for a student, but guiding them step by step through the solution process, so they can derive the solution on their own with minimal help.
    IMHO this will make the entire (on-site as well as online) tutoring industry to some degree obsolete.

    • @samwitteveenai (23 days ago)

      I agree this is huge. I know there are people working on it, and it is going to be one of the biggest areas for all these models.

  • @ahmad305 (25 days ago)

    I wonder if Dall-E will be available for free users?

  • @Decentraliseur (25 days ago)

    They understand the ongoing competition for market share.

  • @buffaloraouf3411 (25 days ago)

    I'm a free user; how can I try it?

  • @Evox402 (25 days ago)

    I tested it with the mobile app. It's quite amazing how fast it can respond. But the whole thing with different emotions (sounding sad, happy, excited) did not work at all; the voice used the same tone and "emotion" every time.
    Did anyone have a different experience with it? Could anyone re-create what they showed in the live demo?

    • @XerazoX (25 days ago)

      the voice mode isn't updated yet

  • @darshank8748 (25 days ago)

    Actually, Ultra 1.0 is able to do image in and out. But as usual with Google, we will witness it next year.

  • @jondo7680 (25 days ago) +2

    It's simple: if GPT-3.5 got replaced by this, GPT-4 will most likely be replaced by something better for paid users.

  • @maciejzieniewicz4301 (24 days ago)

    I have a feeling GPT-4o was trained using a knowledge-distillation teacher-student framework, with 4o being the student and Arrakis (or whatever else) as the multimodal teacher. 😅 I have no proof of it, anyway. Also good to mention the optimized tokenization process.
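    Knowledge distillation, as speculated here, trains a small student to match a larger teacher's temperature-softened output distribution. A minimal sketch of the standard soft-target loss (this illustrates the general technique only; nothing is known about GPT-4o's actual training):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by the temperature."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2 (Hinton et al.)."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return temperature ** 2 * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs zero loss.
print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # ~0.0
```

    The higher temperature exposes the teacher's relative preferences among wrong answers, which is where much of the "dark knowledge" transfer comes from.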

  • @CaribouDataScience (25 days ago)

    So is it only free on the desktop?

    • @markhathaway9456 (25 days ago)

      An Apple machine first, and others over a couple of weeks. They said the API is also free, so we'll see some apps for iOS and Android.

  • @nyambe (25 days ago)

    My GPT-4o does not do any of the new things. It is just like GPT-4.

  • @carlkim2577 (25 days ago)

    That's it, I think. It's a midpoint to version 5. Sam talked before about how multimodality would lead to better reasoning. They were running out of text data, so they clearly shifted focus to video plus audio. That resulted in Sora. Now the audio gets us this.

    • @Anuclano (24 days ago)

      If they ran out of text data, why does it have no idea what Pushkin wrote?

  • @digzrow8745 (24 days ago)

    The free 4o access is pretty limited, especially for conversations.

  • @trixorth312 (25 days ago)

    When does it roll out in Australia?

  • @74Gee (25 days ago)

    Can't build much on the free tier: 16 messages per 3 hours.

    • (25 days ago)

      Why would you build something on a free tier?

    • @74Gee (25 days ago)

      Well I wouldn't, but at 1:20 that's the suggestion.

  • @ps3301 (24 days ago)

    The world doesn't have enough chips to make AI cheap. The technology still requires a lot more innovation.

  • @clray123 (25 days ago)

    Who are these guys? Free Ilya!

  • @J2897Tutorials (22 days ago)

    One thing they didn't say is that you can only ask GPT-4o about 5 questions before being blocked for the day, unless you pay up.

  • @user-ff6cs5em7f (19 days ago)

    One eye? It's just an Illuminati thing.

  • @yomajo (25 days ago) +1

    Spoiler: they told everything. Go on to the next video.

  • @OnigoroshiZero (24 days ago)

    I would bet that GPT-4o is 3-5 times smaller than the original GPT-4, if not even smaller.
    There have been so many advances in the field since GPT-4 was released, especially from Meta, which it would be stupid not to take advantage of. And the model being completely free backs this up; if it were similar to the original, going free would completely bury the company financially.
    And I would guess that GPT-5 will be similar in size to GPT-4, but taking advantage of every known innovation in the field, plus dozens more that OpenAI will most likely have made internally, will make it a couple of times better; and having true multimodality and better memory will likely make it the first glimpse of AGI by the summer of next year.

  • @user-ix1je3sp4k (25 days ago)

    The deal announced by Perplexity and SoundHound ($SOUN): the platform is being used by GPT-4o.