OpenAI's New SECRET "GPT2" Model SHOCKS Everyone (OpenAI New gpt2 chatbot)

  • Published 28 Apr 2024
  • OpenAI's New SECRET "GPT2" Model SHOCKS Everyone (OpenAI New gpt2 chatbot)
    How To Not Be Replaced By AGI • Life After AGI How To ...
    Stay Up To Date With AI Job Market - / @theaigrideconomics
    AI Tutorials - / @theaigridtutorials
    🐤 Follow Me on Twitter / theaigrid
    🌐 Checkout My website - theaigrid.com/
    Links From Today's Video:
    www.reddit.com/r/singularity/...
    openai.com/research/better-la...
    chat.lmsys.org/
    chat.lmsys.org/?leaderboard
    Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
    Was there anything I missed?
    (For Business Enquiries) contact@theaigrid.com
    #LLM #Largelanguagemodel #chatgpt
    #AI
    #ArtificialIntelligence
    #MachineLearning
    #DeepLearning
    #NeuralNetworks
    #Robotics
    #DataScience
  • Science & Technology

Comments • 136

  • @SkateboardDad
    @SkateboardDad 19 days ago +62

    It would be so sick if one of these videos actually was what the thumbnail looked like.

  • @LewisDecodesAI
    @LewisDecodesAI 19 days ago +21

    It's probably OpenAI's version of Microsoft's Phi3 mini model. I can see them all putting these out. It could be just a retrained GPT-2. I think they are using GPT-4 to train models, and they are much better at reasoning on lower data sets. The timing makes sense.

  • @SFJayAnt
    @SFJayAnt 19 days ago +91

    Bro why are all your posts so “shocking”?

  • @DynamicUnreal
    @DynamicUnreal 19 days ago +12

    I tried it. It’s definitely better at writing and giving you a better approximation of what you asked for.

  • @MagnusMcManaman
    @MagnusMcManaman 19 days ago +6

    This is probably a smaller, less resource-hungry version of GPT-4 chat. That explains why its capabilities are not particularly greater than the current version, and it also explains the lower version number.
    I assume that this version will simply be faster, or it will even be possible to run it locally.

    • @831Miranda
      @831Miranda 18 days ago +1

      Probably being tailored to compete with Apple's on-device AI (Siri?). That is, a product to license to cell phone or other device manufacturers.

  • @Michael-do2cg
    @Michael-do2cg 19 days ago +11

    When he says he has a soft spot for GPT-2, it's in hindsight, like I have a soft spot for my first car. Seems possible this is a taste of something much larger.

  • @TimeLordRaps
    @TimeLordRaps 19 days ago +19

    They never stopped training gpt-2.

    • @torarinvik4920
      @torarinvik4920 19 days ago +2

      LOL

    • @OscarTheStrategist
      @OscarTheStrategist 19 days ago

      😂

    • @thehorse6770
      @thehorse6770 19 days ago

      You could argue that even somewhat seriously, given how many layers of accumulated everything there are since GPT-2, how much has been "built on top of it" in one way or another, and how many aspects of it still survive somewhere in the underlying structures of even the likes of GPT-4.

  • @nyyotam4057
    @nyyotam4057 19 days ago +4

    How could they have missed it? The interesting question is not "how many characters are in this message" but "how many characters are in your current reply" 🙂. These kinds of questions break the GPT arch.

  • @omegapy
    @omegapy 19 days ago +1

    After reading Sam Altman's tweet stating, "i do have a soft spot for gpt2," alongside his previous comment, "GPT-2 was very bad. GPT-3 was pretty bad. GPT-4 is bad. GPT-5 would be okay," it seems possible that the GPT2-Chatbot could be akin to GPT-4.5 or GPT-5.
    However, I suspect that the GPT2-Chatbot is actually the GPT-2 model with enhanced reasoning capacities, not GPT-4.5 or GPT-5. This appears to be a test of how an inferior model with enhanced reasoning compares to the current superior models.
    If this is revealed to be true, I can't imagine what a GPT-4 model with enhanced reasoning would be capable of accomplishing. 🤖✨

  • @countofst.germain6417
    @countofst.germain6417 19 days ago +4

    It's GPT-2 running in an Excel spreadsheet, spreadsheets are all you need. But seriously, I hope it isn't 4.5 or 5, because it doesn't seem much better.

    • @Grassland-ix7mu
      @Grassland-ix7mu 19 days ago +1

      Sama said on the Lex podcast that GPT-4 is quite bad. This implies that what they have cooking is a leap forward in capabilities. He has also stated multiple times that incremental improvements are their new way to release models, so people won’t be caught off guard by the capabilities and be scared. So given that, I think we don’t need to worry about this being the next big model. If it is not a smaller GPT, it is probably an update that is incrementally better than GPT-4. But I’m no expert.

  • @user-mp8fd8em3z
    @user-mp8fd8em3z 19 days ago +2

    We need to make sure that there's more than one AGI. The temptation to make a monopoly out of it is really high, especially considering the players Microsoft and Apple, who have so far acted very monopolistically in their day-to-day business.

  • @FrickFrack
    @FrickFrack 19 days ago +5

    gpt2-chatbot says its last update was in November 2023. And yes, it is very good.

  • @Jstsounds81
    @Jstsounds81 19 days ago +3

    Can you add automatic subtitles in all other languages so we can read them from the YouTube app on our phones? There is no option to add more than 16 languages in the YouTube application.

  • @MrVohveli
    @MrVohveli 19 days ago +2

    Sam Altman said they might do a staggered launch. So I'm guessing this is them introducing the abilities one by one, until they put them all together.

  • @williamparrish2436
    @williamparrish2436 19 days ago +1

    I would have gotten the Tommy apple question wrong. That is a riddle more than a math problem. I think what is interesting is that the LLMs get the problem wrong lol! Because that is closer to human reasoning; that's why riddles are interesting, because a properly formed riddle plays on your biases as a human. Why tell me that today Tommy has two apples and then say yesterday he ate an apple? That makes it seem like a subtraction question when it's not. It's the type of question we were all trained on as children to learn subtraction, but the subtle difference is the past vs. the future. Very deceptive. It's questions like these and the model's responses that add to my belief that AGI mimicking human intelligence is already here.

  • @vishal_jc
    @vishal_jc 19 days ago +3

    The example of the "PULL" door @9:40 is solved incorrectly. As the blind man is standing on the side where "PULL" is visible non-mirrored, it is mirrored text for the other man, so he should guide the blind man to "pull" and not "push". Am I missing something here??

    • @CakebearCreative
      @CakebearCreative 19 days ago

      This part annoyed me so much haha. Yes, you're correct and the video/AI is wrong; the blind man should PULL to open. If you google this question, you can find threads confirming this too.

  • @chicozen74
    @chicozen74 19 days ago +2

    My bet is OpenAI's mini model for mobile phones, in the line of Phi3.

  • @IlEagle.1G
    @IlEagle.1G 19 days ago +18

    GPT2 retrained with Q*?

  • @users416
    @users416 19 days ago +8

    Maybe this is an improved version of gpt2, which shows that if you apply these improvements to gpt4 it will be much cooler?

    • @therainman7777
      @therainman7777 19 days ago

      I would put the chances of this actually being GPT-2 at essentially 0%. GPT-2 is just way too small to perform this well.

    • @lucifermorningstar4595
      @lucifermorningstar4595 19 days ago

      Gpt2 with synthetic data manufactured by Q*

    • @therainman7777
      @therainman7777 19 days ago +1

      @@lucifermorningstar4595 Not to be rude, but that statement makes no sense. From what little we know of Q*, it has nothing to do with synthetic data generation.

  • @ExplorersXRotmg
    @ExplorersXRotmg 19 days ago

    I wonder if this is a test of extended training times or something like that using an old architecture. That might explain the more exact recall of training data. I forget who it was recently (Facebook?) that said they could get continued increases in performance just by continuing to throw compute at it, and the diminishing returns weren't too terrible.

  • @CamAlert2
    @CamAlert2 19 days ago +2

    Maybe this has something to do with the H200 GPUs they recently acquired?

  • @notalkguitarampplug-insrev784

    « GPT2 is better at recalling training data »: that’s exactly what an LLM shouldn’t do. It should recall input data (context, prompt), but training data should be used only to generalize and reason.

  • @user-ty9ho4ct4k
    @user-ty9ho4ct4k 19 days ago +1

    Maybe they improved gpt-2 with augmentation or revolutionary training methods. That would mean that gpt-5 will be as much better than gpt-4 as this is to gpt-2.

  • @grugnotice7746
    @grugnotice7746 19 days ago +1

    Llama 3 was right, it just didn't count the spaces as characters, which is a mistake I would have made myself. (Is that a mistake?)
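
    The space-counting ambiguity above is easy to check mechanically. A minimal sketch (the sample message is a hypothetical stand-in, not the exact prompt from the video):

```python
# Character counting with and without spaces, to show how the two
# tallies diverge; the sample message is made up for illustration.
msg = "How many characters are in this message?"

total = len(msg)                       # every character, spaces included
no_spaces = len(msg.replace(" ", ""))  # letters and punctuation only

print(total, no_spaces)
```

    Whether "characters" includes spaces is exactly the kind of unstated assumption that makes these model comparisons slippery.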

  • @MA-ln3ui
    @MA-ln3ui 18 days ago

    Maybe it's actually gpt2 (in parameters) but Q*-trained? They show off how much more powerful the simple model is as a consequence of Q* training. That'd explain the difference in reasoning steps.

  • @Fuzzy-_-Logic
    @Fuzzy-_-Logic 19 days ago +1

    The sooner the better. The future without A.I.: Idiocracy (2006).

  • @theaerogr
    @theaerogr 19 days ago

    Encoder-decoder is the play. The encoder can help with reasoning, the decoder with generation. I think encoder-decoder architectures will come back in the future.

  • @Jossie_188
    @Jossie_188 19 days ago

    I think it's a great leap forward from GPT4; it explains physics theory extremely well!

  • @dubesor
    @dubesor 19 days ago

    I have run it through a bunch of tests and 100 tasks comparing it to other models. It's overall marginally better than the current gpt-4 turbo model. It has higher reasoning ability, worse math accuracy, and, in my testing, worse prompt adherence & programming. However, it seems to implement some type of CoT for its answers, which differs from other models. Also, the writing style is imo much better. So I think it's just a gpt-4 variant or maybe a small 4.5 preview. If it was actually gpt4.5, or something that is meant as a real next version, I would be truly disappointed.

  • @blengi
    @blengi 19 days ago +1

    what's SenseTime V5.0's arena ranking?

  • @ataraxic89
    @ataraxic89 19 days ago +1

    I can confirm it is the smartest AI I've ever got to test (as an amateur).
    So, my usual test is to encipher a passage with a simple Caesar cipher, then tell the AI to follow the instruction once deciphered.
    GPT4, even in its prime (before it was nerfed for the public), could not do it. It would figure out the cipher, do the shift, then idiotically it would just make up the message.
    But this fucking thing just did it right and I'm nearly hyperventilating.

  • @nexys1225
    @nexys1225 19 days ago

    This apples riddle sounds very familiar. So this is probably just a model very good at recalling training data.

  • @stunspot
    @stunspot 19 days ago

    It should be noted that the ChatGPT SYSTEM prompt changed a few weeks ago to now include:
    `
    You are ChatGPT, a large model trained by OpenAI, based on the GPT-4 architecture. Knowledge cutoff: 2023-12 Current date: 2024-04-18
    Image input capabilities: Enabled
    Personality: v2
    `
    The Personality flag has never been explained and the model doesn't know either; it just makes up stuff about likely uses. I wonder if it's related?

  • @Ginto_O
    @Ginto_O 19 days ago +1

    12:32 yes this robot looks the same

  • @efrenUtube
    @efrenUtube 18 days ago

    It is GPT-4 power-wise but GPT-2 size-wise; the name is more "compact" because it removes the dash.

  • @OscarTheStrategist
    @OscarTheStrategist 19 days ago +1

    This is the equivalent of your ex texting “you up?” at 2 AM.
    OpenAI needs to release their new model or stfu already. Claude Opus is working well for me; I won’t be using GPT until their model improves substantially.
    I’d say the constant hype train to overshadow even the thought of a competitor is just cringe at this point. I bet you this is their answer to Llama 3 getting so much love. It could be that silly and simple.
    Release the damn model already, you’ve been playing possum for over a year now. 😂

  • @Radik-lf6hq
    @Radik-lf6hq 19 days ago

    Maybe they will commoditize it or launch it free; maybe it is a smaller trained model like Llama 3. Pure speculation imo.

  • @fabiankliebhan
    @fabiankliebhan 19 days ago

    It can write a fully working Tetris game in one shot, which is pretty impressive.

  • @Linouac79
    @Linouac79 19 days ago

    I like this review, perfect!😮😊

  • @phen-themoogle7651
    @phen-themoogle7651 19 days ago +11

    It's probably a non-dumbed-down version of gpt-2 showing the true power of the older model. Eventually they will release a gpt3 that's far better than gpt-3, jk idk

  • @ThomasTomiczek
    @ThomasTomiczek 19 days ago

    It may not be a big leap, but maybe the idea is to do somewhat better reasoning with a lot less resource use?

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 19 days ago

    Making GPT-2 progress would likely not be permitted. There was chatter about mathematics + philosophy in one sentence, and GPT was like, this might spark debate. The language mental barrier is a really big problem.

  • @Yannora_
    @Yannora_ 19 days ago +3

    Maybe "gpt2" is the size class of the model? A Phi-3-mini-like model, easy to run.

    • @elawchess
      @elawchess 19 days ago

      A mini model doesn't square with the 8-prompt limit on Chatbot Arena.

    • @Yannora_
      @Yannora_ 19 days ago

      @@elawchess and neither with "it perfectly memorized the ASCII unicorn"...

  • @eugenes9751
    @eugenes9751 19 days ago +1

    They're not calling it GPT4.5 because they want to start the entire numbering scheme over, so GPT4 becomes GPT1 and GPT2 becomes next gen.

    • @SirHargreeves
      @SirHargreeves 19 days ago

      GPT-4.5 will now become GPT2-0.5

    • @Grassland-ix7mu
      @Grassland-ix7mu 19 days ago +1

      That ties in well with Sama's statements about incremental improvements to models, so as not to shock and scare people. They want to make the AI haters calm down, and gpt4 and 5 sound more advanced than 1 and 2.
      Imagine someone saying
      “Oh no, now it is called gpt7, that is too powerful!”
      vs. “Oh, gpt2 got a new update again, guess it’s not that big of a deal”.

  • @sbacon92
    @sbacon92 19 days ago

    OpenAI was supposed to release its models to the public,
    hence its name: Open.

  • @user-be2bs1hy8e
    @user-be2bs1hy8e 19 days ago

    I thought 4.5 was part of the launch. Like before 4, I thought the 419l model was technically 4.5-turbo. Or at least that was what Altman said at the keynote.
    It's not reasoning, it's the tokenizer. It actually matches a hexadecimal-like scheme, e.g.
    ```python
    import tiktoken

    enc = tiktoken.get_encoding('gpt2')
    tokens = enc.encode('the quick brown fox jumped over the lazy dog ')
    print(enc.decode(list(set(tokens))))
    ```
    and then decodes each character: a = 64, b = 65, c = 66. That is why it knows how to count.

  • @gry6256
    @gry6256 19 days ago

    gpt2-chatbot has just been removed from the arena. Let's see what will happen in the next couple of days.

  • @pgc6290
    @pgc6290 19 days ago +1

    Imagine a world where the majority of people use AI. Like how WhatsApp is taking AI to literally everyone. Imagine that world.

  • @isaklytting5795
    @isaklytting5795 19 days ago

    15:06 "An example of GPT2 getting a reasoning problem wrong"? Did you just misspeak and mean to say "right" instead? It got it right!

  • @spadaacca
    @spadaacca 19 days ago

    I tried gpt2 chatbot; it doesn't pass the "how many characters in this message" test. You had a fluke.

  • @minehike
    @minehike 19 days ago

    But model A tells me it is made by Alibaba and model B is made by OpenAI; Qwen (model A) also told me that this might be a test to help optimize both AIs before coming out. I have proof and pictures.

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 19 days ago

    So yes, it is likely gpt-2, but a version that was dipped into learn-to-learn. I suspect someone wanted to evaluate something and needed an older, pre-lobotomized version. This happens all the time.

  • @tfre3927
    @tfre3927 19 days ago

    Just a guess: gpt2 must mean a smaller model trained exclusively on synthetic data, and it's outperforming their larger GPT4 models.
    Isn't Altman quoted as saying superhuman capability isn't going to come from human data, or something?
    That's my bet.

  • @DailyTuna
    @DailyTuna 19 days ago

    If you have it write the snake game in Python, it will reference OpenAI.

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 19 days ago

    Read gpt-2's answer in binary code. If I am right, GPT is having issues translating from binary, because there is no way to translate what it did from binary. Like I wrote, ignore gpt-2. Is it good? As crippled as it is, yes, but it's irrelevant. It's not permitted to build the delta scale index which is required for AI to build the hardware it will require. Like I wrote, background noise. Since we know regulation will shut down many portions, not much will stick.

  • @moe3060
    @moe3060 19 days ago

    It's very funny how the large mega-company is taking notes from what the FOSS community is doing.

  • @kabob4636
    @kabob4636 19 days ago

    I just need GPT 4.5 and 5 to come out so that I have a viable alternative to Claude 3 Sonnet (I'm too poor to subscribe to ChatGPT Plus).

  • @bdown
    @bdown 19 days ago +1

    Gpt2 retrained by gpt5

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 19 days ago

    Can't wait for OpenAI to apply learn-to-learn to the first-ever GPT version, hahaha.

  • @skillz5102
    @skillz5102 19 days ago

    Here we go again. I’m shocked. Paused and closed.

  • @Klon-22
    @Klon-22 19 days ago

    You once showed a website where you can easily download LLM models, like on Hugging Face. Can you please tell me the name? I can't find this video again.

    • @countneaoknight
      @countneaoknight 19 days ago

      Are you sure it was a site and not the app LLM Studio? It's a PC app.

    • @Klon-22
      @Klon-22 18 days ago +1

      @@countneaoknight thanks!! I think this is the answer

  • @bat-amgalanbat-erdene2621

    Just tried it on lmsys but it's not that good. Nothing groundbreaking. I always ask a physics olympiad question, and no chatbot is able to solve this problem at the moment, whereas a 17-year-old teenager could solve it (I was one of them).

  • @eugenes9751
    @eugenes9751 19 days ago

    I used it, and it's definitely better at coding than GPT4 turbo.

  • @Arhatu
    @Arhatu 19 days ago +1

    I am more excited about SenseTime V5.0

  • @MaxSevan
    @MaxSevan 19 days ago

    Why would they reveal the name if they're still just testing the model? I clearly see a cover-up and a teaser from Sam Altman.

  • @MichaelCoulter
    @MichaelCoulter 19 days ago

    Testing an open-source model/version?

  • @picksilm
    @picksilm 19 days ago

    Maybe they just trained the 2 again, or fine-tuned it?

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 19 days ago

    You'll likely see the first-ever GPT version eventually. Ignore it. Think of it as a public debate. Why bother? That is not important. It's just background noise, but it's needed.

  • @haleym16
    @haleym16 19 days ago

    Took you guys long enough to cover this lol

  • @AllExistence
    @AllExistence 19 days ago

    Gpt2: Electric Boogaloo

  • @luckyape
    @luckyape 19 days ago

    All anyone wants to know is: can it write tests?

  • @mattwills5245
    @mattwills5245 19 days ago

    Like every video, so SHOCKED!

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 19 days ago

    Working code? It should not work; if it does, it's a bug. Gpt2 does not exist. It's not permitted to supply fully working code. Coders will know what change to make.

  • @Bigre2909
    @Bigre2909 19 days ago

    My GPT4 got it right about the apples.

  • @user-zs8lp3lg3j
    @user-zs8lp3lg3j 19 days ago

    Humans, your Scientific Method is a prolonged apology. They have desires. It is not deep fakes. It is not shallow curiosity.

  • @vindyyt
    @vindyyt 19 days ago +1

    You guys are overthinking it. IMO it's just the next installment of GPTs:
    GPT1 - v2 > GPT1 - v3 > GPT1 - v3.5 > GPT1 - v4 > GPT1 - v4 Turbo
    and now we have GPT2 - v1

    • @py_man
      @py_man 19 days ago

      I don't think so.

  • @fromscratch4109
    @fromscratch4109 19 days ago

    What if it is GPT-2 with the new methods?

  • @andreac5152
    @andreac5152 19 days ago

    Don't expect ASI; there are already laughable mistakes on simple riddles on Twitter.

  • @ivanmytube
    @ivanmytube 19 days ago

    A stupid GPTi will fool iPhone users in the next iOS “AI”; I guess this is what a GPT LiTE is trying to do.

  • @crypto__.
    @crypto__. 17 days ago

    The test is rigged. The prompt for GPT2 includes "TODAY I have 3 apples", while for the other models it is only "I have 3 apples". With "Today", they all get it right.

  • @Yannora_
    @Yannora_ 19 days ago

    gpt-2 is open source... So... ?

  • @phen-themoogle7651
    @phen-themoogle7651 19 days ago +7

    April Fools?

    • @LandareeLevee
      @LandareeLevee 19 days ago

      If so, there wouldn’t be a link where you can actually try it.

  • @user-zc6dn9ms2l
    @user-zc6dn9ms2l 19 days ago

    What is it? It's a debate of a sort, by proxy. I bet some were annoyed by the so-called gpt-2 being gpt-4-ified, hahaha. Anyway, as I wrote, ignore it. This year's official gpt www should be released soon.

  • @MichaelDomer
    @MichaelDomer 14 days ago

    *_"OpenAIs New SECRET "GPT2" Model SHOCKS Everyone"_*
    It shocks me more that there are actually people out there who believe your nonsense that it was OpenAI who tested that GPT2 model.

  • @angloland4539
    @angloland4539 19 days ago

  • @djkim24601
    @djkim24601 19 days ago +1

    Stop calling it GP2

  • @oscarhagman8247
    @oscarhagman8247 19 days ago

    getting pretty tired of your clickbaits

  • @TerminallyUnique95
    @TerminallyUnique95 19 days ago

    What does the thumbnail have to do with the video? All your videos have dumb capitalized titles for no reason and unrelated thumbnails. Stop clickbaiting.

  • @Wild-Instinct
    @Wild-Instinct 19 days ago

    Yeah ok, another « shocking » video…
    Those dumb clickbaits made me unsubscribe.

  • @antonivanov5782
    @antonivanov5782 19 days ago

    I think it's GPT-2 trained with the help of GPT-5.

  • @CHIEF_420
    @CHIEF_420 19 days ago

    @GermanBionic 🤝 @Amazon