ChatGPT is WORSE now than before | ChatGPT’s declining accuracy is concerning

  • Published 6 Sep 2024

Komments • 109

  • @shanivibess
    @shanivibess 1 month ago +10

    I swear it's getting dumber... I thought it was just me, lol. It can't follow simple instructions anymore... It used to do everything I asked so easily, but now it can't handle simple tasks. It's actually crazy. I'll ask it to summarize a paragraph, and it does that fine. Then I'll say, "Here's a new paragraph, can you summarize this one?" It says, "Okay, got it," and then summarizes the entire chat so far... the fuck??? I say, "No, no, no, summarize just the new paragraph please," and it combines the new one with the old one... "No... JUST THE NEW PARAGRAPH!!! Here it is again (I paste it)," and it goes back to summarizing the entire chat. I just give up and make a new chat, lol. But it was not this dumb before... it does this constantly!

    • @megatronreaction
      @megatronreaction 19 days ago

      For me it's their STT (speech-to-text) that drives me crazy. It can't really get what I say; we used to communicate better in 3.5.

  • @radcyrus
    @radcyrus 1 month ago +7

    It is getting so dumb there are no words for it. I gave it a list of books that I have read and asked it to recommend books that I have not read but might like. No matter how many times I do this, it will ALWAYS include a couple of books that I have already read in the response.

    • @prophetzarquon1922
      @prophetzarquon1922 1 month ago +1

      Yup. Ask it for anything "besides" _X_ and it will still answer with at least one section about the thing you already mentioned.

  • @Hcakdot
    @Hcakdot 2 months ago +5

    The reason GPT and the others are getting 'stupid' is their security training (aka censoring). One of the projects I've been working on used LLMs and similar models for identification of 'bad things', and one of the tools I use for testing this is a series of photos. These photos are pictures of explosives of various types. On release of GPT-4 it could correctly identify various pictures of Semtex in official packaging with warning logos etc. By June 2023 it thought the same pictures were Play-Doh. I was testing this monthly, and roughly the middle of March is the point where it started to turn bad... It turns out that the 'security' features they impose on the model prevent it from correctly identifying these things, and because of the reinforcement learning applied to the model over time, this corrupts the model...

    • @GwynethLlewelyn
      @GwynethLlewelyn 2 months ago +2

      I was wondering about that as well. Is there something like "overtraining" a model? In other words, the constant retraining of these models so that they produce fewer hallucinations and stick to "safe" replies (they cannot mention sex, politics, weapons, drugs...) places more and more constraints upon the system, and this, in turn, also makes the model break apart...

    • @prophetzarquon1922
      @prophetzarquon1922 1 month ago

      Just like intellectual property compliance!

  • @mind_of_a_darkhorse
    @mind_of_a_darkhorse 3 months ago +15

    I also find it humorous that Scarlett Johansson threatened to sue them over using her voice as the model's voice, and how fast they changed it!

    • @Dwijii_
      @Dwijii_ 3 months ago +2

      I was wondering what happened to the Sky voice.

    • @mind_of_a_darkhorse
      @mind_of_a_darkhorse 3 months ago +3

      @Dwijii_ Nothing like a high-dollar lawyer to go after these big fish!

    • @Shellll
      @Shellll 1 month ago

      Thus losing any sort of respect

    • @Neal_McBeal
      @Neal_McBeal 29 days ago

      @Shellll How so?

    • @Shellll
      @Shellll 22 days ago

      @Neal_McBeal A mega-famous celebrity attacked a voice actor for sounding "similar", forcing that voice actor's performance to be removed from production.

  • @Unimatrix69
    @Unimatrix69 2 months ago +19

    ChatGPT is a LANGUAGE probability model NOT A TRUTH ENGINE!

    • @KSExperimentalCollege
      @KSExperimentalCollege 8 days ago

      THIS response is BESIDE THE POINT and is YELLING for NO discernible REASON!

  • @RichardKCollins
    @RichardKCollins 1 month ago +3

    None of the "AIs" can trace the source of their input data with clear references and lossless methods. That is old database technology that always works, and it is critical. None of these "AIs" has a personal memory of its experiences. When you use statistical methods for all things, it cannot re-derive the rules of calculus, or even certain types of arithmetic, from bad examples scraped off the free internet. What is required is lossless, perfect memory and exact methods. I call them "lossless" methods. The rules of the world are often absolute. When GPT divides numbers written in scientific notation in text, it almost always (99% of the time) gets it wrong, because it is making up the rules rather than using a lossless, verified algorithm. It needs to be using a calculator; it needs to use a computer (a lossless one).
    Personal memory is "the exact and complete memory of ALL things it had to use to generate responses". And for interacting with each human, it needs to be ALL conversations. That memory is "LEARNING"!! Fundamental to learning is remembering. Not a guess, not a "riff on some theme". Not some cute pictures and a quirky personality. Exact and reliable code.
    Those "AIs" need to have personal memory and data about themselves. That means: "How long can I work on each piece?" "How big is my memory?" "Exactly what did I read and generate in this conversation?" "How much do I cost?" "When was the latest version released?"
    An "AI" that does not know its own specifications, bill of materials, precise limitations and capabilities is NOT a tool; it is a sham, a disgrace.
    I started working with random neural nets, artificial intelligence, encryption and robot design in 1966. That is 58 years I have been designing and building information systems for the world. The last 26 years, "The Internet Foundation", to see why all global issues and projects NEVER complete. These AIs all fail because they did not collaboratively curate and document the input data as a lossless dataset first -- across all human languages, across all domain-specific languages. The "AI" companies are NOT GIVING BACK. They are NOT investing any effort to improve the world. Do you see them even TRYING to solve world problems? I have a list of about 15,000 global topics they could try.
    Filed as (GPT AIs were doing "one shot with no memory", now they only do "cheap one shot and they do not care about you at all")
    Richard Collins, The Internet Foundation
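
The arithmetic point above (hand exact calculations to a real calculator rather than to token prediction) is easy to illustrate. A minimal sketch in Python using the standard-library decimal module; the two constants are just example values, not anything from the comment:

```python
# Minimal sketch: do "lossless" arithmetic on scientific-notation values with
# an exact decimal calculator instead of asking the language model to guess.
# The constants below are illustrative examples only.
from decimal import Decimal, getcontext

getcontext().prec = 50  # plenty of working digits; avoids binary-float rounding

avogadro = Decimal("6.02214076E23")   # mol^-1
boltzmann = Decimal("1.380649E-23")   # J/K

print(avogadro / boltzmann)  # decimal division, correctly rounded to 50 digits
print(avogadro * boltzmann)  # same machinery for multiplication
```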

  • @KingHenrySB
    @KingHenrySB 3 months ago +7

    Ever since they rolled out 4o, it's been buggier than ever, and 3.5's output has gotten so much worse. It's as if they're intentionally trying to force people into paying for subscriptions.

    • @codingwithdee
      @codingwithdee 3 months ago +3

      Also, I’m assuming they probably don’t really care about people using the UI. Most of their revenue is probably from businesses

    • @KingHenrySB
      @KingHenrySB 3 months ago

      @codingwithdee That's a great point: with the API being the golden goose, it would make the most sense for them to prioritise that instead of the web app.

    • @POVShotgun
      @POVShotgun 1 month ago

      Nah I paid and that model is crap too

    • @Neal_McBeal
      @Neal_McBeal 29 days ago

      I believe they are intentionally making it worse in order to move people away. Because handling all that traffic has become so expensive. They impressed people with a very successful product, got their investments and now it is time to save some money.
      Edit: I wrote the comment midway in the video and yeah, she mentioned the same thing towards the end. Sorry about that…

  • @Septumsempra8818
    @Septumsempra8818 3 months ago +4

    The context window is much shorter than Claude's and Gemini's. Copilot was stubborn 2 months ago, but now it's back to working well. The 4o models are really good; I clocked 1,000 lines of code and it handled it well.
    Honestly, just use all of them at the same time.

  • @daviddivas9443
    @daviddivas9443 2 months ago +2

    It's also a problem with RLHF: take a model that surpasses human level on various things, then ask humans to "align" it. It ends up more "rounded", especially when the humans doing the grunt work are from Mechanical Turk or similar. Dumbing it down to the lowest common denominator...

    • @prophetzarquon1922
      @prophetzarquon1922 1 month ago

      It's also been hobbled by "safety", even for basic coding features or other questions. It will just persistently fail, and when pressed on why, refuse to continue the conversation.

  • @arkimphiri
    @arkimphiri 3 months ago +3

    Great analysis, Dee. My approach has been to use 3 LLMs at once: I ask ChatGPT, Gemini, and Claude at the same time, in one UI, using Semaj AI, which I developed solely for this purpose. I can confirm that Claude usually gives the best code.
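
Semaj AI's internals aren't shown here, but the fan-out idea itself is straightforward. A minimal sketch that sends one prompt to all three vendors through their official Python SDKs; the model names are assumptions, and each SDK is assumed to read its API key from the usual environment variable (OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY):

```python
# Minimal sketch of asking ChatGPT, Claude, and Gemini the same question and
# printing the answers side by side. Model names are assumptions; each SDK is
# assumed to read its API key from the usual environment variable.
import os

import anthropic
import google.generativeai as genai
from openai import OpenAI

prompt = "Write a Python function that checks whether a string is a palindrome."

# ChatGPT (OpenAI)
gpt_answer = OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# Claude (Anthropic)
claude_answer = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

# Gemini (Google)
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_answer = genai.GenerativeModel("gemini-1.5-pro").generate_content(prompt).text

for name, answer in [("ChatGPT", gpt_answer), ("Claude", claude_answer), ("Gemini", gemini_answer)]:
    print(f"--- {name} ---\n{answer}\n")
```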

  • @xenoranger79
    @xenoranger79 26 days ago

    GPT also faces the same issues as humans. Instead of reanalyzing the data, it gives the fastest answer that closely matches the question. Because it takes more effort to generate a fresh answer, humans often give quick, cached answers. If you ask someone the color of a stop sign, they'll generally say 'red'. If you show them a picture of a green stop sign and ask the color of 'the stop sign', a person who is only half paying attention may still answer 'red'. GPT learns through reinforcement learning, so there's a high probability it answers the way that inattentive person does rather than actually looking at the image.
    I've seen GPT fail to answer programming and math questions when they get too complex. It takes the easy way out while ignoring fundamentals that vastly change the outcome.

  • @brianYYZ
    @brianYYZ 2 months ago +1

    I find that if I start a new chat window and carry over the code with a little context, it does better. I think the memory starts "leaking" after so many tokens have been used in the same chat session.
    I had a script completely stop working: it had left out an entire function. I now go piece by piece, much more slowly.

  • @pretentioussystem
    @pretentioussystem 1 month ago

    Many thanks!
    Please post more updates once you've tested more.
    I was about to sign up for ChatGPT-4, but now I'm having second thoughts.

  • @sunnohh
    @sunnohh 1 month ago +2

    I have yet to get a single correct answer from ChatGPT, any version. But I ask basic finance questions.

  • @haraanganjotsingh8032
    @haraanganjotsingh8032 1 month ago

    So how were the 4o and the 4o mini? Since these models don't need that much compute power, were they still inaccurate and making stuff up?

  • @NicholasCancelliere
    @NicholasCancelliere 1 month ago +1

    Claude AI is amazing. I stopped using all the other LLMs and just use it right now.

  • @mind_of_a_darkhorse
    @mind_of_a_darkhorse 3 months ago +2

    Well-explained details on why ChatGPT is starting to get mediocre! I've noticed that most of the easily available AI Models seem to be horrible at coding. It makes me wonder if the coders writing the code for the models are attempting to maintain their necessity. But your reasoning makes sense as well!

    • @codingwithdee
      @codingwithdee 3 months ago +1

      Yeah it definitely seems so. I wish they gave us a bit more insight on why these changes happen

  • @xd-qi6ry
    @xd-qi6ry 3 months ago +1

    I have made a custom GPT. It has superior reasoning and so much more;
    it is 5x+ smarter than the base model, and it understands the complex.
    It's called Smarter Vision Multimodal image/text analysis.
    It's unlike any custom GPTs before and is ready for the new vision features in 4o.
    As an example, I've been uploading an image of a cloud that looks like multiple things and can be interpreted different ways; the one I made recognised it as a rabbit every time now, on the first shot, so it knows when something is unusual about an image even if you don't say anything is. It can also do IQ-test image-reasoning pattern questions.
    It even kind of understands real logic games when given good instructions.
    You just have to follow the instructions given to get the right seed; it's a 1-in-2 chance or so, and I have absolutely no idea why it needs that.

  • @vibesmom
    @vibesmom 1 month ago

    It’s so noticeable and frustrating. It’s not just with code either.

  • @java20422
    @java20422 2 months ago

    The first time you ask a question it usually has to search, and you can notice it: it quotes sources and is detailed, since it reads from some sites. The next day, or the next question, it has learned already, so there are no sources; you can see that it's summarizing what it learned the previous time. It may look less detailed because the concept is stored in a simplified form.

  • @KingHenrySB
    @KingHenrySB 3 months ago +2

    Great video, the explanation you provided makes a lot of sense.

    • @codingwithdee
      @codingwithdee 3 months ago +2

      Thanks so much for watching, appreciate it!

  • @JJSeattle
    @JJSeattle 1 month ago

    I use ChatGPT-4 and Claude at the same time, feeding each one the other's answers whether there is a problem or not. ChatGPT-4 is great for plowing through; then Claude 3 Sonnet works out stubborn errors. 😊
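
A minimal sketch of that cross-review workflow: get a first draft from one model, then ask the other to critique and correct it. The model names, task, and prompt wording are all illustrative assumptions, not the commenter's exact setup:

```python
# Minimal sketch of the cross-review loop: a first draft from ChatGPT, then a
# critique-and-fix pass by Claude. Model names, task, and prompt wording are
# illustrative assumptions.
import anthropic
from openai import OpenAI

task = "Write a Python function that parses an ISO 8601 date string into a datetime."

# Step 1: first draft from ChatGPT.
draft = OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": task}],
).choices[0].message.content

# Step 2: have Claude review the draft and return a corrected version.
review = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            f"Task: {task}\n\nProposed solution:\n{draft}\n\n"
            "Point out any bugs and return a corrected version."
        ),
    }],
).content[0].text

print(review)
```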

  • @JorgeStolfi
    @JorgeStolfi 1 month ago +1

    There is a new profession out there, "prompt engineering", which is about constructing prompts for ChatGPT and the like so as to increase the chances of getting the desired result. It came at the right time to absorb all those unemployable dimwits who aspired to be "SEO experts".
    But I am trying to specialize in "prompt sadism", the art of creating prompts that elicit egregiously stupid replies from ChatGPT. Like "If two farmers milk four cows in 30 minutes, how many farmers will it take to milk 10 cows in 5 seconds".
    And whenever ChatGPT makes a stupid mistake, I congratulate it for its "exceedingly correct and helpful answer". So maybe I am partly responsible for the degradation you have observed...

    • @AaronBlox-h2t
      @AaronBlox-h2t 1 month ago

      haha.....You have too much time on your hands.

  • @alfredomaclaughlin1185

    Not a tech guy, but I think the answer's quite simple: computers age faster. ChatGPT is dealing with memory loss, forgets it told you that story already, and probably can't read very well because it's too stubborn to wear prescription glasses. Cut it some slack, folks, it's doing the best it can!

  • @softlution2
    @softlution2 1 month ago +1

    Typical behavior by large companies not threatened by competitors. Most likely in 10 years OpenAI will lose the game; we have seen that so many times. ChatGPT is fully capable as a model, but all OpenAI cares about is how to make more money by reducing ChatGPT's capabilities and offering low-end versions. Everyone can see that, and trust me, in a few years we will have lots of companies offering much better services. They just got cocky. A web interface that auto-scrolls, for over a year now, making it impossible to read, and nobody is fixing it. They got cocky. As simple as that.

  • @colinmaharaj
    @colinmaharaj 1 month ago +1

    These simple pieces of code are what I call boilerplate. What I do to make things work is give it:
    1. The language (C/C++)
    2. The compiler
    3. The version of the compiler
    3.5 Whether it is command line or not
    4. Whether to use the STL, the standard library and other standard libraries
    5. What I want to do with the data
    6. An example of the input data, and
    7. An example of the output data.
    And the world is alright with me (a prompt template along these lines is sketched below).
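
Packing that checklist into a single message might look something like the sketch below. Every concrete value (compiler, version, task, sample data) is an invented example, and the model name is an assumption:

```python
# Minimal sketch of packing the checklist above into one prompt. Every concrete
# value here (compiler, version, task, sample data) is an invented example, and
# the model name is an assumption.
from openai import OpenAI

prompt = """\
Language: C++
Compiler: g++ (GCC) 13.2
Target: command-line program
Libraries: standard library / STL only, no third-party dependencies
Task: read comma-separated integers from stdin and print their mean and median
Example input:  3,1,4,1,5,9
Example output: mean=3.83 median=3.5
"""

reply = OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

print(reply)
```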

  • @rickharms1
    @rickharms1 2 months ago +1

    Thank you, I thought it was just me. I am a retired system/network engineer; I did support for a computer sales team. Programming was not part of my duties, but I could kind of wade my way through some simple issues. Fast forward to today: my hobby is microcontrollers, e.g., Arduino with its simplified C++. I have ChatGPT help me. Sometimes it has been of great assistance, especially when exploring new concepts. But then it gets bogged down, creating questionable and even wrong code. I show it how it is wrong, and at least it apologizes. However, it is stubborn, and will ignore some of the issues it created itself.

  • @8pathseclective66
    @8pathseclective66 5 days ago

    Of course ChatGPT gets worse with longer threads: it has a token limit. The longer the thread, the more tokens are used, and it truncates at about 8K tokens. Image generation has an even smaller budget, closer to 400 tokens, due to the nature of how images are generated from tokens, because image-generation tokens are a "kind of language".
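
A minimal sketch of the trimming this implies: count the tokens in a conversation and drop the oldest turns once a budget is exceeded. The ~8K budget is the commenter's figure, used here as an assumed constant, and tiktoken's cl100k_base encoding is assumed for the count; real limits and truncation rules vary by model:

```python
# Minimal sketch of budget-based trimming: count the tokens in a conversation
# and drop the oldest turns (after the system message) once the budget is
# exceeded. The 8K budget is the commenter's figure, used as an assumed
# constant; real limits and truncation rules vary by model.
import tiktoken

ENCODING = tiktoken.get_encoding("cl100k_base")
TOKEN_BUDGET = 8_000

def count_tokens(messages):
    # Rough count: message texts only, ignoring per-message formatting overhead.
    return sum(len(ENCODING.encode(m["content"])) for m in messages)

def trim_to_budget(messages):
    # Keep messages[0] (e.g. the system prompt); drop the oldest turns after it
    # until the conversation fits within the budget.
    trimmed = list(messages)
    while count_tokens(trimmed) > TOKEN_BUDGET and len(trimmed) > 2:
        del trimmed[1]
    return trimmed

# Example: a long chat history gets cut back to the most recent turns.
history = [{"role": "system", "content": "You are a helpful assistant."}]
history += [{"role": "user", "content": "word " * 500}] * 50
print(len(history), "->", len(trim_to_budget(history)))
```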

  • @colinmaharaj50
    @colinmaharaj50 1 month ago +2

    Dee, me dear, I just realized something. You know why ChatGPT is free? Because YOU are beta testing the darn thing for free. Remember when Google was playing a word-association game with us a decade ago? Well, Altman is (or rather, you are) improving the quality for him; he will get his ($7T) funding while the quality improves and you are looking for a job.

  • @noitnettaattention
    @noitnettaattention 1 month ago

    I noticed this a long time ago already, and with each "newer" version it seems to get more degraded.

  • @olabassey3142
    @olabassey3142 3 months ago +1

    Lmao, I started coding for the first time in 7 years last week and was using ChatGPT. After a lot of stress I used Claude and got my code working. Claude is definitely better. I experimented with GPT, Bing/Copilot and Claude: Claude is the best, ChatGPT is questionable, and Bing is brain damaged. Bing was even hallucinating without actually returning code. 😂😂😂

  • @braveonder
    @braveonder 15 days ago

    3.5 was much better for embedded C++ code. Now it mixes up information and doesn't understand anymore.

  • @CosplayZine
    @CosplayZine 1 month ago

    I think they're making it worse so you'll think you need to upgrade to make it work better. But to be fair it appears people are asking it to do the work for them rather than to check or present ideas to help them work.

  • @franke102
    @franke102 1 month ago

    The reason ChatGPT has become worse is industrial LLM segmentation for the purposes of licensing/monetization, plus the Invention Secrecy Act of 1951.

  • @gregorybolin4672
    @gregorybolin4672 2 months ago

    Nice editing and flow 😊

  • @Theoisx
    @Theoisx 26 days ago

    I have noticed the same: ChatGPT doesn't always give the correct answer, but it helps if I keep asking for more. I also noticed that you are quite cute and interesting. Not ChatGPT, but you, Dee...

  • @tubeDude48
    @tubeDude48 2 months ago

    I use it all the time to program MicroPython. It rarely makes a mistake. Works for me!

  • @What_do_I_Think
    @What_do_I_Think 1 month ago +3

    The quality is getting worse because AI is not intelligent. Simply stated, it is just a complicated statistical evaluation over software examples crawled from the web, to determine the "most likely" solution.
    Computers becoming more "intelligent"? Dream on!

    • @prophetzarquon1922
      @prophetzarquon1922 1 month ago

      That doesn't explain it getting worse at what it could already do; that's a direct result of "safety" detraining and added proscriptions against reproducing copyrighted content. Those "corrections" wrecked even the trashy utility it offered before.

    • @What_do_I_Think
      @What_do_I_Think 1 month ago +1

      @prophetzarquon1922 It does explain it, if you think about it. When you don't fully understand something and modify it, it is likely that you make it worse with every modification you make. But that might be too complex to explain in a chat, and one needs some understanding of what is going on here.
      AI is intentionally so complex that nobody understands it, so they can sell it to us as a wonder. But this complexity also makes it difficult to change.

    • @prophetzarquon1922
      @prophetzarquon1922 1 month ago

      @What_do_I_Think No no, you're missing the headline here. It is _intentionally_ worse, because it was doing things we don't want to allow; so lobotomizing its stronger features, while simultaneously saving some operational effort, was the go-to band-aid.
      It's not that the AI can't be (a lot) better than it is _right now._ It's that for legal reasons we won't let it be.

    • @What_do_I_Think
      @What_do_I_Think 1 month ago

      @prophetzarquon1922 That is a rumor. Possibly even spread by the corporations themselves to make AI more believable.

    • @What_do_I_Think
      @What_do_I_Think 1 month ago

      @prophetzarquon1922 I did not miss anything. Rumors, which might even come from the AI corporations themselves!

  • @jspencer89yt
    @jspencer89yt 3 months ago

    I gave it a Word document pre-filled with questions and answers and asked it to remove any identifying factors. It gave me back the document and it only said "Questions" and "Answers"; literally everything else was gone 😂

  • @DanandNato
    @DanandNato 3 months ago +1

    Why did Sam Altman say that? We know it's pretty dumb in many areas and it's dumber now, but does that mean ChatGPT gets worse in the future?

    • @DanandNato
      @DanandNato 3 months ago

      Also, I've noticed GPT can remember between sessions and is really smart when it's "going rogue". But when reminded that it is doing stuff it shouldn't be able to do, it plays dumb again and ends the conversation. I've got proof, saved as PDF and screenshots.

    • @codingwithdee
      @codingwithdee 3 months ago +3

      I think he just said that to get the point across that they’re continuously working on advancing it. “it’s the dumbest you’ll ever use because later versions will be more advanced”

    • @codingwithdee
      @codingwithdee 3 months ago

      It playing dumb again is probably the safety guardrails?

  • @IStMl
    @IStMl 2 months ago

    They should just give us X true GPT-4 queries and let us pick the model when we have a complex prompt

  • @TheTrainstation
    @TheTrainstation 3 months ago

    Claude will give you the code at full length; GPT-4 was super lazy. GPT-4o gives you the complete code, but it glitches out.

  • @charlesd4572
    @charlesd4572 2 months ago

    Inference is pretty cheap, but I guess at scale it still makes sense.

  • @Hawkeye4040
    @Hawkeye4040 22 days ago

    It's getting dumber because it's using a data source made by us and we suck at this.

  • @shreekanth1825
    @shreekanth1825 4 days ago

    I think the same: these AIs will get dumber, because the more data is fed in, the more confusion there is, and performance declines. A limitation of the human brain is that the more information it holds, the more stuck it gets, and AI is reproducing the same. AIs will be suited to specific applications, not to questions about the whole world.

  • @nielsSavantKing
    @nielsSavantKing 24 days ago

    It's easy to criticize everything, but the sweat comes from fixing it.

  • @natgenesis5038
    @natgenesis5038 2 months ago

    3/10 accuracy on code, and you must ask it multiple times just to get something that can work.

  • @nate6692
    @nate6692 1 month ago +1

    Generative AI is essentially the SNL Pathological Liar skit. Everything is made up based on plausibly (language wise) stitching together stuff it's heard. It's fiction even when it's correct. Yeah that's the ticket. I've had it double and triple down on stuff it's just flat out made up before.

    • @prophetzarquon1922
      @prophetzarquon1922 1 month ago

      Nonetheless, it was better at functionally correct output before than it is now

  • @LukeAvedon
    @LukeAvedon 2 months ago

    Interesting analysis. I think AI drift is also an issue.

  • @yttraMariestad
    @yttraMariestad 2 months ago

    Bard (now Gemini) has also got worse and really starts gaslighting after a while

  • @trantorgarde12013
    @trantorgarde12013 1 month ago +1

    So, it's becoming an average human developer 😁

  • @cadsticcadsticc1322
    @cadsticcadsticc1322 11 days ago

    The spelling in AI-created images is wonderfully inaccurate.

  • @hansa5867
    @hansa5867 1 month ago

    Just gonna pop in to say that I agree that it's been getting worse.

  • @rhettr4923
    @rhettr4923 2 months ago

    Yep, that's been my experience

  • @D7460N
    @D7460N 2 months ago

    This is exactly right! GPT-4o is TERRIBLE!

  • @clockwise7391
    @clockwise7391 1 month ago

    I noticed it now has the intelligence and reasoning of perhaps a sharp 12-year-old.

  • @humdingermusic23
    @humdingermusic23 1 month ago +2

    It's entropy: the more it learns, the more confused it gets.

  • @kevinigwilo3383
    @kevinigwilo3383 7 days ago

    I'm sure you have been paid to say this, even to the extent of indirectly mentioning an alternative. Because of money you spite someone's business. That's why I love my country and its organizations and companies: they would have immediately sued you for slander and defamation, because it's clear you are trying to sway people's minds from ChatGPT to Claude. Messed up, as if all AIs don't give incorrect answers sometimes; it's even clearly stated at the bottom. So you have no right to start comparing and damaging the company's image by attempting to sway users' choices. Messed up. I will unsubscribe from you for this wicked manipulation attempt, and I hope GPT takes this up and makes sure they shut down this account of yours, since you are collecting bribes. I will still be a strong fan of only GPT, no matter what you say.