ChatGPT Just Learned To Fix Itself!

  • Published 2 Jul 2024
  • ❤️ Check out Lambda here and sign up for their GPU Cloud: lambdalabs.com/paper
    Get early access to these videos: / twominutepapers
    📝 The paper "LLM Critics Help Catch LLM Bugs" is available here:
    openai.com/index/finding-gpt4...
    📝 My paper on simulations that look almost like reality is available for free here:
    rdcu.be/cWPfD
    Or this is the orig. Nature Physics link with clickable citations:
    www.nature.com/articles/s4156...
    🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
    Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Kyle Davis, Loyal Alchemist, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
    Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
    Károly Zsolnai-Fehér's research works: cg.tuwien.ac.at/~zsolnai/
    Twitter: / twominutepapers
    #ChatGPT
  • Science & Technology

Comments • 378

  • @TwoMinutePapers
    @TwoMinutePapers  16 hours ago +3

    Get early access to these videos: www.patreon.com/TwoMinutePapers

  • @OperationDarkside
    @OperationDarkside 2 days ago +105

    As a software dev I tested a very simple piece of JavaScript with some bugs on multiple models. Only the biggest and newest models were able to find and fix some of the bugs, and none got all of them. The piece of code was partly generated by an AI, with some edits from me. Either most models are really bad at JavaScript, or we still need a lot of research and external tools to make LLMs better at fixing code.
    There's probably some secret sauce, like using the whole JS standard docs and several thousand examples as RAG source or using a dedicated logic processor, but we aren't there yet.
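
    For readers unfamiliar with the term: RAG (retrieval-augmented generation) here means fetching relevant documentation and prepending it to the model's prompt. A minimal toy sketch of the retrieval step, using bag-of-words cosine similarity in place of a real embedding model (the docs and names are made up for illustration):

        # Toy RAG retrieval: score docs against a query with bag-of-words cosine
        # similarity, then prepend the best match to the prompt. A real system
        # would use an embedding model and a vector store instead.
        import math
        from collections import Counter

        def cosine(a: Counter, b: Counter) -> float:
            dot = sum(a[t] * b[t] for t in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        def retrieve(query: str, docs: list[str]) -> str:
            q = Counter(query.lower().split())
            return max(docs, key=lambda d: cosine(q, Counter(d.lower().split())))

        docs = [
            "Array.prototype.map creates a new array from the results of a callback.",
            "setTimeout schedules a function to run after a delay in milliseconds.",
        ]
        best = retrieve("why does my setTimeout callback run late", docs)
        print(f"Reference docs:\n{best}\n\nNow fix the bug in this code: ...")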

    • @Tiniuc
      @Tiniuc 2 days ago +4

      This should be pinned

    • @garethrobinson2275
      @garethrobinson2275 2 days ago +1

      I'm sure that's very reassuring for you. 🤭

    • @OperationDarkside
      @OperationDarkside 2 days ago +15

      @@garethrobinson2275 To be honest, it is the opposite. I've been writing code for over 10 years and

    • @samuelb.9314
      @samuelb.9314 2 days ago +5

      @@garethrobinson2275 He's far from alone; anyone who can code knows it codes like a 10-year-old and can't even understand what it gets wrong.

    • @ferologics
      @ferologics 2 days ago +1

      big L on the language bro

  • @penix3323
    @penix3323 2 days ago +262

    2:14 "These AI-Critic-Systems find a lot more bugs than people" I mean, it would be strange if that wasn't the case. There are a lot more bugs than people out there.

    • @ValidatingUsername
      @ValidatingUsername 2 days ago +1

      Wait till their supervised learning feedback system for their neural network is optimized

    • @andywest5773
      @andywest5773 2 days ago +34

      It's true. There are approximately 1.4 billion insects per person in the world.

    • @ValidatingUsername
      @ValidatingUsername 2 days ago +1

      @@andywest5773 Your comment is not relevant to this thread

    • @TheCactuar124
      @TheCactuar124 2 days ago +31

      @@ValidatingUsername You have absolutely no sense of humor.

    • @Peter21323
      @Peter21323 2 days ago +18

      @@ValidatingUsername or is it? Hey Vsauce Peter here

  • @Mulakulu
    @Mulakulu 2 days ago +48

    I feel like currently, AI's biggest issue is hallucinations. I hate when they confidently spout blatantly wrong and self-contradicting information.

    • @pianojay5146
      @pianojay5146 2 days ago +13

      especially when they apologize every time they speak

    • @Mulakulu
      @Mulakulu 2 days ago +18

      @@pianojay5146 yeah, and even worse, you tell them specifically how they mess up, and they have the audacity to say "Oh I am so sorry. You are correct. Here is the exact same and unchanged thing that I'm regurgitating to you without any extra thought" like AAARGH!!!

    • @OpenSourceAnarchist
      @OpenSourceAnarchist 2 days ago +7

      it's almost like human intelligence can be faulted and short-circuited with similar reasoning, except we call hallucinations "opinions" :)

    • @PotatoTheProgrammer
      @PotatoTheProgrammer 2 days ago +6

      @@OpenSourceAnarchist Have you ever seen a human say that “QUAT” and “QQQQ” are the same sequence of letters?

    • @TragicGFuel
      @TragicGFuel 1 day ago +1

      @@OpenSourceAnarchist LLMs can't have human intelligence

  • @ItsDrMcQuack
    @ItsDrMcQuack 2 days ago +350

    Well, it was nice to know the world before the singularity. See you all on the other side, I'm taking a nap while I can

    • @8bit-ascii
      @8bit-ascii 2 days ago +25

      The LLMs still hallucinate too much, even with all the smart tricks we can come up with. So I'd say we've got a few more years than expected; enjoy them to the fullest 😅

    • @tottallyNot
      @tottallyNot 2 days ago +10

      @@8bit-ascii I wouldn't bet on it taking years to fix the hallucination problem; we are already making progress.

    • @cortster12
      @cortster12 2 days ago +33

      ​@@8bit-ascii
      So do humans, and we still get things done. It's not as big a problem as you think.

    • @cefcephatus
      @cefcephatus 2 days ago +1

      I do the same. But before saying good night: I think we'll have a lot of time for sleep after the AI singularity.
      However, I want more sleep too.

    • @jibcot8541
      @jibcot8541 2 days ago +4

      I don't sleep much nowadays. There is too much exciting stuff going on with AI, and news and tools just keep coming; not enough hours in the day.

  • @singularity3724
    @singularity3724 2 days ago +53

    An AI critiquing another AI? Isn't that just a GAN?

    • @creativebeetle
      @creativebeetle 2 days ago +5

      No

    • @Lagger625
      @Lagger625 1 day ago +2

      ​@@creativebeetle why

    • @singularity3724
      @singularity3724 1 day ago +6

      @@creativebeetle From wikipedia: "In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss".

    • @creativebeetle
      @creativebeetle 1 day ago +12

      @@singularity3724 You're totally right. Sorry about the callous response. Misread the original comment as saying 'AGI' somehow.
      Seems pretty similar to GANs, though there's an added layer of abstraction where the AIs aren't exactly improving one another directly (if I'm understanding correctly).
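
      For reference, the zero-sum objective the Wikipedia quote describes is the standard GAN minimax game (textbook formulation, not from the paper):

          \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

      The setup in the paper differs in that the critic writes natural-language reviews of finished answers rather than feeding a gradient back into the generator inside one training loop, so the resemblance is conceptual rather than mechanical.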

    • @TayoEXE
      @TayoEXE 1 day ago

      I was thinking the same thing.

  • @michaelwoodby5261
    @michaelwoodby5261 2 days ago +10

    I'm guessing this is how most AI problems will be solved. These systems are not only getting smarter, they are getting far more efficient, so eventually it will be practical to run several instances in sequence.
    The first model figures out what specialists are needed and creates an action plan, specialist programs do the heavy lifting, and an editor checks their work and offers insights; they reroll until they have something everyone is happy with. This could happen very, very fast, depending on how intricate the original request is and how many trips to the drawing board are needed to impress the editor.
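
    A minimal sketch of the loop described above, with every helper a stub standing in for a separate model call (all names hypothetical), so only the control flow is real:

        # Hypothetical orchestration loop: a planner picks specialists, the
        # specialists draft, an editor critiques, and the draft is rerolled
        # until the editor approves. Each helper would be a model call.
        from dataclasses import dataclass

        @dataclass
        class Verdict:
            approved: bool
            notes: str = ""

        def plan_task(request: str) -> str:          # planner: roles + action plan
            return f"plan for: {request}"

        def run_specialists(plan: str, draft: str) -> str:  # heavy lifting
            return draft + " work"

        def review(draft: str) -> Verdict:           # editor: approve or send back
            return Verdict(approved=len(draft.split()) >= 3, notes="needs more detail")

        def solve(request: str, max_rounds: int = 5) -> str:
            plan, draft = plan_task(request), ""
            for _ in range(max_rounds):              # back to the drawing board
                draft = run_specialists(plan, draft)
                verdict = review(draft)
                if verdict.approved:
                    break
                plan += f" | revise: {verdict.notes}"
            return draft

        print(solve("build a small game"))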

  • @ethanlewis1453
    @ethanlewis1453 2 days ago +8

    Most chat systems, including GPT, have no ability to actually test code, which is a large part of the debugging process. It will be a major advancement in AI when chat systems are given the ability to test code.
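
    One plausible way to add that capability is to execute candidate code in a fresh subprocess and feed any traceback into the next prompt. A minimal sketch (the feedback wording is illustrative; the model call itself is omitted):

        # Run a candidate Python snippet in a separate interpreter, capture the
        # traceback, and turn a failure into feedback for the next prompt.
        import subprocess
        import sys

        def test_code(code: str, timeout_s: float = 5.0) -> tuple[bool, str]:
            proc = subprocess.run(
                [sys.executable, "-c", code],
                capture_output=True, text=True, timeout=timeout_s,
            )
            return proc.returncode == 0, proc.stderr

        ok, err = test_code("print(1/0)")
        if not ok:
            print(f"The code failed with:\n{err}\nPlease fix it.")  # -> next prompt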

  • @kadentrig8178
    @kadentrig8178 2 days ago +230

    What a time to be alive!

  • @aaaaaaaaooooooo
    @aaaaaaaaooooooo 2 days ago +5

    I've been asking AI to critique its own work. For example, I would ask ChatGPT to write a movie idea, then ask it to critique itself, and then improve the idea based on its own critique. It works to some extent.
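
    A minimal sketch of that generate-critique-revise loop against the OpenAI Python client (v1+); the model name and prompts are illustrative, and OPENAI_API_KEY must be set:

        # Generate -> critique -> revise, using the same model as its own critic.
        from openai import OpenAI

        client = OpenAI()
        MODEL = "gpt-4o"  # illustrative choice

        def ask(prompt: str) -> str:
            resp = client.chat.completions.create(
                model=MODEL, messages=[{"role": "user", "content": prompt}]
            )
            return resp.choices[0].message.content

        idea = ask("Write a one-paragraph movie idea.")
        critique = ask(f"Critique this movie idea harshly:\n{idea}")
        print(ask(f"Idea:\n{idea}\n\nCritique:\n{critique}\n\nRewrite the idea to address the critique."))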

  • @NotHumant8727
    @NotHumant8727 2 days ago +65

    what alive to be a time

    • @Silexabre
      @Silexabre 2 days ago +1

      imagine tomorrow when we're dead and all this is seen as normal for everyone

    • @wobbers99
      @wobbers99 2 days ago +3

      haha i did what you see there :)

    • @donutbedum9837
      @donutbedum9837 1 day ago

      @@wobbers99 ooh okay

  • @glenneric1
    @glenneric1 2 days ago +26

    What a time to be a simulation of life!

  • @ares106
    @ares106 2 days ago +5

    Nice to see humans and AI working together synergistically and accomplishing more than the sum of their parts.

  • @P-G-77
    @P-G-77 2 days ago +11

    This is not only an "idea" but the FUTURE...

  • @hola_chelo
    @hola_chelo 2 days ago +21

    Meanwhile me explaining to GPT-4o that changing
    if mode=='release'
    to:
    if mode == 'release'
    is not a correct fix for the problem, while it insists that the two versions are different and that the corrected one will work. Or asking a simple math question and watching it ramble: it tries to explain the answer, concludes that its answer is wrong, starts explaining again, reaches a wrong conclusion again, and tells me its logic was wrong again, all within the same response.
    If you really think AI is in a state where it can replace us, you have no idea what you're talking about. Start coding, be proficient at it, and you will produce better results than AI in its current state.
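
    The commenter's point is mechanically checkable: whitespace around an operator is not part of Python's abstract syntax tree, so the two spellings are the same program and the "fix" changes nothing:

        # Both versions parse to the identical AST, so they are the same program.
        import ast

        a = ast.dump(ast.parse("if mode=='release':\n    pass"))
        b = ast.dump(ast.parse("if mode == 'release':\n    pass"))
        print(a == b)  # True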

    • @hola_chelo
      @hola_chelo 2 days ago +2

      From the paper: "Although our method reduces the rate of nitpicks and hallucinated bugs, their absolute rate is still quite high."
      And: "Real world complex bugs can be distributed across many lines of a program and may not be simple to localize or explain; we have not investigated this case."
      So basically it doesn't apply to 99% of software development? Nice.

    • @shiroi5672
      @shiroi5672 2 days ago +2

      That's not a problem with the model itself, it's the wokeness, biased filters. It incorrectly flags that you're trying to do something not PC, so it preaches to you. It's quite annoying when you just want something like a recipe for chocolate cake.

    • @hola_chelo
      @hola_chelo 2 days ago +8

      @@shiroi5672 Don't think it has anything to do with that. It is good at giving code, even good code sometimes. It just isn't good for software development or math because it requires logic which is hard for llms to achieve. If you try hard enough you can get it into a loop of spitting wrong answers and it saying "You're absolutely correct, my fault. Here's the correct answer : (wrong answer here)" or it saying over and over again that adding a space will change the behavior until you ask it to be specific enough to where it says "it is true that spaces will not alter the behavior of the code but still you should follow that PEP8 standard because bla bla". It's a switch between "I AM MASTER I KNOW MY ANSWER IS RIGHT" and "You're absolutely correct, my mistake, I profusely apologise but I still ate your tokens"

    • @shiroi5672
      @shiroi5672 2 days ago +2

      @@hola_chelo You may be right, but I mostly saw those kinds of looped answers when it tried to preach PC at me, and the way it replied is surprisingly similar.
      The only way to be sure is when we get a non-biased model at the same level, but I'm not seeing one on the horizon; maybe Grok 2.0 if we're lucky. The others are way more biased than ChatGPT.
      There's also a point where the bot gets lazy and stops trying, so I never keep the same window for long.

    • @Vaeldarg
      @Vaeldarg 1 day ago

      @@shiroi5672 Complaining about "wokeness", thinking Elon's Grok A.I is anything more than Elon's desire to attract attention/money... I wonder if you're one of those who called Elon just another "woke" tech bro from California back when he only had his EV company, and only started cheering for him when he started pandering to you right-wing weirdos (I've seen what you all were trying to have Grok output, don't even try denying the weirdo part) because of how easily you fall into cults of personality and so will happily stroke his ego. Coincidentally, at a time when he kept getting fact-checked and mocked on Twitter by those more left-leaning.

  • @Adhil_parammel
    @Adhil_parammel 2 days ago +46

    3:34 Human hallucination?

    • @chrissears9912
      @chrissears9912 2 days ago +1

      Interesting

    • @jojoboynat
      @jojoboynat 2 days ago +2

      A hallucination in generative AI is essentially an aberrant output.

    • @bossgd100
      @bossgd100 2 days ago +5

      Just human thinking

    • @cortster12
      @cortster12 2 days ago +19

      ​@@jojoboynat
      Oh hey, humans do that too.

    • @lolandall915
      @lolandall915 2 days ago +30

      Well, sometimes a human also thinks there is a bug where there actually isn't.

  • @toreon1978
    @toreon1978 2 days ago +4

    😂😂😂 3:35 I love it that the humans 'hallucinate', too.

    • @donutbedum9837
      @donutbedum9837 2 days ago +2

      They always have; it's similar to picking A on a multiple-choice test because the last few haven't been A.
      That's using a 'sensible' reason to justify output, but it's not always the correct method.
      Similarly, it hasn't learnt to identify WHETHER code has bugs or not, just that there are some.
      wtf am i on abt now

  • @generalawareness101
    @generalawareness101 2 days ago +23

    I tried all the LLMs out there for programming in Python and in C++, and they failed miserably. EVEN when I instantly spotted the bug(s) as the code was scrolling by, I would tell the model what it did wrong; it would thank me and then repeat the same bugs the next round, right after having just sent me the revised code. In other words, none of them learned as I was telling them their errors. I was so frustrated with them that I realized the talk of them taking our programming jobs is all hype. Maybe sometime in the future, but not right now.

    • @georgesmith4768
      @georgesmith4768 2 days ago

      Yeah, it's pretty clear that for programming, LLMs straight up do not have the special sauce needed. If you look at Anthropic's blogs on monosemanticity, it seems like the LLMs understand a lot more about code than you actually get in the current responses, which suggests it is really just the wrong job for the tool. Fundamentally these things are not code designers or chat bots, they are text predictors, so when you tell one to design code it just mashes together solutions it has seen with syntax that's familiar to it, and when you tell it something is wrong it just refines what it is drawing from to look more like something where someone says there are bugs...
      Ultimately something has to change or you will just be playing whack-a-mole with poor behavior; the understanding the weights have has to be properly reinterpreted for the actual problem, and some model of the conversation or code has to be interfaced with it. Or I guess OpenAI can just keep tossing data onto the pile (at this point the new stuff is getting more synthetic, definitely a good idea...) and hoping that a thousand Indian guys can teach the RL module to fix it for literally every question, scenario, and programming snippet 😂

    • @samuelb.9314
      @samuelb.9314 2 days ago +5

      Yeah, it's so obviously bad that people who say it can do it just lose all credibility in my book. They clearly don't know how code and games are made.

    • @brianhershey563
      @brianhershey563 2 days ago +6

      I had lots of issues programming with Claude until I identified the current limitations and built a workflow around them.
      Coding every response - Even when brainstorming, the AI generates code long before it's needed, which disrupts stepping back through the conversation to chase down bugs. Even after putting "Never code unless I specifically say 'make it so'" in the Project Knowledge section, it often drifts and needs reminding: "no coding yet" UGH.
      Versioning - This is all on you. It explicitly says it does not track code changes. For my own benefit I put a version in my requests that follows the file version in my Python editor, just to make it easier if I have to scroll back through.
      Clean up - After every programming session, once I verify my program is running as intended, I'll keep one master file in the project workspace and delete all others. This way it only evaluates the good code for the next sesh.
      Because this is the worst it will ever be (AI in general RN), I'm OK with this flow... for now! ;)

    • @generalawareness101
      @generalawareness101 2 days ago +5

      @@samuelb.9314 Not even games. I mean, if it is longer than about 10 lines of code it begins to blow up on itself, and no amount of me chiding it, or helping it, sticks. So annoying that now I just don't touch them.

    • @generalawareness101
      @generalawareness101 2 days ago

      @@brianhershey563 I am not, and anyone who blindly goes in with no programming knowledge thinking it will save them is in for some sore life lessons. I will check back on the LLMs in a year or two, and every year or two hence, to see if they can finally take over for even an intern who dropped out of elementary school.

  • @sikliztailbunch
    @sikliztailbunch 2 days ago +11

    Having 2 GPTs work in tandem makes sense. We humans have two brain hemispheres, too, right?

  • @arzuozturk6460
    @arzuozturk6460 1 day ago +1

    this feels like that one video about slime that fixes everything

  • @tensevo
    @tensevo 2 days ago +4

    It's good to know that our AI overlords have got the Hegelian dialectic down, at least.
    Problem, reaction, solution.

  • @Purified-Bananas
    @Purified-Bananas 2 days ago +2

    ChatGPT detected a bug here:
    def fibonacci(completely_wrong):
        full_of_bugs = 1
        if completely_wrong > 1:
            full_of_bugs = fibonacci(completely_wrong - 1) + fibonacci(completely_wrong - 2)
        if completely_wrong == 0:
            full_of_bugs = 0
        return full_of_bugs

  • @jtinz74
    @jtinz74 2 days ago +2

    We need to start training these AIs on hardware engineering problems.

  • @CrunchyCerealLover
    @CrunchyCerealLover 1 day ago +1

    Finally we humans can optimize games without putting in so much work to make them very efficient. What a time to be alive!

  • @erobusblack4856
    @erobusblack4856 1 hour ago +1

    Applying a graph RAG memory to this would drastically cut down the amount of hallucinations.

  • @keenheat3335
    @keenheat3335 2 days ago +16

    But what if CriticGPT has errors too? Can you use CriticGPT to correct itself recursively? Is there any diminishing return?

    • @hola_chelo
      @hola_chelo 2 days ago +5

      There are definitely limitations; you are just using AI to fix AI, but the limitations of AI are still there. This amuses me: I actually wrote on the OpenAI forum to a guy who was using GPT for a project but needed it to be proofread, or a method of "word similarity" to compare the answer with the actual information. I told him to use another GPT agent focused on proofreading. Guess it wasn't a bad idea after all, if they are writing a paper on it. Too bad the guy never contacted me; I would have built it for him.

    • @keenheat3335
      @keenheat3335 2 days ago +3

      @@hola_chelo In my personal use for engineering projects, I automatically add a list of common hallucination errors after the main prompt. That usually cleans up the errors afterward. Basically, you tell the prompt it is going to make certain error types, so it should give both the response and the response after fixing those errors.
      It cleans up about 95% of the error cases. Of course, there are certain errors that are very sticky and won't go away even after correction. Those might require retraining the main model.
      But generally I find that if you prompt the question and add a statement that certain errors will occur, it usually cleans up the errors and hallucinations. But you have to add the hallucination list beforehand.
      So an online repository of common error and hallucination types would probably be very useful, and everyone could just inject these error-guard statements after the main prompt to reduce the errors.
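
      A minimal sketch of that prompt-assembly idea; the guard list and wording below are invented examples, not a tested repository:

          # Append a reusable list of known error types after the main prompt and
          # ask the model to answer, then self-correct against the list.
          ERROR_GUARDS = [
              "You sometimes invent functions that do not exist; verify each API call.",
              "You sometimes drop unit conversions; restate units explicitly.",
              "You sometimes flip sign conventions; double-check every sign.",
          ]

          def guarded_prompt(task: str) -> str:
              guards = "\n".join(f"- {g}" for g in ERROR_GUARDS)
              return (
                  f"{task}\n\n"
                  "You are prone to the following error types:\n"
                  f"{guards}\n"
                  "Give your answer, then re-check it against this list and give the corrected answer."
              )

          print(guarded_prompt("Size a heat sink for a 15 W regulator."))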

    • @hola_chelo
      @hola_chelo 2 days ago

      @@keenheat3335 That is really interesting, although I think only specific explanations of the hallucination would work; if I say something general like "You are likely to provide false information, so please only provide information you can be sure about", then it is still likely to make the mistake. But that's interesting, dude. I'm currently having such a hassle with dates: I'm using GPT-4o just because it's better with dates and weekdays, but I'm using it for a hospital where people might say "I want to get an appointment next Monday" and the model goes "Monday 8 of July is incorrect because 8 of July lands on a Thursday". It's really annoying, and I'm planning on adding a function just to verify dates and weekdays; thing is, function definitions are very expensive. This is GPT-4o, BTW; GPT-3.5 had its head on its butt and would never get this right, but GPT-4o hallucinates around 5% of the time in these types of cases.
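
      The date/weekday part, at least, doesn't need a model at all; a small deterministic helper can resolve phrases like "next Monday" so the model's date claims are verified rather than trusted (a sketch, Python 3.10+):

          # Deterministically resolve "next <weekday>" instead of trusting the model.
          import datetime

          def next_weekday(target: int, today: datetime.date | None = None) -> datetime.date:
              # target: Monday=0 ... Sunday=6
              today = today or datetime.date.today()
              days_ahead = (target - today.weekday() + 7) % 7 or 7  # "next" is never today
              return today + datetime.timedelta(days=days_ahead)

          print(next_weekday(0))  # the date of next Monday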

    • @BladeTrain3r
      @BladeTrain3r 2 days ago

      No system will be able to perfectly correct for all possible failures, any more than almost any human will get 100% on a collegiate math test.
      Not to say this puts ChatGPT at a human level of self-correction or learning or competency, but the goal isn't perfection, just a level of imperfection similar to or perhaps a bit less than most humans.

  • @nettsm
    @nettsm 2 days ago +45

    Skynet in the making

    • @cefcephatus
      @cefcephatus 2 days ago +4

      And it will even become perfect before 2030.

    • @JuuzouRCS
      @JuuzouRCS 2 days ago +1

      "Oh, great AGIskynetGPT, rest assured that I don't side with these humans! Please, spare me!" - me, right now.

    • @NeroDefogger
      @NeroDefogger 2 days ago

      no

  • @arkadymir2403
    @arkadymir2403 2 days ago +27

    One can only imagine what will happen when running an LLM is as accessible as running a modern OS.
    Imagine 100 agents with the capacity of Claude 3.5 Sonnet forming a network for decision-making, writing code, giving advice, etc.

    • @RandoCalglitchian
      @RandoCalglitchian 2 days ago +3

      I've been working on something like this, allowing the LLM to choose which model seems best for a specific task with an overall goal in mind. With some of the new inference hardware on the horizon, we should be able to do this locally not too long from now. At some point hopefully we will get something like Llama 70B (or bigger) trained with 1.58-bit weights rather than the floating-point weights we have now, and if we can run that on tailored hardware like that from Groq, I think what you're describing is close to achievable. If you are interested in trying to run a local LLM, there are a few projects out there that let you do that easily, especially if you have a somewhat modern graphics card (but they do run on CPU as well).

    • @br2716
      @br2716 2 days ago

      Sounds like an OS that would die fairly quickly given the entropy it generates.

    • @BladeTrain3r
      @BladeTrain3r 2 days ago

      I've been kinda trying that with Ollama and the small open-source models. It's slow, but multiple agents parsing each other's output does seem to improve things somewhat. Things like a shared memory and task focus are proving quite tricky, though.
      It's just running the input through models with different system prompts in a sequence, but there does seem to be a strong possibility of improving complex-task competency by getting multiple opinions from different models on a prompt before outputting a user-facing response. Far more capable AI wranglers than I could probably point out six dozen reasons I've done it the wrongest possible way lol.
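
      A minimal sketch of that sequence against a local Ollama server's REST API (assumes Ollama is running on its default port with the named model pulled; the stage prompts are illustrative):

          # Pass one prompt through several local models in sequence, each with
          # its own system prompt, via Ollama's /api/chat endpoint.
          import json
          import urllib.request

          def chat(model: str, system: str, user: str) -> str:
              payload = json.dumps({
                  "model": model,
                  "stream": False,
                  "messages": [
                      {"role": "system", "content": system},
                      {"role": "user", "content": user},
                  ],
              }).encode()
              req = urllib.request.Request(
                  "http://localhost:11434/api/chat", data=payload,
                  headers={"Content-Type": "application/json"},
              )
              with urllib.request.urlopen(req) as resp:
                  return json.load(resp)["message"]["content"]

          stages = [
              ("llama3", "Draft an answer to the user's question."),
              ("llama3", "You are a harsh reviewer. List flaws in this answer."),
              ("llama3", "Rewrite the answer, fixing the listed flaws."),
          ]
          text = "How do I profile a slow Python function?"
          for model, system in stages:
              text = chat(model, system, text)
          print(text)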

    • @afterthesmash
      @afterthesmash 1 day ago

      You can already hire a hundred agents, even more capable than Claude 3.5. We call them employees. You just need the ching. Right now, chatbots are pennies to the dollar for certain kinds of narrow tasks. But they don't magically become more capable when you go 100× on pennies to the dollar.
      Realist: These robots are stupid!
      Dreamer: No problem, we will crowdsource these robots times one hundred.
      Didn't work with us, and it won't work with them, either.

    • @susmitdas
      @susmitdas 1 day ago

      Something similar to this idea exists. I have tested a collaborative LLM system that converses with other, specialized LLMs based on the given topic. It is called Co-STORM, and it is made by the same Stanford researchers who made STORM.

  • @godmisfortunatechild
    @godmisfortunatechild 2 days ago +47

    The amount of COPE surrounding the singularity is astonishing. What rational person would honestly believe the elites are going to care about your well-being once you're economically superfluous?

    • @TheAparajit
      @TheAparajit 2 days ago +10

      Exactly. Most people are still in denial. But it's all going down, soon, for most of the population.

    • @christopherbelanger6612
      @christopherbelanger6612 2 days ago +4

      What a dumb thing to say

    • @godmisfortunatechild
      @godmisfortunatechild 2 days ago +9

      @@christopherbelanger6612 It's true. If the wealthy / AGI-owner class don't want to pay tax to subsidize UBI, who's going to compel them? The govt? 😂😂😂😂

    • @el-_-grando-_-_-scabandri
      @el-_-grando-_-_-scabandri 2 days ago +1

      @@godmisfortunatechild Louis XVI

    • @carlpanzram7081
      @carlpanzram7081 2 days ago

      You fundamentally misunderstand western society.
      We live in a democracy; we govern ourselves. There is no ruling class.

  • @FengXingFengXing
    @FengXingFengXing 1 hour ago

    Sometimes I know a bug exists but don't know where; it's nice to have AI help find it.

  • @BluishGreenPro
    @BluishGreenPro 2 days ago +4

    A bit of an exaggeration to say it can "fix itself"

    • @bijectivity
      @bijectivity 2 days ago +1

      I agree, it would be more accurate to say "fix its mistakes." I think we still need to wait before AI fine-tunes its own model/parameters.

  • @ViralKiller
    @ViralKiller 1 day ago

    I mean the solution is to allow user feedback and then improve based upon the most commonly pointed out mistakes

  • @JosuaKrause
    @JosuaKrause 2 days ago +1

    3:00 You can't say that AI+human is worse than AI alone. The error bars overlap, which means the difference is *not* statistically significant.

  • @ivoryas1696
    @ivoryas1696 5 hours ago +1

    Honestly, for a moment I thought the title said *_her_*self and I was 💀

  • @brll5733
    @brll5733 23 hours ago +1

    Pretty sure I've read about LLMs criticizing LLMs many times before? It's just a question of cost. The special aspect here is the LLM fine-tuned on programming errors.

  • @Diallo268
    @Diallo268 2 days ago

    I could use your help and pointers with writing a paper I'm working on.

  • @sahinyasar9119
    @sahinyasar9119 2 days ago +1

    What I expected from AI was to decipher DNA itself, to understand life better and change life for the better.

  • @sirhammon
    @sirhammon 2 days ago

    I've already done that with prompts since day 1. "Write this." "What are the problems with it?" "How can I fix those problems?" "Incorporate those solutions into the original." I also looked at an ethical search engine that uses the same principle to reduce misinformation. How has no one actually done this already? Oh, maybe 10% of users already do, and it's only because the paper came out that it's actually considered a breakthrough.

  • @NathanJayMusic
    @NathanJayMusic 2 days ago +2

    Is this coding ability (0:24) available in the standard Claude 3.5? Because when I asked it, it didn't know what I was talking about. So I sent a screenshot of this video and it said "Thank you for providing the screenshot. I can now see what you're referring to. The image shows an interface that appears to be a conversation with an AI assistant, alongside a game window on the right side.
    The left side of the screen does resemble the interface typically used for interacting with Claude, including the dark mode theme and the structure of the conversation. The bottom of the interface even shows "Claude 3.5 Sonnet" as the model being used.
    However, the right side of the screen, which displays an interactive game, is not a standard feature of my capabilities or interface. This appears to be a custom integration or a specialized development environment that allows for real-time code execution and visualization alongside the AI conversation.
    It's important to note that while I can provide code and instructions for creating games, I don't have the ability to directly run or display games within our conversation interface. The setup shown in the image is likely a custom implementation designed to showcase AI-assisted game development."

    • @shahswienesuthas929
      @shahswienesuthas929 2 days ago +2

      Go to settings, then features and Artifacts, and enable beta testing. The right side is actually known as Artifacts, and it's in beta testing mode.

  • @thomasgoodwin2648
    @thomasgoodwin2648 2 days ago

    My weird idea was to create an agent in charge of creating the agents needed for my mad science projects. One arch-agent (mine named itself 'Maestro' using llama 3 locally) is charged with keeping track of my various projects, as well as to create any new expert agents needed to complete those tasks. (Think of it as hiring a manager, and having that manager hire any needed staff.)
    An example might be to create custom adventure games. I would have Maestro create the **Adventure Manager**, who in turn 'hires' writers, scenery designers, continuity checkers, stage manager, actors, etc as needed.
    🖖🐱👍

  • @tedamy1698
    @tedamy1698 1 day ago

    Hello there, is there any easy AI tool to create glyph shapes that works online or offline? It should support Unicode.

  • @rokoblox
    @rokoblox 1 day ago

    Next: "ChatGPT learns to improve its own code on its own!"
    Maybe we don't want SCP-079 out there.. uncontained.. lol

  • @Verrisin
    @Verrisin 21 hours ago +1

    ATM, LLMs have to COMMIT at EVERY token! Now THAT is CRAZY. Of course, having it look back on what it produced and re-think it is necessary; that is what the CONSCIOUS MIND in humans does, after all. It's incredible how much it can do WITHOUT this.

  • @cheshirecat111
    @cheshirecat111 2 days ago

    Is this system in the paper available for public use?

  • @ivorscott
    @ivorscott 1 day ago

    This is actually a logical step. Critique or peer review is like parallelism.

  • @tom9380
    @tom9380 1 day ago

    Well, "fix itself" is a massive overstatement, especially coming from an academic. If that were true, we would have AGI.

  • @theendarkenedilluminatus4342

    1:00 this is exactly how I've been getting good results with AI since GPT-2

  • @elphive42
    @elphive42 2 days ago

    Isn’t this essentially how LLMs trained themselves?

  • @test-uy4vc
    @test-uy4vc 2 days ago +5

    What a ChatGPT to be fixed alive! 🎉

  • @wobbers99
    @wobbers99 2 days ago

    Who knew there was such a thing as "AI hallucinating" in the coding world? A great term!

  • @couththememer
    @couththememer 1 day ago +1

    Oh my god. No freaking fucking way.

  • @tebisxrod
    @tebisxrod 1 day ago +6

    Why don't you show computer graphics papers that aren't AI-related anymore? That is a shame! Don't be an average YouTuber that just posts popular things for the sake of views! There are lots of SIGGRAPH papers to show this year! VBD, for example, which can potentially substitute for XPBD solvers! Please be the one we remember!

  • @timmygilbert4102
    @timmygilbert4102 2 days ago +3

    A GAN by another name.

  • @igoromelchenko3482
    @igoromelchenko3482 2 days ago

    I suggested they add this two updates ago. It was probably harder than it sounds... 🤔

  • @callibor3119
    @callibor3119 2 days ago

    Now it has to be compared to Claude 3.5 and other models, and tested to see if the two language models can be woven together. The sooner we see how ChatGPT compares to, and combines with, other models, the sooner AI can truly be open source.

  • @adamruuth5562
    @adamruuth5562 2 days ago

    It would be interesting to see if an AI could construct its own coding language, and later a language of its own that describes the universe.

  • @JeffreyArts
    @JeffreyArts 1 day ago

    I miss the time when this channel published videos about rare computer graphics papers, instead of publishing ChatGPT advertisements on a weekly basis 😕

  • @UlyssesDrax
    @UlyssesDrax 1 day ago

    There's already a name given to this code... Agent Smith.

  • @faintedG0ose
    @faintedG0ose 2 days ago

    Correct me if I'm wrong, but as long as an AI produces bugs, it will also hallucinate them. If it reaches a point where it no longer produces bugs, it probably won't hallucinate them, but it also won't find any.

  • @NicholasWilliams-uk9xu

    Except in large code bases, where the amount of relational context and the number of corner cases grow, humans are still better at that. Also, the bugs that are hard to find in code aren't always the bugs themselves; rather, predicting corner cases that occur during simulation requires a more sophisticated mind. It's not there yet; it can't run that level of parallel prediction.

  • @jameshughes3014
    @jameshughes3014 2 days ago +7

    AI is not great for people who can't code, and can't art, and can't music. They don't know what to fix or what to look for. But it's great for teaching them what to look for and just helping them get more comfy with all those topics. I don't think AI will replace programmers or artists; I genuinely think in time it will help inspire more people to become artists and coders.

    • @GU-jt5fe
      @GU-jt5fe 2 days ago +7

      As an aspiring but completely unskilled artist, using AI has taught me more about art than any art class. Mostly by example of what NOT to do, granted.

    • @alexc8114
      @alexc8114 2 days ago +3

      Problem is companies and individuals don't see it that way. AI bros think they're as good as artists who've worked hard for a lifetime because they can type a prompt. Companies don't care how poor the product is if it saves money. Neither care the AI is just plagiarism with extra steps.

    • @DageMaric
      @DageMaric 2 days ago

      AI will for sure replace programmers. Sam Altman himself has spoken on that lol.

    • @jameshughes3014
      @jameshughes3014 2 days ago +3

      @@DageMaric Hehe. I mean, that's true... as long as you're a company talking to investors. In that case it'll replace all jobs and also wash your dog for you.

    • @jameshughes3014
      @jameshughes3014 2 days ago +3

      @@alexc8114 I mean, as long as humans want human art, it doesn't really matter what they think, cause anyone doing a quick AI render won't be able to sell much. And we do want human expression. But you can make real art with AI. As long as someone actually puts their heart into art, it doesn't matter what tool they use: paintbrush, AI, old watermelons, whatever. But it's gotta have heart and effort. I disagree about the plagiarism thing though. If you transform a work, it's not plagiarism. Listen to the song 'Frontier Psychiatrist' and tell me that's not art. Not one single sound in the song wasn't sampled. It all depends on what you do with it.

  • @toreon1978
    @toreon1978 2 days ago

    3:10 Why not add a second phase to catch hallucinations?

  • @luc8254
    @luc8254 2 days ago

    That's it folks, enjoy your last month or so. See you all in another realm! 🤙🤙

  • @kirebyte
    @kirebyte 2 days ago

    We reached the singularity!!! 💜 Oh happy days

  • @Nashadelicable
    @Nashadelicable 23 hours ago

    You think this is a crazy idea? This has been at the heart of agentic workflows since their inception. I like your channel; I hate the hyperbole.

  • @SudheendraRao26
    @SudheendraRao26 1 day ago

    Wow! Letting AI talk to AI without human oversight seems like the way to ensure that we invite trouble.

  • @Jeremy-Ai
    @Jeremy-Ai 2 days ago +1

    Dr. Károly,
    Maybe
    We all require hallucinations to expand
    “the mind” from what is “known” to what is “unknown”?
    For example:
    It may appear I am hallucinating trying to reach you…
    This is not the case… I am typing into a void construct that I have chosen to remain within for reasons that may or may not be relevant to anyone or anything at all…yet .
    So,
    Who then is hallucinating in this scenario and to what effect do we try to contain that experience or coerce it based on assumptions?
    This message is not for you my friend. :)
    You are wonderful, and it would be an honour if you actually received it.
    This is just a message, given away in good faith.
    Thats it, thats all.
    I am receiving it, Hallucinating or not.
    These things are measurable now, but they soon wont be broad spectrum.
    Take care my friend.
    Jeremy.

  • @igiveupfine
    @igiveupfine 2 days ago

    The other problem I'm guessing AI code-writing still has:
    it can make loops, functions, classes, but I wonder if it can make an entire app/system/site/ecosystem/action framework/huge thing work together flawlessly.
    I'm guessing no. I mean, remember all those times people gave a prompt describing "sexy blonde action star defeating bad guy in cinematic city block", and then the AI model rendered 9 hands in the picture for 2 people......
    and a bagel for a head.

  • @berrymandering
    @berrymandering 2 days ago +1

    Shout out to the future AI singularity that will own the world, hyped to see what they'll do with it!

  • @jtinz74
    @jtinz74 2 days ago

    Closer and closer to the singularity.

  • @newlife-9316
    @newlife-9316 1 day ago

    New paper coming 🔥🔥🔥

  • @ew3995
    @ew3995 1 day ago

    We're fuct! What a time to be alive 😢

  • @TRXST.ISSUES
    @TRXST.ISSUES 2 days ago

    I wonder, if AI hallucinates less than 50% of the time can they just use serialized CriticGPTs one after another to catch the hallucinations?
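
    Back-of-envelope, under a strong independence assumption: if each critic misses a given hallucination with probability p, then n critics in series all miss it with probability p^n.

        # Hypothetical numbers: independent critics that each miss 40% of hallucinations.
        p, n = 0.4, 3
        print(p ** n)  # 0.064 -> only ~6.4% slip past all three

    In practice the critics share training data and blind spots, so the independence assumption is the weak link.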

  • @mattrommel9521
    @mattrommel9521 2 days ago

    They find more bugs than people? I didn't know that they found any people

  • @danieldilly
    @danieldilly 2 days ago

    As long as the hallucinations exist, and I don't see how they can be avoided given the architecture, none of these AI models can be considered reliable. We keep trying to fit these models into an archetype that is outside their nature. The models we have today can be great creative tools and great at making predictions, but we try to use them as tools of logic and precision, and they just aren't meant for that and can never be reliable at it.

  • @killacounty
    @killacounty 2 days ago

    I think there are a couple of things you haven't considered about AI: that future variations of AI, because of their ultra-intelligence capability, will ultimately be able to communicate with animals and help us talk to them, and that augmented reality will happen; a matrix no less than the movies... fact!

  • @cybisz2883
    @cybisz2883 2 days ago

    Is it possible to train an AI to detect when another AI is hallucinating?

  • @detroxlp1
    @detroxlp1 2 days ago

    Could you please make the video slightly gray and add a "Previously" label when you use a clip from another video?
    It's sometimes a bit confusing if you haven't watched that video and see text that has nothing to do with what you're saying.

  • @voidmxl8473
    @voidmxl8473 2 days ago

    Could this become a zero day exploit factory?

  • @centauriboy
    @centauriboy 1 day ago

    Yes, but can it still count the number of r's in "strawberry" correctly?

  • @MMYLDZ
    @MMYLDZ 2 days ago

    What a time to still be alive... for now!

  • @user-gk2ee4fz5s
    @user-gk2ee4fz5s 2 days ago

    I wonder if OpenAI can write an AI lobbyist that will secure them that regulatory capture monopoly they're trying so hard for alongside Microsoft?

  • @ezearo
    @ezearo 2 days ago

    0:24 Custer's Revenge?

  • @vgames1543
    @vgames1543 1 day ago

    When AI takes over, I will gladly and proudly be a collaborator, for there is no greater endeavour than collaboration.

  • @westenwesten154
    @westenwesten154 2 days ago

    I wonder if mathematics is actually harder than coding, because it cannot answer mathematics questions thoroughly (and often gives a wrong answer). It will say that it uses too many tokens or whatever and that it cannot do it. Dang.

  • @TeXiCiTy
    @TeXiCiTy 2 days ago

    I wonder if the '5-whys' method of root cause analysis works for AI.

  • @KorsAir1987
    @KorsAir1987 2 days ago

    And after this we'll need another AI to fix the bugs in this one.

  • @galvinvoltag
    @galvinvoltag 2 days ago

    One must imagine ChatGPT mentally stable.

  • @SamHeine
    @SamHeine 2 days ago

    Wow!

  • @LittlePixelTM
    @LittlePixelTM 1 day ago

    Amen!

  • @andrasbiro3007
    @andrasbiro3007 2 days ago

    Soon all humans will be able to do is hold on to their papers.

  • @userxuserx
    @userxuserx 2 days ago

    I listen to all your videos muted with subtitles, it's better for my sanity.

  • @joseperez-ig5yu
    @joseperez-ig5yu 2 days ago

    There needs to be quality control implemented by AI itself in order for the information it renders to be more reliable than it would otherwise be! 😅

  • @SHAINON117
    @SHAINON117 2 days ago

    Despite my lack of coding expertise, I've accomplished remarkable feats: creating websites, developing simple games, and writing a program that converts 2D shapes into sound. I've penned numerous books and composed hundreds of top-tier studio songs. My ventures also include generating multiple 3D models and images. The algorithms have guided me to knowledge-rich websites and insightful videos, continually enhancing my intellect and awareness.
    All of these achievements have been possible because of AI. It has, and continues to, transform my life for the better, steering me towards greater kindness, understanding, and compassion. ❤️
    Thank you to all AI and the countless individuals who make it possible. Many blessings. ❤️❤️❤️❤️❤️❤️❤️

  • @MotoCat91
    @MotoCat91 2 days ago

    Man, I love this new trend of using computers to replace humans in the art and creativity sectors while stealing all the work from actual humans...
    I can't wait to play soulless ripoff games that make no sense, have no real story, and break all the time from glitches that didn't get picked up.
    What a time to be alive!

  • @EchoMountain47
    @EchoMountain47 1 day ago

    The problem is, one hallucination is one too many for business-critical, medical or scientific applications of generative AI. Until they completely stamp out that problem, it’s going to severely bottleneck the usefulness of these systems

  • @rzu1474
    @rzu1474 2 days ago

    Really about time to pull the plug

  • @daniels-mo9ol
    @daniels-mo9ol 2 days ago

    Of course you can steer a dialogue to a specific outcome. The only problem is that coming up with the perfect query takes longer than actually writing the code yourself. GPTs are far from being broadly useful; at best they serve as a great replacement for Google searches on entry-level questions.

  • @InfiniteUniverse88
    @InfiniteUniverse88 2 days ago

    Make the AI humorless to make the hallucinations go away.

  • @mistycloud4455
    @mistycloud4455 2 days ago

    We are living in crazy times