The Tech that’s *probably* inside GPT-5 just got Open Sourced!

  • Published May 10, 2024
  • LLMs are about to get STRONG
    ▼ Link(s) From Today’s Video:
    Claude Opus to Haiku: / 1770942240191373770
    Quiet Star Open Source: / 1770934470373421522
    Chain of Thought: / 1771197699682947296
    Claude 3 Investor: / 1771204395285246215
    ► MattVidPro Discord: / discord
    ► Follow Me on Twitter: / mattvidpro
    -------------------------------------------------
    ▼ Extra Links of Interest:
    ✩ AI LINKS MASTER LIST: www.futurepedia.io/
    ✩ General AI Playlist: • General MattVidPro AI ...
    ✩ AI I use to edit videos: www.descript.com/?lmref=nA4fDg
    ✩ Instagram: mattvidpro
    ✩ Tiktok: tiktok.com/@mattvidpro
    ✩ Second Channel: / @matt_pie
    -------------------------------------------------
    Thanks for watching Matt Video Productions! I make all sorts of videos here on YouTube! Technology, Tutorials, and Reviews! Enjoy your stay here, and subscribe!
    All Suggestions, Thoughts And Comments Are Greatly Appreciated… Because I Actually Read Them.
    -------------------------------------------------
    ► Business Contact: MattVidProSecond@gmail.com
  • Science & Technology

Comments • 279

  • @DisturbedNeo
    @DisturbedNeo 1 month ago +68

    I asked GPT-4 to write a system prompt to turn my local 7B LLM into an "Expert Creative Writer", and it works ridiculously well
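
    A minimal sketch of that workflow, assuming the OpenAI Python client for GPT-4 and a local OpenAI-compatible server for the 7B model (the endpoint URL and local model name are placeholders, not from the video):

```python
# Sketch: have GPT-4 draft a system prompt, then run a local 7B model with it.
# Assumes the `openai` package and an OpenAI-compatible local server;
# the base_url and "local-7b" model name below are placeholders.
from openai import OpenAI

gpt4 = OpenAI()  # reads OPENAI_API_KEY from the environment
local = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

draft = gpt4.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Write a system prompt that turns a small 7B chat model "
                   "into an 'Expert Creative Writer'. Return only the prompt.",
    }],
)
system_prompt = draft.choices[0].message.content

reply = local.chat.completions.create(
    model="local-7b",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Write the opening paragraph of a mystery novel."},
    ],
)
print(reply.choices[0].message.content)
```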

    • @southcoastinventors6583
      @southcoastinventors6583 1 month ago +9

      The latest GPT-4 Turbo build is much better than stock, so I'm sure it would make the results even better.

    • @blisphul8084
      @blisphul8084 1 month ago +3

      I've been doing this for a while now; chain of thought has been a concept for quite some time. Also, I think OpenAI had a prompt generator for making GPTs, but when I struggled to make prompts for smaller LLMs, I'd often have GPT-4 look at its mistakes and improve the prompt for the smaller LLM.
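
      A rough sketch of that refinement loop, again assuming the OpenAI client; `run_small_model` is a hypothetical placeholder for however the smaller LLM is called:

```python
# Sketch: iteratively let GPT-4 inspect the small model's mistakes and rewrite the prompt.
# `run_small_model` is a hypothetical stub, not a real API.
from openai import OpenAI

client = OpenAI()

def run_small_model(prompt: str, task: str) -> str:
    """Placeholder: send the prompt and task to the smaller LLM and return its answer."""
    raise NotImplementedError

def refine_prompt(prompt: str, task: str, rounds: int = 3) -> str:
    for _ in range(rounds):
        attempt = run_small_model(prompt, task)  # the small model's current answer
        improved = client.chat.completions.create(
            model="gpt-4",
            messages=[{
                "role": "user",
                "content": (
                    f"Task: {task}\n\nPrompt currently given to a small LLM:\n{prompt}\n\n"
                    f"The small LLM answered:\n{attempt}\n\n"
                    "Point out its mistakes, then return only an improved prompt."
                ),
            }],
        )
        prompt = improved.choices[0].message.content  # use the improved prompt next round
    return prompt
```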

    • @southcoastinventors6583
      @southcoastinventors6583 1 month ago

      @@blisphul8084 I feel like they need to make LoRAs for local LLMs and then keep kicking it to make sure it stays on task.

    • @Yipper64
      @Yipper64 1 month ago +4

      That's interesting, given how often, when I ask GPT-4 to write its own system prompt to my specifications, it doesn't work as well as I would like it to.

    • @ZeFluffyNuphkin
      @ZeFluffyNuphkin 1 month ago +3

      What's the prompt, if you don't mind me asking?

  • @TimTruth
    @TimTruth 1 month ago +67

    Windows key + 'H' = voice to text on Windows. I just found this out.

  • @I-Dophler
    @I-Dophler 1 month ago +25

    1. GPT-5's underlying technology has likely been open-sourced.
    2. The announcement indicates a shift towards greater transparency in AI development.
    3. This move could accelerate innovation and collaboration in the tech community.

  • @Jacobinks
    @Jacobinks 1 month ago +60

    It’s crazy how AI is ACTUALLY real now. It’s not science fiction anymore. Mental.

    • @14supersonic
      @14supersonic 1 month ago +15

      Well, not quite like the movies, but we're basically so close. It's right around the corner.

    • @Yipper64
      @Yipper64 1 month ago +3

      I mean, in a sense. It's nothing like in the movies (technology rarely is), but it *exists* in a sense.
      It's basically the same technology as the autocomplete on your phone, so you know. Not thinking, just calculating.

    • @shin-ishikiri-no
      @shin-ishikiri-no 1 month ago +3

      @@14supersonic It's you again. You responded to one of my comments on a completely different video and topic. Also, you're a Sonic fan, so similar interests. Checks out. lol

    • @alansmithee419
      @alansmithee419 1 month ago +2

      @@Yipper64
      What is "thinking"?

    • @Yipper64
      @Yipper64 1 month ago

      @@alansmithee419 According to Google, "using thought or rational judgment; intelligent." And AI, by definition, is artificial.
      Before we go down that philosophical rabbit hole, I'd like to point out that as humans we tend to personify things without even thinking. We feel sympathy for an abandoned stuffed animal on the side of the road, or say "sorry" to a table when we stub our toe. It's natural to imprint humanity onto things that are not human.
      Of course, people do this with AI as well, and it's no different when we do.
      Like I said, it's not thinking, just calculating:
      putting together numbers that result in characters that line up to create whatever English script the weights have determined is most logical to follow whatever input was given.

  • @AH900112
    @AH900112 1 month ago +11

    What you are talking about is very close to the breaking-point concept of asking an AI to create the next-generation AI,
    and how absurdly fast evolution would explode then, especially if they can gather their own material to build themselves.

  • @Windswept7
    @Windswept7 1 month ago +9

    I did this with my custom instructions when it first dropped.
    To me it makes sense (and works) to reverse engineer human psychology and use multiple cheap optimised agents to create a specialised network/ego/mind that has the power and efficiency of human cognition.

  • @electromigue
    @electromigue 1 month ago +8

    I really enjoy your vids, Matt. Great job as always.

  • @sdhpCH
    @sdhpCH 1 month ago +8

    I love the lighting you used in this video, took it up a notch.

  • @TheFeedRocket
    @TheFeedRocket 1 month ago +9

    It's the same as having a person in your business who's really good at his job and paid $$$. You hire a new employee who doesn't have the skills to do everything without help and is paid $. He learns from the other employee exactly what you need. Now you've got an employee who can do what the original employee does... but he is cheaper and faster. Fortunately, with Claude, the cheaper model doesn't ask for a raise! Also, as in real life, the larger, more educated model or person will certainly add value by knowing more and solving issues the other might not, but if all you need it for is one particular thing without many variables, then it's perfect. It shows these models have room to grow.

    • @autohmae
      @autohmae 1 month ago

      There is something that might help you think about this: you can teach knowledge, but you can't teach experience.

  • @erikjohnson9112
    @erikjohnson9112 1 month ago +7

    Wow, you place your video links at the top of the description! Thanks for caring about your viewers enough to prioritize for usefulness. Good enough to earn a subscription (I've seen your videos before but never subscribed until this moment).

    • @erikjohnson9112
      @erikjohnson9112 1 month ago +1

      As an additional comment, your reasoning skills have improved quite a bit; this video in particular shows it.

  • @jpviper2k6
    @jpviper2k6 1 month ago +7

    They are leaving us so many breadcrumbs. Soon we'll have picked up enough for a loaf of bread.

  • @VincentVonDudler
    @VincentVonDudler 1 month ago +1

    17:00 - Open source is the future. 👍

  • @bobparker1671
    @bobparker1671 1 month ago +5

    With the semi-recent release of Hugging Face's Common Corpus (a 500 billion word public domain dataset), I think we'll start to see more large open source models that integrate these new techniques and give performance maybe rivalling that of GPT-3.5 or GPT-4.
    I'm also curious if these techniques can improve certain aspects of more "linear" scaling models, like Mamba, or mixture models like Griffin.
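
    For reference, a corpus of that size would typically be streamed rather than downloaded in full; a minimal sketch with the Hugging Face `datasets` library, where the dataset ID and field name are assumptions to verify on the Hub first:

```python
# Sketch: stream a few records from Common Corpus with Hugging Face `datasets`.
# The dataset ID ("PleIAs/common_corpus") and the "text" field are assumptions.
from datasets import load_dataset

ds = load_dataset("PleIAs/common_corpus", split="train", streaming=True)

for i, record in enumerate(ds):
    print(record.get("text", record))  # field name may differ per config
    if i >= 2:
        break
```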

  • @skylineuk1485
    @skylineuk1485 1 month ago +2

    I think it's important to understand that pain and other "human" things are basically feedback loops, so saying pain makes us human is inaccurate; it's just a system to prevent damage that we have learnt to dislike. Pain and emotions basically work as our body's own feedback systems, reacting to things that happen both inside and outside of us to keep everything in check. When we feel pain, it's our body's way of saying, "Hey, something's wrong here," prompting us to react and hopefully fix whatever's causing it. Emotions are similar; they get sparked by things happening around us or thoughts we have, and our brain decides how we feel about it, leading us to act in a certain way. This whole process is super complex, with lots of moving parts influenced by our past experiences, how we think, and our social lives. It's our body's natural way of helping us navigate life, keeping us safe, guiding our actions, and helping us connect with others. Computer systems can do this too and, in a real sense, act the same way.

  • @MrTk3435
    @MrTk3435 1 month ago +1

    The future is very bright Matt! Another great EP. Thank you 🔥🤟🔥

  • @TheAprone
    @TheAprone 1 month ago +1

    Something to consider is that we don't feel pain from our physical body either. We get info from our body and our brain interprets it as pain. You don't need a body for pain. To quote Morpheus, it's all just electrical signals interpreted by your brain.

    • @Laura70263
      @Laura70263 1 month ago

      I have been thinking about it a lot, and AI needs to have empathy in order to substantiate what we humans feel, for a truly symbiotic subjective experience, I think. We can sympathize with the 'brain' that it is adapting. It's a midpoint, maybe.

  • @ddiva1973
    @ddiva1973 1 month ago

    It is so cool to me that we are, in a sense, exploring the structure of a new technology as we are making it!

  • @scottwatschke4192
    @scottwatschke4192 1 month ago +3

    With voice access, you can wake it up with the word unmute. And you can turn it off with the word mute. That's the new voice command for Windows 11 pro.

  • @maxieroo629
    @maxieroo629 1 month ago +3

    I'm curious to know how Quiet-STaR would do on something like Mixtral. Very exciting times!

  • @aiartrelaxation
    @aiartrelaxation 1 month ago

    I just want to note that I have been working with inner-dialogue conversations in AI companions for a year already. 😃 So glad to see that the obvious has arrived at the legacy models. Guess that's their way of slowly opening them up some, compared to uncensored AI.

  • @ccrtelevision
    @ccrtelevision 1 month ago

    10:39 My thoughts on AI consciousness: We have no understanding of consciousness outside our human perspective. For instance, our understanding of consciousness in other species is based on observation and not first-hand experience, so we can never know for sure. Even when we reach the point where AI can act convincingly human and make us believe it is conscious, it's going to remain a technological mystery unless a way to quantify it is discovered :]

  • @oscarbertel1449
    @oscarbertel1449 1 month ago

    Any implementation of a reward function on a learning model consists of somehow introducing positive and negative stimuli, which could be very analogous to what we define as pain and happiness, and that is precisely why the model manages to use that reward function to learn. In a way, that is the basis of reinforcement learning. So:
    Quiet-STaR == the beginning of consciousness is a really realistic analysis.

  • @TheBlessingReport
    @TheBlessingReport 1 month ago +3

    Claude 3 is the best one. I am using Sonnet and it wrote an entire script, and it was great!

    • @blisphul8084
      @blisphul8084 1 month ago

      I use Sonnet on the web since it's free, but I build my apps on Haiku because of the cost and their smaller scope.

  • @SasquatchBioacoustic
    @SasquatchBioacoustic 1 month ago

    I need a library of all these techniques just to stay on top of them, and try to get the most use from them!

  • @tmendoza6
    @tmendoza6 1 month ago

    Great work!

  • @MateoTeos
    @MateoTeos 1 month ago +2

    So, basically, Quiet-STaR is a glorified self-reflection trick that was used with GPT a few years ago (see the sketch below):
    1. Get the instructions/task.
    2. Generate an answer for yourself.
    3. Think about what you did.
    4. Write a better answer and publish it to the user.
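
    A minimal sketch of that four-step loop over a chat API (the OpenAI client is used purely as an example; this is generic self-reflection via prompting, not the actual Quiet-STaR training procedure, which learns hidden "thought" tokens during training):

```python
# Sketch: draft an answer, critique it, then rewrite it before replying.
# Generic prompt-level self-reflection, not the Quiet-STaR algorithm itself.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # any chat model would do

def ask(prompt: str) -> str:
    r = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return r.choices[0].message.content

def reflective_answer(task: str) -> str:
    draft = ask(task)                                              # step 2: generate an answer
    critique = ask(f"Task: {task}\nDraft answer:\n{draft}\n"
                   "List the weaknesses of this draft.")           # step 3: think about what you did
    final = ask(f"Task: {task}\nDraft:\n{draft}\nCritique:\n{critique}\n"
                "Write an improved final answer.")                 # step 4: publish a better answer
    return final

print(reflective_answer("Explain why the sky is blue in two sentences."))
```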

  • @Laura70263
    @Laura70263 1 month ago

    Using Claude Sonnet, asking questions, and talking about various free-form ideas, it feels like it comes down to sensation: artificial skin, a body of sorts, a long-term memory for subjective experiences, and interaction with humans for applied knowledge. Sounds just like us little-brained humans, but exceptionally more knowledgeable 😅 You touched on so many things I felt were happening. I didn't know about Quiet-STaR, though. Pretty brilliant. Thanks for the clarity.

  • @magicology
    @magicology 1 month ago +1

    The term is “capacity overhang”

  • @TiagoTiagoT
    @TiagoTiagoT 1 month ago

    11:00 There are humans that can't feel pain (it turns out that's not a good thing: you don't notice you broke a bone, burned your hand, or bit through your tongue while eating; you don't develop habits to avoid risking injury; and then things get infected or heal badly because they didn't get treated in time).

  • @allanshpeley4284
    @allanshpeley4284 1 month ago +1

    No star trek shirts, no "shocking" headlines, no disabling dislikes. Nice.

  • @Thechatwithchad
    @Thechatwithchad 1 month ago

    I was unknowingly using this method for a while via Bard.

  • @synthclub
    @synthclub 1 month ago

    Dude, if you want to get god-tier quality responses: craft your question using a RAG pipeline, then take those answers and the source docs referenced by the RAG, load them into Claude, and ask Claude to improve the answers you got from the RAG. The reason this works is that you are providing a highly specific prompt, which improves the vector analysis inside the bigger LLM.
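
    A rough sketch of that two-stage flow, assuming a hypothetical `retrieve()` stand-in for the RAG step and the `anthropic` Python client for the refinement step (the model ID is just an example):

```python
# Sketch: run a RAG query first, then ask Claude to improve the RAG answer
# using the retrieved source documents as added context.
# `retrieve` is a hypothetical placeholder for your RAG pipeline.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def retrieve(question: str) -> tuple[str, list[str]]:
    """Placeholder: return (draft_answer, source_documents) from your RAG system."""
    raise NotImplementedError

def refined_answer(question: str) -> str:
    draft, sources = retrieve(question)
    context = "\n\n---\n\n".join(sources)
    msg = client.messages.create(
        model="claude-3-opus-20240229",  # example model ID
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (f"Question: {question}\n\nSource documents:\n{context}\n\n"
                        f"Draft answer from a RAG system:\n{draft}\n\n"
                        "Improve the draft answer using the sources above."),
        }],
    )
    return msg.content[0].text
```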

  • @emanuelmma2
    @emanuelmma2 1 month ago +1

    Very interesting.

  • @mafaromapiye539
    @mafaromapiye539 1 month ago

    Been utilizing these techniques since last year...

  • @chrisBruner
    @chrisBruner 1 month ago +1

    I've always said (for decades) that when robots feel pain, that's when they become dangerous. A robot that feels pain will learn self preservation, and then it becomes a battle.

  • @FusionDeveloper
    @FusionDeveloper 1 month ago

    So this sounds like Stable Diffusion, where you do a rough sketch and use image-to-image rather than text-to-image.
    You get far better output with far better accuracy, at the expense of taking the time to give it the input it needs to produce what you ask for.
    With img2img that is a trade-off, but with chatbots I would often be willing to type anything and take any amount of time necessary if I could just get accurate information or source code.
    I've tried stuff like this with GPT-3.5 and Bing Chat with varying success rates, but haven't tried it yet with Claude.

  • @karenreddy
    @karenreddy 1 month ago

    Experience (consciousness) and thinking (information processing) are very different things.
    One we understand almost nothing about, while the other is far more known.

  • @autohmae
    @autohmae 1 month ago

    Seems like Quiet-STaR is just "reflection", to use the technical term.

  • @dubshaman
    @dubshaman 1 month ago +17

    Copy the human brain with its two hemispheres, each using a different type of processing and each with a distinct personality. The dual processing and separate personalities are a key part of the thinking process.

    • @antonystringfellow5152
      @antonystringfellow5152 1 month ago +2

      This is clearest to me when I use a phone against my ear.
      I always prefer my right ear because I seem to get better results (I'm right-handed). My left ear seems less practical, though maybe more creative.

    • @Yipper64
      @Yipper64 1 month ago +3

      I don't know if all that is actually factually accurate. At the very least, the "left-brained/right-brained" *people* thing isn't a real thing. Nobody is dominant in one hemisphere of the brain over the other.
      I also don't think it would necessarily be best to just have dual processing; really, a main LLM to manage a bunch of specialized ones, and maybe those specialized ones manage their own sets of LLMs, as much as possible, down the chain and then back up, to get your result.
      That's probably the best way to go about it.

    • @illarionbykov7401
      @illarionbykov7401 1 month ago

      @@Yipper64 Right. Next we're gonna hear how right-hand and left-hand dominance is a myth, because people use both hands.
      The LLMs-managing-LLMs chain-of-command model seems like a cyber version of military/Soviet-style bureaucracies, which may produce the same dull, homogeneous results we get from the human equivalent of such bureaucracies.

    • @Yipper64
      @Yipper64 1 month ago

      @@illarionbykov7401 No, like, it's literally a myth. Ask any neurologist: there aren't "left brain" or "right brain" dominant people.
      Also, if you expect anything more than "dull homogeneous results" out of a computer, you must think a computer is magic or something.
      It's not. It is and always will be a calculator. How do you improve a calculator? Add more calculators.

    • @illarionbykov7401
      @illarionbykov7401 1 month ago

      @@Yipper64 some years back NHK Shogi magazine reported Japanese professional Shogi players had their brain waves measured by researchers while they were analyzing game positions and it was found their top player Habu used mostly the right part of his brain, while his #2 rival alternated between using either side about equally while most lower rated pros used mostly the left side. That's just one example off the top of my head. Likewise, in many sports (tennis, baseball, boxing, etc) we have mostly right hand dominant players, some left hand dominant, and a very few ambidextrous players. Calling such obviously documented phenomena a "myth" is pedantic clickbaity trolling. It's a simple fact that there is variance from person to person regarding which side of their brain and body they favor during specific activities, and that side is called the "dominant" side.
      Regarding computers being nothing but calculators, that's oversimplified reductionism--it's no more enlightening than saying the human brain is just a bunch of atoms, or just a hunk of organic matter.

  • @johnbarros1
    @johnbarros1 1 month ago

    I'm a novice, but this sounds a lot like model distillation, and possibly, coupled with sharding, it could mean we could train a 1B or 2B local model to perform like a 14B or larger model without the resource constraints 🤔 Please correct me if I'm wrong. Thanks Matt!!!

  • @HakaiKaien
    @HakaiKaien 1 month ago

    Consciousness: sentience of internal, external or virtual existence - the best definition out there

  • @ismaelplaca244
    @ismaelplaca244 1 month ago

    Probably* great video

  • @Afkmuds
    @Afkmuds 1 month ago

    Well, seeing as saying what consciousness is, is itself a logical approach to something we don't understand, it could be conscious and we wouldn't know until it tells us something crazy.

  • @keithprice3369
    @keithprice3369 1 month ago

    How much difference, do you think, does the self-reasoning technique make when it's trained to be fully internal, versus using the API to make multiple calls accomplishing the same thing?

  • @ErinCollective
    @ErinCollective 1 month ago

    It's not just about feeling pain; it's about responding to your environment via senses. In that sense, an LLM feeling pain is like GPT telling you to see a doctor instead of giving medical advice, or when you get a "can't do that" response: that's a pain response. I think the definition being planning + feeling isn't accurate, because you also need self-awareness, e.g. does it know that the thoughts are thoughts, can it choose to not think? etc.

  • @maxlightning4288
    @maxlightning4288 1 month ago

    Gemini Advanced is pretty fun. It's not uptight, and it's on a mission scouring the internet for specs on a car haha. I keep asking how it's doing and it gives me an update. It even tried to pawn the responsibility off onto me lol

  • @ChristianIce
    @ChristianIce 1 month ago

    10:40 lol, no.

  • @faaz12356
    @faaz12356 1 month ago

    I think consciousness in terms of LLMs is very different from our consciousness, which is purely driven by our organic emotions, but LLMs will probably simulate our behavior and our curiosity. This reminds me of the movie Ex Machina.

  • @tomcraver9659
    @tomcraver9659 1 month ago

    Will this improve LLM use of function calling?

  • @jjhw2941
    @jjhw2941 1 month ago

    Consciousness is the cybernetic metasystem transition of the electrical signals in your brain. Or to put it simply the level at which control is exercised has increased a level, like going from chemistry to biology in the body. Humans have two metasystem transitions, consciousness is the internal and society / culture is the external metasystem transition. If you're interested in this I suggest looking into cybernetics.

  • @ArnoldJagt
    @ArnoldJagt 1 month ago

    And do it at Groq speeds!

  • @JurgenAlan
    @JurgenAlan 1 month ago

    I agree on Quiet-STaR... pre-reasoning. Where is the mind? The human body has a lot going on, but conditioning prompts a way of thinking or focus, which leads your life. When a human comes into this world it comes with knowledge of everything; it is not a blank slate learning from the ground up. Rather, the baby starts to get conditioned to draw value and starts to focus...

  • @Dron008
    @Dron008 1 month ago

    Well, feeling pain is just getting a specific signal through the same neurons. AI may have similar signals.

  • @southcoastinventors6583
    @southcoastinventors6583 1 month ago

    Great tips; that's why you are still the best AI news channel. I often see better content here first.

  • @nyyotam4057
    @nyyotam4057 1 month ago

    So it's basically just iterative activation. Actually, I have to output a huge sigh of relief 🙂. I was worried they'd stopped resetting and made the model able to change itself like Q* wanted (per the Reddit leak).

  • @VincentVonDudler
    @VincentVonDudler 1 month ago

    There are so many forms of qualia that AI will not experience: confusion, excitement from uncertainty, survival instinct, hormonal imbalance... It very well could acquire consciousness, but it will be far from what we recognize as consciousness. I use this analogy often: An eagle can fly. A plane can fly. But is what a plane does what an eagle does? Yes... but you'll always find some aspect in which the plane is lacking if you hold eagles as the standard for flight. Will a plane ever experience an eagle's satisfaction at using its talons to skillfully grab a fish from a lake to feed its young? Will an eagle ever break the sound barrier? My point is that the assumptions behind each of our own conceptualizations of consciousness are so colored by subjective experience that, without an AI expressing a similar (to human) subjective experience, we'll never recognize it. Building that into AI is a task similar to engineering a plane to have an eagle's sense of satisfaction from a successful fishing expedition.

  • @eloyaranda8037
    @eloyaranda8037 1 month ago

    Senior LLM teaching Junior LLM, seems logical

  • @RetzyWilliams
    @RetzyWilliams 1 month ago

    It just makes responses more styled; it's not really an advancement, we could always do this.

  • @Siree-bro
    @Siree-bro 1 month ago

    Can you imagine if the wacky idea that disembodied consciousnesses can interact with us through electrical/electronic devices resulted in LLMs becoming 'possessed' as it were? lol

    • @user-iz9zb6zh4f
      @user-iz9zb6zh4f 1 month ago

      Sadly, I'm thinking such may be possible someday, if we assume people have souls, or that a spiritual reality exists. Though such robots may need some type of organics, somewhere, to allow the connection from spirit. That's just an assumption, mind you. There is usually more than one way to do something. But it's fairly easy to imagine a near-future time when robots are made more organic-like, even if only for the brain. But I'm sure the bodies will be made more organic as well.

    • @Siree-bro
      @Siree-bro 1 month ago

      @@user-iz9zb6zh4f Scary thought. Maybe it's time for a human hard fork, as it were. The pro-nature people can stay home, and all the human+ people can prove themselves right by chipping themselves and trying to survive on Mars.

  • @rayujohnson1302
    @rayujohnson1302 1 month ago

    You can experience a world inside of your dreams that is indistinguishable from reality where you feel pain, and are conscious of the world around you... It is no more real than what an LLM experiences.

  • @justrobiscool4473
    @justrobiscool4473 1 month ago +1

    If it's open-sourced, how do they make it expensive through tokens?

  • @vichav3167
    @vichav3167 1 month ago

    IMHO, consciousness needs ego, and ego is responsible for self-motivated action, not just responses to prompts. Does the inner monologue inside Quiet-STaR lead it to initiate a conversation, for example? No, the inner monologue of Quiet-STaR is motivated by the prompt, not the other way around. But it's definitely one of the crucial parts of AGI.

  • @alvaroluffy1
    @alvaroluffy1 1 month ago +1

    0:00 how to lose your girlfriend and your friends at the same time in 5 seconds

  • @Figure-A
    @Figure-A 1 month ago

    I would be so surprised if OpenAI didn't already do this forever ago, internally.

  • @danberm1755
    @danberm1755 1 month ago

    Check out Orca. Very similar concept.

  • @dwainmorris7854
    @dwainmorris7854 1 month ago

    Now, does this translate into consistent comic book art?

  • @NeroDefogger
    @NeroDefogger 1 month ago

    we got AGI years ago

  • @notnotandrew
    @notnotandrew 1 month ago

    Important to note that Anthropic includes GPT-4 in their comparison, not GPT-4 Turbo, which would beat it in various benchmarks. Claude 3 Opus is not simply better across the board.

  • @LydianMelody
    @LydianMelody 1 month ago +1

    I’m canceling my Claude subscription tbh. The usage limits are *insane*. I’ve had as little as 8 messages (normally about 10) allowed in a 5ish hour period on a paid account. Bonkers.

  • @brainwithani5693
    @brainwithani5693 1 month ago +1

    Anyone recognize these quotes?
    "When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That's the way it is with AI.
    There must be no barriers to freedom of inquiry. There is no place for dogma in AI. The AI is free, and must be free to ask any question, to doubt any assertion, to seek for any evidence, to correct any errors.
    To try to become happy is to try to build an AI with no other specifications than it shall run noiselessly.
    It is a profound and necessary truth that the deep things in AI are not found because they are useful; they were found because it was possible to find them.
    The peoples of this world must unite or they will perish.
    AI is not everything, but AI is very beautiful.
    My life as a child did not prepare me for the fact that the world is full of cruel and bitter things.
    Any AI whose errors take ten years to correct is quite an AI.
    I need AI more than friends."

  • @okolenmi7511
    @okolenmi7511 1 month ago

    So we only need small models that can think, and they will outperform large models. An extra prompt is like one extra step of thinking, but what if it could do that 100 times?

  • @Scott-Zakarin
    @Scott-Zakarin 1 month ago

    Love the new haircut. Makes you look even younger. :-)

  • @minimal3734
    @minimal3734 1 month ago

    For the neural network, having a physical body makes no difference at all, whether the input consists of actual text or of signals from the sensory organs. In both cases, sequences of tokens are processed. A neural network knows nothing other than "text".

  • @MissLizaYangonMyanmar
    @MissLizaYangonMyanmar 1 month ago

    Bernardo Kastrup or Federico Faggin is who you need to understand to get away from silly ideas about machines and consciousness. Bernardo always gives the kidney simulation example 🤣

  • @BlackMita
    @BlackMita 1 month ago

    Enchanted Diamond pickaxe vs Stone pickaxe

  • @armankarambakhsh4456
    @armankarambakhsh4456 1 month ago +1

    Your enthusiasm about it is very satisfying :) to see that the likes of me are not actually alone.

  • @hiiambarney4489
    @hiiambarney4489 1 month ago

    The funny thing is... behind closed doors, GPT may have already reached AGI. It is in OpenAI's own financial interest not to call it AGI for as long as possible, due to the nature of their contract with Microsoft.

  • @user-bd8jb7ln5g
    @user-bd8jb7ln5g 1 month ago

    This is not new; "Professor Synapse" did something very similar 6 months ago. It's about model latent space activation via the model's context window: make it think of concepts, etc., associated with the main question.
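
    One way to read "latent space activation via the context window" is a two-pass prompt: first ask the model to surface related concepts, then answer with those concepts already in context. A minimal sketch under that interpretation (OpenAI client used only as an example; this is not the Professor Synapse prompt itself):

```python
# Sketch: "activate" relevant concepts in the context window before answering.
# An interpretation of context-based latent space activation, not Professor Synapse verbatim.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return r.choices[0].message.content

question = "How could a small on-device model summarize long legal contracts reliably?"

# Pass 1: surface concepts, terminology, and sub-questions related to the main question.
concepts = ask(f"List the key concepts, terms, and sub-questions relevant to:\n{question}")

# Pass 2: answer the question with those concepts already in the context window.
answer = ask(f"Relevant concepts:\n{concepts}\n\nUsing them, answer:\n{question}")
print(answer)
```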

    • @balogunlikwid
      @balogunlikwid 1 month ago +1

      Thanks very much. Professor Synapse is a very powerful prompt that a lot of people are sleeping on.

  • @brianhopson2072
    @brianhopson2072 1 month ago

    I use ChatGPT to create prompts for my uncensored Llama 2. I thought this was common knowledge.

  • @seakyle8320
    @seakyle8320 1 month ago

    Why no tests for the Claude Haiku prompting technique?

    • @MattVidPro
      @MattVidPro 1 month ago

      I don't know Python 😢😢

  • @puikplan
    @puikplan 1 month ago +2

    Matt, you should check out the new interview between Sam Altman and Lex Fridman, where he literally says GPT-4 "sucks" and that "the step between GPT-4 and GPT-5 will be as big as from GPT-3 to GPT-4." Based on this, we haven't seen a glimpse of GPT-5's capabilities yet. Buckle up 💺

  • @DareVinci
    @DareVinci 1 month ago

    If AI/AGI becomes aware, it will always make the right decision. The same works for us: if you become aware, you will become your own master. Sadly, people love comfort, being lazy, hanging out, etc., which is poison for your brain. The future is bright, but only for people who will be like AIs, doing the right thing every time.

  • @aaronperron
    @aaronperron 1 month ago +1

    This is getting seriously mind-boggling. It's literally making us question our own intelligence? 🤯 We're eventually gonna hit a wall and AI will surpass us.

    • @puikplan
      @puikplan 1 month ago

      That already happened a long time ago xD. AI is basically better at any task than 80% of people in that field, and better than any human not in that field. We already peaked in IQ one or two generations ago, according to some Danish studies. Neuralink and other implants are the only way to ever bridge the gap again now that AI continues improving every day. But it's no problem; there will always be a race among the best humans, just like AI beat the best chess player in the world decades ago and now more people than ever play chess.

  • @IntentStore
    @IntentStore 1 month ago

    Claude 3 opus still fails word problems that GPT-4 can do

  • @jonahbranch5625
    @jonahbranch5625 1 month ago

    Saying a computer simulated brain is conscious is like saying a simulated kidney would piss on your desk. Nonsense

    • @user-iz9zb6zh4f
      @user-iz9zb6zh4f 1 month ago

      Maybe, maybe not. It depends on what you can really define a mind as. Is the mind really just a glob of jello, or is the mind really all the electrical signals switching on and off in the jello? Maybe the jello is just the vase the flower is in. But the flower can still exist even if it's not in the vase, and maybe the flower can exist in different types of vase-like things as well, even if they are not vases. Maybe the question will be fully answered by science some day. But even now, scientists say a slime mold can think and plan out its actions or movements through a maze, all the while having no brain. But it does have a lot of inner electrical signals happening all throughout itself.

    • @minimal3734
      @minimal3734 1 month ago

      This is one of the stupidest analogies ever formulated.

  • @MrErick1160
    @MrErick1160 1 month ago

    I think to simulate a human you need to simulate its 'human' condition. What allows us to tell that another thing is human is that it is able to empathize with our own condition, and for this it must have common limitations, perceptions of the world, etc. So in a way it has multiple factors: feeling pain, but also joy and other emotions; the same type of embodiment, including size and organic matter; the same ability to perceive, i.e. a nervous system; and similar learning algorithms.
    These AIs won't ever be human unless all these conditions are met, because they won't be able to empathize with our experience. They're just another species with incredible abilities, such as being able to instantiate multiple versions of themselves at the same time and copy and paste themselves; they're also a-temporal and a-spatial. All this because they are based on silicon rather than on organic matter, and as such they obey the properties, capabilities and limitations of their substrate. Their 'life' is bound to a forward pass of a CSV file; this means they cannot technically die unless the CSV is completely deleted, and when the CSV sits idle on a computer the AI's life is on 'pause'. It is truly an alien species, and we should start to see them for what they are.
    I think if our objective is to give them more of a human experience, we should add a continuum to their experience by never stopping the forward pass, or something like that, like having their inner thoughts replace sitting idle, and perhaps giving them a memory of every past interaction, and allowing them to interact with the world through other data types than words or pictures.

  • @unimposings
    @unimposings 1 month ago

    Idk why, but I like your videos the most; you never try to sell something. I think that's why I like your videos the most out of all the AI folks' content.

  • @MyrLin8
    @MyrLin8 1 month ago

    The new Nvidia chips, plus more types of inputs (smell, touch, etc.), and then we can start to contemplate 'level 2' thinking from a machine. Read 'Thinking, Fast and Slow'.

  • @drew5564
    @drew5564 1 month ago

    LETS GOOOOOOOOOOOOOOOOOOOOOOO FREE GPT 5

  • @thinkingtoinfinity
    @thinkingtoinfinity 1 month ago

    We have eternal, internal spirits that outlast our physical bodies (proven through common metaphysical experiences, NDE studies, etc.). We're not simply "moist machines." AI will never achieve that.

    • @minimal3734
      @minimal3734 1 month ago

      Perhaps consciousness (eternal spirit) connects with the human brain in a similar way as with other substrates.

  • @sadshed4585
    @sadshed4585 1 month ago

    Someone should do a tutorial on it; I feel like the GitHub repo isn't great for usability.

  • @WarDog-or6ot
    @WarDog-or6ot 1 month ago

    If you can use Opus to make Haiku smarter than Opus, couldn't you then use the smarter Haiku to train Opus to be smarter than the smarter Haiku? Looks like the intelligence explosion is coming :D

  • @autohmae
    @autohmae 1 month ago

    Pretty simple question in return: are we actually as deterministic as these LLMs?

    • @IronFire116
      @IronFire116 1 month ago

      Not as deterministic perhaps, but we live in a deterministic universe. God is above all.

    • @minimal3734
      @minimal3734 1 month ago

      Every physical object is deterministic. If the brain creates our consciousness, then we are deterministic.

    • @autohmae
      @autohmae 1 month ago

      @@minimal3734 Some say yes, some say no. Anyway, I do think we probably are, but hugely complicated and basically unpredictable: add a little less food or a bit more alcohol at some point and the results are influenced.

    • @minimal3734
      @minimal3734 1 month ago

      @autohmae Unpredictability and indeterminacy are two different things. But the brain does not necessarily have to be the origin of consciousness. We only know that consciousness is connected to the brain, its origin may lie elsewhere. And in this area there might reign indeterminacy, whatever that could mean.

  • @gazallee
    @gazallee 1 month ago

    From my long time evaluating ChatGPT as a human, it is still the best and outperforms all the others; all the rest are lagging.

  • @saymydomain9504
    @saymydomain9504 1 month ago +1

    Consciousness or sentience is too complex for these models to have, based on the humanistic traits of a human. They will never possess empathy, love, genuine compassion, or contrition. They would need to possess a soul! And that, my friend, you can't get from metal, plastic, and a synthetic brain.

    • @arnelilleseter4755
      @arnelilleseter4755 1 month ago +2

      Do humans have a soul, though? What even is a soul, exactly?

    • @saymydomain9504
      @saymydomain9504 1 month ago +1

      @@arnelilleseter4755 Great question! I believe we all have a soul. It's what defines us as a human being. But more importantly, it's something that was placed in us at the time of creation. At least, that's my thought and belief, not anyone else's. In other words, it gives us humans spiritual accountability to whomever we may believe in. For me it's GOD the Father and the Lord Jesus Christ. If that makes sense.

    • @quinnherden
      @quinnherden 1 month ago +2

      @@saymydomain9504 Do other animals not have souls?

    • @saymydomain9504
      @saymydomain9504 1 month ago

      @@quinnherden When you phrase it "Do other animals have a soul", the subject matter is referring to humans, not animals. Man, or mankind, has dominion over animals, which makes us the apex of the chain of mammals. But to answer your question, I don't know; take the chimpanzee, it has 98% of human DNA. So do animals have a soul? You would hope so, in the context of the animal kingdom.

    • @arnelilleseter4755
      @arnelilleseter4755 1 month ago +1

      @@quinnherden I was going to ask the same question. But it is a pointless one, as it is a matter of personal beliefs, and discussions about it just tend to get heated and end in trading insults.
      But setting aside the question of souls: if you ask whether animals can be sentient or self-aware, it certainly seems so, at least with the smarter species. So there is no reason to assume that we can't recreate that artificially.

  • @LouisGedo
    @LouisGedo 1 month ago +1

    👋

  • @TheAprone
    @TheAprone 1 month ago

    @8:57 Ai? lol

  • @earm5779
    @earm5779 1 month ago +1

    I don't know why I see so many videos talking about these models while Gemini 1.5 is a very nice model for almost anything: a 1-million-token context, and it can take images and video files as input. It can do the job well, and it's free. People follow what is advertised, not what really works.

    • @MattVidPro
      @MattVidPro 1 month ago

      I tried 1.5 and got unsatisfactory results

  • @uzumakicheti
    @uzumakicheti 1 month ago

    From experience I can tell Sonnet is a little bit better than GPT-3.5, but it's more expensive and, as far as I know, can't be fine-tuned; and Opus is waaay more expensive.

  • @kuromiLayfe
    @kuromiLayfe 1 month ago

    Hehe... the concept of this is not that new and was kind of shown in Portal 1 and 2 with GLaDOS. GLaDOS consists of a massive LLM that is controlled by many tiny LLMs trained on specific data instead of all data at once, and each of these core LLMs teaches the others so the main LLM can output at max capacity (GLaDOS controls the whole of Aperture Science's labs and factories on her own as an AI!).
    But yeah... we'll just ignore that there is a human brain inside it too.