Ilya Sutskever | GPT-4 predicts the next word better | Now upgraded to the more powerful GPT-4o

  • Published 19 June 2024

Comments • 32

  • @DCC72
    @DCC72 6 days ago +3

    Only Ilya is brilliant enough to make Jensen speak with cautious humility.

  • @13371138
    @13371138 7 days ago +4

    This guy talks about AI with so much drama, it really makes me think about the risk involved

  • @for-ever-22
    @for-ever-22 5 days ago

    This is the good guy, but he isn’t getting the credit he deserves. Great conversation.

  • @indylawi5021
    @indylawi5021 5 days ago

    Great insights about deep learning and neural nets as the foundation of GPT, all the way to the current GPT-4!

  • @Dr.UldenWascht
    @Dr.UldenWascht 6 days ago

    2 seconds in, "Chad GPT" in the subtitles spotted. Nice.

  • @hanziggywuest
    @hanziggywuest 7 days ago +1

    Yes! Ask it who was the murderer of K and let's listen to it think out loud! It's become a matter of next word prediction. Let's do it.

  • @dubdogstep
    @dubdogstep 7 days ago +2

    Just take it in: two super-advanced AI humanoids talking to each other :D :D

    • @Krn7777w
      @Krn7777w 6 days ago

      For a minute I wasn’t sure whether those were their digital twins or actually them. I had to stare very closely to make sure.

  •  7 days ago

    Possible definition of “reasoning”:
    Coming to a desired final answer by:
    - realistically and accurately understanding the particular relationships between all relevant objects and concepts (via prior education or on-hand reference materials);
    - building a chain (or chains) of logical statements (realistic assessments of strong probability) out of this knowledge, all the way from initial evidence to final conclusion;
    - doing this by having a mature sense of what knowledge might be relevant in solving for a particular conclusion, and then bringing that knowledge into mind and triangulating it with other relevant knowledge to generate a conclusion from it (which may or may not be the final conclusion you seek);
    - checking whether that triangulated conclusion is incongruent with any of your other knowledge; if not, check whether it happens to actually be the final conclusion you seek (whether it addresses the initial question with high confidence and specificity); if not, check whether it seems to be relevant to figuring out that final conclusion; if so, proceed to repeat the above process to seek out the next conclusion in the chain, or to build a new chain that will lead to another conclusion that may inform the current chain or other chains;
    - building chains of reasoning in this way, link by link, intermediate conclusion by intermediate conclusion, toward the ultimate conclusion you seek, linking chains together into the combined chain that ultimately leads to the final conclusion as the chains inform each other;
    - if you arrive at a dead-end conclusion which does not seem to be relevant to the ultimate conclusion you are pursuing, and you can no longer move forward with that particular chain of reasoning, move backwards on the chain and try moving in a new direction;
    - repeat these steps until you arrive at the ultimate conclusion that was your initial goal;
    - discard any irrelevant chains and chain segments that did not ultimately contribute to the final conclusion, or which are redundant.
    Does this seem like a reasonable definition of “reasoning”? Maybe the better you are able to follow this process, the better your “reasoning skills” are.
    It seems to me that the process I just described is the very process that I used to write this whole comment. I just explored different relevant concepts and triangulated different conclusions that I then subconsciously checked for relevance, one by one, until I was able to chain together a statement that made sense from start to finish, and which also accomplished the goal I had set out to achieve at the start.
    I feel like GPT must already be doing this, and maybe just has some weaknesses of understanding along the way. Like maybe it simply “jumps to conclusions” and prematurely assesses itself to be accurate due to a particular lack of understanding, using flawed probability assessments that sometimes work and sometimes don’t.
    “Reasoning” may just be a more rigorous form of conclusion-generation that we know to apply in cases where realistic accuracy is highly valued. We know when we lack understanding, and we are able to investigate that knowledge gap using reference materials, but ChatGPT does not have Internet access and has to rely only on its own internal understanding. Maybe we just need to ask it to tell us when its confidence in what it is saying is lower than normal, in cases when we especially want to make sure it’s not “bullshitting” its way to an answer. (A rough code sketch of this chain-building loop follows this comment.)
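
    The chain-building loop described above can be sketched as a simple backtracking search. This is a minimal, hypothetical Python sketch: the Rule type, reason() function, and toy facts are all illustrative, not taken from any real reasoning system. It derives intermediate conclusions from what is already known, checks each one for contradictions, backs out of dead ends, and stops once the sought final conclusion is reached.

    ```python
    # Hypothetical sketch only: Rule, reason(), and the toy facts below are
    # illustrative, not from any real reasoning library.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        premises: frozenset   # facts that must already be established
        conclusion: str       # the new intermediate conclusion this rule yields

    def reason(known, rules, goal, chain=None):
        """Return a chain of intermediate conclusions leading to `goal`, or None."""
        chain = chain or []
        if goal in known:
            return chain                      # final conclusion reached
        for rule in rules:
            if rule.conclusion in known or not rule.premises <= known:
                continue                      # nothing new, or premises missing
            # check the new conclusion is not incongruent with prior knowledge
            # (toy check: never hold both a fact and its explicit negation)
            if f"not {rule.conclusion}" in known:
                continue                      # dead end: back up, try another direction
            result = reason(known | {rule.conclusion}, rules, goal,
                            chain + [rule.conclusion])
            if result is not None:
                return result                 # this chain links all the way to the goal
        return None                           # backtrack

    # Toy usage: A and B are prior knowledge; the sought final conclusion is D.
    rules = [
        Rule(frozenset({"A", "B"}), "C"),
        Rule(frozenset({"C"}), "D"),
        Rule(frozenset({"A"}), "E"),          # irrelevant chain, never used here
    ]
    print(reason({"A", "B"}, rules, "D"))     # -> ['C', 'D']
    ```

    A real reasoner would need much richer consistency checks than the toy “not X” test, but the backtrack-and-retry structure is the part the comment describes.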

  • @Batmancontingencyplans
    @Batmancontingencyplans 7 days ago +4

    Why does Ilya always look like he has just been kissed before starting the interviews?

    • @Krn7777w
      @Krn7777w 6 days ago +3

      Maybe he is, and he forgets to wipe off the lipstick.

  • @AidenDavidson-sp1zv
    @AidenDavidson-sp1zv 7 days ago +2

    It's incredible how similar ChatGPT is to the human brain. When we talk, we choose the next word that makes the most sense based on what has been reinforced in our learning and what we understand the words we said before to mean, just like ChatGPT chooses the next word based on what has been reinforced in its learning and what it understands the previous words to mean. (A toy sketch of this next-word picking appears after this thread.)

    • @sirkiz1181
      @sirkiz1181 6 days ago +1

      Uhhh no. You might say we do this to an extent in terms of formatting a sentence, but the way we think is clearly completely different. We contemplate things, think in abstract concepts that we then use words to describe, and we reflect before generating anything.

    • @elawchess
      @elawchess 6 days ago

      @@sirkiz1181 It's the end product of the neural network that predicts the next word, though. Who is to say the other mysterious business going on in the neural network is not equivalent to "contemplating things, ... thinking in abstract concepts"?

    • @sirkiz1181
      @sirkiz1181 6 days ago

      @@elawchess The problem is still in the way we learn these abstract concepts, though. We create them through interactions with reality; the LLM learns them through an unfathomable amount of text data. We use words and symbols to represent things, while to the AI those are the very things it thinks with.

    • @elawchess
      @elawchess 6 days ago

      @@sirkiz1181 OK, I would consider that to be a different topic though. Now they are feeding in audio, video, etc. during the training of LLMs. I don't mean to suggest we use the same type of "machinery" inside anyway, just that one can view the output from a human as "predicting" the next word without saying much about what's actually going on in a human brain, and the same for LLMs.
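
    The next-word-prediction picture debated in this thread can be illustrated with a toy sketch. This is a minimal, hypothetical Python example: the bigram_counts table and next_word() function are illustrative stand-ins, since a real LLM scores candidates with a large neural network over the whole context rather than a word-pair count table. The shared idea is only this: score every candidate next word, turn the scores into probabilities with a softmax, and sample one.

    ```python
    # Hypothetical sketch only: bigram_counts and next_word() are illustrative
    # stand-ins for "what has been reinforced in its learning".
    import math
    import random

    # Toy "learned" statistics: how often word b followed word a in training text.
    bigram_counts = {
        "the": {"murderer": 4, "answer": 2, "model": 6},
        "murderer": {"is": 9, "was": 3},
    }

    def next_word(context_word, temperature=1.0):
        counts = bigram_counts[context_word]
        words = list(counts)
        # turn raw scores into a probability distribution (softmax over log-counts)
        logits = [math.log(counts[w]) / temperature for w in words]
        peak = max(logits)
        exps = [math.exp(l - peak) for l in logits]
        total = sum(exps)
        probs = [e / total for e in exps]
        # sample; lower temperature -> closer to always picking the top word
        return random.choices(words, weights=probs, k=1)[0]

    print(next_word("the"))          # most often "model", sometimes "murderer" or "answer"
    print(next_word("murderer"))     # usually "is"
    ```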

  • @cogitoshort
    @cogitoshort 7 days ago +2

    He left OpenAI weeks ago. How old is this video?

  • @EnigmaPeePee
    @EnigmaPeePee 4 days ago

    So I have figured out how to make AGI. How do I connect with someone who can apply my “theory”?

  • @egoneoteo
    @egoneoteo 4 days ago

    This interview is super old

  • @Chris-se3nc
    @Chris-se3nc 6 days ago

    Never tried Chad GPT or shad GPT

  • @hdtvpower
    @hdtvpower 7 days ago

    Date of interview?

    • @agenticmark
      @agenticmark 7 days ago +4

      Old, from before he was shitcanned and signed a "do not talk" agreement.

  • @EmeraldView
    @EmeraldView 6 days ago

    Why are we doing this?

  • @TooManyPartsToCount
    @TooManyPartsToCount 5 days ago

    OLD VIDEO REGURGITATOR

  • @petersoakell6950
    @petersoakell6950 7 days ago

    the AI will read this comment)

  • @induplicable
    @induplicable 6 days ago

    Reasoning isn’t well-defined?! My guy, say you’ve never read philosophy without saying you’ve never read philosophy. There are volumes of books on individual reasoning modalities and overviews. Lmao
    You have to believe that baseless claim in order to obfuscate the fact that LLMs suffer from The Problem of Induction and there’s no clear path for how to develop active inference.

  • @petersoakell6950
    @petersoakell6950 7 days ago

    Serious stuff 🥸. Don't mess it up.