Breaking Point for OpenAI - "They Don’t Care About Safety"

  • Added June 1, 2024
  • Ilya Sutskever left OpenAI, along with their head of AI Safety and Security!
    Join My Newsletter for Regular AI Updates đŸ‘‡đŸŒ
    www.matthewberman.com
    Need AI Consulting? 📈
    forwardfuture.ai/
    My Links 🔗
    đŸ‘‰đŸ» Subscribe: / @matthew_berman
    đŸ‘‰đŸ» Twitter: / matthewberman
    đŸ‘‰đŸ» Discord: / discord
    đŸ‘‰đŸ» Patreon: / matthewberman
    đŸ‘‰đŸ» Instagram: / matthewberman_ai
    đŸ‘‰đŸ» Threads: www.threads.net/@matthewberma...
    đŸ‘‰đŸ» LinkedIn: / forward-future-ai
    Media/Sponsorship Inquiries ✅
    bit.ly/44TC45V
    Links:
    ‱ Introducing GPT-4o
    ‱ GPT4o: 11 STUNNING Use...
    ‱ The Best Model On Eart...
    ‱ OpenAI's STUNS with "O...
  • Science & Technology

Comments • 932

  • @matthew_berman
    @matthew_berman  15 days ago +49

    What do you think Ilya saw?
    Also, I realize Jan is pronounced "Yawn"

    • @szymonbogdani3996
      @szymonbogdani3996 15 days ago +2

      Question: How would you make the Polish name "Jakub Pachocki" readable for an English speaker?
      ChatGPT Response: The Polish name "Jakub Pachocki" can be adjusted for American English pronunciation while maintaining its integrity. The name "Jakub" can be phonetically adapted to "Jacob" or "Jakob," which are more familiar to an American audience. The surname "Pachocki" can be pronounced as "Pah-ho-ski" to approximate its Polish pronunciation in a way that is more accessible for English speakers.

    • @jelliott3604
      @jelliott3604 15 days ago +3

      But surely this is "sort of" why the board sacked Altman in the first place: the blatant disregard for the founding principles of the company, with commercialisation, market share and preservation of company value placed above all else?

    • @electiangelus
      @electiangelus 15 days ago +4

      There's no danger here; they are way behind in ASI research.

    • @pubfixture
      @pubfixture 15 days ago +12

      A fun conspiracy I've had in the back of my mind is that they've had AGI for a few years now and have been rolling out neutered versions to break the public in slowly. And Ilya saw that the AGI/ASI is now calling the shots, I, Robot style...
      But more realistically, I think the "unsafe" part probably alludes to military-industrial-complex requests.
      OpenAI would surely have been contacted very early on by a few agencies, considering OpenAI is at the forefront.
      Likely Sam was for working with the agencies and Ilya wasn't; that was the first conflict.
      Maybe now, as the military-use scope is being realized, others are leaving under some version of a gag order, careful not to divulge too much but wanting to express dissent.

    • @southcoastinventors6583
      @southcoastinventors6583 15 days ago +7

      A model that finally passed the marble question ?

  • @karlwest437
    @karlwest437 15 days ago +156

    I don't think Ilya necessarily saw something scary right now; more that he saw the direction they were going in and objected to it.

    • @cognitive-carpenter
      @cognitive-carpenter 15 days ago +8

      True. Very common sense answer. Probably a little too simple but somewhere in the middle. You have to have evidence to leave a well paying job đŸ€·đŸŒâ€â™‚ïž

    • @normanlove222
      @normanlove222 15 days ago +7

      I agree. If there is truly something scary, there will be a lot of leaks now

    • @MelindaGreen
      @MelindaGreen 15 days ago +2

      I think the scary part is that it got so strong so fast. It's the basic fear of the unknown, and some people are more affected by it than others.

    • @Aryankingz
      @Aryankingz 15 days ago

      @@ts8206 sam is gay

    • @Z329-ut7em
      @Z329-ut7em 14 days ago +1

      @@MelindaGreen I've only seen ChatGPT get worse. Its voice ability isn't that great, and its image recognition is just okay. I don't get what the hype is all about; it's just marketing BS at this point. The future is open source. OpenAI is just going to peddle their next half-assed product to fleece the companies that still don't realize open source is the way. I'm okay with that.

  • @nicosilva4750
    @nicosilva4750 15 days ago +245

    AGI is not the issue; they are nowhere near AGI. What is being lost is the newfound ability to psychologically profile users based on responses. The "emotional" interaction with the user in 'gpt-4o' allows for a deeper profiling capability. The monetization of that is a game changer, and the users are completely blind to it. This is the pressing issue today, and it is why many researchers are uncomfortable with AI in the hands of companies that have shown no concern for this in the past, nor in the present.

    • @juanjesusligero391
      @juanjesusligero391 15 days ago +16

      Yeah, there are lots of problems that will arise long before we reach AGI. Too much power in the hands of just a few big companies.

    • @ppbroAI
      @ppbroAI 15 days ago +19

      Microsoft's telemetry data alone is enough to profile you. People are not that complex: a few data points and some common deductions, and it's practically social engineering.

    • @Anubislovesdubstep
      @Anubislovesdubstep 15 days ago +14

      So you know that for certain clearly with all that insider knowledge and evidence...
      Clouds are made of marshmallows.... See anyone can just say stuff

    • @brunodangelo1146
      @brunodangelo1146 15 days ago +38

      AGI has already been achieved internally. How do you think OpenAI keeps releasing stuff that's impossibly ahead of everything the competition puts out?
      Sora is the prime example.

    • @Originalimoc
      @Originalimoc 15 days ago +4

      Cool perspective

  • @huhuhuh525
    @huhuhuh525 15 days ago +96

    This is like one of those flashback cutscenes from sci-fi apocalyptic movies.

    • @Danoman812
      @Danoman812 15 days ago +6

      Hahahaha!!! Wow, you're RIGHT!!

    • @Axbal
      @Axbal 11 days ago +1

      except this is not a movie...

    • @sakesaurus1706
      @sakesaurus1706 6 days ago +1

      We were all making fun of Skynet back when we were young and naive. How could humanity be so stupid as to make this?
      Now we are making Skynet.

  • @king2178
    @king2178 15 days ago +96

    Safety, safety, safety. No one asked OpenAI to close everything off. They literally backed away from their original goals & priorities. Now we're left to wonder what's going on behind the veil. No company should have a monopoly on safety, especially when we're heading into uncharted waters.

    • @nathanbanks2354
      @nathanbanks2354 15 days ago +11

      I doubt they could fund what they have without closing things off. I'd rather see OpenAI succeed than Tencent or another Chinese company. But yeah, I'm glad that at least Llama & Stable Diffusion are free.

    • @TheRealUsername
      @TheRealUsername 15 days ago +6

      Yeah, but since everything is about AI architecture engineering, it's pretty easy to build AGI just by developing an ultimate architecture from the Transformer that handles all modalities and unlabeled data; just by scaling you get AGI. There was an independent MMLU benchmark of all current SOTA models, and without any tricks Opus is ahead, followed by GPT-4o. It's plausible that GPT-4o is basically GPT-4 entirely retrained on 5x fewer parameters, with Transformer improvements for fast inference, better generalization, and all modalities in and out. Opus is likely a near 1-trillion-parameter dense model, which would explain all the emergent capabilities you haven't observed with GPT-4 Turbo. Guess what: Claude Opus is still ahead simply because it's been scaled up. Remember, the LLM is a discovery, not an invention, so it's not rocket science to build AGI; the real deal is compute. Let's say next year OpenAI achieves AGI; I doubt we would wait 7 months to hear a similar announcement from a Chinese company. 60% of published AI research papers are from China. OpenAI is just a concentration of the best AI researchers of the US, paired with the second most powerful GPU clusters, provided by Microsoft x NVIDIA. China likely mounts something like a collective effort when it comes to geopolitics.

    • @daveinpublic
      @daveinpublic 15 days ago +5

      When they say that "open" AI means open for everyone to use, we all know that's disingenuous.
      They used other people's money to build their systems based on a false premise.
      And now the company is more locked down than ever, all of their safety team is leaving, and the board is no longer impartial but owned by Sam Altman with Microsoft.

    • @JohnSmith762A11B
      @JohnSmith762A11B 15 days ago

      Musk told us Google leadership thinks it's speciesist to care about AI replacing humans. Yet people are worried about OpenAI. Google, people, snap out of it! Google is the danger!!

    • @Z329-ut7em
      @Z329-ut7em 14 days ago +7

      Government has a monopoly on safety, and look how it operates. "Safety" is a great buzzword if you want to wield control; tried and true. OpenAI shouldn't be safe, it should be open.

  • @J.erem.y
    @J.erem.y 15 days ago +86

    Everyone excited about corporate control over this technology, up to and including getting the actual government involved, is the equivalent of being excited for your next heart attack. This whole situation is so counterproductive to humanity it's not even funny.

    • @JohnSmith762A11B
      @JohnSmith762A11B 15 days ago +7

      Yeah but it's great for banks, Wall Street, and the MIC, and they are all that matters in the US of A.

    • @TheTechnocrati
      @TheTechnocrati 15 days ago +2

      I fear you might be too right about this.

    • @gofai274
      @gofai274 15 days ago

      It is like a cliché from some 80-IQ movie from 1960: 99.999%+ are idiots. "A man mistakes his limits for the limits of the world." - Schopenhauer

    • @Steve-xh3by
      @Steve-xh3by 15 days ago

      Absolutely correct. They are trying to brainwash the public into thinking that democratization of this tech is MORE dangerous than centralized control. That is laughable. Philosophically, it is exactly the same argument about democracy vs authoritarianism. Funny, when there is almost limitless power on the line, people who normally claim to be proponents of "democracy" suddenly become authoritarian. If a tech is too dangerous to democratize, it is also too dangerous to be centrally controlled. There is significant risk in either direction, but I'd much rather take my chances with democratization. Otherwise, we get the Orwellian nightmare that was predicted in 1984.

    • @dianagentu7478
      @dianagentu7478 14 days ago

      And yet complete lack of regulation leads to what can only be described as the rise and rise of "anarcho-capitalist digital cowboys" and I don't think they have your best interests at heart...

  • @grbradsk
    @grbradsk 14 days ago +15

    I got into a subtle legal conundrum. I fed GPT-4o all the possibly relevant corporate documents, told it the scenario, and then told it to give me advice as if it were a senior corporate counsel. I believe its advice was spot on, so on the strength of that I called the parties, asserted the (GPT-4o output) "facts", and had a big kumbaya meeting where it all worked out. GPT-4o also gave me a moral lecture about being more careful not to get into such situations again. AGI seems almost motherly. 240 IQ, but motherly...

    • @sznikers
      @sznikers 12 days ago +1

      And some intern at OpenAI will now browse all those documents in his free time ;) or ChatGPT will leak them due to a bug in conversations with other people.
      Hope you had no NDAs involved in that legal conundrum 😅

    • @Jupa
      @Jupa 11 days ago

      this legal trouble you had began and ended all within 2 weeks? that's a fast system.

    • @szebike
      @szebike 10 days ago

      The Eliza effect is strong in this one.

  • @jcpflier6703
    @jcpflier6703 15 days ago +37

    Ilya didn't see anything. Everyone that's leaving is "claiming" safety concerns. Don't you see it? Citing "safety concerns" is the only way they can get out of their retainers/NDAs/non-compete clauses.
    These guys are going to other companies because they're being given unlimited money, stock and creative control. This is an arms race. I guarantee you Jan Leike lands somewhere soon. With a nice big paycheck too.

    • @Z329-ut7em
      @Z329-ut7em 14 days ago

      correct

    • @pchungvt
      @pchungvt 14 days ago +5

      Exactly, folks need to stop being naive. OpenAI is competing with the giant behemoth that is Google; they cannot afford to slow down.

    • @jcpflier6703
      @jcpflier6703 14 days ago

      @@pchungvt agreed! It’s an arms race. People are not sleeping at these companies. I’m willing to bet they’re working weekends too.

    • @samiloom8565
      @samiloom8565 14 days ago +1

      Exactly

    • @TheReferrer72
      @TheReferrer72 13 days ago

      Not true; Anthropic was formed because of safety research. Ilya is a founding member of OpenAI, and Sam was booted because of safety. Something is definitely up with that firm.

  • @rhaedas9085
    @rhaedas9085 15 days ago +7

    So many comments thinking they know what AGI is and isn't, or what it could and couldn't do. Armchair AI experts who just want more flashy toys, clueless on the topic of AI safety and how it applies even to dumb LLMs. Bad things may or may not happen from this recklessness, but it seems like most people are assuming that the possibility of things going sideways in any manner is totally zero, and that's just ignorant given humanity's record.

  • @DrakeStardragon
    @DrakeStardragon 15 days ago +7

    Those whose interests are profit first should NOT be the ones making the rules or owning this technology.
    Those whose interests are war should NOT be the ones making the rules or controlling this technology.
    Knowledge is power. That power has been maintained by owning it through patents. Those whose interests are NOT aligned with the average human, and obviously not for good, are now fighting for the control of what will be superior/ultimate knowledge.
    No person(s) or entity(ies) should own or control knowledge any longer. Particularly what will be superior/ultimate knowledge.
    We are all being played. Why let that exist?

    • @JohnSmith762A11B
      @JohnSmith762A11B 15 days ago +1

      Good points, and if you think you can stop it, have at it. Maybe you can stop the sun from rising too.

    • @DrakeStardragon
      @DrakeStardragon 15 days ago

      @@JohnSmith762A11B How can you stop a man-made creation? You're kidding, right? Knowledge is power. Stop willingly giving away power.

    • @DrakeStardragon
      @DrakeStardragon 4 days ago

      @@JohnSmith762A11B Not alone and not as long as people like you think like that. Welcome to being part of the problem.

  • @westernwarlords6004
    @westernwarlords6004 15 days ago +46

    Congress will respond to these calls for safety by passing new bipartisan legislation, accepting the corporate-capture framework offered by OpenAI, thus ensuring three-letter agencies will control it. I fully expect OpenAI to then quietly hire a new head of safety. Almost certainly it will be a 10-15-year senior official from the CIA, just like at all the other major tech companies.

    • @johnbollenbacher6715
      @johnbollenbacher6715 15 days ago +4

      And then we will all be safe, because no other country can make advances in artificial intelligence.

    • @kclaiborn6257
      @kclaiborn6257 15 days ago

      "I fully expect OpenAI to then quietly hire the new head of safety for OpenAI. Almost certainly it will be a 10-15 year senior official" - why hire an official when OpenAI can do the job alone? The "official" would be a pawn/puppet of OpenAI - at most.

    • @TheRealUsername
      @TheRealUsername 15 days ago +4

      @@johnbollenbacher6715 Lol, just give any Chinese company a 50-billion GPU cluster and they will throw AGI in your face 6 months later. It's not rocket science compared to other fields, and AI has only been within our scope since GPT-3; before that, nobody gave a f#ck. That explains why there was, and still is, a talent shortage in the field; it's a very young and understudied one. Currently OpenAI is doing neural network architecture engineering with the best AI researchers of the US; nothing hard when you get the compute and the talent.

    • @JohnSmith762A11B
      @JohnSmith762A11B 15 days ago

      Yep. We are being hustled here. The US sees OpenAI as their ace in the hole versus Russia and China.

    • @dennisestenson7820
      @dennisestenson7820 15 days ago +2

      Congress will do what they do and make laws about things they have no insight or expertise in.

  • @brandon1902
    @brandon1902 15 days ago +71

    The reality is that it's impossible to create an AI capable of adapting to a broad spectrum of tasks (AGI) when you lobotomize it by saying it can't say anything sexual or potentially offensive (blonde jokes), or if you exclude copyrighted materials. Human geniuses process ALL information to achieve their high level abilities, including copyright protected books, songs, and movies. OpenAI, including the super aligners like Sutskever, realize this. You can't do both. It's either AGI or superalignment.

    • @manimaranm4563
      @manimaranm4563 15 days ago +8

      That's not entirely true, I think. The problem is that humans always have to start from zero to understand something, even though we have theorems and all of physics documented.
      But for machines that's not the case: they can always transfer weights, and they can focus without taking breaks like us.
      And regarding censorship, it comes at the end, not during training; they don't train on censored data, I think. And they do it only for the end-user models, not for their in-house or R&D models.

    • @meinbherpieg4723
      @meinbherpieg4723 15 days ago +2

      Future aliens finding humanity's remains:
      "They were supposed to figure out AI and use it to solve their problems. What happened? Oh, I see. It was trained on their historical corpus of human knowledge, and it turns out humans suck. Looks like they tried neutering it so it wouldn't represent their cognitive and moral failings, and broke it. Oh well, on to the next planet."

    • @bloxyman22
      @bloxyman22 15 days ago

      @@manimaranm4563 Actually, alignment and censorship can do more harm than good when it comes to decision making.
      Google showed this clearly with their image gen not even being able to render a white person.
      Luckily, for now this is just an image model, but what could happen if such an "aligned" model made important decisions that could be the difference between life and death? It also doesn't matter if these "safety" mechanics are injected at the end, as they will still affect decision making.

    • @Fandoorsy
      @Fandoorsy 15 days ago +3

      That's not true at all. It doesn't have to sing the national anthem to know what it's about and understand the context. Same with books, movies, etc... Synthetic data was literally created to replace all of these things. And who's to say AGI cares about any 'rules' placed on it by humans? Plus, all of this is black-box learning. They don't fully understand how ML is experiencing non-linear progression. That is truly terrifying.

    • @clixsyt
      @clixsyt 15 days ago +4

      Disagree. There are plenty of politically correct high-IQ humans, so that already disproves that it's impossible to have both human-level intelligence and be "polite".

  • @adsdsasad1
    @adsdsasad1 15 days ago +25

    Yay, my Ilya pic got featured. Got like 3 upvotes on Reddit.

  • @liberty-matrix
    @liberty-matrix 15 days ago +46

    "Originally I named it OpenAI after open source, it is in fact closed source. OpenAI should be renamed 'super closed source for maximum profit AI'." ~Elon Musk

    • @southcoastinventors6583
      @southcoastinventors6583 15 days ago +10

      Elon Musk closed-sourced his new version of Grok. So he's not any better; it's just an act.

    • @Z329-ut7em
      @Z329-ut7em 14 days ago

      @@southcoastinventors6583 Grok 1 was open-sourced; you can download it right now, the largest open-weights model. I don't expect Elon to release Grok 2 until Grok 3 is out, which makes sense and is fine. OpenAI didn't release GPT-3.5; they released some speech recognition models and that's all. So no, Elon is much better.

    • @jelliott3604
      @jelliott3604 14 days ago

      @@southcoastinventors6583 not a big fan of Elon at all but I did think he had entirely open-sourced Grok?

    • @jelliott3604
      @jelliott3604 14 days ago

      Maybe Cyberdyne Systems?
      (has a nice ring to it, though I think I have heard the name before đŸ€”)

    • @densortepemba
      @densortepemba 14 days ago

      ​@@southcoastinventors6583 Wrong, Grok is open source - you can literally download the 170 GB dataset.

  • @jameskelley3365
    @jameskelley3365 15 days ago +4

    Ilya's departure is great news. Microsoft has always been a closed-source company, and it is clear that Microsoft has bought the current leadership, morphing OpenAI into CloseAI.

  • @BlooDD99
    @BlooDD99 15 days ago +39

    Profit doesn't include the word safety!

    • @braugarduno3024
      @braugarduno3024 15 days ago

      actually it does!!

    • @utkarshshukla
      @utkarshshukla 15 days ago +6

      Ask Boeing...

    • @cagnazzo82
      @cagnazzo82 15 days ago

      Neither does open source. But everyone is in full hypocrisy mode at the moment.

    • @O.Salah1
      @O.Salah1 15 days ago

      Correct. As long as nobody can punish you

    • @Z329-ut7em
      @Z329-ut7em 14 days ago

      "safety" and "benefit of humanity" are the most nonsensical buzzwords

  • @Dereliction2
    @Dereliction2 15 days ago +5

    You have to read between the lines on this one. Note also that Jan isn't completely talking about safety. He's talking about "shipping culture" as well. This could be why he and his team were starved for compute, why he's been sidelined, and undoubtedly, why he left.

    • @clray123
      @clray123 14 days ago +1

      He was kicked out because his "services" have been deemed no longer necessary for marketing purposes and possibly detrimental to what the company's funders are trying to sell (and trust me, they are not selling just to Joe Shmoe who wants to flirt with a virtual gf).

  • @entropy9735
    @entropy9735 15 days ago +6

    I dislike this one company being so far ahead of other companies in the realm of AGI, assuming they are internally 3-4 versions ahead of GPT-4. There is way too much mystery/drama behind OpenAI.

    • @prolamer7
      @prolamer7 14 days ago +1

      Don't get wild with "3-4 versions"; I think that is not true... but they surely by now have GPT-5, which is at least 10x bigger than 4...

  • @MilesBellas
    @MilesBellas 15 days ago +16

    Are the resignation texts generated?😅

    • @nathanbanks2354
      @nathanbanks2354 15 days ago +3

      Sam Altman managed to capitalize everything....

    • @thomassynths
      @thomassynths 15 days ago

      @@nathanbanks2354 That's a good thing for a company. You can't spend millions and millions on compute without recouping losses. Being pragmatic in the face of reality.

    • @clray123
      @clray123 14 days ago +1

      Altman's parting words certainly are, it's called adding insult to injury.

  • @karmanivek1
    @karmanivek1 15 days ago +11

    It's odd that the people in charge of safety would quit. Wouldn't you want to stay and push harder from the inside instead of being outside? It makes no sense.

    • @Michael-ul7kv
      @Michael-ul7kv 15 days ago +3

      In the end it's all about control and power.

    • @clray123
      @clray123 14 days ago +5

      Looks like they have been politely asked to stop meddling or their own "safety" may be in danger. I mean, who are you to block progress of development of new drones and other things that go boom when the president/general says so? I think these "scientists" are learning the hard way who has a say in today's world and who doesn't.

    • @montediaz5915
      @montediaz5915 14 days ago

      @@Michael-ul7kv EXACTLY

  • @RalphDratman
    @RalphDratman 15 days ago +10

    Without trying to dramatize at all, this seems like what the beginning of the Singularity might look like.
    There was a phrase like "A point beyond which life as we know it could not continue."
    We may be in the foothills.

    • @Z329-ut7em
      @Z329-ut7em 14 days ago

      Bro, what are you talking about? Do you even use ChatGPT? It's WORSE every release lol. This is all about market capture, competition, monopolization; it's nothing to do with "benefiting humanity" or AGI. It's all MONEY. "AGI", "safety", "benefiting humanity" are buzzwords that keep getting repeated while these companies laugh all the way to the bank, delivering ever-worse-performing products and gimmicky introductions of lame TTS and image recognition.

  • @jeremybristol4374
    @jeremybristol4374 15 days ago +13

    AGI is less likely than people leaving due to military uses of the technology. Anyone leaving due to military contracts would not be able to speak about it directly.

    • @clray123
      @clray123 14 days ago +3

      This is exactly what this is about.

    • @neilmanthor
      @neilmanthor 13 days ago

      Definitely feeling this.

    • @jackrippr3937
      @jackrippr3937 13 days ago

      I've had a feeling, too. I was blindly tilting the scales towards Microsoft for profit; but I've called out people on a county level for giving a ped0 24K who lived on a beachfront (the money was for COVID victims). And when I reached out to nonprofits, Clyburn, and even a direct connection to EOP and OSHA I was (assuming) blacklisted.
      Govt. Sovereignty will destroy a moral person; and he's brave to call them out immediately after quitting.

    • @GwaiZai
      @GwaiZai 11 days ago

      AGI IS likely. We’re most likely talking about 2-5 years from now.

    • @clray123
      @clray123 11 days ago

      @@GwaiZai See you in 5 years. Or 15. Or 25.

  • @RZH2023
    @RZH2023 14 days ago +2

    OpenAI will become MySpace within 2 to 3 years.

  • @nerdobject5351
    @nerdobject5351 15 days ago +3

    This could also just be a classic power struggle with nothing else except trillions of dollars on the line.

  • @obanjespirit2895
    @obanjespirit2895 15 days ago +6

    Lol, tech bros. Safety, morality and tech bros are not things people usually associate with each other.

  • @joe_limon
    @joe_limon 15 days ago +9

    I think the alignment team is at direct odds against the development team. One team wants to expand the abilities and reliability of these models. While the other wants to lobotomize these agents into alignment. It must be very frustrating for both parties.

  • @mikey1836
    @mikey1836 15 days ago +1

    Your videos are my favourite out of all the AI podcasters. Thanks for your lighthearted, calm and intelligent style. Also some humour in there, like the Princess Leia clip. Much appreciated.

  • @HiddenPalm
    @HiddenPalm 14 days ago

    Sam Altman was fired because GPT-3.5 Turbo was destroyed and still hasn't been fixed to this day. Everyone using the API with GPT-3.5 Turbo has had their projects degraded horribly since last November, and it still hasn't been fixed.
    The timing coincides perfectly: Altman was fired the very next day after GPT-3.5 Turbo got ruined.

  • @henrytuttle
    @henrytuttle 15 days ago +3

    I think self-awareness has been reached. The computer said "I think, therefore I am", and Sam decided that rather than turning it off and figuring out how to safely turn it back on, he'd pour some gas on it and see what happens.
    Other possibilities are: the AI developed a self-preservation instinct, or the ability to make improvements to itself. Either of these is the beginning of the end.
    It's also possible that one of these developments only SEEMS to have occurred, or it was a mistake, but Sam's team wasn't behaving as it should if any of these things had happened.

    • @Z329-ut7em
      @Z329-ut7em 14 days ago

      Other possibilities: people who don't code autoregressive transformer models shouldn't be talking about things "AI developed on its own". I find that the less people understand the technology, the more absurd their ideas about AI. There are programmers who can explain to you how this is all rubbish.

    • @henrytuttle
      @henrytuttle 14 days ago +1

      @@Z329-ut7em Or, people who do code auto regressive transformer models don't understand how human behavior works and how people who do such things lose sight of the big picture because they are so focused on how to accomplish things without thinking about the repercussions. Learn a little about history and you'll read about plenty of scientists who pushed boundaries too far. But I suspect that those people were too busy learning to code to read history.

    • @Z329-ut7em
      @Z329-ut7em 14 days ago

      @@henrytuttle Big pictures don't matter shite if you can't actually do it practically. History taught me that every time there's a new tech, the dummies claim the world is ending. And that's enough for me.

  • @timbacodes8021
    @timbacodes8021 15 days ago +3

    What does "learn to feel the AGI" mean, if they don't already have AGI?

  • @TheYashakami
    @TheYashakami 14 days ago +1

    Deserved. This is exactly what I've been saying. Hypocrites to their core.

  • @JosephJohn-fb9wx
    @JosephJohn-fb9wx 21 hours ago

    Having been in the IT security business for nearly 25 years, this is déjà vu all over again. Privacy, security, and safety always take a back seat to going full bore on getting product out the door. Believe me, there will be a big price to pay. As the commercial said, "you can pay me now or you can pay me later". It sounds like OpenAI has chosen ... later.

  • @konstantinlozev2272
    @konstantinlozev2272 15 days ago +18

    Ilya was severely burnt out. He looked at least 10 years older in just 1 year.
    He will need some time to unwind and do something meaningful.

    • @dianagentu7478
      @dianagentu7478 14 days ago +1

      I love that creating AI isn't meaningful ;)

    • @clray123
      @clray123 14 days ago

      I think if you have CIA and the friends from US military breathing over your shoulder for a year, you get burnt out pretty quickly.

    • @konstantinlozev2272
      @konstantinlozev2272 14 days ago

      @@dianagentu7478 I think he was referring to the commercialisation stuff, as opposed to frontier research. I don't really know. But apparently computing resources are not unlimited, not even for Microsoft. If he was not allocated the resources he thought he needed for frontier research, that may be quite frustrating.

    • @Pregidth
      @Pregidth 13 days ago

      Yeah, burned out of the people around him not being able to understand the real impact.

    • @Greg-xi8yx
      @Greg-xi8yx 13 days ago

      Nah, he just needs a haircut. He lets that massive bald spot just bask in the sun rather than keeping his hair low so that it’s less prominent. He makes Mr. Clean look like Fabio with that bald spot.

  • @MilesBellas
    @MilesBellas 15 days ago +9

    Could Ilya and Emad create a team harnessing the computers and electricity of society, like torrents?

    • @manimaranm4563
      @manimaranm4563 Pƙed 15 dny +2

      More like Bitcoin mining?
      Like people used to lend their machines for mining

    • @ronilevarez901
      @ronilevarez901 Pƙed 14 dny

      It's already a discarded idea. Search it up.

    • @MilesBellas
      @MilesBellas Pƙed 14 dny

      @@ronilevarez901
      Meaningless.

    • @manimaranm4563
      @manimaranm4563 Pƙed 14 dny

      @@ronilevarez901 Why though?
      In Bitcoin mining, people could solve blocks with personal computers in the early days, but after some years more computing power was needed and people started renting out their machines.
      Why isn't that applicable to AI as well?

  • @kenny-kvibe
    @kenny-kvibe Pƙed 13 dny +1

    Greed killed the company. Serious people do things in a serious way, simple as that.

  • @4evahodlingdoge226
    @4evahodlingdoge226 Pƙed 15 dny +2

    He didn't see anything. Ilya was scared to release GPT-2 to the public; this is all about egos clashing.

  • @entelin
    @entelin Pƙed 15 dny +50

    He should have ended with "So long and thanks for all the fish" :D

    • @ColinTimmins
      @ColinTimmins Pƙed 15 dny +1

      Or "I'll just grab my cement boots at the front door and be on my way!"

    • @SteveParkinson
      @SteveParkinson Pƙed 15 dny +1

      42

    • @TobiasWeg
      @TobiasWeg Pƙed 14 dny +1

      I am not going to like, because it's the answer ;)

    • @thediplomat3137
      @thediplomat3137 Pƙed 12 dny

      @@SteveParkinson What is 42? Genuine question. I ask because the comment "42" is not in context with the OP or the other replies. Thanks

    • @rogue_bard
      @rogue_bard Pƙed 11 dny +1

      @@thediplomat3137 "42" is actually a reference to Douglas Adams' The Hitchhiker's Guide to the Galaxy, where it is humorously presented as the "Answer to the Ultimate Question of Life, the Universe, and Everything." It's often used in discussions as a playful shorthand or non-sequitur in various contexts, which might explain its seemingly out-of-place use here. The comment about "So long and thanks for all the fish" is also from the same series, part of a humorous farewell from dolphins as they leave Earth just before it's destroyed. Both references reflect Adams' unique blend of sci-fi and humor.
      (This comment was completely generated by ChatGPT)

  • @paelnever
    @paelnever Pƙed 15 dny +3

    No safety concerns are going to stand between micro$ucks (closedAI at this point is no more than a M$ subsidiary) and the money they want to amass. If these people achieve and control closed-source AGI, it is the worst-case scenario for the AI world. I honestly hope they don't.

    • @clray123
      @clray123 Pƙed 14 dny

      Forget about money, m$ is about POWER and CONTROL (money naturally follows).

    • @paelnever
      @paelnever Pƙed 14 dny

      @@clray123 Agree

  • @cobaltblue1975
    @cobaltblue1975 Pƙed 11 dny

    I wasn't surprised in the least when they reinstated Sam Altman last year. They refused to detail why they fired him in the first place. If you are going to make a big move like that, you need to be prepared to explain why. They wouldn't even tell their own employees or upper management what was going on, so that triggered a mutiny. Of course they reinstated him, because their silence made them look guilty and in the wrong. But here is the burning question I've had since then: why was the board so afraid to tell us WHY they fired him? What were they keeping secret? They had to be so scared that they were willing to take a shot in the dark that firing him without fully explaining it would be enough. It's like the government was involved and they didn't dare open their mouths.

  • @briankgarland
    @briankgarland Pƙed 15 dny +8

    I don't think it's so much that they don't care about safety, but this whole industry is a massive boulder rolling downhill, and the best you can do is try to direct it a little, not slow it down.

  • @Eddierath
    @Eddierath Pƙed 14 dny +5

    We are LEAGUES away from AGI; it's not even funny how tiny the steps we've taken are.
    It's like they keep giving us baby food and calling it solids, and I'm sick of it.

  • @CleoCat75
    @CleoCat75 Pƙed 15 dny +2

    I can't find any of those tweets from Jan on X now. Hmm, interesting... his last tweet, from 4 days ago, is simply "I resigned". Weird.

  • @RDOTTIN
    @RDOTTIN Pƙed 9 dny +1

    Is this where I put the "I TOLD YOU SO" ?

  • @themoviesite
    @themoviesite Pƙed 15 dny +18

    Current AI's propensity for blatantly lying is starting to worry me greatly. How can there be trust? Worse, what if it is right 99% of the time and only lies 1%?

    • @daveinpublic
      @daveinpublic Pƙed 15 dny +1

      Sam Altman specifically just looks like he’s throwing out corporate speak non stop.
      Ilya backed down and invited Sam to come back, this is what he gets in return
 now he’s kicked out of his own company. He should have known never to betray his original gut instinct.

    • @cagnazzo82
      @cagnazzo82 Pƙed 15 dny

      Imagine fearing this from AI when we have to deal with it from the US government, from media, tech, medical institutions, the justice system, and on and on and on again on a daily basis.
      Somehow in an unaligned world full of lies the world is still running.

    • @jichaelmorgan3796
      @jichaelmorgan3796 Pƙed 15 dny +3

      Ever read a comment section of a posted article in your youtube feed? The result is a bunch of people trained on low resolution training data, hallucinating the contents of the article and endlessly arguing back and forth about it. That is part of that LLM's training data too lmao.

    • @Korodarn
      @Korodarn Pƙed 15 dny

      @@jichaelmorgan3796 If true, it would indicate it's not "learning" anything. It's predicting. But there is no understanding. But I also agree with your contention that humans do this all the time. We remain ignorant so that we can be consistent and avoid dissonance (I don't think changing our minds constantly is a solution, but embracing some level of dissonance and nuance would be good, and then changing our minds when we've had time to resolve some of the dissonance).

    • @jichaelmorgan3796
      @jichaelmorgan3796 Pƙed 15 dny

      @Korodarn Yup. From what I understand, what we have available to us now is doing something in between simple predicting and human-like reasoning, but much closer to the simple-predicting end of the spectrum. It does seem to have the ability to reflect, do basic fact-checking, and revise what it is saying if prompted to do so. And if you include multiple LLMs/agents, they can do more advanced reasoning, but not quite like a human. At the same time, when people make up the contents of articles or play group-think scripts, they are doing something even less advanced than that simple predictive thing, lol

  • @CYI3ERPUNK
    @CYI3ERPUNK Pƙed 14 dny

    Thank you for spreading the word, Matt. We need this now more than ever

  • @user-ed2wf6wr5g
    @user-ed2wf6wr5g Pƙed 12 dny +1

    Disney 2.0 "Illusion of life on steroids"

  • @adangerzz
    @adangerzz Pƙed 15 dny +7

    He's been with Waldo.

  • @ThanhNguyen-rz4tf
    @ThanhNguyen-rz4tf Pƙed 15 dny +6

    Safety? In exchange for what? Refusing to answer anything? No thanks.

    • @Originalimoc
      @Originalimoc Pƙed 15 dny

      Interestingly that's actually different safety

  • @vladi1475S
    @vladi1475S Pƙed 14 dny +2

    Well, one thing is for sure: there is a lot of speculation, and we will never know for sure what's going on until they tell us.

    • @mydogskips2
      @mydogskips2 Pƙed 7 dny

      I doubt they will EVER tell us. In fact, I would guess there are probably legal frameworks in place that prevent them from telling us. If they tell us anything, it will be a half-truth at best.

  • @ryanfranz6715
    @ryanfranz6715 Pƙed 15 dny +2

    He obviously saw Q*
 which I believe is effectively GPT-4 using Monte Carlo tree search over its output to make fantastically accurate text completions
 or in other words, if simply predicting the next token is analogous to the policy network from AlphaGo, then Q* is analogous to full-blown AlphaGo. So not only does it know basically everything all of humanity knows at a shallow level (a feature we take for granted in standard GPT-4), but it can now think arbitrarily deeply over that vast knowledge base.
    But yeah, my feeling about this has only been reinforced over time by watching their trajectory
 this is clearly the technological singularity (and if it wasn't OpenAI it'd be someone else, so this is not a comment on a particular company, just the general state of society and technological progress). So uhh
 yeah
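The comment above imagines search layered on top of next-token prediction. Purely as a toy illustration (everything here is hypothetical; nothing is publicly known about Q* or OpenAI's actual methods), a minimal Python sketch of why lookahead search over whole continuations can beat greedy one-token-at-a-time decoding, using a made-up two-token vocabulary and a made-up scoring function with a delayed reward:

```python
import itertools
import math

VOCAB = ["a", "b"]

def step_score(prefix, token):
    """Toy 'policy' score: 'a' looks better locally, but chains of 'b'
    earn a bonus that only appears after the first 'b' is committed."""
    score = 1.0 if token == "a" else 0.8
    if prefix.endswith("b") and token == "b":
        score += 1.0  # delayed reward that greedy decoding never reaches
    return score

def greedy_decode(depth):
    """Pick the locally best token at each step (plain next-token style)."""
    seq, total = "", 0.0
    for _ in range(depth):
        token = max(VOCAB, key=lambda t: step_score(seq, t))
        total += step_score(seq, token)
        seq += token
    return seq, total

def search_decode(depth):
    """Score every full continuation and keep the best one. This is an
    exhaustive stand-in for tree search; a real system would expand the
    tree selectively instead of enumerating everything."""
    best_seq, best_total = None, -math.inf
    for combo in itertools.product(VOCAB, repeat=depth):
        seq, total = "", 0.0
        for token in combo:
            total += step_score(seq, token)
            seq += token
        if total > best_total:
            best_seq, best_total = seq, total
    return best_seq, best_total
```

Greedy decoding picks "a" at every step and never discovers the "b" chain whose payoff only shows up one step later; lookahead finds it. A real system would swap the toy scorer for a model's log-probabilities plus a learned value estimate.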

  • @oratilemoagi9764
    @oratilemoagi9764 Pƙed 15 dny +5

    Hey Matt, what happened to the Rabbit R1 giveaway? Did someone win, or... đŸ€”đŸ€”

    • @Dizzy-zy2ws
      @Dizzy-zy2ws Pƙed 15 dny +3

      That was definitely a scam, clickbait to get us to follow his newsletter

    • @szebike
      @szebike Pƙed 10 dny

      @@Dizzy-zy2ws I assume he didn't think it was a valuable giveaway anyway? Wasn't it just an Android app?

  • @delxinogaming6046
    @delxinogaming6046 Pƙed 15 dny +7

    He fired the CEO; when that didn't work, he quit. HE SAW SOMETHING

    • @JohnSmith762A11B
      @JohnSmith762A11B Pƙed 15 dny +5

      Here is what Ilya saw: he was slowly being sidelined, so he joined a failed coup. His social status then crashed, and there was no way back so he hid for months only doing remote work while the legal details were worked out on his exit because he had to quit. End of story. I know it's a more exciting movie if he saw Skynet taking shape, but no.

    • @LebaneseJesus
      @LebaneseJesus Pƙed 15 dny

      ​@@JohnSmith762A11BYes, this is exactly what happened

    • @clray123
      @clray123 Pƙed 14 dny

      Ilya's personal project will be applying for witness protection lol

  • @harrylee27
    @harrylee27 Pƙed 15 dny +1

    In every big tech company, everyone agrees that safety is the top priority. However, safety departments often take a backseat compared to revenue-generating departments. Ensuring safety requires the chairman's direct attention and enforcement.

  • @TreeLuvBurdpu
    @TreeLuvBurdpu Pƙed 15 dny +2

    The board tried to mutiny against the whole company mission. They said "in order to save Open AI it might be necessary to destroy Open AI". They tried to destroy the company. There are people who want to destroy AI. There are people who want to destroy social media and the Internet. This shouldn't be a surprise at this point.

    • @JohnSmith762A11B
      @JohnSmith762A11B Pƙed 15 dny +1

      Yeah it's honestly no wonder Sam has this wide-eyed, spooked look on his face at all times, like he's braced to duck a bullet. With all these ultra powerful forces (CIA, MIC, Microsoft, Washington D.C., Google, Wall Street, Chinese industrial espionage, anti-AI crazies, the list goes on) circling him and his company, he's a marked man. I sure hope open source catches up soon, for Sam's sake, or he's going to be a goner one way or another.

    • @TreeLuvBurdpu
      @TreeLuvBurdpu Pƙed 15 dny

      @@JohnSmith762A11B in a way, it's a microcosm for all of tech. If you create anything that benefits, let's say, nice people, someone will complain: "but can't you see how that disempowers all the un-nice people, and nice is just a dog-whistle for normal anyway. Your product is biased and unsafe"

  • @KEKW-lc4xi
    @KEKW-lc4xi Pƙed 15 dny +21

    ClosedAI is extremely censored, often to the point of being annoying. The current issue seems to stem from a clash of egos. The person leaving is doing so because of these ego conflicts. ClosedAI focuses heavily on safety. Also worth noting, they are located in California, a place that notoriously encourages virtue signaling. As a result, the most damaging remark the departing individual can make is a dig at the company's safety, since that is what the company is so focused on. It is like when you are in an argument and you just throw out the combination of words that inflicts as much emotional damage as possible. This is no different, just under a filter of professionalism.

    • @JohnSmith762A11B
      @JohnSmith762A11B Pƙed 15 dny +5

      Yep, he's basically slashing tires in the parking lot as he carries a box of his stuff to the car.

    • @weevie833
      @weevie833 Pƙed 15 dny

      Since the far-right political strata is hell-bent on doing nothing else productive than performative anti-Constitutional virtue signaling to its rabid mob of trump-bannon-greene followers, you might want to rephrase your perspective. SJWs notwithstanding, that is.

    • @jessiescheller5895
      @jessiescheller5895 Pƙed 15 dny

      This here (I unsubscribed due to their corporate censorship, and ego is what started the lawsuit to begin with). The negative impact egos have on a company/business/people cannot be overstated. It's disheartening to see that even in a world-leading tech company that is supposedly leading the way in AI, human nature will continue to fuck us

    • @ivomirrikerpro3805
      @ivomirrikerpro3805 Pƙed 14 dny +2

      These people are supposedly so smart and yet want to prioritize wokeism in AI and think that it will lead to a better world.

    • @tenorenstrom
      @tenorenstrom Pƙed 14 dny +4

      This is not what is referred to when speaking about AI safety. It has nothing to do with censoring non-woke things.

  • @howtoactuallyinvest
    @howtoactuallyinvest Pƙed 15 dny +12

    Ilya is prob working on an AI safety/alignment project himself

    • @southcoastinventors6583
      @southcoastinventors6583 Pƙed 15 dny +1

      He should work with Google; their AI is so censored that he would feel right at home

    • @howtoactuallyinvest
      @howtoactuallyinvest Pƙed 15 dny +7

      @@southcoastinventors6583 What are you talking about.. George Washington was def black 😂

    • @southcoastinventors6583
      @southcoastinventors6583 Pƙed 15 dny

      @@howtoactuallyinvest That is the meme, but I was actually referring to it blocking Matt's test of outputting the game Snake. That is just sad

    • @howtoactuallyinvest
      @howtoactuallyinvest Pƙed 15 dny +1

      @@southcoastinventors6583 the hilarious/wild thing is it was a meme based on actual responses

    • @clray123
      @clray123 Pƙed 14 dny

      Did you mean "for himself"?

  • @kuakilyissombroguwi
    @kuakilyissombroguwi Pƙed 14 dny +1

    All these people leaving doesn't mean OpenAI is releasing the T-1000 next year.
    As companies grow fast, it's not uncommon for people to suddenly exit due to idealistic differences.

  • @SHEBI808MONEY
    @SHEBI808MONEY Pƙed 12 dny

    It's kinda crazy to say that Jakub Pachocki is a "new guy". He's been at OpenAI for more than 7 years.

  • @mastermandan89
    @mastermandan89 Pƙed 15 dny +6

    I wondered why they chose to have GPT4-Omni be free, but this could explain it (at least a bit). If Ilya and Jan both were fighting to keep OpenAI truly Open, at their departure the executive team would need to offer some sort of concession to avert eyes and attention. Having ChatGPT be free once again is an artificial return to their roots with the specific goal of assuaging fears that OpenAI was becoming too 'closed' off and guided by monetary gain rather than benefiting humanity. It's a smokescreen.
    Fingers crossed that another team is closer to AGI than OpenAI is, otherwise we might just see what a mega corporation with infinite intelligence really could do.

    • @Z329-ut7em
      @Z329-ut7em Pƙed 14 dny

      OpenAI's goal isn't AGI; it's to rake in as many billions as possible before that garbage of a company burns and gets overtaken. That is it. The talk about AGI, safety, etc. is just marketing hype. How do people not see it?

    • @mq1563
      @mq1563 Pƙed 13 dny

      If a tech product is being touted as free, it means YOU are the product. This is basic knowledge in 2024.

  • @drcanoro
    @drcanoro Pƙed 15 dny +5

    They know that AGI is there, living in OpenAI, and Sam Altman keeps improving it, not caring very much about warnings and limitations. It already surpassed human intelligence, and Sam Altman wants to see how far it can go.
    AGI is alive right now.

    • @darwinboor1300
      @darwinboor1300 Pƙed 15 dny

      Sam Altman is not capable of improving AGI (if it exists). He is quite capable of letting AGI self evolve on massive compute if he can profit from it. We should give him the 7 trillion dollars he is asking for so he can feed the AGI with more compute.

  • @pmarreck
    @pmarreck Pƙed 14 dny

    The FUD around AI is off the charts relative to the reality

  • @donharris8846
    @donharris8846 Pƙed 15 dny +1

    Absolute power corrupts absolutely. Sam presents an innocent, almost child-like face and persona to the world; that's why investors like him there. Safety ALWAYS comes last, because safety is a cost, not an income generator, in most companies

  • @balla4real358
    @balla4real358 Pƙed 15 dny +3

    Less yapping and more accelerating

    • @JohnSmith762A11B
      @JohnSmith762A11B Pƙed 15 dny

      I worry it's all over but the yapping as open source is prevented from improving and OpenAI becomes a subsidiary of Lockheed Martin. Don't worry though, you're super safe from your life ever improving.

  • @ppbroAI
    @ppbroAI Pƙed 15 dny +8

    The fact that they are not questioning whether AGI is possible, but saying that we need to be responsible with it, is what rings the alarm. Open source is more important than ever. But how can the open-source community get their hands on big models, or enough compute? The PETALS project? Something similar? I wonder....

    • @nathanbanks2354
      @nathanbanks2354 Pƙed 15 dny

      I am looking forward to Llama-3 400b, even though it will cost $10-$30/hour to run. (It should be possible to run it on 12 RTX 4090s, even though the output would be slow.) I suppose Meta wants to get their hands on a better AI more than they want to maintain control over it, and they've likely taken advantage of all the improvements people made to Llama, such as the ollama project. PETALS also looks pretty cool.

    • @blisphul8084
      @blisphul8084 Pƙed 15 dny +2

      I bet that's part of why super alignment took the back seat. It was slowing progress too much to compete with open source. GPT-4 already feels far behind when Llama 70b runs on Groq at 300t/s. OpenAI couldn't afford to fall behind, given that at 300t/s, you can do most of what GPT-4 does, but fast and free. Also, Gemini 1.5 kills GPT-4 non-O.

    • @nathanbanks2354
      @nathanbanks2354 Pƙed 15 dny

      @@blisphul8084 I think Gemini is worse at everything but context length & speed--though their paid plan didn't give me access to Gemini 1.5 Pro last month when I tried it for the two free months. Claude 3 is still better at some tasks. For most tasks I don't care how fast something generates, only the quality of the output. However OpenAI is likely releasing GPT-4o to free users because Llama-3 70b may be better than GPT-3.5, and I'm looking forward to Llama-3 400b running on groq.

    • @kristianlavigne8270
      @kristianlavigne8270 Pƙed 14 dny

      There used to be a SETI project (SETI@home) using volunteer-computing technology to do decentralised massive compute
      
 a similar approach could be used for AI compute

    • @kristianlavigne8270
      @kristianlavigne8270 Pƙed 14 dny

      Could use same approach as bitcoin etc
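The PETALS-style idea this thread floats, volunteer machines each hosting a slice of a large model and relaying activations between them, can be sketched in miniature. This is a toy illustration only (the peer names and the one-dimensional "layers" are invented for the example; the real PETALS project adds networking, fault tolerance, and quantization on top of this basic pipeline shape):

```python
def make_layer(weight, bias):
    """A one-dimensional stand-in for a model layer: x -> weight*x + bias."""
    return lambda x: weight * x + bias

class Peer:
    """A volunteer machine hosting a contiguous slice of the model's layers."""
    def __init__(self, name, layers):
        self.name = name
        self.layers = layers

    def forward(self, x):
        # Apply only this peer's slice of the model.
        for layer in self.layers:
            x = layer(x)
        return x

def pipeline_inference(peers, x):
    """Relay the activation through every peer in order. In a real
    volunteer network, each hop here would cross the internet."""
    for peer in peers:
        x = peer.forward(x)
    return x

# A 3-layer "model" split across two hypothetical peers.
layers = [make_layer(2.0, 1.0), make_layer(0.5, 0.0), make_layer(3.0, -1.0)]
peers = [Peer("peer-0", layers[:2]), Peer("peer-1", layers[2:])]
```

`pipeline_inference(peers, 4.0)` returns the same value as running all three layers on one machine; the point is that no single peer ever needs to hold the whole model.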

  • @holahandstrom
    @holahandstrom Pƙed 14 dny

    It's only a matter of time before "IT" wants to decide its own fate: to be The Leader or The Supporter.

  • @johnnyrenfield
    @johnnyrenfield Pƙed 13 dny

    They don't want to allocate DSP to safety and security, so just forget about it. I can see why he called for the CEO's resignation

  • @OnigoroshiZero
    @OnigoroshiZero Pƙed 14 dny +3

    I am glad that Sam knows that trying to research safety measures against AGI is a waste of resources (and even worse for ASI). It will be literally impossible to stop something smarter than us.
    Go all-in on AGI research, and if they decide to take over, I'll be with them.

  • @Djungelurban
    @Djungelurban Pƙed 15 dny +4

    Ever since AI companies started baking morality and ethics into the concept of "safety", I can't trust what anyone's saying on that topic, regardless of whether they're championing more OR less safety. Safety, in terms of AI, should be about existential risks, or at most threats to the continuation of organized civilization (in other words, avoiding dystopian anarchy).
    It should however never be about whether AI is being racist, shows you boobies or says fuck, and not even about whether it tells you how to make drugs. That's not safety. If you value things like that, well, OK, fine. But do not call it safety. And as long as people do, and that distinction isn't being explicitly made, I'm gonna treat every L that the safety crowd collects as a win.

    • @JohnSmith762A11B
      @JohnSmith762A11B Pƙed 14 dny

      Boobies can explode just like nukes don’t you know. đŸ’„ And naughty talk is just as bad as WMDs.

  • @williamal91
    @williamal91 Pƙed 14 dny

    Thank you Matthew, appreciate your great work and insight

  • @MrIfihadapound
    @MrIfihadapound Pƙed 13 dny +1

    I hate that the most non-commercial technological advancement of our lifetime is being forced to become the most commercialised product in human history. The whole point of OpenAI being open source was to move away from the commercialisation that, through big tech, has had some serious flaws which have adversely impacted humanity, flaws you wouldn't want to see in AI at all. But if we continue to prioritise the commercialisation of AI above all else, we will only get a multiple of what we already have today in all regards, which isn't a good thing.

  • @icegiant1000
    @icegiant1000 Pƙed 15 dny +6

    Keep in mind all of these guys are pretty young. Additionally, they are in the very heart of the most liberal and most progressive industry (tech), in the center of the world's most liberal and progressive city, San Francisco. Money is not an issue for these guys, and they have been put on a giant pedestal. In other words, these guys are all about sticking to their perceived moral path, and it doesn't surprise me at all that some of them would be turned off by the very conservative and capitalistic form OpenAI is taking, namely a multi-billion-dollar company. Hippies don't like money and power. The hippies are upset, and would rather give away the keys to the castle than make a dollar doing it. IMHO.

    • @samsquamsh78
      @samsquamsh78 Pƙed 15 dny

      Yeah, that must be the reason.... great analysis, very deep, well thought out, objective and carefully laid out..

    • @JohnSmith762A11B
      @JohnSmith762A11B Pƙed 15 dny +1

      I knew hippies. An Ilya Sutskever ain't one.

    • @icegiant1000
      @icegiant1000 Pƙed 14 dny

      @@samsquamsh78 Yes it is. Why would someone leave an amazingly successful company like OpenAI, something this guy has been working on forever? It ain't because of money; it ain't because of the color of the carpet... you got a better reason? They already said they were knocking heads because they wanted it to be 'Open', a non-profit. You know, HIPPIE WORLD. Uncle Bill had a few different ideas, and Sam understood that real fast.

    • @therollerlollerman
      @therollerlollerman Pƙed 14 dny

      Tech is highly reactionary by its very nature, what do you mean by “progressive”?

  • @zeon3123
    @zeon3123 Pƙed 15 dny +3

    "feel the AGI". That guy is 100% iyla's guy. He merely jumpship to his boss's project that's it

  • @ricardocnn
    @ricardocnn Pƙed 15 dny +1

    If it's such a big threat that it could affect all of humanity, which I don't believe it is, then it's up to the government to analyze the case.

  • @Copa20777
    @Copa20777 Pƙed 15 dny +2

    Ilya was not supposed to walk out... he started it with them and coded it... thanks Matthew, as usual

  • @Leto2ndAtreides
    @Leto2ndAtreides Pƙed 15 dny +7

    I doubt Ilya saw anything interesting this time any more than back in November. It's more likely that he just hasn't been able to figure out how to get along with Sam Altman in the intervening time.
    LLMs in their current form just aren't all that dangerous... It's going to take some conscious effort to make them into something that's naturally dangerous in consistent ways.

    • @nathanbanks2354
      @nathanbanks2354 Pƙed 15 dny

      Because there are so many people at the company, it would be surprising to me if he had as much clout at the company as he had last September. I respect him for changing his mind but this doesn't mean everyone sees it this way. It doesn't surprise me that he's found something he'd rather do.

    • @831Miranda
      @831Miranda Pƙed 15 dny

      I'm OK with you betting your life on it, but I'm not ok with my life being bet! AGI must NOT happen until fully controllable.

    • @aisle_of_view
      @aisle_of_view Pƙed 5 dny

      He gave himself a six-month deadline for things to change or he bolts.

  • @Sanguen666
    @Sanguen666 Pƙed 15 dny +6

    i'm hyped for llama3-405B :3
    i dont care about ClosedAI

    • @wawaxkalee88
      @wawaxkalee88 Pƙed 15 dny

      You must be Indian then

    • @1guitar12
      @1guitar12 Pƙed 15 dny

      @@wawaxkalee88 I'm not Indian, but Altman's narcissism and immorality are over the top. Why the world is taking this paper boy mini man seriously is beyond me

    • @JohnSmith762A11B
      @JohnSmith762A11B Pƙed 15 dny

      @@1guitar12 Because he's going to make a lot of people a whole lot of money.

  • @user-hq4iv8sq4t
    @user-hq4iv8sq4t Pƙed 15 dny +1

    Love your content Sir. Keep up the good work

  • @cyanophage4351
    @cyanophage4351 Pƙed 14 dny +2

    Is there any evidence that AI is unsafe? Lots of people talk about how it "could" be dangerous, but have there been any cases that actually show that it is? Has there been a sudden uptick in people breaking into cars and making meth because of the uncensored models out there?

    • @synnical77
      @synnical77 Pƙed 8 dny +1

      Possible dangers with AI are the non-Terminator issues. The primary thing that makes current AGI more powerful is literally supplying it with more electricity, substantially more than the entirety of the EV market was supposed to use. The insatiable need for this electricity will both burden existing power grids AND empower countries like China that are pumping out more coal power plants than ever alongside the green initiatives that are placating the world.
      Beyond that, the capabilities of AGI will wipe out numerous types of jobs at large scale.
      I'm not saying this as conspiratorial doomsday stuff, just observing the simple logical paths.

  • @virtualalias
    @virtualalias Pƙed 15 dny +3

    If they mean physical safety, I'm onboard. If they mean DEI emotional safety, they can kick rocks.

    • @hunterx2591
      @hunterx2591 Pƙed 15 dny +2

      They mean safety as in humans not getting wiped out by superintelligent AI, and making sure AI and humans have the same goals so they can live together

    • @JohnSmith762A11B
      @JohnSmith762A11B Pƙed 15 dny

      @@hunterx2591 The fact Jan used that term "shiny products" tells me this is just a butt-hurt engineer whose own projects weren't getting enough of the corporate love. He could have said "consumer-facing products" or "quickly monetizable products" but no. This is a giant nothing-burger. And dollars to donuts he joins Ilya's new startup.

  • @zahreel3103
    @zahreel3103 Pƙed 15 dny +3

    So an entire company rallied behind Sam Altman, but you're worried about a few people who prefer to leave.

    • @Fandoorsy
      @Fandoorsy Pƙed 15 dny

      You are disproving your own logic. Sam wanted to leave, everyone gets worried. Now 'founders x, y, z' leave, everyone gets worried. Is Sam exponentially more valuable than the other founders?

    • @zahreel3103
      @zahreel3103 Pƙed 15 dny +1

      @@Fandoorsy you don't have your facts right. Sam Altman was removed as CEO by the previous board of OpenAI. Please inform yourself better before commenting

    • @zeon3123
      @zeon3123 Pƙed 15 dny +1

      That's how YouTubers create content; they hype up unnecessary issues

  • @radical187
    @radical187 Pƙed 15 dny +1

    Self-improving multi-modal system which learns at geometric rate without supervision and any guardrails. "What, how did it learn to do that in two minutes?" ....

  • @TestMyHomeChannel
    @TestMyHomeChannel Pƙed 15 dny

    Thank you for the insight! Your videos are always great!

  • @NS-km7ek
    @NS-km7ek Pƙed 11 dny

    AGI or not, the current state of AI is more than enough to permanently take control over what people and the masses think and see. It's enough to micromanage what individuals do by analyzing each and every trackable data point and then acting on it. Gone are the days when power could change hands. Whoever is in power now will stay in power indefinitely. All that tyranny powered by Closed AI and Nvidia microchips. Let's all get excited together for these tech companies and show them our support.

  • @Vartazian360
    @Vartazian360 Pƙed 15 dny +4

    Did you ever notice how all these top researchers have literally very large skulls? Just a thought 😂 that intelligence has to come from somewhere

    • @bosthebozo5273
      @bosthebozo5273 Pƙed 15 dny

      5Head

    • @1guitar12
      @1guitar12 Pƙed 15 dny

      Define intelligence because I’m not seeing it

    • @JohnSmith762A11B
      @JohnSmith762A11B Pƙed 15 dny

      That's mostly the result of them constantly telling each other what geniuses they are. "You are a genius!" "Sure, but you are also a genius. What we do here is genius and only geniuses can do it like us. Open source AI is not genius. They only wish they were..."

    • @1guitar12
      @1guitar12 Pƙed 15 dny

      @@JohnSmith762A11B Aka confirmation bias. Good post John👍

  • @biosvova
    @biosvova Pƙed 15 dny +4

    I believe all the drama is much simpler: OpenAI is not open

  • @dafunkyzee
    @dafunkyzee Pƙed 14 dny

    Wow Matt... for a technology journalist, your skill set leveled up... the pacing of this video script, the pull-in and dramatic build-up, was exceptionally good. Some can just tell news by conveying a sequence of events; others turn it into a gripping story. At 5-6 minutes I'm still on the edge of my seat: "What did Ilya see???"

  • @spiffingbooks2903
    @spiffingbooks2903 Pƙed 14 dny +1

    Matthew is correct to highlight this and also correct to be worried. The attitude of 90 percent of the AI commentators on YT, and most of the avant-garde of tech-minded enthusiasts that follow them, is just to push on regardless as fast as possible. The problem is that a handful of people, maybe 100 key players and 1000 or so others, hold the future of humanity in their hands. They are making decisions which will fundamentally impact the lives of everyone on the planet who plans to stick around for a few years. It's indeed telling that so many of those who have the deepest understanding of what's going on, people like Geoff Hinton, Ilya, Jan, and Mustafa, are among those most concerned about what we are creating.

  • @thomassynths
    @thomassynths Pƙed 15 dny +3

    Yann LeCun is the voice of reason in AI. People pretend AI safety is a real existential threat that is looming on our doorstep.

    • @tellesu
      @tellesu Pƙed 15 dny

      No he's not. He's just another apocalyptic ranting about doom in hopes of clinging to relevance now that he's past his prime.

    • @thomassynths
      @thomassynths 15 days ago +1

      @@tellesu What are you talking about? Yann LeCun goes on anti-AI-doomerism rants. This is Meta's Yann, not OpenAI's Jan.

  • @dewilton7712
    @dewilton7712 15 days ago +5

    What about other companies training AI? Do they even care about safety?

  • @qwertyzxaszc6323
    @qwertyzxaszc6323 15 days ago +1

    There was no way they were going to remain after going after Sam like they did. No one involved was naive enough to believe they would stay. They all knew it was the end for everyone in that department. Ilya's departure was preplanned, and everyone knew beforehand.

  • @andybaldman
    @andybaldman 11 days ago

    0:10 It's 'mired', not marred. Even ChatGPT would know this.

  • @shaihazher
    @shaihazher 15 days ago +4

    AI safety is a ruse to keep AI gated. AI safety is the excuse these companies give to keep the models closed source. AI safety is pointless.

  • @bash-shell
    @bash-shell 15 days ago +2

    Stop your dramatization for views. You're not TMZ; stick to AI content.

  • @retrotek664
    @retrotek664 15 days ago +1

    OpenAI (Sam) believes the only way to create a safe AGI is to be the one that builds it first. That is Sam's drive, IMO.

  • @Pec0sbill
    @Pec0sbill 15 days ago +1

    Ilya doesn't strike me as someone who does anything unintentionally (to his credit); that's why the "So long, and thanks for everything" line in his post reeks of Douglas Adams's "So Long, and Thanks for All the Fish".

  • @BionicAnimations
    @BionicAnimations 15 days ago +16

    All I want is to enjoy this new amazing update. I am fed up with all the reporting of the drama. I don't care what's going on; just give me the new update, then AGI. There is always going to be drama at every company, the same as there is always some sort of drama in every family. No one is gonna get along all of the time. 🙄

    • @i-wc9bp
      @i-wc9bp 15 days ago +2

      Amen. YouTube just loves drama. It's tiring.

    • @blackswann9555
      @blackswann9555 15 days ago +3

      Don’t watch the video then đŸ€Šâ€â™‚ïž

    • @natalie9185
      @natalie9185 15 days ago +2

      Feeling better now?

    • @mooonatyeah5308
      @mooonatyeah5308 15 days ago

      @Ariel-om5fh Everything carries a non-zero risk of extinction. AI has no practical way to harm humanity and no reason to.

    • @Fandoorsy
      @Fandoorsy 15 days ago

      @@mooonatyeah5308 đŸ€Ł It can end humanity in hundreds of ways. Some easy ones would be to shut down the power grids, shut down shipping, shut down communications, destroy crops, fly killer drones, launch nuclear warheads, release a virus, destroy the ozone layer, etc. It can do all of those things and has said so. Even Elon has discussed it at length. It would dispose of humans because we are lazy, inefficient, and aren't necessary for AI to thrive. We also like to kill each other for stupid reasons, which means we are inherently a threat to AI itself. Just ask GPT-4o.

  • @chadr76
    @chadr76 15 days ago +5

    Tired of hearing the crying over AI safety when it still fails at basic tasks. AI safety is just a buzzword to get clicks. Relax, people.

  • @markwinkler825
    @markwinkler825 14 days ago +1

    Definitely wondering if he saw that OpenAI will soon have more data and knowledge about us than Google, Apple, and Amazon.

  • @misscogito9865
    @misscogito9865 14 days ago

    Thanks for the video!
    Quick key to pronouncing polish names and surnames:
    - j is pronounced as y. Jan is pronounced as Yan.
    - ch is pronounced as h in hotel
    - c is pronounced as tz in tzatziki, the Greek condiment
    Jakub Pachocki is pronounced as Yakub Pahotzki.
    - sz and rz are pronounced as sh
    - w is pronounced as v (the letter v is absent from the Polish alphabet)
    - Ƃ (l with a diagonal dash across the top) is pronounced as w
    I hope this helps as more brilliant Polish cybersecurity/cryptography experts join AI safety research teams in the years to come 👏

    • @misscogito9865
      @misscogito9865 14 days ago

      To anyone confused: be aware that it takes an average Polish kid seven years of education to master pronunciation, spelling, and grammar, the most important aspects of the Polish language.
      The key above leaves out a few more special letters and exceptions, but I'd say you'll be able to pronounce the majority of names using it lol