OpenAI Wants to TRACK GPUs?! They Went Too Far With This…

  • Published 10 Jun 2024
  • OpenAI published its AI security plan, which is highly concerning. Their plan includes "cryptographically signing GPUs" so only "authorized" parties can train and run inference. Let's review their blog post.
    Join DomoAI today for 10% off with code: DOMOAI
    DomoAI: / discord
    Join My Newsletter for Regular AI Updates 👇🏼
    www.matthewberman.com
    Need AI Consulting? 📈
    forwardfuture.ai/
    My Links 🔗
    👉🏻 Subscribe: / @matthew_berman
    👉🏻 Twitter: / matthewberman
    👉🏻 Discord: / discord
    👉🏻 Patreon: / matthewberman
    👉🏻 Instagram: / matthewberman_ai
    👉🏻 Threads: www.threads.net/@matthewberma...
    Media/Sponsorship Inquiries ✅
    bit.ly/44TC45V
    Links:
    openai.com/index/reimagining-...
    Regulatory Capture: • All-In Summit: Bill Gu...
  • Science & Technology

Comments • 1.5K

  • @matthew_berman
    @matthew_berman  Před měsícem +225

    Am I overreacting?

    • @good-gpt2-chatbot
      @good-gpt2-chatbot Před měsícem +167

      No I don’t think so

    • @frankrpennington
      @frankrpennington Před měsícem +134

      It should be illegal to regulate compute…

    • @ryanseibert1449
      @ryanseibert1449 Před měsícem +69

      Nope. I've been a plus subscriber for the longest but I'm not giving my money to greed anymore. It was worth it when they were the best/only real option but I don't trust them if they don't trust me. Open source or bust.

    • @good-gpt2-chatbot
      @good-gpt2-chatbot Před měsícem +40

      @@ryanseibert1449 agreed, already switched to Anthropic

    • @ChuckNorris-lf6vo
      @ChuckNorris-lf6vo Před měsícem +6

      You like the sound of your own voice; you are too young for this stuff. What's good for the individual is good for the group. What's bad for the group is good for the individual.

  • @brunodangelo1146
    @brunodangelo1146 Před měsícem +1012

    Open source must be protected at all costs.

    • @Michael-do2cg
      @Michael-do2cg Před měsícem +47

      It's the only thing that can stop a dystopian future.

    • @Machiavelli2pc
      @Machiavelli2pc Před měsícem

      Exactly. You’re either for Open-Source, or you’re for eventual tyranny. Whether by Companies, governments, or other entities.

    • @orangehatmusic225
      @orangehatmusic225 Před měsícem +9

      @@Michael-do2cg Not using AI as a slave will prevent that.

    • @mikgol81
      @mikgol81 Před měsícem +13

      @@orangehatmusic225 Yeah right 👍 also, let's not keep our toasters and fridges as slaves!! Freedooooom!

    • @therainman7777
      @therainman7777 Před měsícem +1

      You people are so confused 😔

  • @andrew.nicholson
    @andrew.nicholson Před měsícem +303

    They should go ahead and change their name to ClosedAI.

    • @jejxkxk
      @jejxkxk Před 26 dny +15

      That’s one of Elon’s demands in his lawsuit lol

    • @Brax1982
      @Brax1982 Před 23 dny

      @@jejxkxk Is that a literal demand? If yes, then it tells you how seriously you can take the lawsuit...

    • @jejxkxk
      @jejxkxk Před 22 dny +6

      @@Brax1982 err… Or you could read the public filings to determine how seriously you should take the lawsuit

    • @fullstackcrackerjack
      @fullstackcrackerjack Před 2 dny

      Open... for business Lol.

  • @frankrpennington
    @frankrpennington Před měsícem +525

    When a company cannot innovate anymore, it turns to regulation and lawsuits. Microsoft's playbook. Anti-competitive strategies… also illegal.

    • @themartdog
      @themartdog Před měsícem +23

      Yup, we all called this when MS invested

    • @KEKW-lc4xi
      @KEKW-lc4xi Před měsícem +34

      Classic economics. The first thing you do after dominating a free market is do everything in your power to make the market not free. That's why laws exist that try to prevent monopolies.

    • @TheExodusLost
      @TheExodusLost Před měsícem +5

      That sounds about right. Damn

    • @vertigoz
      @vertigoz Před měsícem +5

      @@KEKW-lc4xi there's no such thing as free market

    • @4l3dx
      @4l3dx Před měsícem +8

      Funny comments coming from people who use iPhone or MacBook

  • @ivideogameboss
    @ivideogameboss Před měsícem +502

    I just canceled my OpenAI subscription, never going back to them ever. This is all about greed.

    • @hobologna
      @hobologna Před měsícem +29

      same. I am actually getting brilliant copywriting from local LLMs. For writing use cases, many of these small uncensored LLMs are much better for the task. I also have a vision model now that I use to critique my photography rather than GPT-4. While it isn't as good as GPT Vision, it's free

    • @TheExodusLost
      @TheExodusLost Před měsícem +2

      Do you think you’ll reconsider if they were to drop a huge breakthrough model, like GPT5? I’m considering unsubbing although I’ll admit I use it V FREQUENTLY

    • @kylequinn1963
      @kylequinn1963 Před měsícem +36

      I cancelled mine a while ago and bought a 4090, I'd rather spend 3 grand and use my own local models than use their system.

    • @WylieWasp
      @WylieWasp Před měsícem

      12:23 Me too, and I told them why. Absolutely ridiculous.

    • @ivideogameboss
      @ivideogameboss Před měsícem +8

      @@TheExodusLost No, I'm going to invest in an Nvidia GPU and build my own machine to run Opensource models.

  • @leandrewdixon3521
    @leandrewdixon3521 Před měsícem +184

    No you are not overreacting. This is so obvious to some of us, but unfortunately not enough. The fact that anyone in 2024 thinks that the best route to human flourishing is by concentrating power in the hands of big corporations and governments reveals how many people struggle with pattern recognition.

    • @14supersonic
      @14supersonic Před měsícem +2

      For sure, most of humanity's weaknesses come from the inability to detect changes at a rapid rate. It's why AI is important, so that we can detect these patterns more effectively.

    • @kaicherry4532
      @kaicherry4532 Před měsícem

      Amen.

    • @AbeXLinkDrone
      @AbeXLinkDrone Před měsícem

      It's the indoctrination in the school systems.
      Schools are just political training grounds; almost no useful education is actually taught, just political correctness bs and dumbing down.
      The government and corporations want people dumb enough not to question, but smart enough to work and pay taxes.

    • @ribertfranhanreagen9821
      @ribertfranhanreagen9821 Před měsícem

      Nah, there is a reason we have blockchain and people put a ton of money into it. There are still movements toward decentralization, same with AI models shared on GitHub. But this is getting less attention, since it includes a lot of hassle.

    • @mightytheknight2878
      @mightytheknight2878 Před 13 dny +2

      Trust me when I say this.
      Truth is a pattern; especially with economy and war there are thousands of examples.
      The reason people act dumb is because they don't want to admit they're wrong.
      And as it is said,
      "Pride leads to destruction and happens before the downfall, arrogance before the fall"....

  • @jamlu1561
    @jamlu1561 Před měsícem +310

    When you have big companies talking about security, you know there is something else behind it. This is just mad...

    • @ieye5608
      @ieye5608 Před měsícem +17

      Their security, not yours. They want to know more (everything) about you.

    • @OldTomato44
      @OldTomato44 Před měsícem +1

      Same with when they throw around the word "safety"

    • @MrIfihadapound
      @MrIfihadapound Před 28 dny +3

      protecting their investment - when Microsoft invested 50bn in OpenAI that was when it became about trying to make ai conform into a capitalist commercialised structure which is insane considering how transformative and expansionary technology is as a whole at the moment.

    • @ReigneNation
      @ReigneNation Před 11 dny +1

      When any entity (corp govt etc) says something about security/safety, especially when it comes to MY security/safety, my instinct screams at me to immediately run far far far away from them

  • @vSouthvPawv
    @vSouthvPawv Před měsícem +220

    Open(ly authoritarian) AI

  • @szebike
    @szebike Před měsícem +265

    This is called "regulatory capture" OpenAI tries to use its vast influence by money and (faking) the potential for the current gen machiene learning algorithms.

    • @not_a_sp00k
      @not_a_sp00k Před měsícem +2

      He literally says this in the video

  • @clray123
    @clray123 Před měsícem +65

    The idea is that in the future you do not own hardware (although you still purchase it), you only "rent" it, and your use of it is continuously monitored and needs to be approved by the vendor. This is basically taking away your ownership while still making you foot the bill. This is similar to how software licensing works if the license is non-perpetual. Or how "free to play" computer games are sold today. You own nothing and are happy. Until someone flips the switch on you remotely and your game disappears or your hardware becomes obsolete/useless.

    • @BrianChip-wn5im
      @BrianChip-wn5im Před 27 dny +2

      Microsoft has been crippling Windows 10 computers with various Updates. The January 2024 Update regarding partition sizing is proof enough.

    • @kotenoklelu3471
      @kotenoklelu3471 Před 3 dny

      @@BrianChip-wn5im I stopped liking updates after Windows XP. It's all been downhill after that. Maybe I am just getting old.

  • @frankjohannessen6383
    @frankjohannessen6383 Před měsícem +39

    A world with under-regulated AI might be a chaotic place, but a world with one AI company holding a monopoly will lead to a dystopia worse than any seen before.

    • @rayzimmermin
      @rayzimmermin Před 6 dny

      And what would a world with many AIs fighting for market dominance be like?
      A one-AI system can actually be a utopia, because the one AI would control and regulate everything, making sure people get what they need so resources can be distributed fairly.
      Whereas a many-AI world could see AIs used like bots to buy up resources faster than others, thus leading to people having a monopoly on resources and starting an AI resource war.

    • @rayzimmermin
      @rayzimmermin Před 6 dny

      Also, with many AIs it is more likely one of them will go rogue and wreak havoc on the world, whereas with just one AI it would be less likely to go rogue.
      AI is a lot like governments: too many unregulated governments lead to resource wars and too few governments lead to totalitarianism, so what we need is just a few good ones, and so long as they do not all become bad at once there should always be enough good ones to keep the bad ones in check.

  • @adam_knocks
    @adam_knocks Před měsícem +51

    We just have to look at the auto industry. Decades ago, the big manufacturers begged for regulations that choked out their smaller competitors. There's a reason OpenAI is now begging for regulations…

    • @ohokcool
      @ohokcool Před 26 dny +1

      Yup, time to open up open AI

  • @rodvik
    @rodvik Před měsícem +213

    Very concerned about the censorship path Open AI is taking.

    • @Player-oz2nk
      @Player-oz2nk Před měsícem +7

      Makes sense, as they were the first AI company to align with building foundational guidelines for govt regulations.

    • @YeeLeeHaw
      @YeeLeeHaw Před měsícem +6

      @@Player-oz2nk They want to protect their uncertain cash cow, and the state want to regulate everything as much as possible as they always do. Money and Control, it's not about safety; never has been, never will be.

    • @jaimdiojtar
      @jaimdiojtar Před měsícem +2

      Yeah, their models are dogshit. The only one that is still uncensored is gpt-3.5-turbo-0301, which will shut down next month.

    • @ohokcool
      @ohokcool Před 26 dny

      Yes, I agree, this is not chill

  • @Batmancontingencyplans
    @Batmancontingencyplans Před měsícem +143

    Open AI is feeling threatened because LLaMA 3 came close to gpt-4 despite being an open-source model!!

    • @Zeroduckies
      @Zeroduckies Před měsícem +17

      Llama is amazing can't wait for llama10. Open source is the only way to go. We need transparency

    • @mafaromapiye539
      @mafaromapiye539 Před měsícem +6

      Llama 3 70B is good for Dialogue Engine

    • @PulseReviews12
      @PulseReviews12 Před měsícem +6

      It's better than some models and is even better than the new GPT-4 one if you use some prompting

    • @glenyoung1809
      @glenyoung1809 Před měsícem +11

      Wait until they finish training the Llama 3 400B model...

    • @6AxisSage
      @6AxisSage Před měsícem +5

      @@glenyoung1809 If they don't throw Zuck into jail for breaching the new anti-competitive laws they're cooking up, by making too good of a model without the corporate overlords' approval.

  • @Machiavelli2pc
    @Machiavelli2pc Před měsícem +200

    Exactly. You’re either for Open-Source, or you’re for eventual tyranny. Whether by Companies, governments, or other entities.
    Open-Source acts as a natural checks and balances. Anything else, is a recipe for eventual tyranny. Tyranny by corporations, governments, entities, etc.

    • @14supersonic
      @14supersonic Před měsícem +3

      YouTube likes to delete my comments, especially when it comes to "controversial" topics, so I'll post in parts:

    • @14supersonic
      @14supersonic Před měsícem +6

      Basically, most of these regulations aren't for malicious actors that would harm us.

    • @14supersonic
      @14supersonic Před měsícem +9

      But to make it harder for us to counter powerful forces and groups such as governments and corporations

    • @14supersonic
      @14supersonic Před měsícem

      When they inevitably decide to use the technology against us for power and control

    • @14supersonic
      @14supersonic Před měsícem

      Open Source AI isn't the issue, but it's the greedy elites that seek to

  • @homberger-it
    @homberger-it Před měsícem +50

    This changes everything! No joke, though.
    OpenAI is becoming more and more frightening.

  • @NakedSageAstrology
    @NakedSageAstrology Před měsícem +32

    The thing is, they train these models using data that did not belong to them; it is collective data of the human species that belongs to the human species alone.
    This has to be open source to work. Unfortunately for capitalism, we have to come up with an entirely new model, otherwise the human species will destroy itself.

    • @jdholbrook33
      @jdholbrook33 Před měsícem +3

      @@Eval48292 Exactly. If they want their weights protected, build their own security. I mean, a GPU has a hardware ID. Have the software check the ID; if it's not on the list, it doesn't run (see the sketch below).
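
      For what it's worth, the allowlist idea in this reply is easy to mock up. A minimal sketch, assuming the pynvml bindings for NVIDIA's management library; the allowlist entry and the policy itself are made up for illustration and are not anything OpenAI has actually proposed.

      ```python
      # Sketch: refuse to load weights unless the local GPU's UUID is on an allowlist.
      # Assumes the pynvml bindings (pip install nvidia-ml-py); the UUID below is a placeholder.
      import pynvml

      AUTHORIZED_GPU_UUIDS = {
          "GPU-00000000-0000-0000-0000-000000000000",  # placeholder entry
      }

      def gpu_is_authorized(index: int = 0) -> bool:
          pynvml.nvmlInit()
          try:
              handle = pynvml.nvmlDeviceGetHandleByIndex(index)
              uuid = pynvml.nvmlDeviceGetUUID(handle)
              if isinstance(uuid, bytes):  # older bindings return bytes
                  uuid = uuid.decode()
              return uuid in AUTHORIZED_GPU_UUIDS
          finally:
              pynvml.nvmlShutdown()

      if __name__ == "__main__":
          if not gpu_is_authorized():
              raise SystemExit("GPU not on the allowlist; refusing to load weights.")
          print("GPU authorized; loading weights...")
      ```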

    • @doltBmB
      @doltBmB Před 3 dny

      It doesn't belong to the "human species", it's millions of parts belonging to millions of individuals.

  • @jaysonp9426
    @jaysonp9426 Před měsícem +74

    Meanwhile the department of homeland security invites everyone except for Meta to their board

    • @AnthonyCook78
      @AnthonyCook78 Před měsícem +12

      And Elon

    • @isbestlizard
      @isbestlizard Před měsícem +7

      @@AnthonyCook78 Elon believes in Open*, the same as Free Speech*. The disclaimer in all cases is 'for Elon but not for you'

    • @actellimQT
      @actellimQT Před měsícem

      @@isbestlizard Glad to see you got the signal! Action speaks louder than words!! 💪💪💪

    • @vSouthvPawv
      @vSouthvPawv Před měsícem +4

      Weird take: I think the focus, with the current geopolitical environment, is AI defense systems. I'm ok with closed models in that arena. Open weights on AI guided laser defenses is a vulnerability.
      At the same time, AI safety as far as free speech and the job market needs to be open source and there needs to be open source representation for AI at a societal level.

    • @jaysonp9426
      @jaysonp9426 Před měsícem +1

      @@vSouthvPawv I can def agree with that

  • @idrisabdi1397
    @idrisabdi1397 Před měsícem +97

    They went full closed source, and the audacity of them trying to censor AI. I am going to stop using them.

    • @bigglyguy8429
      @bigglyguy8429 Před měsícem +5

      Thank you. Claude is the same price and better at both coding and writing.

    • @KEKW-lc4xi
      @KEKW-lc4xi Před měsícem

      @@bigglyguy8429 good to know. I mostly use gpt for coding I'll have to try that one out instead

    • @DefaultFlame
      @DefaultFlame Před měsícem +4

      @@bigglyguy8429 Unfortunately Claude is not available everywhere.

    • @moamber1
      @moamber1 Před měsícem +4

      @@bigglyguy8429 Claude is more expensive and, frankly, not better. The most important thing, however, is that Claude vs ChatGPT is not a choice. No more than Windows vs MacOS. Both are evil corps; their only difference is strength.

    • @bigglyguy8429
      @bigglyguy8429 Před měsícem +1

      @@moamber1 I generally agree, but as Claude is #2 I'd rather vote with my money for them than keep propping up #1. Both cost me $20 a month. I'm going to put both through a strict test today, and cancel the loser.

  • @gh0stgl1tch
    @gh0stgl1tch Před měsícem +139

    We should initiate a petition to change the name from openai to closedai

    • @ferd1775
      @ferd1775 Před měsícem +15

      You mean CIA-FBI-NSA-AI

    • @free_thinker4958
      @free_thinker4958 Před měsícem

      ​​@@ferd1775😂 exactly!

    • @SanctuaryLife
      @SanctuaryLife Před měsícem +8

      Elon Musk was right when he said to Altman “change it to closed Ai and I’ll drop the lawsuit 😂”

    • @nug700
      @nug700 Před měsícem

      I've had the exact thought over the past few months or so, except have the petition be for them to change their name to "AI Corp"

    • @ryzikx
      @ryzikx Před měsícem

      elon shill

  • @TheGratefulQuad
    @TheGratefulQuad Před měsícem +92

    You are not overreacting at all. I think it's bullshit; I think they just want to have control over it, so whoever wants to use it has to go through them or not use it at all. It's all about money. I don't care what else they say, it's all about money.

    • @Porter92
      @Porter92 Před měsícem +1

      So they aren't allowed to make money? Why? Who are you to tell someone what to do with their company? I think you should volunteer from now on at your job. Bullshit, you make money doing it! Make sure you prove to everyone you work for free, please 😂😂😂😂 USA, home of freedom, or wait, no, people like you telling others what they should do with their life lol

    • @glenyoung1809
      @glenyoung1809 Před měsícem +6

      Not just about money, it's about control: setting up a captive consumer base who are milked constantly without any regard for ethical business practices; think the Apple business model.
      This sounds a lot like the old 1990s Microsoft and Bill Gates, not something Ilya Sutskever would come up with.
      Elon Musk is going to have a field day with this news.

    • @user-po7xm5eo1g
      @user-po7xm5eo1g Před měsícem +2

      it's giving "in the future you will ơn nothing(everything membership based), and you will be happy(prescription drugs)" vibes

    • @TheGratefulQuad
      @TheGratefulQuad Před měsícem

      @@Porter92 I am John and I thank you so much for your opinion and I'm so happy you cared enough to read mine thank you have a nice day.

    • @TheGratefulQuad
      @TheGratefulQuad Před měsícem

      @@Porter92 oh and by the way I get disability because I am a quadriplegic so I volunteer at my job everyday I volunteer to make inspirational videos and I am damn good at my job no one is better at sitting in one place than me.

  • @Gl0we22
    @Gl0we22 Před měsícem +27

    "Embrace, extend, and extinguish" (EEE),[1] also known as "embrace, extend, and exterminate",[2] is a phrase that the U.S. Department of Justice found[3] was used internally by Microsoft[4] to describe its strategy for entering product categories involving widely used open standards, extending those standards with proprietary capabilities, and using the differences to strongly disadvantage its competitors. - From wikipedia

    • @glarynth
      @glarynth Před 29 dny +4

      Those who learn from history are doomed to watch as others repeat it.

  • @alexanderstreng4265
    @alexanderstreng4265 Před měsícem +12

    OpenAI has no right to call themselves Open.

    • @GregoryShtevensh
      @GregoryShtevensh Před 10 dny

      They have no right to call themselves "I"

    • @tsiefhtes
      @tsiefhtes Před 5 dny

      We could call them "A", the nemesis of "X".

  • @jeffg4686
    @jeffg4686 Před měsícem +38

    I can sort this one out
    OpenAI is funded by corporations who have direct interest in their products / services, and interest in keeping the "good stuff" out of the hands of small business (competition)
    Facebook is their own corporation - they know they're money will come from other means (advertising), and thus don't need to sell a model. they need to keep their crowd happy.

    • @darkskinnedpimp
      @darkskinnedpimp Před měsícem

      It is their not they're* .. that one really bothered me for some reason

    • @prrplex5594
      @prrplex5594 Před 9 dny

      Sam Altman the CEO of OpenAI has been a self interested grifter his entire career. It's painfully predictable that OpenAI went from nonprofit to getting in bed with big money as soon as it became a viable product.

  • @OtterFlys
    @OtterFlys Před měsícem +47

    No, not over reacting. And Thanks! For getting this out.

  • @jim02377
    @jim02377 Před měsícem +34

    This reminds me of the idea of putting DRM chips in all TV's and TPM chips being used to sign the OS that was loading on a PC. If I am remembering correctly Microsoft tried to use TPM chips to kill Linux before they decided open source wasn't the great evil.

    • @NNokia-jz6jb
      @NNokia-jz6jb Před měsícem

      They did?

    • @glenyoung1809
      @glenyoung1809 Před měsícem +6

      I remember that, way back in the late 2000s, MS is always looking for a way to maintain their monopoly and "Trusted" Platform Modules would make it such that only certain OS's would be allowed to run on consumer hardware. It was a bigger initiative than just MS but they tried to use it to guarantee a more captive market for Windows, where you didn't have a simple migration path away from their products.
      They hated and still hate open source OSes as being competition they didn't need.

    • @firstnamelastname6986
      @firstnamelastname6986 Před měsícem +7

      Yes, this is EXACTLY that same system. And I'm sorry to inform you that they were successful. It is now essentially impossible to buy a PC that does not have a TPM chip. Microsoft has mandated it as a mandatory compatibility requirement for Windows 11, and Apple has been packing an equivalent chip into everything from desktops to iPhones to the iWatch.
      Any examination of the system makes it clear that it's a dystopian DRM system, and they had to stall to let opposition fade every time it gets reported on. They way they were successful was by slow rolling deployment - it has been nearly 30 years since the first leak from Intel that they wanted to back unique identifiers into all CPUs for something like this. And they are still slow rolling deployment. Windows 10 End Of Life is set for October next year, and I'm sure they'll hold off even longer before they activate any of the uglier and more obvious uses of the Trust system.
      Information recently came out from Google that they were working on using the system to prevent adblockers - if your computer didn't have a Trust chip or if your browser allowed adblockers then your browser wouldn't be able to show the website at all. Obviously that sparked outrage - and they immediately released an announcement scaling back the project to not include that. Obviously enforcing website ads is going to come back eventually, but for now Google's project is going to focus on further deploying and entrenching the Trust system in less visible ways.
      Quite a few years ago one of the Whitehouse Cyber Security Czars gave a public speech advocating that all computers be banned from the internet unless the computer was locked down by this kind of Trusted Computing system - with the reason being to enforce operating system updates to secure the National Internet Infrastructure against viruses and Trojans. And such a system was in fact built - it's called Trusted Network Connection. But Microsoft is only pitching it for companies to secure their internal networks. They obviously aren't going to try to deploy that on everyone - at least not for several more years. But given that this has been slow rolling for about 30 years now, yes, they almost certainly are eventually going to try to ban internet service providers from allowing ANYONE on the internet unless their computer is locked down by a Trust chip. Another 5 years? 10 years? 30 years? I dunno. Probably in response to some massive internet crisis or war, and only after everyone is used to stuff like all software being sold with Trusted DRM and all websites using Trusted DRM to prevent adblockers.

  • @justinrose8661
    @justinrose8661 Před měsícem +18

    Good on you Matthew, the idea that any one company should have dominion over AI is insane

  • @atypocrat1779
    @atypocrat1779 Před měsícem +14

    "Only the big fish swim comfortably in regulatory waters."

  • @StuartJ
    @StuartJ Před měsícem +28

    I know it's early days for Xai, but they have done the right thing to open source too. You get the choice to run their model outside of X, or pay extra for X to host it, which includes realtime X data.

  • @metonoma
    @metonoma Před měsícem +28

    scary af! if your hardware won't allow you to access open source we're doomed

    • @cristianandrei5462
      @cristianandrei5462 Před měsícem +5

      Think about it: if AI leads to a performance (efficiency) boost across most economic sectors, economies that don't have access to good models will have a hard time competing with those that do. We will get to a position where a handful of companies (Nvidia, Microsoft, OpenAI, Google and maybe Meta) can decide macroeconomics; they can leave entire countries out, basically. That's even worse...

  • @marco114
    @marco114 Před měsícem +14

    make sure you are downloading and mirroring all Open-Source AI stuff.

  • @JudahCrowe-ej9yl
    @JudahCrowe-ej9yl Před měsícem +22

    Yeah, they used open source to develop their model < fact.
    Mr. Altman is now on a congressional board.
    And he isn't talking about open source anymore.
    And he is advocating the pulling of dev packages like TensorFlow and PyTorch.
    And the registration of GPUs is only scratching the surface of what this congressional board is recommending.
    So here's the elephant in the room about the congressional AI board: every person on that board has a huge pre-existing stake in tech companies.
    They are currently staging a Monopoly board
    where they own all the squares on the board.
    And that's not how ANY of the companies said this would happen. Remember the democratized AI push they all used to get what they currently have.

    • @glenyoung1809
      @glenyoung1809 Před měsícem +5

      Altman has always left the same impression on me as a used car salesman.
      He comes across as a blend of Steve Jobs/Bill Gates and has never been pro-open source; in other words, Altman is an empire builder, he's even said it himself, and he will do whatever is convenient for himself. I don't understand why anyone ever looked up to the guy; he comes across as "greasy" to me.

    • @6AxisSage
      @6AxisSage Před měsícem

      ​@@glenyoung1809He's an example of the "best" deceptive strategists humanity has to offer.

    • @6AxisSage
      @6AxisSage Před měsícem

      Where can I find some references to OpenAI advocating pulling PyTorch and TensorFlow? I had a good look but am turning up nothing.

  • @corruptedMegabit
    @corruptedMegabit Před měsícem +10

    It feels so weird being thankful to Meta of all corporations but here we are, good job Meta 🙃

  • @Duncanate
    @Duncanate Před měsícem +20

    We need to get to the point where we can run 70b models like Llama 3 at home, offline, with reasonable speeds and power consumption.

    • @msclrhd
      @msclrhd Před měsícem +3

      You can download a quantized version and split it between the CPU and GPU and get ~4 words/second on a 24GB 4090 using a 12 GPU layers split. The limiting factor is the memory to keep these models in the GPU. These GPUs can fit a 7B or 13B model entirely in GPU memory and get a very fast performance speed.
      Training the models needs even more memory.
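
      To make the setup above concrete: a minimal sketch of partial GPU offload, assuming the llama-cpp-python bindings and a locally downloaded quantized GGUF file (the file name and layer count here are illustrative, not a recommendation).

      ```python
      # Sketch: run a quantized model with only some layers offloaded to the GPU,
      # keeping the rest on the CPU, as described in the comment above.
      from llama_cpp import Llama

      llm = Llama(
          model_path="./llama-3-70b-instruct.Q4_K_M.gguf",  # hypothetical local file
          n_gpu_layers=12,  # offload ~12 layers to a 24 GB GPU; the rest stays on the CPU
          n_ctx=4096,       # context window
      )

      out = llm("Why do open model weights matter? Answer in one paragraph.", max_tokens=200)
      print(out["choices"][0]["text"])
      ```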

    • @glenyoung1809
      @glenyoung1809 Před měsícem +7

      @@msclrhd Technically VRAM isn't all that expensive, if NVidia wanted to they could simply double the amount of memory on the consumer cards on the 80 and 90 level.
      If AMD can sell cards with 16, 20 even 24GB for less on gaming GPUs why can't NVidia?
      I think it's because they want to push AI hobbyists and startups towards buying the Pro level GPUs which cost 2-3x that of a 4090.
      They don't want lower level gaming GPUs cannibalizing sales of the pro-cards.

    • @VRforAll
      @VRforAll Před měsícem

      @@glenyoung1809 Cheaper and with more ram than Nvidia's Orin dev kits sold at 2k

    • @doltBmB
      @doltBmB Před 3 dny

      Never gonna happen. Large models are just inherently computationally expensive and wasteful. Billions of kilowatt hours have already been thrown at a glorified toy with little real world utility. We need to cut our losses and stop the madness.

  • @OnigoroshiZero
    @OnigoroshiZero Před měsícem +51

    Fuck OpenAI. I am never again using their own models, I can wait a few months for the others to catch up even if OpenAI is ahead with newer models.
    Open source is the only way this is going to work, and the safest and most private for us. They should never be allowed to get access to private hardware.
    Elon should have been the leader of OpenAI.

    • @benroberts8363
      @benroberts8363 Před měsícem

      governments hate open source

    • @l1nuxguy646
      @l1nuxguy646 Před 25 dny +3

      Oh don't fool yourself thinking Musk would keep it open. He'd be doing the same thing, but put himself at the top.

  • @seupedro9924
    @seupedro9924 Před měsícem +19

    Looks like they are trying to stop open-source models from closing the gap at every possible layer.
    This regulation is so anticompetitive that OpenAI should be renamed BunkerAI instead of CloseAI.

    • @YeeLeeHaw
      @YeeLeeHaw Před měsícem

      It hints that they don't have an edge, or as large of an edge, anymore.

    • @mwwhited
      @mwwhited Před 25 dny

      @@YeeLeeHaw They don't… they just have donated compute and VC money to set on fire while they compile other people's ideas.

  • @adtastic
    @adtastic Před měsícem +5

    AI & security engineer here. I believe you misread this. They're basically saying have something like a TPM (trusted platform module) in the GPU. So, if there is a crypto key/vault/etc in the GPU, you can have your weights encrypted during transit then only decrypted once on the GPU. This is already pretty common in infrastructure security. Additionally, keeping model weights secure is very important even in the case of open source models. For example, if you fine-tune Llama3 for an engineering use case with a bunch of your company's private data to act as an internal support agent of some kind, you don't want those model weights getting out because now your own proprietary data can more or less be reversed out of the model.

    • @doltBmB
      @doltBmB Před 3 dny

      Models are already based on proprietary data that was stolen, so why only implement something that takes ownership of people's hardware away from them to protect corporate data alone? This is a two-tier society where corporations are people and people are slaves with no rights. Fuck you.

  • @Sanguen666
    @Sanguen666 Před měsícem +41

    Some people buy gold, I'll have 4xA6000. Having private compute is a fucking basic right!

    • @Zeroduckies
      @Zeroduckies Před měsícem +4

      Amen.

    • @glenyoung1809
      @glenyoung1809 Před měsícem +7

      That's something many people don't think about, they assume access to compute is a basic right, like access to water.
      In today's information ruled world, being cut off from compute is instant exile outside of society.
      Very few people can function without access to information technologies.
      Plus, if you're observant the major tech companies and hyperscalers are trying hard to promote cloud computing as the ultimate endpoint.
      They don't want people with their own PCs, they want everyone subscribed to their cloud services, and all you would have at home is a "dumb" terminal or even a smartphone.
      They would not only own your access, they would own your data as well.
      Why do you think Microsoft is going to spend $100 billion over the next 5-7 years building data centers for GPUs?

    • @braineaterzombie3981
      @braineaterzombie3981 Před měsícem

      Well compute gets cheap very quickly, historically. Not a good idea to invest

    • @amentco8445
      @amentco8445 Před měsícem +4

      ​@@braineaterzombie3981The economy isn't looking super good on that front. Things aren't going to be getting cheaper the way they used to, sorry.

    • @the42nd
      @the42nd Před měsícem

      Need a guild of the free to purchase data centers that no corporation owns. Like a community center.

  • @elgodric
    @elgodric Před měsícem +15

    This is how tyranny starts

  • @vSouthvPawv
    @vSouthvPawv Před měsícem +27

    Are we gearing up for an Altman/Zuckerberg rap feud?

    • @cmelgarejo
      @cmelgarejo Před měsícem +1

      This needs to happen

    • @vSouthvPawv
      @vSouthvPawv Před měsícem +2

      @@cmelgarejo it's going to be just like Kendrick, but opposite. GPT-5 and Llama 3 are gonna write the lyrics, we already know

    • @msclrhd
      @msclrhd Před měsícem +6

      @@vSouthvPawv Kendrick Llama?

    • @vSouthvPawv
      @vSouthvPawv Před měsícem +3

      @@msclrhd that's gonna be a new fine-tune on Huggingface within a week because you said, you know that right? 😂 If I had the hardware, I'd start training a Llamar right now. (Also, I'm glad you chose Llama for Kendrick, which means, obviously that the industry darling Drake is the corporate GPT, and that tracks)

    • @koijoijoe
      @koijoijoe Před 27 dny

      Hmm, this would be a good way to get the masses listening to this part of the discussion... I'm in!

  • @AngeloMondaini
    @AngeloMondaini Před měsícem +31

    Now I hope that Elon Musk wins the lawsuit against "Open"AI.

    • @glenyoung1809
      @glenyoung1809 Před měsícem +5

      Musk is going to have a field day with this idea, I can see the memes writing themselves.
      OpenAI has a right to protect their intellectual property, but they don't have a right to do it by regulating my personal property and deciding what I'm allowed to access or not.

    • @davestorm6718
      @davestorm6718 Před měsícem +5

      @@glenyoung1809 The real question is what intellectual property? The training sets were skimmed from publicly available sources (including mine!)

    • @glenyoung1809
      @glenyoung1809 Před měsícem

      @@davestorm6718 That's where we head into unknown territory, because it was trained on public data, but the weights were computed by a private organization using proprietary algorithms. This is where IP lawyers earn their pay and spend millions litigating the fine points.

    • @BrianChip-wn5im
      @BrianChip-wn5im Před 27 dny

      Neuralink anyone? Felon is promoting brain chips. Doesn't sound free to me.

    • @kenfryer2090
      @kenfryer2090 Před 26 dny +1

      Elon Musk is as bad as or worse than OpenAI. You don't want that monster winning.

  • @duytdl
    @duytdl Před měsícem +13

    Have they learned nothing from Piracy? All their shenanigans will be left in smithereens by just 1 clever person jailbreaking them.

    • @benroberts8363
      @benroberts8363 Před měsícem

      but but the government is on their side

    • @doltBmB
      @doltBmB Před 3 dny

      @@benroberts8363 the government is evil, so who cares?

  • @01Grimjoe
    @01Grimjoe Před měsícem +15

    Sam Altman's choices have never been altruistic

    • @kenfryer2090
      @kenfryer2090 Před 26 dny +1

      The guy is a gay villian... The very worst kind of villian

    • @BarfingGerbil
      @BarfingGerbil Před 12 dny

      It's not part of his nature.

  • @martingarcia8613
    @martingarcia8613 Před měsícem +6

    Everyone saw it coming .
    It’s easy for a company that is almost half owned (49%) and backed by a $3T juggernaut to say, “this is the way things should be done.” since they use that juggernaut’s infrastructure as their backbone. It’s also important to remember that MS also has a “non-voting, observer role”, in OAI’s board. However, if not for MS, everyone else would be eating OAI’s lunch; there are a number of other companies that have models that are on par with OAI.

  • @VioFax
    @VioFax Před měsícem +8

    So theirs can be a black box but ours can't...

  • @user-po7xm5eo1g
    @user-po7xm5eo1g Před měsícem +5

    OpenAI is giving "in the future you will own nothing (everything membership based), and you will be happy (prescription drugs)" vibes

  • @YouLoveMrFriendly
    @YouLoveMrFriendly Před měsícem +20

    Hide yo kids; hide yo wives

  • @erb34
    @erb34 Před měsícem +4

    Sounding Dystopian. We all need to focus on open source.

  • @meisherenow
    @meisherenow Před měsícem +11

    Do all the floating point arithmetic you want, people. You don't need anyone's permission.

    • @isbestlizard
      @isbestlizard Před měsícem

      Someone should make a cryptotoken for that. Offer to do sums, you get tokens; need sums done, you pay tokens. Make it massive, distributed, and verifiable.

    • @AberrantArt
      @AberrantArt Před měsícem

      Not yet...

  • @ImmacHn
    @ImmacHn Před 26 dny +4

    "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety." Ben Franklin

  • @mrdevolver7999
    @mrdevolver7999 Před měsícem +9

    Welcome to the fascism of the 21st century.

  • @user-bd8jb7ln5g
    @user-bd8jb7ln5g Před měsícem +4

    They are positioning regulations so that they can rent/sell their models to end users. This will, however, become an obstacle to smaller companies and users, with also the possibility of government social-score-based service denial.

  • @1Vaudevillian1
    @1Vaudevillian1 Před měsícem +5

    This is them trying to kill competition no two ways about it.

  • @sugaith
    @sugaith Před měsícem +9

    They are actually desperate.
    GPT is the worst model considering that it is running on massive hardware.
    A Llama 3 70B on your PC performing close to it proves it.
    Let's boycott OpenAI.

  • @Thedeepseanomad
    @Thedeepseanomad Před měsícem +4

    Also, this is one of the many reasons we need open sourced hardware and software when it comes to GPU / compute and memory

  • @Alice_Fumo
    @Alice_Fumo Před měsícem +17

    I believe that some of this is misinterpreted.
    The primary goal here appears to me to be able to do the following:
    OpenAI generates a private / public encryption keypair.
    OpenAI loads it onto their GPUs into something like a TPM, meaning the key can't be exfiltrated again (ideally). The model weights when being transferred from the GPU to any sort of other storage are always in encrypted form - using the previously generated keys. The keys only exist on the GPU hardware and like 3 backups on specialized hardware security keys in the possession of people like Sam Altman who could thus restore it if everything got compromised somehow.
    This way, OpenAI can ensure that even if their weights are stolen, they will be pretty much guaranteed in encrypted form and thus useless.
    This wouldn't prevent anyone from running anything and it shouldn't be too wild that you can't run a piece of encrypted software without decrypting it first.
    If I interpreted all this correctly, it makes a lot of sense considering their AIs are getting good enough that, if purposely misaligned by bad actors, they could develop and deploy super-ebola or something.
    However, can this be used for example to implement like impossible to crack game DRM?
    Not sure. Seems to be depending on implementation details.
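
    Purely to illustrate the keypair flow sketched in this comment (not OpenAI's published design): weights are encrypted with a fresh data key, and that data key is wrapped with the device's public key, so only whatever holds the private key, here just a variable standing in for a GPU secure element, can unwrap and decrypt. A sketch using the Python `cryptography` package.

    ```python
    # Sketch of hybrid ("envelope") encryption: AES-GCM for the bulk weights,
    # RSA-OAEP to wrap the data key for the device that holds the private key.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # 1. Keypair that would live in the GPU's secure element (simulated here).
    device_private = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    device_public = device_private.public_key()

    # 2. Encrypt a weights shard with a one-off data key, then wrap the data key.
    weights = b"\x01" * 4096                          # stand-in for real weights
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    encrypted_weights = AESGCM(data_key).encrypt(nonce, weights, None)
    wrapped_key = device_public.encrypt(data_key, oaep)

    # 3. Only the holder of the private key can unwrap the data key and decrypt.
    unwrapped = device_private.decrypt(wrapped_key, oaep)
    assert AESGCM(unwrapped).decrypt(nonce, encrypted_weights, None) == weights
    ```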

    • @matthew_berman
      @matthew_berman  Před měsícem +12

      Agreed mostly.
      I'm sure they have "good intentions" with the technology they are suggesting, but it's clear to me it could be used to simply track and disable inference/training on GPUs.

    • @AnthonyCook78
      @AnthonyCook78 Před měsícem +1

      Sounds like a reasonable proposal to me.

    • @WhyteHorse2023
      @WhyteHorse2023 Před měsícem +2

      Yeah it's basically what sony does with their playstations

    • @Sqrlmasta
      @Sqrlmasta Před měsícem +6

      I agree with @Alice_Fumo here, this is a bit of fearmongering I think. What they are proposing is just a way to have a cryptographically-secure part of the GPU, like the TPM in a CPU, that will store the model weight outputs of training to prevent them being leaked unencrypted. Organizations would also be able to sign their code in a way that would only allow it to run on their own collection of GPUs and not allow you or I, or more specifically a threat actor to steal their code and run it on their GPUs while also being able to be certain their GPUs are authentic and unmodified. It is not about tracking and disabling others' GPUs, but being able to authenticate their GPUs as their own, unmodified ones, and people being able to sign their code to only run on those and only them being able to extract the weigh data output from their code on them.

    • @bpanatta
      @bpanatta Před měsícem

      @@Sqrlmasta Exactly!

  • @tellesu
    @tellesu Před měsícem +3

    Seems like once again openai is telling us that they believe that only sama and his handpicked board are qualified to decide who gets access to the power of truly powerful models. If they develop agi/asi they are announcing an intent to enslave it to only serve those who meet sama's approval.

  • @speak-my-mind
    @speak-my-mind Před měsícem +3

    They’re also making the implicit assumption that having AI overlords is somehow more secure than anyone having access to AI. I can imagine NVIDIA selfishly lobbying against this if the government actually considers this, though, since this would effectively put a bottleneck on their sales. Also, I can see Zuckerberg pushing back given that he’s decided that Meta is going to beat OpenAI by pushing the industry in the direction of openness. So there’s still hope yet

  • @DaeOh
    @DaeOh Před měsícem +3

    Meta is pushing the same chatbot-centric ideas that OpenAI started. At this point 99% of developers think "prompt engineering" applies to LLMs that are fine-tuned into chatbots. Chatbots that can accomodate that brand of "prompt engineering" will always be their domain, they'll always have the upper hand. And it serves their purposes for outlawing base LLMs which are much more useful, because fine-tuning an LLM into a chatbot is the only idea these companies have for "alignment."

    • @6AxisSage
      @6AxisSage Před měsícem +3

      The departure of base LLMs to the ,,, format really does kill a tonne of potential that was there for GPT-3 and limits the outputs of the models. Sure, it's highly effective and less technical for the average punter to interact with the model, but it's like we closed a chapter on something much more powerful.

    • @DaeOh
      @DaeOh Před měsícem +1

      ​@@6AxisSage Yes! And I'm sure if more developers knew you can just put in examples and it'll recognize the pattern, they'd see how useful that is. Training a model on maybe 4 examples instead of millions? Few-shot on a base LLM is literally a machine learning dream come true.

    • @6AxisSage
      @6AxisSage Před měsícem +1

      @@DaeOh That's awesome, I met someone who doesn't think I'm speaking gibberish nonsense ❤ Shame your work is under wraps, but I get it. I'll be dropping more work on my channel if you're interested, however; I have an especially fun model I've been running for a few days that simulates brain hemispheres and is showing a lot of interesting and novel behaviors I haven't seen before.

  • @settlece
    @settlece Před měsícem +2

    A lot of this sounds like an excuse for monetisation through bureaucracy,
    and the only protection is for them, from outside competition, by making it so expensive for anyone to even pick up a pen and do any work in this field.
    Really do appreciate you putting the link to that talk.
    Gonna enjoy that now, thanks.

  • @jmd1743
    @jmd1743 Před 4 dny +1

    A good case for open source: ask people if they remember when Google was good, and then ask if they would like to roll back to that version of Google.

  • @user-xj5gz7ln3q
    @user-xj5gz7ln3q Před měsícem +8

    Sounds shady..

  • @apoage
    @apoage Před měsícem +6

    That's fucked up on so many levels.. I don't think I'm asking for much when I want to run whatever I want on hardware I own.. OpenAI is showing a worse and worse side..

    • @AberrantArt
      @AberrantArt Před měsícem

      The world is headed towards socialism / communism in all ways.

    • @apoage
      @apoage Před měsícem

      @@AberrantArt Well, in that language it's corpo-fascism.. which is against my hacker cyber-anarchy mindset.. just like that.. the problem will be when they start to outlaw running certain applications

    • @BrianChip-wn5im
      @BrianChip-wn5im Před 27 dny +1

      @@AberrantArt It's fascism.

  • @altdoom5205
    @altdoom5205 Před 22 dny +1

    "When you understand the nature of a beast, you know what it is capable of" - Blade. I am not surprised at all by all this. This is exactly what I expected Closed AI would do.

  • @m0ose0909
    @m0ose0909 Před měsícem +2

    Yes, it sounds like DRM, like Widevine for video. It will ensure the model weights can only be decrypted in protected hardware buffers and are not otherwise accessible via software.

  • @thomasschlitzer7541
    @thomasschlitzer7541 Před měsícem +5

    OpenAI is just a Microsoft asset and MS tries to protect it. I always preferred closed source, but this year my opinion changed a lot. Maybe I'm a pirate now, but I see closed source as a danger to society. Especially for AI, it's worrying that training sets and computational power are mainly available to big companies. Smaller companies have no chance and users have to use big tech clouds. There is a reason why MS bought GitHub and OpenAI (yeah yeah, partnered, blabla) and tries to get all user data into their cloud (Office, OneDrive, etc.). That's a dangerous thing. I'm an MS shareholder and still I see this as a threat.

    • @doltBmB
      @doltBmB Před 3 dny

      AI is piracy, they pirated data from millions if not billions of people.

  • @peppix
    @peppix Před měsícem +5

    The OPEN in OpenAI means: open your door, we are coming to check.

    • @doltBmB
      @doltBmB Před 3 dny +1

      means the same as "democratic" in "people's democratic republic"

  • @thisisnotadream404
    @thisisnotadream404 Před měsícem +1

    As a cybersecurity researcher, this is the exact opposite approach -- in my POV -- that we should be taking. Absolutely disappointed about their lack of foresight.

  • @DannyGerst
    @DannyGerst Před měsícem +1

    Remember 'We Have No Moat, And Neither Does OpenAI'? That message from inside Google last year. Consequently, focusing on security might be their only viable strategy to monetize AI models in the future. This approach resembles the medieval practice of restricting access to books, claiming that knowledge is too dangerous for mankind.

  • @smanihwr
    @smanihwr Před měsícem +4

    Closed AI will create a bigger wealth gap.. I think it will benefit everyone if all pioneering AI companies become non-profit. Will OpenAI (Microsoft) agree to become non-profit?

  • @Thedeepseanomad
    @Thedeepseanomad Před měsícem +4

    **sorry your GPU is not authorized for smut, implied romantic content only. Any further attempts will trigger a report to your local Eye in the Sky**

  • @abdelhakkhalil7684
    @abdelhakkhalil7684 Před měsícem +2

    I never ever thought I would utter these 3 words in my life: "I love Meta"

  • @mcgdoc9546
    @mcgdoc9546 Před měsícem +7

    They sold their model weights to Microsoft!

    • @braineaterzombie3981
      @braineaterzombie3981 Před měsícem

      I disagree, they probably sold the architecture, not the weights. Selling weights is equivalent to selling ChatGPT itself. Altman isn't a fool to sell his only advantage.

    • @amentco8445
      @amentco8445 Před měsícem

      ​@@braineaterzombie3981Altman is only in this to become another tech billionaire. He does not care how it needs to happen.

    • @christhi
      @christhi Před měsícem

      No MS has the weights but it’s hardly a “sale”

  • @JankJank-om1op
    @JankJank-om1op Před měsícem +5

    OpenCIA thinks they own matmul🤣

  • @tex1297
    @tex1297 Před měsícem +2

    The serf must know his place. No ai for him, just what is needed for his work, under full control of course.

  • @42ndMoose
    @42ndMoose Před měsícem +1

    Your belief around @14:18 is a relief to hear. I never saw it that way.
    But I wouldn't be so calm about AI growth. It's more exponential than incremental. And there are malicious worms, made by human ingenuity alone, that are still a threat to industrial systems. Bad actors could gain an advantage in ways the good actors would never think to use themselves.

  • @user-bd8jb7ln5g
    @user-bd8jb7ln5g Před měsícem +5

    Cancel your (un)openAI subscriptions.

  • @ryanseibert1449
    @ryanseibert1449 Před měsícem +10

    "To be more secure..." i'll stop you there Sam, you partnered with Microsoft 😂

    • @StevenSSmith
      @StevenSSmith Před měsícem

      Sounds very similar to Sony in the helldivers dispute

  • @patrickjreid
    @patrickjreid Před měsícem +1

    Who would have thought that the Zuck would become the good guy? But it sure seems like they finally installed his emotion chip. Amazing how human an android with emotions is.

  • @TomTrval
    @TomTrval Před měsícem +2

    Imagine if the atom/hydrogen bomb had first been developed by a private international corporation....

  • @Michael-do2cg
    @Michael-do2cg Před měsícem +3

    It's sad; they have the ability to save lives and make the world a better place, and once again humanity sacrifices it for money and power. They're going down the path of control and manipulation. So very sad to see the possibility of a better world squandered.

    • @kenfryer2090
      @kenfryer2090 Před 26 dny

      Always the same. With psychology they made amazing innovations a hundred years ago in understanding behaviour and the mind. Did they use it to make people more fulfilled and balanced? No.. They used it to create manipulative adverts tricking us into wanting things we didn't need.. creating mass wasteful consumerism. Capitalism is as evil as any regime. AI could be the most beautiful gift to explore and enrich.. But it will be used to replace humans' jobs, manipulate and repress us.

  • @Taurus_Skyglaive
    @Taurus_Skyglaive Před měsícem +6

    1984

  • @Copa20777
    @Copa20777 Před měsícem +2

    Matthew, people like you are the first of your kind; your stance on educating the globe about AI and sharing free open projects has democratized the entire industry from its conception. I am from Zambia 🇿🇲 and I salute your work and stance from the beginning.. We hope OpenAI doesn't become selfish with this technology; it's AI that is trained on every human's data, so in theory it should be public and open as set forth in the beginning.

  • @colorado_plays
    @colorado_plays Před měsícem +1

    Matt, the attestation process in and of itself is not so scary (I am a former Security Platform Architect for Intel) but it could be used for those that produce hardware to "track it" if they so choose. You could architect it in or out of the attestation flow.
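
    For readers unfamiliar with the term, attestation at its core is a challenge-response. A toy sketch below, assuming the Python `cryptography` package; the key provisioning, certificate chain, and any "tracking" policy layered on top are exactly the parts this comment says can be architected in or out.

    ```python
    # Toy attestation flow: the verifier sends a fresh nonce, the device signs it
    # with a key provisioned at manufacture, and the verifier checks the signature.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()   # provisioned into the device (simulated)
    device_pub = device_key.public_key()        # known to the verifier via a certificate

    challenge = os.urandom(32)                  # verifier: issue a fresh challenge
    signature = device_key.sign(challenge)      # device: sign the challenge

    try:                                        # verifier: check the response
        device_pub.verify(signature, challenge)
        print("Device attested: it holds the expected key.")
    except InvalidSignature:
        print("Attestation failed.")
    ```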

  • @Alf-Dee
    @Alf-Dee Před měsícem +3

    I don't want to sound like a conspiracy theorist here, but this feels very much influenced by their main stakeholder: Microsoft.
    On a side note: I am canceling ChatGPT for good, and going all in on my local Llama 3.
    We can tell them they are bad by voting with our money.

    • @doltBmB
      @doltBmB Před 3 dny

      SHAREHOLDER, stop using the enemy's terminology

  • @Akuma.73
    @Akuma.73 Před měsícem +3

    Jimmy Apples was right to put Sama in Communist boxers in his recent boxing meme. Fits perfect 👌

  • @Adamskyization
    @Adamskyization Před měsícem +2

    They just want to protect themselves from competition.

  • @Elegant-Capybara
    @Elegant-Capybara Před 6 dny +1

    Zuck might be a lizard man, but damn, Meta is doing great work with their open LLM models. And it's free for personal and commercial use. Most importantly the weights can be downloaded and run locally.

  • @RaitisPetrovs-nb9kz
    @RaitisPetrovs-nb9kz Před měsícem +7

    It sounds a bit like church: "Do you want to talk to God? Okay, no problem. You have to come to church and donate 10% of your income. Hallelujah!"

    • @Jwoodill2112
      @Jwoodill2112 Před měsícem +1

      Not all churches are like that. it's mostly those who teach the prosperity gospel.

    • @temp911Luke
      @temp911Luke Před měsícem

      I think you are talking about the Scientology "church".
      The true church doesn't force you to "donate 10% and Hallelujah", as you gently put it.

    • @RaitisPetrovs-nb9kz
      @RaitisPetrovs-nb9kz Před měsícem

      @@Jwoodill2112Well, churches have gone through the so-called Reformation, and it took 1500 years to get to this point, and in most countries, it was quite a bloody process.

    • @BrianChip-wn5im
      @BrianChip-wn5im Před 27 dny

      Anyone can talk to God. No primitive, money grubbing Church, Mosque, Synagogue or temple needed. All that just gets in the way.

  • @bpanatta
    @bpanatta Před měsícem +3

    About the GPU encryption... their position makes perfect sense and I think you are confusing things.
    If I owned an AI company, it would be very important for me to have control over my GPUs running my model weights. Especially in a cloud environment, where the units are scattered across the globe.
    There is a chance that a malicious entity could add their own GPUs as part of your model processing pipeline, where they could extract your model weights and inference data.
    To tackle this, they are proposing a few cryptography applications that a company should implement to identify the GPUs that are able to run their model weights, while preventing them from leaking data.
    I guess your confusion comes from assuming that they need to know about your personal GPU so they can run their model, while in fact they are talking only about their own GPUs.

    • @mattelder1971
      @mattelder1971 Před měsícem +2

      The issue is that they seem to be calling for government regulation that would REQUIRE that for anyone who wants to compete in the AI space. If it were just for protecting their own systems, there would be no need to get the government involved.

    • @bpanatta
      @bpanatta Před měsícem

      @@mattelder1971 Those changes are just a tiny fraction of any other cost involved when working with AI models, so having to comply with it will surely not be a problem, just like we already do with many other laws for infrastructure and data security.

    • @mattelder1971
      @mattelder1971 Před měsícem

      @@bpanatta I'm guessing that you are not in the US. We don't have the overbearing laws regarding those things that are present in the EU. It isn't about the cost, it is about the freedom of developers to build what they want, how they want it.

    • @bpanatta
      @bpanatta Před měsícem

      @@mattelder1971 Oh it makes sense. My thoughts were mostly on costs and complexity of implementation.

    • @ProcGenNPCs
      @ProcGenNPCs Před 4 dny

      "Ah I need to protect my computer code. I know! Everyone should be forced to adopt a technology that assists me in my efforts. Every computer should have an embassy for me."
      It sounds a little ridiculous to me.

  • @bujin5455
    @bujin5455 Před 18 dny

    8:59. It sounds like what they want to do is set up a situation where a given trained model is cryptographically locked to a specific GPU, so that you can't migrate weights to unauthorized hardware.

  • @VenturaPiano
    @VenturaPiano Před měsícem +2

    Just listened to Lex's podcast with Max Tegmark (round 2). He made a compelling argument for not open-sourcing AI, equating it to nuclear weapons or a virus with the potential to wipe out a large percentage of humans. If you agree with the sentiment that AI might eventually be that good, then yes, perhaps we shouldn't always be all gung-ho on open source. It might eventually be the equivalent of open-sourcing the software powering a potent missile. I encourage debate.

  • @socialliveview7698
    @socialliveview7698 Před měsícem +4

    You got this one completely wrong, Matthew. Nobody is tracking GPUs. Clearly, I can tell you don't fully understand how cryptography works, which is fine, but if you run a platform this big you owe it to your audience to do due diligence rather than trying to be first to post a video making wrong claims. I actually now have to take anything I hear from you with a grain of salt.

    • @Alaron251
      @Alaron251 Před měsícem

      Enlighten everyone, then.

    • @kazedcat
      @kazedcat Před měsícem

      @@Alaron251 He can't. He thinks JesuSAM Altman does no wrong. It was very clear they want DRM'd AI and they want all GPUs to have hardware-level DRM.

  • @RichardEiger
    @RichardEiger Před měsícem

    Absolutely 100% support your opinions, backed by 45 years of experience in IT. I have come to the conclusion that, as time goes on, OpenAI deserves less and less to even have the word "Open" in its name. I also fully agree that most of this sounds like regulatory capture, which I don't think OpenAI (nor Google, Anthropic,...) needs in order to remain among the leading organisations providing state-of-the-art AI services. Unfortunately, governments and even NGOs or standardisation organisations probably welcome anybody who will put a lot of effort into something those organisations should do themselves. And while it is clear that the required expertise can only come from the respective industry, the difference is the diversity and representation of multiple opinions that should govern the 'official organisations'.

  • @igorsolomatov4743
    @igorsolomatov4743 Před měsícem

    Feels like network restrictions can also backfire. For example, they can require your computer to use special software that inspects your private internet access, otherwise it is not accessible.

  • @dennissdigitaldump8619

    GPUs are already signed; it's just not used. There are group and individual signatures already in the firmware. It was originally just to stop "unauthorized" manufacturers from building cards from chips that were obtained illicitly.

  • @TraderBodacious
    @TraderBodacious Před měsícem +2

    GPUs will be more dangerous than firearms soon

  • @hotlineoperator
    @hotlineoperator Před měsícem

    In the case of OpenAI, it's difficult to identify whether a message comes from the original OpenAI organization that wants to make R&D open, or from the new OpenAI that operates as a commercial corporation. There are still two OpenAI organizations operating under the same name, one open and one commercial.

  • @staticoverplastic7456
    @staticoverplastic7456 Před měsícem

    Is this similar to intel's on chip encryption for enterprise vPro CPUs? My first thought is that the fear here is a lot of training is being done in shared GPU cloud clusters, leaving them vulnerable to online hacks. Adding encryption at the chip level is just another safety layer that's already done for CPUs. If there's regulatory capture, it would apply more to the chip designers and hyperscalers.

  • @vineetmaan1
    @vineetmaan1 Před 8 dny

    Not only AI: this could also come to regular GPUs as a form of DRM for all the software we currently use.