US Government AI Regulation BOMB DROPPED! RAAIA Act - The end of Open Source? Regulatory Capture?

  • Published Jun 15, 2024
  • Patreon (and Discord)
    / daveshap
    Substack (Free)
    daveshap.substack.com/
    GitHub (Open Source)
    github.com/daveshap
    AI Channel
    / @daveshap
    Systems Thinking Channel
    / @systems.thinking
    Mythic Archetypes Channel
    / @mythicarchetypes
    Pragmatic Progressive Channel
    / @pragmaticprogressive
    Sacred Masculinity Channel
    / @sacred.masculinity
  • Science & Technology

Comments • 405

  • @vicnighthorse
    @vicnighthorse 2 months ago +283

    I rather think it is government that is insufficiently regulated. Maybe it would be better to have AI regulating government.

    • @andoceans23
      @andoceans23 2 months ago +27

      Based 💯💯💯

    • @ethanwmonster9075
      @ethanwmonster9075 2 months ago +3

      Only if it is tried and tested; AI is still super new. AI systems capable of even rudimentary reasoning are *very* recent.

    • @DaveShap
      @DaveShap 2 months ago +66

      Eventually, yes. I would prefer to express my needs and values to a collective hive mind.

    • @ppragman
      @ppragman 2 months ago +1

      You ought to check out some of the previous attempts to study this, including Project Cybersyn - the futuristic and utopian vision of some wild Chileans.

    • @mookfaru835
      @mookfaru835 2 months ago

      Just tax its growth so it grows slowly, no?

  • @bgmzy
    @bgmzy 2 months ago +33

    Big clarification: this is “model legislation” that has come from the Center for AI Policy (a think tank). It has not been proposed by the US Gov.

    • @davidx.1504
      @davidx.1504 2 months ago +3

      Underrated comment

    • @JWRB6
      @JWRB6 2 months ago +3

      Upvote!

    • @BryanWhys
      @BryanWhys 1 month ago

      But have you seen the actual bill on deepfakes proposed to go through in June? Actual nightmare

    • @BryanWhys
      @BryanWhys 1 month ago

      House bill 24-1147 section 46

    • @BryanWhys
      @BryanWhys 1 month ago

      Someone tell David please it's really bad and I'm not in the disc

  • @aldenmoffatt162
    @aldenmoffatt162 2 months ago +65

    AI developers will work from cruise ships.

    • @javilt1
      @javilt1 2 months ago

      Yupppp have u seen the mega ship the Saudis are building? That's honestly the only future for intelligent ppl, governments won't stop until we're all slaves at this rate

    • @JoeyCan2
      @JoeyCan2 1 month ago +1

      Lmfao

  • @ppragman
    @ppragman 2 months ago +141

    People supporting this (and even citing the FAA as an example of how things should be done) are neglecting the obvious… and David actually hit on this - the government doesn't know what it is doing.
    This is regulatory capture, pure and simple. You generate a bureaucracy like this, create an administrative process for it, and in 4-5 years we'll have OpenAI self-regulating (like Boeing) without real ramifications, and individual hobbyists being attacked for training a Mario-playing AI.

    • @ppragman
      @ppragman 2 months ago +18

      I think it's worth mentioning too, most people supporting this stuff have never actually dealt with the FAA. The FAA is really good sometimes, but it is also an unbelievably obtuse and obfuscatory organization that has immense power to regulate as it sees fit… the regulators who work in these agencies are basically unaccountable to the public, and the rule-making process does not have adequate oversight, in my professional opinion.
      Before I got sick, I worked in aviation for over a decade and saw a bunch of situations where technology could make things demonstrably safer, but we were prohibited from using those tools because they were not legally approved yet. Conversely, the laws required us to do something that was objectively more dangerous than the alternative in order to maintain legal requirements if we wanted to fly. Companies were totally fine sending people out in these more dangerous conditions: “it's legal, get your ass out there, we've got mail to move.” Beyond that, the structure of the organization was shockingly obtuse.
      Once we wanted to get a camera moved. We were the only operator flying to an airport, and the airport weather camera was facing away from the direction literally anyone would be coming from. We navigated through the layers of process control before finding that the graph of “person we had to contact to get this fixed” went in a circle. Eventually we explained our predicament to one of the electricians going out to work on the battery system that powered the camera, and miraculously the camera got moved.
      That is the FAA. And I'm not *against* regulation in principle. If we're going to have these complicated systems, we should probably regulate them when public safety becomes a factor, but… my direct experience working with these sorts of organizations has not been positive on average. The regulation of the aviation industry is already selective, and I would hesitate to empower the federal government to regulate something as important as AI. Here are some anecdotal examples: I worked for a small aviation company for 6 months (I quit for my own safety) that was dramatically overloading their aircraft every day out of pure greed - we also basically didn't do maintenance; on paper it said it was done, but it wasn't real. The FAA never audited or investigated. I worked for another that was basically ignoring the FAA regulations on required rest and on flight and duty time entirely. The FAA did not care because they just lied on the paperwork…
      People should be highly skeptical of this sort of thing and “be careful what they wish for.” Regulations are only good if we have competent regulators, which includes policy writers who understand the material and an enforcement arm that cares about the practical ramifications and not just the process - but caring about practicalities is not incentivized for the average inspector.

    • @Muaahaa
      @Muaahaa 2 months ago +5

      To be fair, no one knows what they are doing when it comes to regulating AI. This is brand new territory. We should expect many attempts at regulation to get things wrong and to need an iterative process over the next several years. Giving criticism is good, but expecting perfection is just going to raise your blood pressure, because that won't be happening ever (or any time soon).

    • @ppragman
      @ppragman 2 months ago +6

      @@Muaahaa a bill keyed to FLOPS rather than capabilities is probably a bad start.

    • @Muaahaa
      @Muaahaa 2 months ago +1

      @@ppragman Yup, that is probably their most obvious mistake. I can understand why it is tempting to use something easily measured, like FLOPS, but the correlation with capabilities is not reliable.

    • @Fixit6971
      @Fixit6971 2 months ago +1

      Will you people PLEASE stop doing the govaments werk for them? Not that I think it will actually help them patch any holes .... Ahhh, what the heck. Carry on people !

  • @devlogicg2875
    @devlogicg2875 2 months ago +48

    It was always going to happen, but regulating AI will be like chasing a ghost. The physical apparatus necessary will of course be easier to regulate. Thanks David.

    • @tubekrake
      @tubekrake 2 months ago

      Research will happen in secret, somewhere else if necessary. And it will be much more harmful to the public. Comparing it to planes being regulated to be safe is really fucking stupid. It will result in a few owning AI and controlling everything.

  • @ct5471
    @ct5471 2 months ago +19

    If we are close to AGI, you are correct about September, and open source isn't that far behind, does this even matter? Recursive self-improvement might start before this is put into law.

  • @neilhoover
    @neilhoover 2 months ago +14

    Today it’s difficult to distinguish government from large corporations, as they work closely together in a mutually beneficial paradox; and thus, most regulations are designed to benefit large corporations and push smaller companies or organizations out of the mix.

  • @jjhw2941
    @jjhw2941 2 months ago +19

    If a developer puts out a model, and it correctly gives crime statistics for different ethnic groups, and that hurts someone's feelings, is the developer liable?

    • @jaazz90
      @jaazz90 2 months ago

      You mean like the data that illegal immigrants commit 2.7 times less crimes than US citizens? Turned out that objective reality has a left leaning bias, and even Elon couldn't make Grok believe in illusionary bullshit.

    • @raymond_luxury_yacht
      @raymond_luxury_yacht 2 months ago +5

      In Scotland that would be prison for life.

  • @devlogicg2875
    @devlogicg2875 2 months ago +21

    Remember in Contact when the mad, super-wealthy genius secretly built the machine? Here we go.

  • @broimnotyourbro
    @broimnotyourbro 2 months ago +20

    The notion of regulating by FLOPS is inherently stupid. Models may get simpler, as you mention, but in any case that's not a regulation that's going to stand the test of time, in a very "640K ought to be enough for anyone" kind of way.

    • @paultoensing3126
      @paultoensing3126 2 months ago

      Doesn't photosynthesis involve a high level of computation?

  • @vi6ddarkking
    @vi6ddarkking 2 months ago +79

    The fun part of this entire mess is that open-source AI projects won't care in the slightest.
    Even if any regulators tried to take one down,
    you'd have over ten forks on sites outside their jurisdiction the next day.

    • @Trahloc
      @Trahloc 2 months ago +16

      They haven't been able to tackle ghost guns, which have no economic advantage. I don't see how they're tackling this and making any serious dent.

    • @Trahloc
      @Trahloc 2 months ago +1

      @@Me__Myself__and__I Mistral is French if I recall correctly. All this will do is cause the USA to fall back in AI.

    • @EduardsRuzga
      @EduardsRuzga 2 months ago +1

      @@Me__Myself__and__I you are not following Chinese advancements, I see. They have hardware issues for the moment, though.

    • @santicomp
      @santicomp 2 months ago

      Well, most of the projects are hosted on GitHub or Hugging Face.
      Microsoft has the final say and will do whatever it feels like to "safeguard" the public under these regulations.
      So we might end up in a weird spot where open source dies out due to this bullshit and regulatory red tape.
      Either way, time will tell; AI is out of the bottle, so no one can predict what will come next.

  • @ArtRebel007
    @ArtRebel007 2 months ago +6

    The idea that you need to file for a permit to do AI development work seems, yes, Draconian. Or rather, it more likely means that AI development will come to a hard stop for almost everyone except Big AI. I don't know too many developers, PhD students, or open-source experimenters who are going to risk 10 to 25 years in prison in order to work on AI under those conditions. Why? Because AI development is, by its nature, experimental. So your permit will loom over your head as a permanent sword of Damocles whenever you experiment with new methods, techniques, optimizations, or anything else you didn't specifically and accurately define in your permit application. Also, how much will those permits cost? Wanna bet that all the little guys are going to be squeezed out? Regulatory capture. Yep. Smells like it.

  • @devlogicg2875
    @devlogicg2875 2 months ago +33

    If you create and release a gaussian-like supermind that exists everywhere, flows through wires and air like gas and is capable of figuring out the meaning of life and rendering humans as roly-poly bugs intellectually, then you will get a fine. Oh.

    • @PostmetaArchitect
      @PostmetaArchitect 2 months ago +10

      @@kvinkn588 Guess we will not release it in the US, but rather China or Russia

    • @entecor3892
      @entecor3892 2 months ago

      @@PostmetaArchitect yeah, it boggles the mind how people don't realise that the US isn't the centre of the universe xd. The most hilarious loophole for ANY legislation is that companies can just shoot their AI models onto Mars or the Moon: no country can own those, they are officially outside the legal jurisdiction of any nation, and they effectively have no laws. Unlike bioweapons, nukes, or guns, which need to come back to Earth to be harmful (and could be stopped or controlled there), good luck stopping AI communication. These systems don't even need to be deployed on our own planet, and they think this law will do anything. LMAO.

  • @devlogicg2875
    @devlogicg2875 2 months ago +16

    Do you think this will slow progress towards medical advancement and longevity escape velocity?

    • @Vaeldarg
      @Vaeldarg 2 months ago

      @@sinnwalker That "big player" keeps getting caught faking their A.I progress, lol. "Sara A.I" = faked with a lady at the booth pretending to be A.I. Electric tractor powered w/A.I = the exploded engine view was actually a free untextured asset kit. They had to take down a GPT model they "trained" that didn't understand their own language at all, because it was actually just a re-skin of ChatGPT 3.5. They're just throwing money at it, hoping to fool foreign investors.

  • @devlogicg2875
    @devlogicg2875 2 months ago +18

    In the UK we don't have this so now we can win. Go Team Windsor....😮 Hinton, return to home base.

    • @raymond_luxury_yacht
      @raymond_luxury_yacht 2 months ago

      Have this yet. Fixed that for you. The next gov looks likely to be even more communist than this one, so expect some terrifyingly bad decisions by Diane Abbott.

  • @armadasinterceptor2955
    @armadasinterceptor2955 2 months ago +23

    I don't support any of this proposal, full-steam ahead.

    • @DrCasey
      @DrCasey 2 months ago +7

      Absolutely right.

  • @sapienspace8814
    @sapienspace8814 2 months ago +12

    The 737 MAX is a great example of regulatory capture: the model that crashed, twice, was "certified" at RTCA Level D (one level above the lowest certification level, E) when its MCAS modification needed to be Level A (failure rate of less than 10E-9, the highest certification level) - a sacrilege. And if the FAA were doing a better job, the door on the 737 MAX would not have flown off.

    • @ppragman
      @ppragman 2 months ago +3

      The FAA, for all the good it does, is also a pretty mismanaged and fundamentally broken apparatus IMO. I mean, there are aspects of it that are "good" - there are a lot of great POIs and PMIs doing yeoman's work in the trenches to regulate and guide companies toward safer alternatives... but there are a lot of really bad, draconian, and simply wrong-headed standpoints that the FAA takes. The incentive structure is, for lack of a better term, misaligned for both inspectors and operators.
      When you create an organization like this, the people working in it have no incentive to make things better and no incentive to make things worse - they only have an incentive to follow the process. Large companies like Boeing exploit this.

    • @ppragman
      @ppragman 2 months ago

      @@Me__Myself__and__I this is not correct, sorry. Boeing's (numerous) issues mostly come from them basically being able to self-certify things as “safe” and from extremely low-quality supervision by the FAA.

  • @ilrisotter
    @ilrisotter 2 months ago +7

    I don't trust anything that's not responsive to the public directly. This is not a democratic process; it is a technocratic solution, with very little recourse for those without the money to fight adverse actions in court. We need more access, more eyes on the problem, and the benefits of AI distributed as widely as possible. The only way to avoid an arms race is to break the asymmetry of benefit. This is going to create a bottleneck, increase cost, and confine AI development behind closed doors.

    • @therainman7777
      @therainman7777 1 month ago

      Sharing benefits is not the only way to avoid an arms race. Another way is to simply get there first by such a wide and decisive margin that there’s no point in even attempting to join the “race.” Given the nature of ASI, whoever gets there first is likely to remain the only entity who ever gets there, if that’s what they wish.

  • @MrAndrewAllen
    @MrAndrewAllen 2 months ago +10

    Creating a government agency that can stop all AIs is like creating a government agency a few years back that can switch off all computers with 64k or more RAM. It will eventually be used to switch off everything, or it will prevent us from adopting AIs. This is really brain-dead and stupid. I intensely dislike my US Senator Cornyn.

    • @MrAndrewAllen
      @MrAndrewAllen 2 months ago +1

      As Moore's Law continues, we will get machines as powerful as today's supercomputers in our wristwatches and thermostats. This bill will allow politically motivated prosecutors to sentence me to 10-25 years in prison for not shutting down my future home PC when the US Government decides to order it. This bill is a disaster.
      The difference between this and an airline is that over time every one of us will have PCs more powerful than anything listed in this bill. We would all have flying cars today if it were not for the US FAA's absurd rules.
      STOP THIS GARBAGE NOW!

  • @devlogicg2875
    @devlogicg2875 2 months ago +27

    Remember, OpenAI do not have to stay in the US. Man, the government would be annoyed if they left. The water is warm in Ireland 🍀 Also, they are tied to MSFT only until AGI, then they have options.

    • @pjtren1588
      @pjtren1588 2 months ago +5

      Last time I checked, Ireland is an EU state and subject to Brussels' law.

    • @DaveShap
      @DaveShap 2 months ago +13

      Export control laws are a thing...

    • @devlogicg2875
      @devlogicg2875 2 months ago +6

      @@pjtren1588 Not Northern Ireland....Last time I checked...Much as I disagree with most of Brexit.

    • @devlogicg2875
      @devlogicg2875 2 months ago +1

      True, but if they up and left, the US would then have to import the greatness of AGI produced abroad. Like if Mistral took off and achieved AGI.....

    • @berkertaskiran
      @berkertaskiran 2 months ago +4

      They can announce AGI any moment. They just won't because they like it this way. If they decide they are better off, they will immediately do so. It's just they like the hardware MSFT provides.

  • @AP-te6mk
    @AP-te6mk 2 months ago +2

    I'm good with it so long as it remains an iterative process. The government at the very least needs to try and safeguard the public good in addition to holding businesses accountable.

  • @SAArcher
    @SAArcher 2 months ago +12

    I am glad the government is taking it seriously and at least attempting to understand AI and what could come.

    • @Sephaos
      @Sephaos 2 months ago +4

      ACCELERATE! Who asked the luddites to stop us? You can have earth, we will take the stars. Mind your own damn business, what we do is none of your business.

    • @brianWreaves
      @brianWreaves 2 months ago +2

      This aligns with my sentiment. I'd go a step further and applaud the legislators for sharing an early draft, knowing there will be significant feedback to help write the Act. As well, the AI thought leaders (which I am not) coming together to provide input and help shape the Act into the form which will be voted on.
      ⚠ Then again, I'm a hopeless optimist... 🤦‍♀

    • @coreym162
      @coreym162 2 months ago

      Don't you see? This only guarantees they are the only ones that can control A.I. That's like governments having control of speech. Good luck talking if that happens...

    • @YeeLeeHaw
      @YeeLeeHaw 1 month ago

      @@brianWreaves I had a little chuckle at your naivety. The early drafts are always bad because they are made by people who want control. Then the public complains, they change it to something better, people accept the compromise, and then, when it's finally time for passing it (which is often on inconvenient days, like holidays), they release a new, worse one that no one has the time to read through (often together with other bills), and then people scratch their heads wondering how it could become so bad when it sounded so good. State corruption 101: never trust a politician.

  • @ThatGreenSpy
    @ThatGreenSpy 2 months ago +4

    The EFF will have a field day. Regulation sucks.

  • @calmlittlebuddy3721
    @calmlittlebuddy3721 2 months ago +2

    It's a start. And it's not 100% ignorant of what we need going forward. "We gotta have some law". I am less disappointed with what they came up with than I expected to be when I read the title of this video.

  • @LaughterOnWater
    @LaughterOnWater 2 months ago +5

    According to Claude Opus:
    To improve the bill, I would suggest:
    - Narrowing the scope to only the highest-risk systems to avoid overly burdening the industry
    - Focusing more on standards and guidelines vs. a rigid permitting system
    - Having emergency declaration powers shared with other agencies like DHS and DoD
    - Allowing more flexibility in penalties based on the specifics of violations
    - Ensuring representation of AI experts and ethicists, not just political appointees, in the Administration

  • @devlogicg2875
    @devlogicg2875 2 months ago +8

    Is replacing a job 'harm'? Financial harm?

    • @Alice_Fumo
      @Alice_Fumo 2 months ago +2

      Wouldn't think so, since it is generally legal to fire people.
      If, however, a person is unlawfully removed from their job, then this might apply?

  • @scottmiller2591
    @scottmiller2591 2 months ago +3

    The telephone undermined national security.

  • @ct5471
    @ct5471 2 months ago +4

    Regarding the tier list and flop thresholds, there are early attempts to utilize diffusion models for AI training to either replace or supplement backpropagation. This involves predicting the weights in the network instead of pixels in an image. If scalable, this method could potentially eliminate the flop threshold, as it might drastically reduce the computational power required for training. Moreover, even without such radical software developments, advances in hardware could rapidly alter or diminish the relevance of these thresholds. It's also possible that at some point, compute power won't be the limiting factor, but rather data, memory, or energy. Therefore, I believe this tier list might either be short-lived or require such frequent updates that it may never truly be relevant, except for the largest frontier models, which would likely be reported on independently of flop counts.
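The FLOP-threshold discussion above can be made concrete with a back-of-the-envelope sketch. This assumes the common rough estimate that dense-transformer training costs about 6 × parameters × tokens FLOPs, and uses the 1e26-operation cutoff floated in recent US proposals; the model sizes below are purely illustrative assumptions, not real filings:

```python
# Rough training-compute estimate: FLOPs ~= 6 * params * tokens.
# This is a common approximation for dense transformers; real
# accounting (sparsity, re-computation, fine-tuning) differs.

def training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

THRESHOLD = 1e26  # illustrative cutoff, as floated in recent US proposals

# Hypothetical training runs (sizes are assumptions for illustration).
models = {
    "small-but-well-trained": training_flops(7e9, 2e12),   # 7B params, 2T tokens
    "huge-frontier-run": training_flops(1e12, 2e13),       # 1T params, 20T tokens
}

for name, flops in models.items():
    status = "covered" if flops >= THRESHOLD else "exempt"
    print(f"{name}: {flops:.1e} FLOPs -> {status}")
```

As the comment argues, two runs on either side of any fixed cutoff can differ wildly in capability depending on data quality and algorithms, and hardware progress keeps shifting where the number lands.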

  • @Jimmy_Sandwiches
    @Jimmy_Sandwiches 2 months ago +4

    Would be good to hear what your legal friends say

  • @onehappystud
    @onehappystud 2 months ago +3

    I will fully reject any personal use AI regulation. I can see civil or criminal penalties for any harm done outside of private property use, but otherwise no.

  • @AntonioVergine
    @AntonioVergine 2 months ago +3

    Can't stop AI. You can shut down your in-house AI, but you can't stop the Chinese one, for example. So, will you disarm your gun while others don't?

  • @ct5471
    @ct5471 2 months ago +4

    Do you think this will slow down AI progress in any substantial manner?

  • @ZombieJig
    @ZombieJig 2 months ago +2

    This kills open-source AI development. Of course OpenAI wants this; it locks them in and locks out competition.

  • @jjhw2941
    @jjhw2941 2 months ago +3

    Large corporates will just have a foreign proxy do the training, circumventing the need for a US permit, and then license the model from the foreign proxy for like £1/year. Getting around this nonsense will be trivial for anyone with money and will hamstring everyone else in the US.

  • @RaynaldPi
    @RaynaldPi 2 months ago +6

    This is totally dangerous garbage!

  • @darrylhurtt4270
    @darrylhurtt4270 2 months ago +2

    If they're going to do this kind of legislation, I'm particularly skeptical about the "whistleblower protections"... I'll believe they're actually protected when I SEE it.

  • @liverandlearn448
    @liverandlearn448 2 months ago +2

    Maybe soft-cap AI size through government regulation, and when a company can independently verify safety and security, increase/remove the cap. But it all just comes down to the need to have the whole global player base agree to these things. You could even argue that slowing down is a threat to national security. Can't help but keep comparing AI to nukes.

  • @agi.kitchen
    @agi.kitchen 2 months ago +1

    So is it time to download every copy of groq and whatever else is available right now, before they try to take it away?

    • @ArtRebel007
      @ArtRebel007 2 months ago +1

      No one expects the ... AI Police!

  • @fabiobrauer8767
    @fabiobrauer8767 2 months ago +3

    I mean, isn't going to university and learning physics the same as being able to develop weapons of mass destruction? I get that it might be easier with AI, but it is also easier with more hours spent learning in general.

  • @dreamphoenix
    @dreamphoenix 2 months ago

    Thank you.

  • @2rx_bni
    @2rx_bni 1 month ago +1

    I am like this about the whole thing: if they'd regulated themselves functionally, this wouldn't be needed.
    Pleased to see some movement, but we'll see how it shakes out.

  • @shadfurman
    @shadfurman 1 month ago +1

    Government is just a corporate monopoly, and a corporation in the legal sense is inherently fascistic; it's not just a company.
    When you read that regulation, you're assuming the government will act in a benevolent manner. It won't; it will act in the interests of its biggest donors, corporations. The government wants you to think it acts in your interests, and apparently you do think that, so they've already given you what you want. They have no further incentive to act in your interests.
    The regulations will be applied to decrease competition in favor of big corporations increasing their profits, and the government will use them to destabilize other nations to keep itself dominant, and to sow dissent and propaganda domestically to keep the people from organizing and having a voice against its control.
    That's always been the case with large governments. "Democracy" is just part of that propaganda. Democracy means rule by the people, but there is a reason Congress has a lower favorability rating than cancer: they don't act in the interests of the people, they act in their own interests, usually at the expense of the people, and they blame the population for voting the "wrong" way as to why the people's issues are never addressed, or they just lie about what their "laws" are supposed to address.
    This has always been the case. The most sacred cows of government propaganda are among the most evil. People only believe they're good because of propaganda, but they've done the most harm to the people, and the people never educate themselves in large numbers, because that takes more calories, and we evolved to conserve calories.
    Government (in the way it's colloquially used) is just a criminal organization that biohacked people's psychology to appear legitimate.
    It's contradictory. If the people rule the government, what are the laws for that use aggression to coerce people into doing what the government says?
    If the government has to coerce the people to do what it says, it's not the people ruling the government; it's literally, on its face (it can't be more obvious), the government ruling the people.

  • @MilitaryIndustrialMuseum
    @MilitaryIndustrialMuseum 2 months ago +1

    Gov whacked Craigslist Personals and I haven't had a date since. This will whack AI in a similar way. 😢

  • @paultoensing3126
    @paultoensing3126 2 months ago +1

    Isn't AI a significant threat to lots of monopolies? Won't they lose competitive advantage if the playing field is leveled? Think of how hard the Bell telephone monopoly was hit when it was broken up; but if cellphone tech had emerged before the breakup, they'd have had that tech crushed, or bought and shelved. Don't you miss all those landlines and cords?

  • @isabellinken5460
    @isabellinken5460 2 months ago

    Could you be so kind as to link the act in the description?

  • @paulohenriquearaujofaria7306
    @paulohenriquearaujofaria7306 2 months ago +1

    Gov already did the worst: AI for military applications.

  • @jjhw2941
    @jjhw2941 2 months ago

    Would I have to register the GPU in my phone, tablet and laptop to visit the US?

  • @LOTUG98
    @LOTUG98 1 month ago +1

    They forgot one vital point. Some people like making horrible dangerous things......just to see if it can be done. Doing that with this kind of technology.....😬

  • @beofonemind
    @beofonemind 2 months ago +1

    You know, I don't mind this.

  • @Youbetternowatchthis
    @Youbetternowatchthis 2 months ago +1

    I am a big fan of good regulations. Good regulations make my life better every day. A lack of regulation is causing so much trouble, not only in the US, but all over the world.
    Bad regulations, or regulatory capture, are a huge problem though. Everywhere.
    This is so hard to navigate and really understand as the average voter.

  • @Gwagz
    @Gwagz 1 month ago +1

    Typically, every great jump in technology is shelved until the oligarchs can contrive a scarcity to fleece you for the same amount at the end of the year. The industries that AI threatens aren't going to go without a fight. Since the beginning of written language, they made it difficult for "just anyone" to have access to knowing how to read a book; and both sides said much the same then as now.

  • @JohndotMcGuire
    @JohndotMcGuire 2 months ago

    Did you already cover the AI executive order?

  • @CaedenV
    @CaedenV 1 month ago +1

    If we get AGI before the election can we vote for it instead of our other options?

  • @Matt-st1tt
    @Matt-st1tt 2 months ago +1

    I think this only sets a road map for regulatory capture and will drastically prevent any small companies from joining in. AI is now officially owned by the top companies; the capture is complete, imo.

  • @digitalboomer
    @digitalboomer 1 month ago

    The Tiers of Risk reminds me of something...oh, I know, the color-coded alert system known as the Homeland Security Advisory System, used to communicate the current risk of terrorist activities. Unfortunately, no one knew what to do if things were yellow or red or orange. The system was thrown out and replaced by the National Terrorism Advisory System (NTAS) in 2011. The NTAS aimed to provide more specific information regarding the nature of the threat and recommended actions for public and government response.

  • @seraphiusNoctis
    @seraphiusNoctis 2 months ago +1

    Consider the source of this “bill”: it is not from a congressperson, nor is it the work product of a government task force, agency, or regulatory body. Now, could Congress listen? Sure. Will they? Who knows. But until this has “numbers on it,” it's just a PDF.

  • @KCM25NJL
    @KCM25NJL 2 months ago +1

    I think the risk of regulatory capture is one of those things we'll have to accept to ensure we still have a species at the end of the day

    • @YeeLeeHaw
      @YeeLeeHaw 1 month ago

      There's no regulation that will stop a superintelligent agent. All this regulatory and alignment nonsense is nothing but an excuse to stop normal people from having access to this technology. It's like a zookeeper locking up a tiger and calling it tamed; no, it's not tamed, it's locked up, and A.I., if it ever becomes sentient, will not be contained in that cage.

  • @CMDRScotty
    @CMDRScotty Před 2 měsíci +1

    I think it will favor large corporations and shut out new start-ups and smaller businesses from competing in the market.

  • @leandrewdixon3521
    @leandrewdixon3521 Před 2 měsíci

    Why can I not find anything about this act online? Anybody find the proposal?

  • @KevinKreger
    @KevinKreger Před 2 měsíci +5

It's an unfinished draft of a bill. It's not even ready, let alone voted in as law.

  • @Will-kt5jk
    @Will-kt5jk Před 2 měsíci

    What the hell did that Twitter post even mean by “slam the Overton window shut”?
    Makes zero sense.

  • @agi.kitchen
    @agi.kitchen Před 2 měsíci

    @6:30 well Siri already talks when I have her off, so she technically takes over my device

  • @ridebecauseucan1944
    @ridebecauseucan1944 Před 2 měsíci +2

Gov lets companies build it, then takes it over because it’s “too dangerous”. I’m worried about the people making it and the people who will ultimately own/run it (gov).

  • @djjeffgold
    @djjeffgold Před 2 měsíci +1

    What are the odds they used AI to help them define and write that?

  • @BinaryDood
    @BinaryDood Před měsícem

A mercantile solution like an AI-filtering browser would be better. It's in everyone's interest to know what's real and what's not. But regulation will be necessary to even slightly slow things down until such a creation becomes possible.

  • @TimeLordRaps
    @TimeLordRaps Před 2 měsíci

Not sure why we aren't integrating RLHF or better into pre-training yet?

  • @bartdierickx4630
    @bartdierickx4630 Před měsícem +1

My concern is from a geopolitical perspective. China, Russia, Iran, and N. Korea will not have such regulations in place. They will overtake the USA in AI technology because of this.

    • @ronilevarez901
      @ronilevarez901 Před měsícem

      Although, I'm sure military grade AI won't have those limits either, so...

  • @fii_89639
    @fii_89639 Před 2 měsíci

The Class A misdemeanour applies to LoRAs / post-training, I think? Also a possibility for regulatory capture there, with companies pushing to criminalize LoRAs.

  • @Ryoku1
    @Ryoku1 Před 2 měsíci +4

I'm in the "I'm glad someone who cares more than I do is paying attention to this" camp. I trust David's stance on this. I generally trust the government to at least try to do the right thing, unless the cult regains control.

    • @Hector-bj3ls
      @Hector-bj3ls Před 2 měsíci

      It's currently run by a cult isn't it?

  • @jimlynch9390
    @jimlynch9390 Před 2 měsíci

    As slow as the government works, the legislation will never catch up with the advancement of AI. Not until the technology gets mature, whatever that means for AI.

  • @colorado_plays
    @colorado_plays Před 2 měsíci

    Improving or “USING” sets up the haves and have nots.

  • @ct5471
    @ct5471 Před 2 měsíci +5

Don’t think this will slow down frontier models; no country wants to fall behind. And the smaller players are less targeted by this. The bigger you are and the more powerful models you can build, the more money you have to handle the regulations.

  • @Nosweat99
    @Nosweat99 Před 2 měsíci +3

If these are legitimate fears, how would giving another entity more power help with these realities? If these things can happen, they will. A company will develop it here or overseas without this legislation. Do they want the power to turn off all the lights anytime they feel necessary? lolz
Even then, the AI would’ve already moved through the undersea cables and back when the power is on. It’s an impossible control.

  • @420zenman
    @420zenman Před 2 měsíci +2

    This bill is stupid and disgusting. But I wouldn't expect anything less from congress.

  • @UltraK420
    @UltraK420 Před 2 měsíci

    Ok, so they're categorizing AI as a "medium-concern" if it uses at least 1 yottaflop during its final training run. That's a pretty relaxed security assessment, 1 yottaflop will likely already exceed the requirements for AGI. Their "high-concern" tier is 100 yottaflops. We're not even close to that amount of compute yet but it will arrive very soon.
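The tiering this comment describes can be sketched as a simple threshold check. This is a hypothetical illustration only; the 1-yottaflop ("medium-concern") and 100-yottaflop ("high-concern") cutoffs are taken from the comment above, not verified against the bill's actual text:

```python
# Hypothetical sketch of the tiered FLOP thresholds described in the comment above.
# 1 yottaFLOP = 1e24 floating-point operations; the cutoffs are assumptions, not the bill's text.
def concern_tier(training_flops: float) -> str:
    """Classify a model by the total FLOPs used in its final training run."""
    if training_flops >= 1e26:   # 100 yottaFLOPs
        return "high-concern"
    if training_flops >= 1e24:   # 1 yottaFLOP
        return "medium-concern"
    return "low-concern"

# e.g. a hypothetical 5e25-FLOP training run lands in the middle tier
print(concern_tier(5e25))  # prints "medium-concern"
```

As the comment notes, current frontier training runs sit well below the upper cutoff, so under these assumed numbers everything in production today would fall at or below the "medium-concern" tier.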

  • @johnthomasriley2741
    @johnthomasriley2741 Před 2 měsíci +5

    What does an AI prison look like?

  • @Will-kt5jk
    @Will-kt5jk Před 2 měsíci

    6:01 - to be fair to the hand-wringer, you could also make the argument that a [book] library, or a University, or the practice of software development in general qualify for A) and C) if you don’t adequately define “AI system”.

  • @jsivonenVR
    @jsivonenVR Před 2 měsíci +1

    Indeed here in EU we have it *slightly* more strict 😅

  • @reynoldsVincent
    @reynoldsVincent Před 2 měsíci

I'm glad to have heard your opinion. I'm trying to follow things, but I'm only a Computer Science dropout. It was seeming like some safety things were being flouted. Distillation is one I'm still worried non-experts aren't aware of: even tiny models retain abilities passed on to them by larger models.

    I wish average people were more willing to hear news. It seems my circle has already made up their minds on stunningly little information, a bad move that moves me to think of them as "onlookers" or "hapless bystanders". Not that I'm very pessimistic, just that the pace of development is such that even the best case will still catch most people unprepared, even on how to use AI in their jobs, or even as consumers using it for customer service. I guess people are assuming they won't be able to understand or prevent disasters because it's over their heads and dominated by giant corporations. Which, maybe they have a point there. But even so, learning even the basics can only help, I think, while avoiding learning is the worst, most culpable act at this stage, because this is the stage where humans can most ably intervene or shape things.

    I laud the Europeans, if only for actually paying attention to how the tech works, which, in my opinion, they did at least. I feel better now that this draft exists, but I still think it needs awareness of distillation, jailbreaking, and the potential for even small models to assist anyone in committing mass violence with high technology.

  • @IakonaWayne
    @IakonaWayne Před 2 měsíci

Perhaps they’ll just regulate the energy side of things, as training these models will take considerable amounts of it

  • @John-il4mp
    @John-il4mp Před 2 měsíci +20

All to slow everything down. They know they will lose control, and they want to understand how they can keep it. Everyone should say no to this; this needs to be open source, not for profit, but for the whole world to benefit.

    • @John-il4mp
      @John-il4mp Před 2 měsíci

The ones who are scared are the elite, because they are the ones who will lose the power. That's it.

    • @14supersonic
      @14supersonic Před 2 měsíci

      Exactly, this is all about control at the end of the day. These regulations aren't to protect the people, but the 1%.

  • @The-Spondy-School
    @The-Spondy-School Před 2 měsíci

I've been waiting for some kind of responsible attempt at regulating AI. I'm looking forward to having a reasonable response for my AI-fearing friends and relatives when they come at me with their sci-fi what-if questions -- and sadly, there's a lot of them still asking all of these crazy questions.

  • @victorvaleriani162
    @victorvaleriani162 Před 2 měsíci +4

Can you explain why you see the EU AI Act as "overboard"? Given the different mentalities in legal culture, where Americans regulate problems after the fact and Europeans try to avoid them, I think this would be interesting to hear.

  • @Introverted_goblin_
    @Introverted_goblin_ Před 2 měsíci +1

Class C felony? Ya, no CEO is going to jail for any AI crime. This is going to stifle small shops. To believe otherwise is naive.

  • @spinningaround
    @spinningaround Před 2 měsíci +1

    There should be restrictions on AI, but not on AGI! ☝

  • @agi.kitchen
    @agi.kitchen Před 2 měsíci

    @10:23 how do I start lining up my ducks to get me a permit so they dont take away my right to write code freely?

  • @MichaelDeeringMHC
    @MichaelDeeringMHC Před 2 měsíci

    If the CEO and all the other executives are AIs, and all the employees are agi robots, who goes to jail?

  • @Ramiromasters
    @Ramiromasters Před 2 měsíci

    17:15 My comment in H.P. Lovecraft's style, enjoy!
    "I find myself skeptical that your abject servitude to those in power-your profane tribute paid with obsequious words and debased postures-will accrue any favor or sanctuary within this relentlessly expanding, nihilistic totalitarian regime. Such naive supplications are but whispers lost in the vast, uncaring void."

  • @TimeLordRaps
    @TimeLordRaps Před 2 měsíci

    My Ai doesn't represent it with a self, so how is it supposed to self-report.

  • @Aplayz42
    @Aplayz42 Před 2 měsíci

As a South African citizen: while I'm very happy the US is finally implementing proper legislation, the reality is there needs to be global legislation in this manner, and even then countries like North Korea, China, Iran, and Russia will do as they please. AI has the possibility to deliver us Utopia; unfortunately, the reality is it will become a global arms race

  • @Mimi_Sim
    @Mimi_Sim Před 2 měsíci +1

    8:01 amazing HGTTG reference!

    • @coltanium2000
      @coltanium2000 Před 2 měsíci

      that definitely went over a lot of people's heads

    • @DaveShap
      @DaveShap  Před 2 měsíci +1

      You didn't fill out the right forms... Those are blue

    • @Mimi_Sim
      @Mimi_Sim Před 2 měsíci

      @@DaveShap story of my life!

  • @JohbB
    @JohbB Před 2 měsíci

Sounds like a recipe to move offshore.
    All testing... and testing button-pushers can be done in, say, Costa Rica.

  • @sebastianmurder
    @sebastianmurder Před 2 měsíci

We need strong regulations with sharp teeth. This is a decent start.

  • @CorpseCallosum
    @CorpseCallosum Před 2 měsíci +17

    Name a single thing the government hasn't fucked up then reread and put this regulation into proper context.

    • @SkilledTadpole
      @SkilledTadpole Před 2 měsíci +4

      Name a single thing for-profit corporations haven't fucked up.

    • @jaazz90
      @jaazz90 Před 2 měsíci +2

      Literally every single thing, as can be witnessed by functioning society with public infrastructure all around you.

  • @tiagocbraga
    @tiagocbraga Před 2 měsíci +1

I think it's gonna be nothing in the end

  • @funginimp
    @funginimp Před 2 měsíci

Would this agency regulate other agencies developing AI? Ones for military applications most likely break all these rules. There's the equivalent situation with the FAA and the Air Force, so it's not so unreasonable.

  • @Thomas.Hacker
    @Thomas.Hacker Před měsícem

I had a foretaste 4 years ago of what the FAA is capable of, and it has already recorded enough... I was flying my first Mavic Mini slowly through my small village at eye level to get to know it and its flight characteristics, when suddenly I came up against a "virtual wall" and flew along it for a few meters, even though I was steering straight ahead with the remote control... Some other interesting things followed... But it was probably also a test on their part; in any case, I switched it off, restarted without a connection to the network, and was able to keep flying. I don't know for sure whether they just allowed it or whether I had already bypassed the lock, but I don't work against my nature either; I am a protective person, not interested in damaging or spying on my country, and although I test my position in the space, I am careful and interested in other things, with peaceful thoughts... I know they have been watching me for years, and in some places they also show me that. Do they want to show the limits, or demonstrate them? Anyway, I've seen other things for years (a high-level risk status has been with me for a long time), which is why I do what I do. I watch over my people and my surroundings, take care, and look closely!

  • @the42nd
    @the42nd Před 2 měsíci

    Model weights can't be released? Wasn't grok (and others) planning to release theirs?

    • @nicholasjensen7421
      @nicholasjensen7421 Před 2 měsíci

      That is the issue I see with this too.

    • @the42nd
      @the42nd Před 2 měsíci +1

@@nicholasjensen7421 all the valid cases are being used as a smokescreen for the real intention. Even if they could 100% prevent the legit threats (bioweapons etc.), they'd still find a way to prevent the population from having access. Because a population with AGI is an existential threat to government and megacorp power. I mean... what a nightmare it would be if the population used AGI to build a real democracy.

    • @the42nd
      @the42nd Před 2 měsíci +1

      @@nicholasjensen7421 yeah feels like the safety concerns (while many are valid) are also smokescreen for centralizing power.

  • @berkertaskiran
    @berkertaskiran Před 2 měsíci +1

Can this cause less regulated places to surpass the US in AI development? This feels like it will only affect medium-sized initiatives; the big ones will have their way, and the very small ones won't have the hardware to do anything meaningful. If anything, this is more harmful than helpful. I can understand now better why Sam was so eager to be regulated. Thankfully, Claude 3 has been the leading AI for a while now.

  • @novantha1
    @novantha1 Před 2 měsíci

    The one thing about this that I'm relatively fine with is the emphasis on "frontier" in the description of most of the clauses.
I generally think that I would prefer that any AI which can be developed with a level of hardware an enthusiast consumer could reasonably be thought to have (maybe 4 to 8 xx90-class GPUs or so) be relatively unregulated, under the logic that it's not really possible to prevent typical consumers from using their own hardware to achieve something, and any attempt to prevent that would have to be necessarily draconian.
    But, these regulations lead to a lot of weird questions.
    What if you take three language models, all trained on specialized datasets whose training was within the FLOP limit, but when operated as a single system, they outperformed a model trained on one or two orders of magnitude more, and so on? Are you free to employ models in inference in any manner?
What if you use some sort of compositional cross-attention strategy and weld together several other existing large models? Is the FLOP requirement based on the FLOPs used to train the entire system? Is it really fair to count all the FLOPs used in the individual models when that may not be indicative of the performance of the final one? Because depending on the method used to combine them, you could potentially get a very specialized combined model with insane performance in a specific domain with not a lot of FLOPs.
    What about quantization? What if you train a model beyond the FLOP limit, but then quantize it down to less performance than is typically expected of the FLOP limit, with the intent of the model being easier to run than a typical model trained up to the limit?
    What about the English soup LM? In (at least I think it was) England, there was a tradition that people would cook a soup, and continually add ingredients over a weekend to produce an "everlasting soup" and just skim some of the soup off the top whenever they wanted some. What about a small language model (relatively speaking) that didn't really have frontier AI performance, and was continually trained for an absurd number of FLOPs, with the developers picking and choosing various checkpoints for their unique performance on specific tasks?
    What about advanced agentic frameworks? If you're willing to use 10,000 inference runs on Mixtral, you can achieve some very impressive results that outperform typical "frontier" models in certain benchmarks.
    In my opinion, a lot of these regulations are good in some ways, but lack a lot of the nuance needed to really tackle an issue like this, and I was really hopeful for the future of AI...But I was hopeful, because it looked like something that everyone could contribute to, benefit from, understand, and use. I don't know if the future is as bright with ultimate power limited to large corporations.