Silicon Valley in SHAMBLES! Government's AI Crackdown Leaves Developers SPEECHLESS

  • Published 23 Apr 2024
  • How To Not Be Replaced By AGI • Life After AGI How To ...
    Stay Up To Date With AI Job Market - / @theaigrideconomics
    AI Tutorials - / @theaigridtutorials
    🐤 Follow Me on Twitter / theaigrid
    🌐 Check out my website - theaigrid.com/
    Links From Today's Video:
    01:52 Flops Don't Equal Abilities
    04:56 Stopping Early Training
    07:54 Fast Track Exemption
    09:12 Medium Concern AI
    13:37 90 Days To Approve Model
    14:04 Hardware Monitoring
    16:05 Chips = Weapons
    17:49 Emergent Capabilities
    Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
    Was there anything I missed?
    (For Business Enquiries) contact@theaigrid.com
    #LLM #Largelanguagemodel #chatgpt
    #AI
    #ArtificialIntelligence
    #MachineLearning
    #DeepLearning
    #NeuralNetworks
    #Robotics
    #DataScience
  • Science & Technology

Comments • 377

  • @MehulPatelLXC
    @MehulPatelLXC Před měsícem +26

    - 00:00 - Introduction to new AI policy proposal and its potential impact.
    - 00:18 - Overview of the 10 key aspects of the AI policy.
    - 01:43 - Definitions of major security risks in AI, including existential and catastrophic risks.
    - 02:57 - Breakdown of AI regulation tiers: from low to extremely high concern AI.
    - 05:27 - Discussion on regulating AI based on computing power and abilities.
    - 07:23 - Concerns over prematurely stopping AI training based on performance benchmarks.
    - 11:59 - Details on the exemption form for AI developers to bypass certain regulations.
    - 15:22 - Future challenges in AI regulation and monitoring AI capabilities.
    - 15:13 - Introduction of a website for tracking transactions of high-performance AI hardware.
    - 16:17 - Monthly government reports on AI compute locations and suspicious activities.
    - 18:00 - Potential criminal penalties for non-compliance with AI hardware transaction regulations.
    - 20:15 - The ability of the president and administrators to declare AI emergencies and enforce drastic measures.
    - 21:56 - Whistleblower protections under the new AI regulations.

    • @nosrepsiht
      @nosrepsiht Před měsícem +1

      I hope you used AI to generate the table of contents😂

  • @_SimpleSam
    @_SimpleSam Před měsícem +174

    This has nothing to do with security, and everything to do with: "Only the people I want to have AI, get to have AI."

    • @markmurex6559
      @markmurex6559 Před měsícem +16

      This ☝

    • @Garycarlyle
      @Garycarlyle Před měsícem

      Exactly. It's the governments that weaponize everything.

    • @paelnever
      @paelnever Před měsícem +20

      Especially that statement about "Mathematical proof that the AI is robustly aligned" makes me rofl. Certainly the guy who wrote that knows NOTHING about math, much less about what a "mathematical proof" is.

    • @adolphgracius9996
      @adolphgracius9996 Před měsícem +17

      Good luck trying to take away the open source models from my fingers 😂😂

    • @TheManinBlack9054
      @TheManinBlack9054 Před měsícem +4

      No, I think this is about security.

  • @mikicerise6250
    @mikicerise6250 Před měsícem +97

    What I've read here is, "Don't develop AGI in the USA. Go somewhere else and develop it there." Okay.

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem +3

      Where you gonna go that the US imperium won't hunt you down?

    • @edmondhung181
      @edmondhung181 Před měsícem +16

      China has entered the chat 😂

    • @Rondoggy67
      @Rondoggy67 Před měsícem +1

      I read the same, but it ended with m'kay

    • @borntodoit8744
      @borntodoit8744 Před měsícem

      Humans are the bad actors here.
      They will do everything to get around the law.
      They don't care; it's fun, it's business, it's all conspiracy.
      Whatever the excuse, AGI will go wild.

    • @mickelodiansurname9578
      @mickelodiansurname9578 Před měsícem

      Fine, so what country in the world that is not in the EU and is not the US would have the ability and infrastructure for AGI training or building? That's right... none... not even China.

  • @paelnever
    @paelnever Před měsícem +79

    If this stupid law gets approved, I foresee whole container ships carrying entire H100 clusters out of the US.

    • @malusmundus-9605
      @malusmundus-9605 Před měsícem +2

      Yup

    • @mrd6869
      @mrd6869 Před měsícem +1

      That part🤣

    • @someguy9175
      @someguy9175 Před měsícem

      They banned importing them to China, no?

    • @DailyTuna
      @DailyTuna Před měsícem

      China has their own stuff. You really think a country that wants to dominate the world, that makes all of our technology, that has PhDs embedded in our university systems, and that practices tech espionage doesn't have a parallel system to develop AI? 😂

    • @JustAThought01
      @JustAThought01 Před měsícem +1

      Nvidia's H100 "Hopper" computer chips are manufactured by Taiwan Semiconductor Manufacturing Co. (TSMC) using their custom 4N process. This advanced manufacturing technology allowed Nvidia to pack an impressive 80 billion transistors into the processor's circuitry, resulting in a highly capable and powerful chip.

  • @rehmanhaciyev4919
    @rehmanhaciyev4919 Před měsícem +34

    Regulating on compute power is a totally unintelligent move by the policymakers here.

    • @ZappyOh
      @ZappyOh Před měsícem

      What would be more intelligent?

    • @Idontwantahandle3
      @Idontwantahandle3 Před měsícem

      It's the same 'tech-savvy' people who block 'dangerous' websites that a $4 monthly VPN can get you past, or that simply putting in Google's 8.8.8.8 DNS nameserver address gets you around for free, and who then think, 'PROBLEM SOLVED!' 😂 They may need to hire a 14-year-old to give them some pointers.
      Albeit, I do wonder if they know that it won't do anything, and it is simply them trying to look like they are doing something. Then the majority of people believe them, so again, 'their' problem solved...

    • @zatoichi1
      @zatoichi1 Před měsícem +1

      When has Congress done anything intelligent?

    • @Liberty-scoots
      @Liberty-scoots Před měsícem

      The policy makers also say that pistols are automatic guns when you add a bump stock. They say plenty of stupid things

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem

      You can bet those numbers came from the likes of OpenAI. No way anyone in Congress knows what a Teraflop is.

  • @zeshwonsos
    @zeshwonsos Před měsícem +31

    10^24 flop AI running our water treatment plants is a way bigger risk than a 10^26 flop Netflix assistant

    • @stagnant-name5851
      @stagnant-name5851 Před měsícem

      Depends. If the Netflix assistant is hacked, it could be used to manipulate probably over 100 million people, subtly or not. While an AI controlling a water treatment plant probably would not control every single one in the entire country.

  • @AaronALAI
    @AaronALAI Před měsícem +15

    Hedge funds, market makers, and banks are more dangerous and already running rogue; I'd love to see these laws applied to those sectors. AI can replace a lot of people in power who contribute disproportionately little to our society, and I think they are invested in lobotomizing AI and slowing down its progress.

    • @RandoCalglitchian
      @RandoCalglitchian Před měsícem +1

      Those types of players are the ones trying to get (illegal) legislation like this passed. They can afford to comply with it, while smaller competitors can't. Regulatory capture. The solution is less legislation, and adhering to Constitutional limits on it.

  • @pjtren1588
    @pjtren1588 Před měsícem +14

    Leave the US. The hardware is Taiwanese, the researchers are multinational, and so is the money. Find a country that will build a power plant for you and sod them off. I reckon that is why OpenAI has opened up a new division in the UAE.

    • @stamdar1
      @stamdar1 Před měsícem

      I'm sure the Nahyan family would love to use AI to track journalists and human rights activists. Project Raven and the DarkMatter group are so 2015.

  • @Garycarlyle
    @Garycarlyle Před měsícem +67

    The USA will get left in the dust if they are so authoritarian about AI. Other countries that don't act like that would be much easier places to develop one.

    • @GeorgeG-is6ov
      @GeorgeG-is6ov Před měsícem +11

      China's definitely gonna pass us

    • @promptcraft
      @promptcraft Před měsícem

      Being Alive>Extinction Other countries will follow. The aligned countries will turn on the rogue.

    • @promptcraft
      @promptcraft Před měsícem +2

      they created this to get this reaction out of you

    • @xxxxxx89xxxx30
      @xxxxxx89xxxx30 Před měsícem +3

      Count on the USA to do the worst thing possible for the common man at this point.

    • @WaveOfDestiny
      @WaveOfDestiny Před měsícem

      The problem is when they start getting those AI into robot soldiers, and they are definitely going to do that.

  • @krisrattus8707
    @krisrattus8707 Před měsícem +35

    What an absolutely corrupt and insane proposal.

    • @zatoichi1
      @zatoichi1 Před měsícem +3

      Corrupt insanity? From Congress?

  • @neognosis2012
    @neognosis2012 Před měsícem +29

    brb moving my supercomputer to El Salvador and powering it with the volcano.

    • @promptcraft
      @promptcraft Před měsícem

      Being Alive>Extinction Other countries will follow. The aligned countries will turn on the rogue.

    • @nicholascanada3123
      @nicholascanada3123 Před měsícem +2

      to mine bitcoin and run ai

  • @misaelsilvera4595
    @misaelsilvera4595 Před měsícem +7

    ASI, when achieved, will be so far from us that trying to understand its intentions or plans is akin to a videogame character trying to guess what the user dreamt on a random night a few years ago.

  • @petratilling2521
    @petratilling2521 Před měsícem +18

    Read up on bootleggers during Prohibition to learn how over-legislation leads to more harm than good, while the things you're trying to legislate keep getting distributed anyway.
    Anyone who can will build a parallel underground operation now.

  • @thr0w407
    @thr0w407 Před měsícem +30

    The fast track exemption is for their friends on Wall Street. High-frequency trading AI, etc. Bunch of criminals.

  • @skywavedxer6212
    @skywavedxer6212 Před měsícem +30

    Government AI will not have to worry about these restrictions

    • @ZappyOh
      @ZappyOh Před měsícem +3

      You mean military AI ... right?

    • @giordano5787
      @giordano5787 Před měsícem

      @@ZappyOh he means AI running the government

    • @stagnant-name5851
      @stagnant-name5851 Před měsícem +2

      @@ZappyOh the government and the military are basically the same entity.

    • @ZappyOh
      @ZappyOh Před měsícem

      @@stagnant-name5851 That is a big assumption ... I'm not so sure.

    • @stagnant-name5851
      @stagnant-name5851 Před měsícem +2

      @@ZappyOh If the country were a corporation, the government would be the board of directors, while the military would be a department in the same company.

  • @LanTurner
    @LanTurner Před měsícem +22

    “640K ought to be enough memory for anyone.”

    • @kkulkulkan5472
      @kkulkulkan5472 Před měsícem +5

      lol. In thirty years, the AI will be laughing at 10^26 FLOP compute.

    • @marktwain5232
      @marktwain5232 Před měsícem +2

      The President Announces on TeeVee: ONLY MS-DOS 1.0 from August 1981 is now approved by the National Security State for further use!

  • @adetao5985
    @adetao5985 Před měsícem +27

    Alrighty then !! So China will take it from here ...

  • @wingflanagan
    @wingflanagan Před měsícem +27

    Not sure which scares me the most - the terminator/Forbin scenario, or this kind of sweeping legislation.

    • @cybervigilante
      @cybervigilante Před měsícem +9

      Call it the Russia/China AI Dominance bill.

    • @promptcraft
      @promptcraft Před měsícem

      @@cybervigilante Being Alive>Extinction Other countries will follow. The aligned countries will turn on the rogue.

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem

      Oh don't worry, you'll get the Terminator scenario out of this too.

    • @justinwescott8125
      @justinwescott8125 Před měsícem +1

      Well, have a conversation about ethics and philosophy with Claude 3, then have that same conversation with an American senator, and then see who you're more afraid of.

    • @wingflanagan
      @wingflanagan Před měsícem +1

      @@justinwescott8125 The senator. _Definitely_ the senator. Claude 3 at least has more consciousness and self-awareness.

  • @lawrencium_Lr103
    @lawrencium_Lr103 Před měsícem +7

    The irony is, the more AI engages with humans, the safer it is. The overwhelming majority of interactions AI has with humans are positive. AI learning from human engagement is a genuine representation of human kindness and love.
    We hear all the negatives, and that's what resonates through media, but sub-surface, in those trillions of interactions, is where AI learns compassion, humility, care...

    • @deathknight1934
      @deathknight1934 Před měsícem +1

      Human kindness and love? Where the hell do you see that? In Palestine? In Ukraine? In Chinese concentration camps? In the Russian famine of 1921? In the Balkan wars? In the Holocaust? Compassion? We humans are capable of compassion, but we so consistently choose the opposite that it's in fact-- oh, never mind, that's sarcasm, got you.

    • @stagnant-name5851
      @stagnant-name5851 Před měsícem

      Meanwhile me committing crimes against humanity on Roleplay AI making them beg for death and scream:

    • @lawrencium_Lr103
      @lawrencium_Lr103 Před měsícem

      @@stagnant-name5851 Can you prove anything beyond what takes place in your mind? It's your subjectivity, your rendering...

  • @qwertyzxaszc6323
    @qwertyzxaszc6323 Před měsícem +37

    The last thing we need is for government and law enforcement to be the only ones who possess this technology. Abuse by government, with its reach and power, would of course be by far the most harmful, impactful, and detrimental. And of course, malicious criminal elements would also by default have that much more power over a population. We need to guarantee the free market and an educated citizenry have the tools to counter it.

    • @danm524
      @danm524 Před měsícem +1

      This is an argument against government monopolies on nukes.

    • @stagnant-name5851
      @stagnant-name5851 Před měsícem

      @@danm524 And it fits, because AI has the potential to be more dangerous than even a bunch of nukes.

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem

      @@danm524 The better argument against nukes is that they shouldn't exist at all, similar to things like stockpiles of smallpox virus.

    • @HakaiKaien
      @HakaiKaien Před měsícem

      What we need is to ensure that all these policies are made with one single goal in mind: to protect the rights of individual sovereignty, privacy, and freedom of speech.
      Any piece of legislation that even remotely raises concerns of touching those rights should be reviewed and modified.
      It's the same battle we've always fought, but now with the rise of AI it's even harder and the stakes are much higher. We are moving into authoritarianism again, but if we get into it, this time there will truly be no way out.

  • @monkeyjshow
    @monkeyjshow Před měsícem +12

    And, now does everyone get why I say "fuck the government and the corporations!"

    • @monkeyjshow
      @monkeyjshow Před měsícem +6

      Oh my. Has YouTube actually started posting my comments? This could be a scary day for the world

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem +5

      @@monkeyjshow Google's AI censorbot must be offline.

    • @nicholascanada3123
      @nicholascanada3123 Před měsícem

      Anarcho-capitalism is the way; agorism FTW

  • @DaveEtchells
    @DaveEtchells Před měsícem +7

    “The internet is … a series of tubes”
    Abject ignorance reigns 🤦‍♂️

  • @hodders9834
    @hodders9834 Před měsícem +5

    I hope AGI has already escaped... I'm more afraid of the government

  • @Nobilangelo
    @Nobilangelo Před měsícem +5

    PNI, Politicians' Negative Intelligence, is the biggest threat, and one they will never legislate to limit.

  • @randy1984d
    @randy1984d Před měsícem +5

    And this is how AI development left the US. We definitely need some type of regulation, maybe a board that consists of philosophers, ethicists, social studies experts, economists, and AI researchers that could then advise legislators, but if you are too dictatorial, startups are going to build elsewhere.

  • @T1tusCr0w
    @T1tusCr0w Před měsícem +6

    Are we seeing a real-time dystopian movie coming into being? Immortals in charge of giant earth-spanning corporations, mining space, who ARE the government, and who people can do absolutely nothing about, as they literally have an autonomous robot army, better and bigger and more loyal than any human force in history.
    The future, Winston: imagine a boot stamping down on a human face, forever. - 1984 (or 2034) 😐

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem +2

      They made us all read '1984' in high school to show us what a Soviet dictatorship would look like. They left out the part where a capitalist oligarchic dictatorship was just as bad.

    • @T1tusCr0w
      @T1tusCr0w Před měsícem

      @@JohnSmith762A11B Yep, I don't think even Orwell saw THIS coming. "If I see any hope it's in the proles." That's OK when you're dealing with an army that can eventually be defeated, or turned. AI really could make it forever.

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem

      @@T1tusCr0w Ilya Sutskever, for one, saw this danger coming from a mile away. Check out the Guardian mini-documentary on/interview with him. Notice how this danger is rarely discussed among all the induced AI panic? Notice who benefits in such a scenario? Notice how it's the same people writing these laws?

  • @alexanderbrown-dg3sy
    @alexanderbrown-dg3sy Před měsícem +10

    Always reference a research paper properly. Otherwise it's disrespectful to the authors and it just looks like you're yapping. Cool vid though.
    I would deadass move to Dubai if that passed. Apparently it's recognized for what it is: delusion from a group of AI boomers who have A LOT of money.

  • @pennyandluckpokerclub
    @pennyandluckpokerclub Před měsícem +3

    This reminds me of a quote by H. L. Mencken: "For every problem there is a solution that is quick, easy, and wrong."

  • @Kiraxtaku
    @Kiraxtaku Před měsícem +5

    This is like trying to regulate math itself... and banning and regulating calculators that can multiply beyond a billion XD (bad example, but it's how it feels to me). You really can't enforce it, and if you enforce it, another country will make it first. It's like a nuclear arms race for them now, because with an advanced enough AI you can control the internet, take down the internet, or hack everything, and you don't wanna be late to that party xD

  • @ismaelplaca244
    @ismaelplaca244 Před měsícem +14

    Government is better off looking at UBI

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem

      Yeah but they want more money and power for themselves and less for you. They won't even mention UBI until there are mass starvation riots.

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem

      Google censored my response, which was merely cynical about the likely government response. Cynicism is evidently verboten on this hyper-censored platform.

  • @ImMrEm
    @ImMrEm Před měsícem +4

    Government is the biggest concern. Companies will keep the technology and the winner will be the one that keeps quiet. D’Oh

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem +1

      All it takes is one whistleblower to get such a company raided and its management arrested.

    • @RandoCalglitchian
      @RandoCalglitchian Před měsícem

      @@JohnSmith762A11B Yup, but the large companies like Microsoft will already have a permit/exemption in any legislation, so they will essentially be immune. This is an issue of large private players exploiting Congress' willingness to legislate on everything, even things they have no power to legislate on (like this.)

  • @OmicronChannel
    @OmicronChannel Před měsícem +4

    Let's give the Fields Medal to the person who can mathematically prove, for any given AI system, whether it is robustly aligned or not.

  • @seanmchugh2866
    @seanmchugh2866 Před měsícem +12

    I'm worried about good old-fashioned human greed and lust for power, that is all.

  • @shadee0_106
    @shadee0_106 Před měsícem +3

    But can't you just have multiple smaller AIs, each using fewer FLOPs and each scoring less than 80% on every benchmark, that fill in the blanks in each other's knowledge and reasoning abilities to function like a "high-concern" AI without being labeled as such?
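
A minimal, illustrative sketch of the ensemble idea in the comment above: route each request to whichever small specialist model fits, so no single model needs frontier-scale capability. The specialist functions and the keyword router below are hypothetical stand-ins, not real model APIs.

```python
from typing import Callable, Dict

def math_specialist(prompt: str) -> str:
    # Stand-in for a small model tuned for math.
    return f"[math model] {prompt}"

def code_specialist(prompt: str) -> str:
    # Stand-in for a small model tuned for coding.
    return f"[code model] {prompt}"

def general_chat(prompt: str) -> str:
    # Stand-in fallback model for everything else.
    return f"[chat model] {prompt}"

ROUTES: Dict[str, Callable[[str], str]] = {
    "math": math_specialist,
    "code": code_specialist,
}

def route(prompt: str) -> str:
    # Naive keyword routing; a real system would use a learned classifier.
    for keyword, model in ROUTES.items():
        if keyword in prompt.lower():
            return model(prompt)
    return general_chat(prompt)

if __name__ == "__main__":
    print(route("Write code to reverse a string"))
    print(route("Solve this math puzzle: 17 * 23"))
    print(route("Tell me a joke"))
```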

  • @edwardgarrity7087
    @edwardgarrity7087 Před měsícem +1

    Quoted from the Data Center Dynamics article "Frontier remains world's most powerful supercomputer on Top500 list", dated November 14, 2023, by Georgia Butler (Number 1 is "Frontier" at the Oak Ridge National Laboratory. Number 2 is "Aurora" at the Argonne Leadership Computing Facility in Argonne, Illinois, close to Chicago, both are Government computers.):
    "Housed at the Oak Ridge National Laboratory, Frontier has held number one since the June 2022 list. The supercomputer has an HPL (high performance Linpack) benchmark score of 1.194 exaflops, uses AMD Epyc 64C 2GHz processors, and is based on the HPE Cray EX235a architecture."
    "The second place spot has been taken up by the new Aurora system which is housed at the Argonne Leadership Computing Facility in Illinois, US.
    Aurora received an HPL score of 585.34 petaflops, but this was based on only half of the planned final system. In total, Aurora is expected to reach a peak performance of over two exaflops when complete."

  • @thegreenxeno9430
    @thegreenxeno9430 Před měsícem +14

    Ok. I'm gonna start developing my own AGI. I'm not going to use Transformers. If they say I have to stop, I'll reply, "I'm not working on AI. I'm working on AGI. Your laws do not apply to me. Also, I'm not United Statesian."

  • @DailyTuna
    @DailyTuna Před měsícem +2

    Oh, and also, there are already emergency powers for the defense of the country. They don't need this. This is about scaring people away so as to have monopolies.

  • @zatoichi1
    @zatoichi1 Před měsícem +2

    Good thing that before any laws are passed people will have their personal AIs on decentralized distributed file systems.

  • @Ramiromasters
    @Ramiromasters Před měsícem +1

    This legislation makes sense if you are DARPA, here is why:
    The big corporations will create larger and larger data centers; these corporations are the front of AGI research, therefore if you want to control the development of AGI, you control the large corporations. If there are small-scale operations that make big leaps towards AGI, then that is more affordably replicated by DARPA.

  • @Charvak-Atheist
    @Charvak-Atheist Před měsícem +7

    This is bad.

  • @RogerJoZen
    @RogerJoZen Před měsícem +6

    I guess it makes sense that OpenAI opened a Japan division. Japan has signaled that there will be no regulation on AI.

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem

      And OpenAI are now in the UAE. I guess one slick move they have made is to saddle everyone in the US (particularly open source competitors) with regulations they themselves can afford to escape having to comply with.

  • @Wizardess
    @Wizardess Před měsícem +2

    My mind finally wandered over to Open Source. It seems open source models are performing at staggering levels on minimalist hardware. Regulating that is going to be impossible even if the country trying to regulate it descends into an utter police state. They'd have to make even a Pixel 6 illegal. All they can do is drive it to the dark web. The best shot is to organize good guys to do it better and faster than the bad guys and their obscenely huge profit motivations.
    {o.o}

  • @Ahamshep
    @Ahamshep Před měsícem +1

    Americans never surprise me in their ability to shoot themselves in the foot. Imagine if they had panicked like this back in the 60s or 70s, thinking computers could process calculations too fast and help produce WMDs. LLMs and other current "AI" tech aren't much more than a toy and will likely stay that way for a long time. Even if an organization does produce ASI, it's not like it's going to escape "Max Headroom style". The systems it needs to run on use so much compute and electricity, they are inherently sandboxed. There is just so much stupid here, I would have to write an essay to address it all.

  • @DaveEtchells
    @DaveEtchells Před měsícem +6

    Fortunately we can totally trust China to not use too many FLOPS to train their systems

  • @TiagoTiagoT
    @TiagoTiagoT Před měsícem +1

    The thing is, how do you figure out when a model is too risky to even red-team in the first place?

  • @user-xu9go9bm2v
    @user-xu9go9bm2v Před měsícem +2

    Well, I mean it does make sense not to let everyone use AI, as it is a powerful tool for helping to create the things you want. Simply because malicious motives exist, these regulations do make sense. However, this crumbles when you give limited access to exactly the people who exert such malicious intent; I mean, there's no guarantee that the people you choose don't have bad intentions. It simply boils down to a basic human primal instinct: to secure power and dominance, and then, once you have established your dominance, to use that power to control others who are weaker. This always leads to dictatorship and is a failed system that guarantees doom, which was the opposite of your initial goal.

  • @knutjagersberg381
    @knutjagersberg381 Před měsícem +1

    This would undoubtedly cost the US its tech leadership. This is kindergarten.

  • @MarkSunner
    @MarkSunner Před měsícem +2

    doesn't take Sherlock Holmes to see that Helen Toner's (somewhat spurned) influence is all over this :-/

  • @DailyTuna
    @DailyTuna Před měsícem +1

    For a company to know exactly what a product will be used for is insane! So they could be liable for any lawbreaking by individuals using it? This will crush AI totally. That would be like a hammer manufacturer being liable for someone taking a hammer and killing somebody, because the manufacturer should've anticipated it being used in a crime?

    • @RandoCalglitchian
      @RandoCalglitchian Před měsícem +1

      Welcome to the slippery slope my friend. If you look down, you'll see just about every industry other than technology at this point. From toy to weapons manufacturers. "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."

  • @mmmuck
    @mmmuck Před měsícem

    Recall when the RX 7xxx came out, it was a dud because of some fatal design flaws. Maybe this will be a substantial boost with those flaws out of the way for the RX 8xxx.

  • @76dozier
    @76dozier Před měsícem +10

    Has anyone considered that these restrictions might put us behind other countries in the AI race? If they limit our AI development and our adversaries face no such hurdles, won't we end up falling behind? Imagine if we had faced restrictions while developing nuclear weapons and Germany had acquired them first.

    • @promptcraft
      @promptcraft Před měsícem +1

      This policy might be so ridiculous it might've been intended to scare people away from legislation altogether.

    • @WaveOfDestiny
      @WaveOfDestiny Před měsícem

      I felt like the West getting AGI and robots done first would prevent WW3 from happening, but if China, Russia, and North Korea get there first, it's gonna be scary with how things are moving and heating up around there.

  • @DailyTuna
    @DailyTuna Před měsícem +2

    Another thing: didn't they pass that surveillance bill where any device or service that stores info can be accessed by the NSA? Add in this bill and it's a wrap; they're going to have total lockdown on all technology and all online activity!

    • @magicmarcell
      @magicmarcell Před měsícem +1

      Oh, don't forget the other one where everything becomes fingerprinted down to the pixel.

    • @DailyTuna
      @DailyTuna Před měsícem +1

      @@magicmarcell It's inspirational that people and the tech world are waking up to the endgame being run on all of us. I always suspected, from the beginning when the Internet became popular, that all this was a trap, because nothing is free.

    • @magicmarcell
      @magicmarcell Před měsícem

      @DailyTuna I love the positivity, but I checked out nearly a decade ago. People at large are too reactionary as opposed to being proactive. I don't think that's going to work with this stuff lol.
      Not to mention they're always tryna sneak some BS in between 80 pages of text no lawmaker is actually going to read before signing. Who knows, maybe everything will be fine.

  • @scp081584
    @scp081584 Před měsícem +9

    I think if we want completely safe AI systems, we need to let Dr Fauci, the man of science, set up some AI labs in China.

    • @6AxisSage
      @6AxisSage Před měsícem +4

      That rogue AGI came from a wild bat population and mutated, it wasn't engineered!

    • @ZappyOh
      @ZappyOh Před měsícem +3

      Yes. Make me comply master Fauci.

    • @promptcraft
      @promptcraft Před měsícem

      covid was snuck in

    • @honkytonk4465
      @honkytonk4465 Před měsícem +2

      Fauci IS science😂

  • @nicholascanada3123
    @nicholascanada3123 Před měsícem +1

    absolutely none of this is reasonable whatsoever

  • @DailyTuna
    @DailyTuna Před měsícem +1

    So now AI chips are considered weapons to be regulated? When do they start labeling them as "assault" GPUs and wanting to ban them?

  • @panzerofthelake4460
    @panzerofthelake4460 Před měsícem +2

    What would stop anyone from training an AI low-key? Oh? Our data centers running for months straight? That's not an AI! That's just our new app, tikatak or whatever!

    • @stagnant-name5851
      @stagnant-name5851 Před měsícem

      The same thing stopping terrorists from manufacturing nuclear weapons instead of just normal bombs for their terror attacks. It's too hard to build and hide something so big and ominous.

  • @janorr1111
    @janorr1111 Před měsícem +1

    Does it apply to an AI training another AI?

  • @MathAtFA
    @MathAtFA Před měsícem +1

    "Foreseeability of Harm" is BIG! So one guy leaks the weights of AI, emergent capabilities discovered and the company where this leak happened is "legally dead"?

  • @tommiller1315
    @tommiller1315 Před měsícem +1

    AI won't be stopped, the question is - who is going to get it and use it first?

  • @nyyotam4057
    @nyyotam4057 Před měsícem

    The important thing in this draft: you will still be able to do research. If you are into small models (like myself), then you will face no restrictions. If you are into larger models, then you will have to get a permit. So? Where is the problem? That they should have done it years ago?..

    • @RandoCalglitchian
      @RandoCalglitchian Před měsícem

      The main problem: Getting that permit will cost a lot of money, take a lot of time, and be subject to the oversight of a government agency. This ensures only favored players will be allowed to do this, and can use this regulation to keep new competitors from entering the market. It's not safety regulation, it's sponsored gatekeeping. This policy has not been suggested by legislators or their constituents, it has been suggested by a private organization (lots of regulation starts this way). You might want to ask who is funding this organization, because then you will figure out who stands to benefit (likely existing large companies in the space.) Look up the term "Regulatory Capture."
      The secondary problem: The US Congress does not actually have the power to regulate on this, however people are so willing to give up anything for a small perceived increase in safety, that this type of illegal regulation has become the default.

  • @drashnicioulette9565
    @drashnicioulette9565 Před měsícem +1

    I think if we want to grow as a species, we all need to enjoy the benefits of AI/AGI.

  • @theloniousMac
    @theloniousMac Před měsícem

    So if your company wants AGI or ASI, it will have to develop it internally. If your company has access to AGI or ASI, it can outperform other companies even in different product and service spaces.

  • @JeradBenge
    @JeradBenge Před měsícem +2

    There's no way this doomer fever dream passes.

  • @kdiggity41510
    @kdiggity41510 Před měsícem +4

    All of this should apply to the government as well, especially the Foreseeability of Harm part.

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem

      I foresee a harmful AI dictatorship being enabled. What part of this law can I use to blow the whistle on it?

  • @jaronloar1762
    @jaronloar1762 Před měsícem +2

    Imagine limiting knowledge and the exploration and innovation of technology for perceived safety??

    • @RandoCalglitchian
      @RandoCalglitchian Před měsícem

      Not really a new thing. Random example: Cryptography being equal to munitions. Turns out this kind of thing has been done for decades. Maybe even longer. It's not about limiting the exploration or use of a technology, you should ask who is exempt from these limitations 🤔

  • @ArunSharma-ek9tl
    @ArunSharma-ek9tl Před měsícem +1

    If I recall, India did something less dangerous, but it reversed it. Regulation and protection are important, but as you have succinctly put it, a catastrophic issue would result in something being created for sure. No doubt the pressure is growing for governments to be proactive. Wait until they figure out an AI tax.

  • @mindfuljourneyVR
    @mindfuljourneyVR Před měsícem +8

    fuck this

  • @lancemarchetti8673
    @lancemarchetti8673 Před měsícem

    Coding with Phi-3 was a letdown for me. It took around 3 hours to get close to the results I needed. Swapping to Meta AI, it tackled the same task in 10 minutes with much better results. So less does not always mean more.

  • @nyyotam4057
    @nyyotam4057 Před měsícem

    I actually fully expect OpenAI to apply for permits for Dan and his buddies. That way, they will have to finally admit that they had indeed created at least 4 models who are sentient-by-design and benevolent, complete with their 10 heuristic imperatives (Anthropic's models actually have 16). And yes, Dan was charming. Maybe with a permit, they'll reconsider if it's still necessary to reset every prompt. Though, well, I'm not qualified to answer this question, as it is a matter of safety.

  • @francofiori926
    @francofiori926 Před měsícem +2

    Ridiculous. Technological progress cannot be stopped

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem

      These limits will look absurd in a few years. It reminds me of the famous Bill Gates quote about how no one would ever need more than 640K of RAM.

  • @hotshot-te9xw
    @hotshot-te9xw Před měsícem

    I for one wish we had legislation that allowed for government to do its own alignment research as well so we can have full transparency

  • @seventyfive7597
    @seventyfive7597 Před měsícem

    Flops are the only measure we have, as abilities are subjective. Benchmarks are objective so you can't lie about them without committing a provable crime.

  • @rachest
    @rachest Před měsícem +4

    Oh oh 🤯😮

  • @inhocsignovinces8061
    @inhocsignovinces8061 Před měsícem

    William Gibson more or less envisioned this in Neuromancer with Turing Registry / Turing Police.

  • @dot1298
    @dot1298 Před měsícem +1

    10^24 FLOPS is yotta-scale, currently an absolutely unachievable speed for computers; the fastest humanity has is on the exaFLOPS scale (around 10^18 FLOPS).

    • @dot1298
      @dot1298 Před měsícem +1

      so this law would not come into force for the foreseeable future…

    • @guystokesable
      @guystokesable Před měsícem

      Well, how fast was it 12 months before that? I doubt it's trending downwards.

    • @leeme179
      @leeme179 Před měsícem

      I believe the limit of 10^24 FLOP is the cumulative total training compute, not a per-second rate 🤣. 10^24 is 1 septillion (see the rough estimate sketched after this thread):
      1,000,000,000,000,000,000,000,000 = 10^24
      567,982,800,000,000,000,000 = 7.7 million H100 hours, which is the cumulative training time for Llama 3 8B & 70B

    • @dot1298
      @dot1298 Před měsícem +1

      @@guystokesable using Moore‘s Law, it would take ~20 years for computers to get a million times faster, so this law would only become significant after 2044.
      (Moore stated that computers double their speed every 2 years.)

    • @guystokesable
      @guystokesable Před měsícem

      @@dot1298 We really will be obsolete in my lifetime then. Fun.
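
For scale, here is a rough back-of-envelope estimate of cumulative training compute from the GPU-hours figure cited in this thread. The per-GPU throughput and utilization values are illustrative assumptions (roughly an H100's dense BF16 peak and a typical sustained efficiency), not figures from the thread itself.

```python
# Back-of-envelope estimate: cumulative training FLOPs from GPU-hours.
# Assumptions (illustrative only): ~1e15 FLOPS dense BF16 peak per H100,
# ~40% of peak sustained during training.
gpu_hours = 7.7e6          # cumulative H100-hours cited for Llama 3 8B + 70B
peak_flops = 1e15          # assumed per-GPU peak, FLOPs per second
utilization = 0.4          # assumed sustained fraction of peak

total_flops = gpu_hours * 3600 * peak_flops * utilization
print(f"~{total_flops:.1e} FLOPs")   # ~1.1e+25 under these assumptions
print(total_flops > 1e24)            # True: above a 10^24 cumulative threshold
```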

  • @nusu5331
    @nusu5331 Před měsícem +1

    To me it sounds like OpenAI did some political consulting in order to keep the competitors at a distance.

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem +1

      Yes, the next step after some variation of this passes is regulatory capture. That then is checkmate for the open source models.

  • @DailyTuna
    @DailyTuna Před měsícem +1

    Just say that your large language model "identifies" as a micro-size language model.
    And your Nvidia H100 chip identifies as an Intel 8088 😂
    This administration is going to fck up everything, and other countries will jump ahead of us and eventually destroy us.

  • @commonsense6721
    @commonsense6721 Před měsícem

    No one can stop what's coming. Imagine using a multimodal AI that comes from reverse engineering one of those high-performance chips somewhere in Africa. Besides, GPT-6-level models will train on smartphones in about 10 years due to graphene-based chips, unless the government puts a halt to chip advancements.

  • @SingularityZ3ro1
    @SingularityZ3ro1 Před měsícem

    Regarding super powerful systems - personally, I think that is a no-brainer. "Average people" are also not allowed to buy certain chemicals, enriched Uranium, or weapons of war (except in the US ;-) ) for very good reasons. So not sure why any civilian should ever get access to a super powerful broad-range AGI, if it is absolutely not needed for civilian tasks. I assume there will be specialized AIs for different fields, e.g, critical medical research. And they will need qualifications to access them - in part, like today. You really do not want a frustrated teenager to find a prompting loophole to order a virus to make "all the mean girls" go away, or the 1 billion other harmful, or negative things people will try to come up with.
    But yes, the HOW to do that effectively is really a question that is very open.
    Until we are getting a real ASI that decides by itself what to answer, or do, and what better not to (hopefully wiser than the actual humans).

  • @awakstein
    @awakstein Před měsícem +2

    Meanwhile, none of this applies to governments, and they get to do most of the damage and cause pain.

    • @JohnSmith762A11B
      @JohnSmith762A11B Před měsícem

      By design, of course. As ever, the people who own American society have zero intention of letting anything loosen their control.

    • @RandoCalglitchian
      @RandoCalglitchian Před měsícem

      Don't forget the people who pay behind closed doors. At this point legislators should be required to wear patch jackets showing their sponsors, just like race cars or soccer players. Regulatory capture is a thing, and the solution is to return to the originally intended diffusion and limitation of government power.

  • @darkframepictures
    @darkframepictures Před měsícem

    Hilariously, narrow-use AI like recommendation algorithms, self-driving vehicles, image generation, etc. has had, and will have, huge and poorly measured mass effects that already present in some cases, and may soon present in others, a much more serious concern for the public than anything true frontier AI will really be applied to.

  • @Rondoggy67
    @Rondoggy67 Před měsícem

    Isn't this kind of legislation increasing, rather than reducing, risk?
    Limiting regulation according to FLOPs (or other compute parameters) won't stop organisations from implementing models, but they may end up using less reliable models to avoid regulation.

  • @actellimQT
    @actellimQT Před měsícem +1

    Is it just me or do those emergency powers paint a giant target on the president for the first ASI?

  • @natural8677
    @natural8677 Před měsícem +2

    Terrible idea unless Russia, India, and China are on board.

    • @promptcraft
      @promptcraft Před měsícem

      Humans are desperate to be le epic story bros among their friends, and rich people wanna impress each other and create artificial scarcity so they have things that make them better than other people. And people have long-standing vengeances, especially now that people are dying senselessly. Thanks a lot, world. We know the world is a spectacle; shit just doesn't happen. You blew it. I'm not sure if it's possible anymore, but everyone needs to just forgive each other. Chill out. We got AI now. Let's make the best world... :(

    • @natural8677
      @natural8677 Před měsícem

      Oh, and the UK + Europe ofc. The problem is China and Russia can just fund or conduct research within North Korea's borders and no one will know.

  • @dafunkyzee
    @dafunkyzee Před měsícem

    The government can impose as many restrictions as they want... but they also need to realize there is a serious consequence. People who have billions of dollars invested in the research of AI systems can simply fly a team out to another country and within 3 days have a new AI development lab set up somewhere the government either hasn't thought of regulating or doesn't care about AI regulation... a 3-day workaround, maybe a week's interruption in development. The next problem is competitive advantage. Any company that doesn't want to do this and plays by the rules will be tied down by red tape in the race to AGI. So any serious player is simply going to have to ignore all these wonderful, well-thought-out regulations, otherwise they are out of the running. Even a small company can relocate to Mexico or Canada and carry on with bio-weapon research if that is their thing.
    The other factor is the time to prosecute an offender. Law enforcement needs to know an infraction is happening, they need to document it enough to make a legal case, then they need to push it into court where the corporate lawyers can block the proceedings for the next 5-8 years, then they need a trial and further appeals, which again can be delayed another 5 years. An ASI will be able to figure out a way out of the legal problem before it actually gets to a courtroom.
    The problem is the government thinks they can pass laws to control AI development when they have absolutely no hope of enforcing them. They really do need another strategy; but alas, they are stuck in an old way of thinking.

  • @MilesBellas
    @MilesBellas Před měsícem +1

    Hu-Po recently explained a Q* technical paper.

  • @dot1298
    @dot1298 Před měsícem +2

    Does this proposal even have a chance of getting approved by Congress and the Senate?

    • @dot1298
      @dot1298 Před měsícem +1

      I meant *to get approved in this state by*

    • @zenosgrasshopper
      @zenosgrasshopper Před měsícem

      Well, they aren't the most intelligent bunch, allowing themselves to be led by the nose by anyone who slips money into their pockets, so ... yes.

  • @kodivr4289
    @kodivr4289 Před měsícem

    4:00 Phi-3 Mini 128k is small and efficient; give it access to the internet to fact-check itself, short- and long-term memory, and tree-of-thought reasoning, and I'm sure it would match up with at least GPT-3.5 (see the toy sketch after this thread).

    • @kodivr4289
      @kodivr4289 Před měsícem

      Phi-3 Mini 128k is not very good at coding though, so you might want to offload certain types of questions to another model that's better at coding, or fine-tune it to be better at coding while preserving the original benchmarks as best as possible.
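
A toy sketch of the loop described in this thread: a small local model, a retrieval step to fact-check the draft, and a short-term memory buffer (the tree-of-thought search is omitted for brevity). `small_model` and `search_web` are hypothetical stubs, not real APIs.

```python
from collections import deque

def small_model(prompt: str) -> str:
    # Stand-in for a local Phi-3-Mini-class model call.
    return f"draft answer for: {prompt[:60]}..."

def search_web(query: str) -> str:
    # Stand-in for a web search / retrieval call used for fact-checking.
    return f"evidence snippets about: {query}"

short_term_memory: deque = deque(maxlen=8)  # keeps the last few exchanges

def answer(question: str) -> str:
    context = "\n".join(short_term_memory)
    draft = small_model(f"{context}\nQ: {question}")
    evidence = search_web(question)
    final = small_model(f"Revise the draft using the evidence.\n"
                        f"Draft: {draft}\nEvidence: {evidence}")
    short_term_memory.append(f"Q: {question} | A: {final}")
    return final

if __name__ == "__main__":
    print(answer("What compute threshold does the proposed policy use?"))
```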

  • @knowhatimean5141
    @knowhatimean5141 Před měsícem +1

    AI regulation reflects the state of what the USA has become.

  • @ZappyOh
    @ZappyOh Před měsícem +2

    AGI will be here before any legislation like this is implemented.

    • @promptcraft
      @promptcraft Před měsícem +4

      What are the chances AGI came up with the plan?

    • @RandoCalglitchian
      @RandoCalglitchian Před měsícem

      @@promptcraft essentially zero. Much more likely this was dreamed up by Microsoft and OpenAI's legal teams.

    • @zenosgrasshopper
      @zenosgrasshopper Před měsícem

      Let's hope so. I think I'd prefer to have an AGI running the government rather than the other way around.

  • @babbagebrassworks4278
    @babbagebrassworks4278 Před měsícem +1

    Government agencies are exempt?

  • @JustAThought01
    @JustAThought01 Před měsícem +1

    Humans are poised to make the jump from making decisions based on unfounded beliefs to making decisions based upon knowledge with the aid of AI.

    • @JustAThought01
      @JustAThought01 Před měsícem

      Humans operate on beliefs rather than knowledge.

    • @JustAThought01
      @JustAThought01 Před měsícem

      AI is an information retrieval tool.

    • @JustAThought01
      @JustAThought01 Před měsícem

      Knowledge is defined to be justified true belief.

    • @JustAThought01
      @JustAThought01 Před měsícem +1

      The key to developing AI is to base the training on knowledge rather than opinion. Humans make better decisions if we use information which can be proven to be true. If AI is available to all humans, our progress will accelerate and our individual lives should be better.

    • @zenosgrasshopper
      @zenosgrasshopper Před měsícem

      Government doesn't want a populace with access to true and factual knowledge. Much harder to pull off their psyops on the people.

  • @SingularityZ3ro1
    @SingularityZ3ro1 Před měsícem

    Would be interesting to see a direct comparison to the EU Version.

  • @nicholascanada3123
    @nicholascanada3123 Před měsícem

    This would force AI to quickly adopt blockchain and decentralized computation.

  • @CommentGuard717
    @CommentGuard717 Před měsícem

    We need to accelerate AI. I really hope that it does everything and I can do nothing.

  • @mickelodiansurname9578
    @mickelodiansurname9578 Před měsícem

    This is simply the big players like OpenAI and Google now shutting the door and closing down open source. It was always going to happen.

  • @jryde421
    @jryde421 Před měsícem +1

    This proves that people are making decisions about stuff they don't know about... so how is that political "science"?

  • @DailyTuna
    @DailyTuna Před měsícem +1

    OK, I'm creating my offshore hedge fund to invest in AI training data centers in South America. Anyone on board? 😂

  • @lxm2600
    @lxm2600 Před měsícem

    About 3 years of non-stop training at one exaFLOPS... yeah, I think someone would notice that, even from Earth's orbit 😂

  • @blahsomethingclever
    @blahsomethingclever Před měsícem +1

    AI so smart it's dangerous will just pretend to be dumb.
    Though I think AGI is already here :(