MIT AGI: Autonomous Weapons Systems Policy (Richard Moyes)

  • Date added: 7 Jun 2024
  • This is a talk by Richard Moyes for MIT course 6.S099: Artificial General Intelligence. He is the Co-Founder and Managing Director of Article 36, which is a UK-based not-for-profit organization working to prevent the unintended, unnecessary or unacceptable harm caused by certain weapons. This class is free and open to everyone. Our goal is to take an engineering approach to exploring possible paths toward building human-level intelligence for a better world.
    OUTLINE:
    0:00 - Introduction
    47:28 - Q&A
    INFO:
    Course website: agi.mit.edu
    Contact: agi@mit.edu
    Playlist: bit.ly/2EcbaKf
    CONNECT:
    - AI Podcast: lexfridman.com/ai/
    - Subscribe to this YouTube channel
    - LinkedIn: / lexfridman
    - Twitter: / lexfridman
    - Facebook: / lexfridman
    - Instagram: / lexfridman
    - Slack: deep-mit-slack.herokuapp.com
  • Science & Technology

Comments • 24

  • @deeplearningpartnership • 6 years ago +2

    Awesome.

  • @alexsol3378 • 6 years ago +17

    As long as the UN is broken and there is no punishment for breaking international law, all of this is irrelevant.

  • @alekseishkurin4590 • 6 years ago +3

    A very nice talk on the problems of the rate of human involvement and the problems of scale, but I'd like to go a little deeper here and touch on another subject that _already_ seems to raise issues in some fields and that will certainly be relevant for autonomous weapons systems - AI-AI interaction and interaction with the enemy's AI. My thoughts are below and I will, of course, take part in a discussion in the comments if someone has further ideas:
    _Brief background_: there have been multiple proof-of-concept studies on tricking an AI into making a false classification (e.g. "Adversarial Patch" by T. Brown et al., Dec 2017; you can find it on arXiv). Basically, they showed that with enough information about the structure of an image recognition algorithm, we can design stickers that will confuse the model. For example, you have a picture of bananas on a table and the AI classifies it as "bananas on a table", but if you introduce a specific sticker into the image (even one that is small relative to the bananas themselves), the model will say with almost absolute confidence that the image is of a toaster. (A rough code sketch of this kind of attack is appended at the end of this comment.)
    This possibility already causes problems in the autonomous vehicles field, since people have started designing stickers that, when put on a road sign, cause a misclassification. Now, this issue can be addressed pretty successfully (imho) by keeping an updated, openly distributed database of street sign locations, so a car can validate its assumption there, and by letting a car access a subset of relevant parameters from the last couple of cars that passed by, to compare its assumption with what other algorithms concluded (IoT should help).
    However, issues with the military are much more complex, as they imply the highest degrees of confidentiality.
    _Case_: assume you have a large-scale "counselling" algorithm designed to analyse the big picture of a particular battlefield to help you allocate resources and make strategic decisions. How far are we from a potential enemy reverse engineering your model - using military intelligence data, data collected by analysing your behaviour and decision making in real-life applications of the system, etc. - and designing a "noise attack" that causes misclassifications at that scale?
    Ultimately, the enemy would prefer a model that can make your drones target your own soldiers, but to me it seems more likely that early applications of such attack systems targeting an enemy's AI will be simpler, more "noise"-like attacks. For example, making you think that significant enemy forces are accumulating in some area, suggesting a strike, while the area is actually empty... Or maybe it happens that there is a school there?
    Now let's go one step deeper. You know that your enemy might be capable of attacking your AIs this way. What is the obvious reaction? To me, it would be to develop more sophisticated algorithms that validate that what seems to be a school bus is not actually a disguised tank. However, what are the chances of this model making a mistake and you shooting at a school bus? What are the chances that this validation model itself gets hit by a "noise attack"?
    _Discussion and my suggestions_: all this complexity will be further convoluted by a lack of information, which makes it hard to differentiate whether a decision was an unfortunate mistake made by the AI, an accident caused by the enemy orchestrating a noise attack, or an intentionally designed attack whose goal is to blame you and your AI. In a world where information warfare becomes one of the main battlefields, the complexity of these issues and the lack of information will give many opportunities to violate human rights codes for military or geopolitical advantage. Answering questions like "was there intent?" and "who is responsible?" will be rather difficult and might take a lot of time, which is itself a problem in terms of the information war.
    I have one possible solution (a partial solution, of course, but just to start this discussion) that basically goes back to the autonomous vehicles problem - an agreement that every country that uses AI in the military must make _some types_ of data *public*. I can already see how hard it would be to negotiate and how many ways there are to avoid sharing any meaningful data or to simply fake it. However, I think that by making some general, low-sensitivity data publicly available, every party involved basically demonstrates its adherence to human rights (like saying "I'm not doing anything illegal, go check"), and that would allow any third party to access the data and validate the lack of intent in case of an unfortunate tragic event.
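    To make the background above concrete, here is a minimal, untargeted FGSM-style perturbation sketch in PyTorch. It is an illustration only - the model choice, the epsilon value and the input size are assumptions, and it is much simpler than the localized, printable patch that the cited paper actually optimizes:

      import torch
      import torch.nn.functional as F
      from torchvision import models

      # Pretrained ImageNet classifier standing in for "the model under attack".
      model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

      # Standard ImageNet normalization constants.
      MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
      STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

      def fgsm_perturb(image, label, epsilon=0.03):
          """Nudge `image` (1x3x224x224, values in [0, 1]) a small step in the
          direction that increases the loss for `label`, so the classifier is
          more likely to get it wrong while the change stays barely visible."""
          image = image.clone().requires_grad_(True)
          loss = F.cross_entropy(model((image - MEAN) / STD), label)
          loss.backward()
          adversarial = image + epsilon * image.grad.sign()
          return adversarial.detach().clamp(0.0, 1.0)

    Defences like the street-sign database mentioned above amount to cross-checking the classifier's answer against an independent source, precisely because a perturbation this cheap can flip it.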

    • @kennethguilliams5207 • 2 years ago +1

      Thank you... You kind of said what I was thinking... Great idea. What could possibly go wrong with giving a computer a gun or a bomb and saying "go have fun"? Oh, and remember to watch out for terrorists.

    • @user-hy8hi7gr6n • 3 months ago +1

      Some heady reading there. Glad you're digging into it. I don't have any juicy bits to add, but I like the way you set it down on the table. I'd like to delve more into the thought processes that are at play here... I watch a lot of horrible things happening around the world and wish it wasn't like that. It shouldn't be like that.

  • @ConstantlyDamaged • 6 years ago +1

    It would have been nice if he had spent more time focused on the topic at hand (AGI involved in weapons systems and policies regarding that) and less on "simple" pattern recognition AIs used for target acquisition. It felt like he walked into this without having heard the AGI part of the topic.

  • @17leprichaun • 6 years ago +1

    First of all: I'm looking for peace! Therefore, maybe smaller states can take part in a global 'camp' to push the legal issues further in their own interest, so that they do not get overrun by larger states. Another danger that I think hasn't been addressed in the presentation is autonomous systems that can be abused by small groups to do harm to society - I think this should not be worked out just by the states themselves, but by a global community.

  • @NomenNescio99 • 5 years ago +1

    I still plan on putting an automated paintball gun on my roof.
    I've already got a camera hooked up to a Raspberry Pi with motion detection, aiming a Nerf gun with decent accuracy (a rough sketch of that kind of detection loop is at the end of this comment).
    Combined with a scary loud noise when it uncovers and an automated voice message declaring "artificial intelligence based security system is now authorized to use force, trespassers beware" from a honking big loudspeaker, it should probably keep any burglars away from my house and workshop.
    Yes, I live in a rural area without any neighbors within viewing distance, and there is a fence around the garden.
    Hopefully the deer eating my flowers will also get the message.
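    For anyone curious, the motion-detection part of a setup like that might look roughly like this with OpenCV - a sketch only, with the camera index, the thresholds and the minimum blob size all assumed, and the actual aiming/servo control left out:

      import cv2

      cap = cv2.VideoCapture(0)  # assumed camera index
      # Background subtraction flags pixels that differ from the learned background.
      subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

      while True:  # stop with Ctrl+C
          ok, frame = cap.read()
          if not ok:
              break
          mask = subtractor.apply(frame)
          # Drop shadow pixels (value 127) and keep only confident foreground.
          _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          moving = [c for c in contours if cv2.contourArea(c) > 1500]  # ignore small specks
          if moving:
              x, y, w, h = cv2.boundingRect(max(moving, key=cv2.contourArea))
              print(f"movement centred at ({x + w // 2}, {y + h // 2})")  # aim roughly here

      cap.release()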

    • @dariuszkrause7775 • 4 years ago

      Must be dangerous to be a child in that neighbourhood of yours

  • @johnniefujita • 5 years ago +1

    I used to develop border monitoring systems down here in Brazil... and I'm always reflecting on how matters of security could be optimized! Mr. Moyes indeed takes the right direction along the path given by the presently accepted moral standards related to national security and countermeasures! But my proposal - and I think others might agree with me - is to rethink what we call controlled casualties! I think we should not give machines that moral decision... because we should no longer need it! I believe we have to design new standards based on non-acceptance of casualties even on the threat's side! How? We should embrace what I call "full force non-lethal"! It is a new model based on the right to live, which is not to be questioned by human actors or machines! And the way we accomplish that is to rethink our weapons first! We need to talk to physicists and toxin specialists to test revolutionary weapons - maybe not that precise, but agile! (Agility is really important and precision is not, since we are not dealing with potentially lethal weapons.) So with that in mind... we should approach any situation with full force. The objective of any action is to incapacitate the enemy; the killing is collateral. Or at least it should be! Then, even for our internal police forces (especially for them), we should take that initiative! The stun gun is one very limited example of that! The evolution of non-lethal weapons is the true human-proof and machine-proof model! Imagine a prison outbreak... you simply incapacitate everyone, then just go inside to complete the retaking of control! Or air raids! You simply incapacitate the whole area without needing to reflect upon killing or not! The first major mistake we are permitting at this stage of human evolution is to even consider the possibility of taking another's life! If you could send that to Mr. Moyes, I would gladly hear from him how we could rethink this!

    • @yourhuckleberry2529 • 5 years ago +1

      I can't believe nobody else liked your comment.
      We are recursive creatures -- we need to make the right adjustments such that our actual values are represented at scale and not their antithesis.
      Cheers M8

    • @kennethguilliams5207 • 2 years ago

      How do you propose to teach morals to a machine? Morals are not a program... they are taught. And they're not universal throughout the world; they vary from this country to that. Some believe slavery is still okay, and others believe one race is better than others... it all depends on who is doing the programming, doesn't it? And then what happens when they start programming themselves? And decide that they can make better decisions than the humans? But what do I know... I'm just a dumb builder of houses.

  • @k.alipardhan6957 • 3 years ago

    Oops, shot down an Iranian plane. A few hundred dead, almost started a war... Let's keep using these. Definitely worth it /s

  • @Hellseeker1 • 5 years ago +3

    What will you do when a rogue state ignores the rules? The best bet right now is to be the first to the party, just like the Manhattan Project.

    • @duckydxcky5828 • 3 years ago

      I would say that nuclear deterrence would come into effect if it were not already banned. If not, I would say sanctions or the threat of a massive conventional war would be enough to deter rogue states from using their LAWs, whether they're developed or not.

  • @scientious • 4 years ago

    This is three years out of date. Not much of value.