AlphaDogFight Trials and AI | Bullaki Science Podcast Clips with Timothy Grayson

  • Added 5 Sep 2024
  • A super AI would be some sort of AI able to learn and achieve certain goals faster than humans. There are many discussions on cyber and physical existential threats. People probably tend to worry about AI when it’s in the context of warfare, and they associate this kind of AI with what we’ve seen in sci-fi movies like Terminator… Skynet, and all these things. Some philosophers have said that the issue with a super AI might be that when a human assigns it a certain goal, for example, “get rid of email spam worldwide”, the AI system might find its own inconvenient intermediate goals in order to achieve that primary goal, such as “kill everyone”. Do you think this represents an actual risk, or is the real risk just that there will be more TED Talks, more books, and more movies on the subject?
    This is the 9th part of our conversation with Timothy Grayson. We’ll be releasing this podcast in episodes every one or two days. If you wish to access the full podcast immediately, please join us on Patreon ( / bullaki ); otherwise subscribe and turn on notifications, and you’ll know when the other episodes become available.
    As the director of the Strategic Technology Office at the Defense Advanced Research Projects Agency (DARPA), Timothy leads the office in the development of breakthrough technologies that enable warfighters to field, operate, and adapt distributed, joint, multi-domain combat capabilities at continuous speed. He is also founder and president of Fortitude Mission Research LLC and spent several years as a senior intelligence officer with the CIA. Here he illustrates the concept of Mosaic Warfare, in which individual warfighting platforms, like ceramic tiles in a mosaic, are placed together to form a larger picture. This philosophy can be applied to a variety of human challenges, including natural disasters, disruption of supply chains, climate change, and pandemics. He also discusses why super AI won’t represent an existential threat in the foreseeable future, but rather an opportunity for an effective division of labour between humans and machines (or human-machine symbiosis).
    CONNECT:
    - Subscribe to this CZcams channel
    - Support on Patreon: / bullaki
    - Spotify: open.spotify.c...
    - Apple Podcast: podcasts.apple...
    - LinkedIn: / samuele-lilliu
    - Website: www.bullaki.com
    - Minds: www.minds.com/...
    #bullaki #science #podcast
    ****
    A great example is something that got a lot of media attention a couple of months ago and was run out of my office: the AlphaDogfight Trials.[33][34] It was conducted under the Air Combat Evolution (ACE) program, and it got tons of attention. For those who are watching this, you can go to DARPA’s CZcams channel, look up AlphaDogfight, and see the whole thing.[35] What got everyone’s attention, and it relates to your question, is that it creates that reaction of “maybe this is the first step to Terminator”. It was a contest in which eight different AI agents competed against each other tournament style. In the final event, the winning AI agent flew against a human pilot, a genuinely accomplished fighter ace, active duty Air Force, sitting in a simulator. Sadly for the poor pilot, he lost five-nothing. It was pretty dramatic. There were lots of things that weren’t completely realistic, but frankly, I think on balance the unrealistic elements favored each side about equally. Yet despite how eye-opening and titillating it is that the AI beat the human so handily, that misses the point of the program.
    The real thing we’re trying to do in the ACE program is figure out how AI and humans work together. The subsequent follow-on program is going to focus on how we train, and how we create a protocol to build trust in AI. The analogy my program manager likes to use is the first time he got in a car with adaptive cruise control: his car was speeding down the road toward a sea of red lights, and there was a moment of panic: “Do I trust the AI to stop, or do I stomp on the brake?” That’s the real push in the ACE program: how do we get a human fighter pilot comfortable with the plane flying itself?
    But even more fundamentally, it gets back to your question and back to this notion of human-on-the-loop.
