Problems with Employing Weaponized AI in Militaries

 Topic A Articles: 

This week’s blog post is about our first topic: the militarization of AI. Below are some sources (definitely take a look at one or two, or if you have the time, all of them!) and some pertinent and potentially useful notes. Feel free to respond to questions as comments on the post. 


On AI Weapons in China

https://www.brookings.edu/wp-content/uploads/2020/04/FP_20200427_ai_weapons_kania_v2.pdf

A million mistakes a second 

https://foreignpolicy.com/2018/09/12/a-million-mistakes-a-second-future-of-war/

The fog of war may confound weapons that think for themselves

https://www.economist.com/science-and-technology/2021/05/26/the-fog-of-war-may-confound-weapons-that-think-for-themselves

Henry Kissinger and Eric Schmidt take on AI

https://www.economist.com/books-and-arts/2021/11/20/henry-kissinger-and-eric-schmidt-take-on-ai 

Killer Robots Aren’t Science Fiction. A Push to Ban Them Is Growing.

https://www.nytimes.com/2021/12/17/world/robot-drone-ban.html



‘AI Weapons’ in China’s Military Innovation (Brookings Institution) 

Intro 

  • LAWS — lethal autonomous weapons systems 

  • LOAC — law of armed conflict 

Current Chinese military capabilities 

  • The People’s Liberation Army (PLA) concentrates on military robotics and unmanned ground vehicles 

    • PLA Navy (PLA-N) experiments w/ unmanned surface vessels, and is developing unmanned submarines 

    • PLA Air Force (PLA-AF) — operates unmanned systems w/ limited autonomy 

  • China leads the world in export of medium-altitude long-range unmanned aerial vehicles (UAVs) 

  • There have been reports of conversions of tanks into autonomous or semi-autonomous vehicles 

  • There have been PLA-N tests of unmanned underwater vehicles (UUVs), including in the South China Sea 

    • Think about the implications for potential territorial conflicts! 

Future trends in R&D

  • A goal of China is to enable future cruise missiles with a high degree of autonomy and AI 

  • They already have domestic business conglomerates building unmanned surface vehicles 

PRC arms sales and approaches to global governance 

  • The PRC is exporting weapons to MENA countries ‘advertised as capable of full autonomy, including the ability to conduct targeted strikes’ (then-Secretary of Defense Mark Esper, 2019) 

  • During an April 2018 session of the UN Group of Governmental Experts (GGE) on LAWS, the Chinese delegation advocated for a ban on fully autonomous lethal weapons systems, but the definition it proposed was restrictive, requiring: 

    • Lethality 

    • Autonomy — absence of human intervention 

    • Impossibility of termination once set in motion 

    • Indiscriminate action regardless of conditions, scenarios, and targets 

    • Evolution such that it can learn autonomously through interaction with the environment 

  • Machine learning is an interesting dimension of weaponized AI, since it potentially exposes militaries to adversaries deliberately corrupting (‘poisoning’) the training data so as to weaken the resulting system (see the sketch after this list) 

  • US DoD directive 3000.09 defines autonomous weapons systems as those that ‘once activated, can select and engage targets without further intervention by a human operator’ 

    • The Chinese PLA has no equivalent, instead employing multiple definitions of ‘intelligentized’ weapons 
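
A rough sketch of the training-data poisoning risk mentioned above (my own illustration, not drawn from the Brookings paper; the data, model, and poisoning rates are all invented): an adversary who can flip even a modest fraction of the labels an AI learns from can quietly degrade its accuracy on clean data.

    # Hypothetical sketch of label poisoning, using scikit-learn on synthetic data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for poison_rate in [0.0, 0.1, 0.3]:
        y_poisoned = y_train.copy()
        flip = rng.random(len(y_poisoned)) < poison_rate  # adversary flips a fraction of labels
        y_poisoned[flip] = 1 - y_poisoned[flip]
        model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
        print(f"{poison_rate:.0%} poisoned labels -> clean test accuracy {model.score(X_test, y_test):.2f}")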

Implications for global security and stability 

  • The PLA lacks the US military’s overseas operational experience and, consequently, its institutionalized architecture of legal expertise to apply the law of war 

    • There is no PLA analogue to the US Judge Advocate General’s (JAG) Corps, for instance 

    • Points like this should prompt you to consider how different militaries handle ethics, morality, and civilian control. 


‘A Million Mistakes a Second’ (FP)

  • Chinese military academics speculate about a coming ‘battlefield singularity’ in which ‘the pace of combat eclipses human decision-making’ 

  • A clear risk is the possibility that accidents could cause conflicts to spiral out of control 

    • An interesting example they raise is stock markets, where rapid automated decision-making by trading algorithms has triggered ‘flash crashes’ (a toy sketch of that feedback dynamic follows this list)

    • What could an analogous situation with AI-powered weapons mean for global conflict? 
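
A toy sketch of the flash-crash feedback dynamic referenced above (my own illustration, not from the Foreign Policy piece; the agents and gain values are invented): when two automated systems each over-react to the other faster than a human can step in, a small disturbance compounds exponentially.

    # Hypothetical feedback loop between two automated agents.
    def run(gain_a, gain_b, shock=1.0, rounds=8):
        move = shock                      # small initial disturbance
        for t in range(rounds):
            move = gain_a * move          # agent A reacts to the disturbance
            move = gain_b * move          # agent B reacts to A's reaction
            print(f"round {t}: combined move {move:.2f}")

    run(gain_a=1.1, gain_b=1.1)   # combined gain > 1, so the disturbance snowballs
    # With gain_a * gain_b < 1, the same loop damps out instead of escalating.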


Where humans have already ceded control to machines 

  • More than 30 countries use autonomous defensive weapons systems 

  • Currently, many autonomous systems require human authorization 

  • Countries are now designing stealth combat drones that will likely be deployed with the ability to make their own decisions if they lose communications with their operators 

    • Countries researching such stealth drones have not explained what rules of engagement would govern them 

    • For now, a drone pre-approved for one target would have to request further authorization if it found new targets. But many would-be strategic targets are mobile (e.g., air-defense systems on wheels), so militaries may choose the expedience and convenience of immediate strikes over strict rules of engagement

  • Autonomous speed could break down command and control as commanders lose their bearing on the situation. (This is why commander’s intent is important in current armed conflicts; but how would commander’s intent be interpreted by machine-learning algorithms that lack the empathy and mercy of humans?)


Humans losing control 

  • There have already been two incidents of fratricide: one during the 2003 invasion of Iraq, and another during the occupation that followed 

    • One was due to an outdated system misidentifying a descending aircraft as a missile 

    • The other was due to a ‘ghost track’: a false missile track formed on radar by electromagnetic interference between radars 

  • Isolated examples such as these may be individually improbable, but zoom out and apply the law of large numbers and they are bound to happen (see the back-of-the-envelope calculation below) 
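
A back-of-the-envelope calculation for the point above (the per-engagement probability here is an assumed, purely illustrative number): even if any single engagement is extremely unlikely to go wrong, the chance of at least one accident grows quickly with the number of engagements.

    # P(at least one accident in n engagements) = 1 - (1 - p)^n
    p = 1e-4                      # assumed per-engagement mishap probability
    for n in [1_000, 10_000, 100_000]:
        at_least_one = 1 - (1 - p) ** n
        print(f"{n:>7,} engagements -> P(at least one accident) = {at_least_one:.1%}")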


The fog of war may confound weapons that think for themselves

  • Israel’s Iron Dome is a missile-defense system that engages incoming missiles without a human request for each interception

  • Conflict environments are harsh and data are incomplete, and so there would have to be a great deal of autonomous extrapolation from existing data 

    • Enemies will attempt to fool AI systems 

    • Mixtures of civilians and enemy combatants will confuse AI systems

  • A Chinese company touts a helicopter drone that it says is already capable of complex autonomous combat missions 


Henry Kissinger and Eric Schmidt take on AI

  • Kissinger argues that AI has ‘ended the Enlightenment’ 

  • The central objective of national security policy has been deterrence 

  • The book actually calls for increased AI development by the US, to act as a deterrent (as mutually assured destruction did for nuclear arms)


Killer Robots Aren’t Science Fiction. A Push to Ban Them Is Growing. (New York Times)

  • Autonomous weapons may have the unintended effect of increasing the risk of war 

    • Aggressors would risk little human life of their own if they chose to strike in this way 

  • There are fears in countries that already have the technology that its deployment could happen by mistake and spark new conflicts 

  • Robots lack compassion, empathy, mercy, and judgment 


Blog post:

Autonomous military technologies capable of intelligent decision-making create the potential for conflicts to accelerate without direct human intervention. While current military paradigms emphasize human authorization of autonomous systems' actions, world and regional powers alike are quickly and quietly developing weapons systems capable of intelligent evolution: systems deployed against a specific target but able to recognize other strategic targets and autonomously trigger an attack. Current military command emphasizes "commander's intent" to allow subordinates to execute actions that fall within the purpose of their orders. Applied to autonomous systems, the doctrine of commander's intent creates the potential for dangerous escalation. Unmanned aerial vehicles (UAVs), for example, travel vast distances and would presumably be able to identify and attack many secondary targets simply because they are en route to and from the initially programmed target. In theory, attacking such secondary targets would require further approval; however, global military powers are currently developing stealth UAVs that would need full autonomy when deployed over hostile territory (to avoid detectable communications). Autonomous target selection could quickly create an escalatory situation in which world powers respond asymmetrically and are hastily drawn into all-out armed conflict. 


The information AI systems use to make decisions is often flawed. It is essential to recognize that AI is trained on data sets (collected by and/or from naturally imperfect humans), and when deployed in situations that deviate from that data, it must extrapolate in ways that can be fallacious. For instance, an AI may be trained to recognize adversarial combatants in part by their military uniforms, and consequently be incapable of identifying enemy combatants of a different national origin. This is quite a basic example (a toy sketch of it follows below), and it is crucial to consider its implications in areas such as UAV use. 
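
A minimal sketch of that extrapolation failure (purely illustrative; the single 'uniform appearance' feature and all numbers are invented): a classifier that looks accurate on data resembling its training set can degrade badly once the deployed conditions shift.

    # Hypothetical illustration of distribution shift, using scikit-learn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def sample(n, combatant_mean):
        """Half combatants (label 1), half civilians (label 0), described by
        one crude 'uniform appearance' feature per person."""
        combatants = rng.normal(combatant_mean, 1.0, size=(n // 2, 1))
        civilians = rng.normal(-2.0, 1.0, size=(n // 2, 1))
        X = np.vstack([combatants, civilians])
        y = np.array([1] * (n // 2) + [0] * (n // 2))
        return X, y

    # Train where the adversary's combatants wear a distinctive uniform (+2.0)
    X_train, y_train = sample(2000, combatant_mean=2.0)
    model = LogisticRegression().fit(X_train, y_train)

    # In-distribution testing looks reassuring...
    X_test, y_test = sample(2000, combatant_mean=2.0)
    print("same conditions:   ", round(model.score(X_test, y_test), 2))

    # ...but a force whose appearance resembles the civilians in the
    # training data (-1.5) is largely misclassified.
    X_shift, y_shift = sample(2000, combatant_mean=-1.5)
    print("shifted conditions:", round(model.score(X_shift, y_shift), 2))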


Autonomous systems are currently not governed by any pertinent laws of war or related rules of engagement. The use of this kind of technology is already a murky subject, and because these systems are treated as state secrets, their secrecy complicates ethical scrutiny even further. During a 2018 UN Group of Governmental Experts (GGE) session on Lethal Autonomous Weapons Systems (LAWS), the Chinese delegation advocated banning fully autonomous systems. Nevertheless, its proposed definition of LAWS was restrictive, excluding the technologies already in the PRC's possession. Russia has publicly said it will only accept a control treaty that is adopted unanimously; researchers on the subject point out that any treaty will inspire little trust if even one significant military power declines to sign on. 


As problematic as Henry Kissinger has been, he recently co-authored a book with Eric Schmidt, "The Age of AI," detailing the risks of widespread AI deployment. Military precedent has been to follow a deterrence strategy (mutually assured destruction for nuclear arms, missile defense systems that render specific missile systems useless, and so on), which relies on all parties to a conflict maintaining comparable technological capabilities. Kissinger claims that the U.S. must pursue research and development of LAWS to deter their use in warfare by global and regional powers, such as China, that already possess such technology. 


Some questions to consider and answer below (it would be a great way to figure out which delegates have ideas similar to yours!): How should we define LAWS? Is deterrence or disengagement (i.e., abandoning LAWS R&D) the better policy, and if you think the latter, how can we ensure that countries don't continue R&D and create an asymmetric balance of power? Should LAWS be required to be trained on ethical and moral considerations to avoid harming civilians in warfare?


