
Great Power Competition and the AI Revolution: A Range of Risks to Military and Strategic Stability

Elsa Kania
Tuesday, September 19, 2017, 5:00 PM


Today’s rapid advances in artificial intelligence (AI) could disrupt and destabilize the existing military balance—and not necessarily for the reasons that have captured popular imagination. The potential realization of Artificial General Intelligence or “superintelligence” merits discussion, but it remains a relatively distant possibility. Yet advances in AI have already started to provoke fear, and often hyperbole, about the threats of “killer robots,” a looming “Terminator dilemma” and the risk of an AI arms race. Earlier this month, Elon Musk even tweeted, “Competition for AI superiority at national level most likely cause of WW3 [in my opinion].”


These fears may be premature, but AI’s disruptive potential is real. Recent progress in AI has primarily involved machine learning, and particularly deep learning techniques such as deep neural networks, across disciplines including computer vision, pattern recognition and natural language processing. Since 2016, several critical milestones have revealed the rapid pace of these advances and their potential real-world applications. The victory of “Mayhem” in DARPA’s 2016 Cyber Grand Challenge demonstrated that autonomous detection and patching of software vulnerabilities could transform cybersecurity. And with its historic 2016 defeat of Lee Sedol and subsequent victory over Ke Jie, the world’s top-ranked player, the computer program AlphaGo achieved mastery of Go, a game that requires complex strategizing, at least a decade earlier than experts had expected.


Recognizing the disruptive, even revolutionary, implications of AI for national defense, the United States, China and Russia are actively seeking to advance their capabilities to employ AI for a range of military applications. In spring 2017, the Department of Defense (DoD) revealed it had established an Algorithmic Warfare Cross-Functional Team “to accelerate DoD’s integration of big data and machine learning.” This summer, China released its New Generation AI Development Plan, which articulated the ambition to “lead the world” in AI by 2030. The plan calls for military-civil fusion in AI to leverage dual-use advances for national defense, including in support of command decision-making, military deduction (war-gaming), and defense equipment. Meanwhile, the Russian military has been aggressively advancing its efforts in intelligent robotics, and Russian President Vladimir Putin recently declared, “Whoever becomes the leader in [AI] will become the ruler of the world.” Indeed, the advent of AI in warfare appears to be transforming the character of conflict, moving beyond information-age warfare toward “algorithmic warfare,” in the U.S. military’s phrasing, or “intelligentized” (智能化) warfare, as Chinese military thinkers characterize it.


Despite recurrent calls to ban “killer robots”—and a recent open letter that articulated concerns that the development of lethal autonomous weapons would open a “Pandora’s box” and risk unleashing “weapons of terror”—an outright ban is unlikely to be feasible. It is improbable that major powers would accept constraints on capabilities considered critical to their future military power. Even attempts to pursue some form of regulation or an international treaty to restrain military applications of AI could be readily overtaken by technological developments. The diffusion of this dual-use technology will also be difficult to control.


Consequently, major militaries should take a proactive approach to evaluating and mitigating the potential risks introduced by advances in military applications of AI. This is in their interest, as the U.S., China and Russia still share at least a basic commitment to strategic stability and recognize the undesirability of inadvertent escalation.


To date, much of the analysis and attention devoted to the risks of AI in warfare has concentrated on the ethical concerns and operational risks associated with the potential use of lethal autonomous weapons systems (LAWS). There has been an international debate on the challenges of applying and adapting the law of armed conflict—with its core principles of necessity, distinction, proportionality and humanity—to the use of autonomous weapons systems, but it remains unclear how different militaries will interpret and abide by this traditional framework. Concurrently, as Paul Scharre of the Center for a New American Security discussed in a report on the topic, the employment of autonomous weapons could introduce a range of operational risks, including the inevitability of failure in complex systems, adversary attempts to attack or otherwise undermine those systems (e.g., through spoofing and behavioral hacking), and unanticipated interactions between opposing autonomous systems.


These issues are critical, and they should also inform thinking on the risks that arise from military applications of AI beyond the context of autonomous weapons. Indeed, risks stemming from current, comparatively uncontroversial military uses of AI also deserve sustained attention. Despite fears that AI is “summoning the demon,” it is the present limitations of AI, rather than its potential power or questions of its controllability, that could prove most problematic in the near term. In some cases, errors might arise from seemingly routine applications, even when the algorithm in question is not used directly in a weapons system or to make a life-and-death decision.


Consider, for example, the implications of such routine errors for the U.S. and Chinese militaries’ progression toward using AI to automate intelligence, surveillance and reconnaissance (ISR) functions, particularly in support of command decision-making. With Project Maven, the Defense Department plans to accelerate its integration of big data and machine learning, using computer vision algorithms to enable the automated processing of data, video and imagery. Through this initiative, the department intends to have the capacity to “deploy” algorithms in a war zone by the end of 2017. In addition, the U.S. Air Force seeks to leverage AI to process information at speed and scale, through algorithms and human-machine interfaces, and to enhance leaders’ decision-making. Similarly, the Chinese People’s Liberation Army (PLA) is developing algorithms to enable data fusion, enhance intelligence analysis and support command decision-making. For instance, the PLA is funding research and development on machine learning for target recognition and for the processing of sensor data and satellite imagery. In particular, the PLA is focused on the potential of AI in operational command and decision support.


These military applications of AI do not directly involve decisions about lethal force, but they could produce errors that contribute to crisis instability or exacerbate the risks of escalation. At this stage of development, AI remains far from truly intelligent and tends to make mistakes no human would make, errors that can be unpredictable and difficult to mitigate. In certain cases, the results are merely amusing or nonsensical. In a military context, however, the consequences could be serious, and the likelihood of errors or unexpected emergent behavior rises as complexity increases or as a situation exceeds the parameters an algorithm was designed to handle.


The use of machine learning in support of military intelligence, surveillance and reconnaissance capabilities or command systems could create new avenues for misperception or miscalculation. For instance, if a computer vision algorithm used to process satellite imagery misidentifies a perceived threat or potential target, the result could be a mistake in analysis or even on the battlefield. Similarly, if an algorithm used for machine translation or natural language processing incorrectly renders a critical piece of intelligence, inaccurate information could be introduced into analytical and decision-making processes. (Of course, in some cases, the use of AI could mitigate the suboptimal analysis and decision-making that might otherwise arise from human cognitive bias.) There is also the threat that adversaries will develop countermeasures designed to damage or interfere with one another’s AI systems, whether by distorting the data those systems rely on or by targeting the algorithms themselves.
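
To make the data-distortion threat concrete, the sketch below is a purely illustrative example in Python: it uses a toy linear classifier rather than any fielded system, and the “threat”/“benign” labels are hypothetical. It shows how a small, deliberately crafted perturbation to an input can flip a model’s output even though no individual feature changes by more than a tiny amount.

    # Illustrative only: a toy linear classifier and a crafted perturbation
    # that flips its decision. No real military system or dataset is modeled.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "model": a linear classifier over a 64-feature input (e.g., pixel values).
    w = rng.normal(size=64)

    def classify(x):
        """Return 1 ("threat") if the linear score is positive, else 0 ("benign")."""
        return int(x @ w > 0)

    # A benign input the model classifies correctly as 0.
    x = -0.01 * w
    print("original label:", classify(x))            # 0 ("benign")

    # Crafted perturbation: a small step aligned with the model's weights,
    # changing each feature by at most eps.
    eps = 0.02
    x_adv = x + eps * np.sign(w)
    print("perturbed label:", classify(x_adv))       # flips to 1 ("threat")
    print("largest per-feature change:", np.abs(x_adv - x).max())  # 0.02

Manipulations of this kind, scaled up to real models and real imagery or sensor data, are one way an adversary could quietly corrupt automated analysis without ever touching the underlying platform.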


Looking to another near-term application, there is likely to be a trend toward the use of AI-enabled cyber capabilities by multiple militaries (and even non-state actors), introducing greater degrees of autonomy into a complex and contentious domain. The 2012 Defense Department directive on “Autonomy in Weapon Systems” explicitly indicated that the policy, which called for the exercise of “appropriate levels of human judgment” over the use of force, did “not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations.” The sheer speed required in certain cyber operations, as in air and missile defense, could necessitate greater degrees of autonomy. In August, the Defense Department officially purchased the technology associated with Mayhem, which will be used to automate the detection and patching of software vulnerabilities. Although Mayhem is intended for a defensive mission, similar techniques could be weaponized and leveraged for offensive purposes. The Chinese military is also likely to look to AI to enhance its offensive and defensive cyber capabilities, including under the aegis of its new Strategic Support Force. The trend toward integrating AI with cyber capabilities to achieve an advantage could exacerbate escalatory dynamics in this contested domain, particularly if such capabilities begin to proliferate.
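
At its simplest, the “autonomous detection” half of such a capability resembles automated fuzz testing. The sketch below is only a minimal illustration of that idea; the buggy toy parser and the brute-force random fuzzer are hypothetical stand-ins and say nothing about Mayhem’s actual techniques, which rely on far more sophisticated program analysis.

    # Illustrative only: a minimal random fuzzer that hunts for crashing inputs
    # in a deliberately buggy toy parser. Hypothetical stand-in, not Mayhem.
    import random
    import string

    def toy_parser(data: str) -> int:
        """A deliberately buggy parser: fails on short inputs containing ';'."""
        if ";" in data and len(data) < 4:
            raise ValueError("malformed record")  # stands in for an exploitable bug
        return len(data)

    def fuzz(target, trials=10_000, max_len=6, seed=1):
        """Throw random strings at `target` and collect any inputs that crash it."""
        rng = random.Random(seed)
        alphabet = string.ascii_letters + string.digits + ";,."
        crashes = []
        for _ in range(trials):
            data = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, max_len)))
            try:
                target(data)
            except Exception as exc:
                crashes.append((data, repr(exc)))
        return crashes

    if __name__ == "__main__":
        found = fuzz(toy_parser)
        print(f"{len(found)} crashing inputs found; examples: {found[:3]}")

The same loop of generating inputs, observing failures and synthesizing fixes is what makes autonomous cyber tools defensively valuable, and it is also why similar techniques could be repurposed for offense.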


Looking forward, AI will enable disruptive military capabilities while creating systemic risks to military and strategic stability. Now is the time to start considering the potential ramifications of these technological trends and evaluating appropriate parameters that might mitigate risks of error or escalation. To start, great-power militaries might consider pursuing—and perhaps, as a future confidence-building measure, committing to—an initial set of pragmatic measures, such as:


  • Engaging in robust testing of the safety and integrity of military AI systems, focusing on potential errors or failures that might occur in open, uncontrolled environments;
  • Creating redundancies in military AI systems, including those used to support intelligence, surveillance and reconnaissance capabilities, so that multiple independent methods exist to detect errors, evaluate outputs and verify consistency with actual ground truth (see the illustrative sketch after this list);
  • Exploring options for fail-safe measures or “circuit breakers” to allow for de-escalation or de-confliction in the case of unintended engagements or escalation; and
  • Ensuring the “explainability” of AI systems, so that compromised or erroneous outputs can be recognized, while preserving informed, “meaningful human control” whenever feasible.
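
As a purely illustrative sketch of how the redundancy and “meaningful human control” measures above could be expressed in software (the function names, labels and confidence threshold here are hypothetical, not drawn from any fielded system), an automated assessment might be accepted only when independent models agree with high confidence, and escalated to a human analyst otherwise.

    # Illustrative only: require two independently developed models to agree
    # before an automated assessment is passed downstream; disagreement or low
    # confidence escalates to a human analyst. All names are hypothetical.
    from dataclasses import dataclass
    from typing import Callable, Optional, Tuple

    @dataclass
    class Assessment:
        label: str          # e.g., "mobile launcher" vs. "civilian vehicle"
        confidence: float   # model's self-reported confidence in [0, 1]

    def cross_check(
        image: bytes,
        model_a: Callable[[bytes], Assessment],
        model_b: Callable[[bytes], Assessment],
        min_confidence: float = 0.9,
    ) -> Tuple[str, Optional[Assessment]]:
        """Accept an automated assessment only when independent models agree with
        high confidence; otherwise route it to human review (a "circuit breaker")."""
        a, b = model_a(image), model_b(image)
        if a.label == b.label and min(a.confidence, b.confidence) >= min_confidence:
            return "auto-accept", a
        return "human-review", None

Even a check this simple trades speed for reliability, which is precisely the balance that building multiple, independent paths to ground truth is meant to strike.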

Elsa B. Kania is an adjunct senior fellow with the Technology and National Security Program at the Center for a New American Security. Her research focuses on U.S.-China relations, China’s military strategy, defense innovation and emerging technologies. She has been invited to testify before the House Permanent Select Committee on Intelligence, the U.S.-China Economic and Security Review Commission, and the National Commission on Service. Her views are her own.
