
Predicting Enemies: Military Use of Predictive Algorithms

Ashley Deeks
Tuesday, April 10, 2018, 7:00 AM

The growing military use of predictive algorithms and artificial intelligence is stirring up corporate and academic protests. Consider, for instance, a recent Reuters report that 50 AI researchers from 30 countries are boycotting South Korea’s top university because it opened an AI weapons lab in partnership with a large South Korean company. The boycott prompted the university to clarify that it did not intend to develop lethal autonomous weapons systems (LAWS) and would ensure that its research activities preserved meaningful human control in decision-making. The researchers said that the university’s clarification would help influence the U.N. discussions on LAWS next week, which Hayley Evans detailed here.

At the same time, the New York Times ran a story about Google employees’ reaction to the company’s role in a Pentagon program called Project Maven. Some 3,100 employees have signed a letter protesting Google’s involvement in Maven, which will use artificial intelligence to interpret video feeds and could conceivably influence the military’s decisions about who, when, and where to strike with drones. The employees appear troubled by any Google involvement in warfare, whether or not the ultimate product retains a “man in the loop.”

Notwithstanding a growing sense of unease about autonomy and AI in war, the Pentagon and its advanced research arm, the Defense Advanced Research Projects Agency, are pressing ahead to expand the use of artificial intelligence. DARPA has started a program called Collection and Monitoring via Planning for Active Situational Scenarios (COMPASS), which seeks to develop software to gauge an adversary’s response to stimuli, discern the adversary’s intentions, and offer commanders information about how to respond. In the words of the DARPA program manager, the program is “trying to determine what the adversary is trying to do, his intent; and once we understand that ... then identify how he’s going to carry out his plans – what the timing will be, and what actors will be used.”

In short, battlefield-related predictive algorithms and AI are coming soon to a theater near you. The military seems to be envisioning the use of these tools for everything from identifying specific actors on the ground to anticipating adversary tactics and recommending responses. Although not mentioned in the news articles, militaries might also use predictive algorithms in attempting to predict how dangerous an individual or group of individuals may be—an issue that often arises in the context of military detention.

I have just posted an article arguing that these two goals—making individual predictions about dangerousness and anticipating the location of future acts of violence—have a lot in common with algorithms currently being used in the criminal justice context. There, judges and parole boards are using algorithms to help predict whether someone will jump bail if released before trial or will commit another crime if released on parole. Similarly, police departments around the country are using “predictive policing algorithms” to anticipate specific locations and times at which violence is likely to occur. Not surprisingly, these algorithms have faced a variety of criticisms, ranging from the use of biased data to train the machine-learning systems to the lack of transparency about how the algorithms produce their recommendations.

As the U.S. military develops algorithms that offer recommendations to military commanders about operational and detention choices, the military can and should learn from critiques of criminal justice algorithms. Indeed, some of these critiques will be even more potent in the military context, because the military will use algorithms in a way that is far less transparent and less subject to challenge than criminal justice algorithms. I argue in the article that it is in the military’s interest to be transparent about why it is deploying predictive algorithms, the benefits and costs of those algorithms, the legal basis for their use, and how the military will try to mitigate the problems identified by common critiques of predictive algorithms. Indeed, greater transparency on these fronts might help mitigate the types of concerns expressed by Google employees and, more generally, foster a stronger relationship between the military and AI experts in the private sector.

The article’s abstract is here:

Actors in our criminal justice system increasingly rely on computer algorithms to help them predict how dangerous certain people and certain physical locations are. These predictive algorithms have spawned controversies because their operations are often opaque and some algorithms use biased data. Yet these same types of predictive algorithms inevitably will migrate into the national security sphere, as the military tries to predict who and where its enemies are. Because military operations face fewer legal strictures and more limited oversight than criminal justice processes do, the military might expect – and hope – that its use of predictive algorithms will remain both unfettered and unseen.

This article shows why that is a flawed approach, descriptively and normatively. First, in the post-September 11 era, any military operations associated with detention or targeting will draw intense scrutiny. Anticipating that scrutiny, the military should learn from the legal and policy challenges that criminal justice actors have faced in managing the transparency, reliability, and lawful use of predictive algorithms. Second, the military should clearly identify the laws and policies that govern its use of predictive algorithms. Doing so would avoid exacerbating the “double black box” problem of conducting operations that are already difficult to legally oversee and contest, using algorithms whose predictions are often difficult to explain. Instead, being transparent about how, when, why, and on what legal basis the military is using predictive algorithms will improve the quality of military decision-making and enhance public support for a new generation of national security tools.


Ashley Deeks is the Class of 1948 Professor of Scholarly Research in Law at the University of Virginia Law School and a Faculty Senior Fellow at the Miller Center. She serves on the State Department’s Advisory Committee on International Law. In 2021-22 she worked as the Deputy Legal Advisor at the National Security Council. She graduated from the University of Chicago Law School and clerked on the Third Circuit.
