
"War-Algorithm Accountability": A Briefing Report from the Harvard Law School Program on Law and Armed Conflict

Kenneth Anderson
Monday, October 10, 2016, 7:50 AM


Congratulations to Dustin Lewis, Gabriella Blum, and Naz Modirzadeh on the publication, just a few weeks ago, of their exciting new book, War-Algorithm Accountability. (Free, complete pdf here at SSRN.) War-Algorithm Accountability is framed as a “briefing report,” rather than as a book, presumably because it is issued by the Harvard Law School Program on International Law and Armed Conflict (where Lewis is a senior researcher, Blum is faculty director, and Modirzadeh is director; PILAC should not be confused with the Harvard Law School Human Rights Clinic). At 244 pages, however, War-Algorithm Accountability looks awfully like a book. It is clear, well-organized, and well-written, and it has the intellectual scope of a book—a much better read than many of the academic press books that come across my book review desk.

The report itself, it’s worth noting, runs only 100 pages (including a cogent executive summary). The additional length is mostly on account of two major appendices setting out the country-by-country positions taken in the 2015 and 2016 meetings of experts under the review processes of the Convention on Certain Conventional Weapons (CCW)—highly valuable resources all on their own. The report also contains an excellent bibliography of current sources on the several different disciplinary strands of the argument. (The authors might consider posting the two appendices to SSRN and the PILAC website as a standalone document, given their independent value to researchers.)

So potential readers should not be scared off by the apparent length of War-Algorithm Accountability. This is one of the most interesting new interventions I’ve read in the last couple of years on the highly vexed topic of weapons autonomy. It is also a contribution to broader debates over the design of human-machine interactions and accountability for complex, opaque software controlling dangerous physical objects. Though focused, as it says, on “armed conflict” as the background environment, its analysis is not limited to weapons but includes military systems more broadly. And it covers many technologies that appear poised to take on significant roles in ordinary society, such as self-driving cars, quite apart from applications of these technologies by militaries in armed conflict.

The scope of the book’s analysis thus takes in any algorithm “expressed in computer code” and “effectuated through a constructed system,” situated specifically in the context of armed conflict, though not limited to weapons. In seeking to draw concepts of accountability together with machine algorithms, War-Algorithm Accountability draws upon many disciplines and topics: international law and the law of armed conflict; the practical operations and conduct of armed conflicts that take many forms; the very concept of the algorithm in computer science; the appropriate understanding of accountability in the context of armed conflict, but also in the context of machine code; the technical capabilities of various military systems, including but by no means limited to weapons; and many more.

I’m still thinking through the core arguments of War-Algorithm Accountability. I’m broadly in sync with the approach the authors take to accountability generally, and to accountability in the context of machine code in particular. I likely have some important, differing “prior” views on the law of war that might lead to different legal conclusions, but that would not be very interesting in the context of the report’s arguments; different legal starting positions probably yield different results, which is not a surprise. I probably have some more interesting disagreements as to the concepts of accountability and the ways in which they are used to bridge the human-machine gap; these conceptual questions, perhaps disagreements, go beyond weapons or law of war issues and into the realm of human-machine interaction generally. But I want to hold off on any more detailed critique of the report at this stage; I’m convinced it is an important, original intervention. (I hope the authors might consider a post on Lawfare explaining the report and its core ideas.)

War-Algorithm Accountability is an excellent, thought-provoking report. It's well worth the time of those interested in weapons autonomy and law issues—but also the time of those more broadly interested in questions of accountability, machine algorithms, and complex code systems. Abstract:

In this briefing report, we introduce a new concept — war algorithms — that elevates algorithmically-derived “choices” and “decisions” to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding “autonomous weapon systems.” We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict. In introducing this concept, our foundational technological concern is the capability of a constructed system, without further human intervention, to help make and effectuate a “decision” or “choice” of a war algorithm.

Distilled, the two core ingredients are an algorithm expressed in computer code and a suitably capable constructed system. Through that lens, we link international law and related accountability architectures to relevant technologies. We sketch a three-part (non-exhaustive) approach that highlights traditional and unconventional accountability avenues. We focus largely on international law because it is the only normative regime that purports — in key respects but with important caveats — to be both universal and uniform. By not limiting our inquiry only to weapon systems, we take an expansive view, showing how the broad concept of war algorithms might be susceptible to regulation — and how those algorithms might already fit within the existing regulatory system established by international law.


Kenneth Anderson is a professor at Washington College of Law, American University; a visiting fellow of the Hoover Institution; and a non-resident senior fellow of the Brookings Institution. He writes on international law, the laws of war, weapons and technology, and national security; his most recent book, with Benjamin Wittes, is "Speaking the Law: The Obama Administration's Addresses on National Security Law."
