Readings: Geoffrey Corn on Autonomous Weapons

Kenneth Anderson
Sunday, August 3, 2014, 2:00 PM
I'm pleased to note that Lawfare's good friend Geoff Corn has entered into the public discussion of autonomous weapon systems (AWS) with a new paper posted to SSRN, "Autonomous Weapon Systems: Legal Consequences of 'Taking the Man Out of the Loop'."  The paper is a relatively rough working draft, but it raises a number of considerations that merit close attention in the debates over AWS. Abstract:
This essay considers how the development and fielding of autonomous weapon systems will implicate compliance with the law of armed conflict (LOAC). It argues that assessing the legality of such weapon systems will be facilitated by analogy to the human soldier. Specifically, the essay considers how inherent human cognitive autonomy is managed through the process of training and responsible command to ensure the soldier functions consistent with the requirements of the LOAC. It then suggests that because fielding commanders will have little opportunity to influence the artificial intelligence based judgments of future autonomous weapon systems, they must undergo a 'compliance validation' process that produces a higher degree of confidence in LOAC compliance than that expected of the human soldier. This leads to consideration of the relationship between the traditional doctrine of command responsibility and the expectation of LOAC compliance for the human soldier, and why this doctrine cannot be expected to produce the same effect with regard to autonomous weapons. Accordingly, the essay proposes that the focal point of 'responsibility' must be reconsidered in relation to autonomous weapons, recasting the doctrine to one more focused on 'procurement responsibility.' The essay closes by raising and answering several theoretical questions related to the development and employment of autonomous weapons.
This draft essay is important for several reasons, one of which is that it unapologetically says that winning wars is a perfectly good reason for developing new technologies of warfare, whether or not they are ultimately adopted, and that force protection is likewise a perfectly good reason for adopting new weapon technologies. These grounds do not eliminate other concerns that are part of the legal review of weapons or of the laws of war governing their use, but they are good and ethical reasons for considering new technologies. This matters because there is something of a tendency--one into which I myself have sometimes fallen--to treat civilian protection as the only real moral value at issue. "Winning" the war, by contrast, is sometimes treated as somehow not quite morally worthy, a merely "partial" interest of one side or the other rather than something nobly universal, beyond mere sides or national self-interest. Likewise the protection of one's own forces: Corn refreshingly and forthrightly declares this to be a moral and legal obligation of states on behalf of their own soldiers. He does so against a prevailing ethic that seems to regard humanitarian protection of civilians as the only genuinely worthy moral value in the conduct of war.

The paper's examination of accountability for machine systems is very interesting, in part because it takes up a question that has been at the heart of critiques of AWS--whom does one hold accountable? The answers the paper gives are, I think, best suited to particular kinds of autonomous machines--ones I would regard as far more advanced than those on the foreseeable technological path. So I probably have some disagreements with the accountability mechanisms as the paper frames them, but that disagreement likely stems from my view that the machines the analysis contemplates are not the ones toward which the technology is, so far, foreseeably headed.

That potential disagreement aside, the paper offers an important theoretical insight into the "information" problem posed by any complex military machine system, whether an autonomous weapon with a computerized decision process, a sensor mechanism such as radar or video, or much else besides: a commander is constantly dependent upon information sources that he or she will generally be unable to independently corroborate or verify, or even have a basis to question. There is nothing special about AWS in that regard; the same is true of the pilot firing a missile over the horizon or the crew below decks firing a missile at a target according to a set of coordinates. It is not even special to reliance upon machines; in a complex, division-of-labor military, everyone depends on information (including information on the performance of their systems, both machine and human) obtained from everyone else and not independently verifiable in any real way. The article suggests (for reasons I won't try to explain here, but with which I fundamentally agree) that an AWS will need to meet a higher "compliance validation" confidence level than a commander would require of a human soldier. But because the validation runs not to any particular piece of information (e.g., this is a threat, that is not) but to the system as a matter of its design, validation will have to be about the design.
If validation of the commander's confidence is about design, the article argues, then it becomes a matter of "procurement" responsibility in the weapons acquisition process. Confidence in the system from a legal point of view, then, looks to the chain of responsibility running from the weapons procurement process down to the point of use by the commander in the field. I broadly agree, and I look forward to seeing the article in final form.

Kenneth Anderson is a professor at Washington College of Law, American University; a visiting fellow of the Hoover Institution; and a non-resident senior fellow of the Brookings Institution. He writes on international law, the laws of war, weapons and technology, and national security; his most recent book, with Benjamin Wittes, is "Speaking the Law: The Obama Administration's Addresses on National Security Law."