Law and Ethics for Robot Soldiers

Kenneth Anderson, Matthew Waxman
Thursday, April 26, 2012, 3:40 PM

We're pleased to note that our new essay, Law and Ethics for Robot Soldiers, has been posted to SSRN. The essay's fundamental point is that lethal autonomous weapons systems - the "robot soldiers" of our title - are going to come to the battlefield, sooner or later, so we should be asking now how they can and should be regulated within the law of armed conflict. This is a bit of an experiment: the essay, which grew out of our work with the Hoover Institution Task Force on National Security and Law, will appear later this year in Policy Review for a general policy audience, and the house style does not use footnotes. So with Editor Tod Lindberg's blessing, we've prepared an annotated and footnoted version of this 6,000-word essay for academics and posted it to SSRN.

Big conceptual questions are already being asked as lawyers look beyond current drones and remotely operated vehicles. How, for example, will autonomous systems be held accountable for compliance with the laws of war? Who bears liability, and what does liability even mean in a system of ones and zeroes? Is there something wrong with having a machine ever make the final firing decision, even if it might do a better job than humans? Don't autonomous and remotely operated weapons incentivize conflict by making it too easy?

These are big questions about the end point of the technological process - the autonomous robot weapon on the battlefield. The essay points out, however, that although the big questions are important, the actual process of technological innovation is likely to move weapons systems along a gradual, sliding path that increases automation in this bit here and that bit there. Then, at some point identifiable only in hindsight, the system seems not merely automated but genuinely autonomous. The fact that automation happens gradually has considerable importance for how regulation can and should take place.

For all the wonder of the advancing technology - who would have thought that robotic technologies ranging from driverless cars to remotely piloted aviation would come so far, so fast? - we conclude that the best approach to regulation is a rather traditional one, developed gradually through state practice applying the basic principles of distinction and proportionality. Ambitious multilateral treaties won't work, for many reasons. Moves to ban autonomous weapons outright would likely fare no better than earlier prohibitions on aerial bombardment or submarine warfare, especially given the insidious problem of pointing to an arbitrary place on the sliding scale of automation (we've automated flying the plane to make it fast enough to compete against the enemy's fast automated UAV; it won't be possible to operate the weapons in mere human time, so ...) and trying to declare, "Here and no further."

At the same time, the United States does have a serious interest in some form of regulation and normative standards, if only because others (China? Russia?) might deploy systems that the United States would think clearly failed the tests of distinction and proportionality. The United States has a strong interest in shaping the normative expectations and environment of a subtly evolving world of technology; the question is how to do so.
In the final sections of the essay we offer a policy blueprint for doing so – a blueprint designed around the assumption that lethal machine autonomy will enter the future battlefield inevitably but incrementally. SSRN abstract below the fold.
Abstract: Lethal autonomous machines will inevitably enter the future battlefield – but they will do so incrementally, one small step at a time. The combination of inevitable and incremental development raises not only complex strategic and operational questions but also profound legal and ethical ones. The inevitability comes from both supply-side and demand-side factors. Advances in sensor and computational technologies will supply “smarter” machines that can be programmed to kill or destroy, while the increasing tempo of military operations and political pressures to protect one’s own personnel and civilian persons and property will demand continuing research, development, and deployment. The process will be incremental because non-lethal robotic systems (already proliferating on the battlefield) can be fitted in their successive generations with both self-defensive and offensive technologies. As lethal systems are initially deployed, they may include humans in the decision-making loop, at least as a fail-safe – but as both the decision-making power of machines and the tempo of operations potentially increase, that human role will likely diminish, though slowly. Recognizing the inevitable but incremental evolution of these technologies is key to addressing the legal and ethical dilemmas associated with them; U.S. policy toward resolving those dilemmas should be built upon these assumptions. The certain yet gradual development and deployment of these systems, as well as the humanitarian advantages created by the precision of some systems, make some proposed responses — such as prohibitory treaties — unworkable as well as ethically questionable. Those features also make it imperative, though, that the United States resist its own impulses toward secrecy and reticence with respect to military technologies, recognizing that the interests those tendencies serve are counterbalanced here by interests in shaping the normative terrain — the contours of international law as well as international expectations about appropriate conduct — on which it and others will operate militarily as technology evolves. Just as development of autonomous weapon systems will be incremental, so too will development of norms about acceptable systems and uses be incremental. The United States must act, however, before international expectations about these technologies harden around the views of those who would impose unrealistic, ineffective, or dangerous prohibitions or those who would prefer few or no constraints at all.

Kenneth Anderson is a professor at Washington College of Law, American University; a visiting fellow of the Hoover Institution; and a non-resident senior fellow of the Brookings Institution. He writes on international law, the laws of war, weapons and technology, and national security; his most recent book, with Benjamin Wittes, is "Speaking the Law: The Obama Administration's Addresses on National Security Law."
Matthew Waxman is a law professor at Columbia Law School, where he chairs the National Security Law Program. He previously co-chaired the Cybersecurity Center at Columbia University's Data Science Institute, and he is Adjunct Senior Fellow for Law and Foreign Policy at the Council on Foreign Relations. He previously served in senior policy positions at the State Department, Defense Department, and National Security Council. After graduating from Yale Law School, he clerked for Judge Joel M. Flaum of the U.S. Court of Appeals for the Seventh Circuit and Supreme Court Justice David H. Souter.
