Cybersecurity & Tech Lawfare News

Readings: Matt and Ken on "Law and Ethics for Robot Soldiers"

Benjamin Wittes
Monday, December 3, 2012, 7:27 AM

Published by The Lawfare Institute
in Cooperation With
Brookings

I linked earlier to this new article in the course of critiquing Human Rights Watch's report on "killer robots," but it's worth separate notice. Matt and Ken have written an excellent new treatment, published in Policy Review, of "Law and Ethics for Robot Soldiers." Here is the abstract from the SSRN version:
Lethal autonomous machines will inevitably enter the future battlefield — but they will do so incrementally, one small step at a time. The combination of inevitable and incremental development raises not only complex strategic and operational questions but also profound legal and ethical ones. The inevitability of these technologies comes from both supply-side and demand-side factors. Advances in sensor and computational technologies will supply “smarter” machines that can be programmed to kill or destroy, while the increasing tempo of military operations and political pressures to protect one’s own personnel and civilian persons and property will demand continuing research, development, and deployment. The process will be incremental because non-lethal robotic systems (already proliferating on the battlefield) can be fitted in their successive generations with both self-defensive and offensive technologies. As lethal systems are initially deployed, they may include humans in the decision-making loop, at least as a fail-safe — but as both the decision-making power of machines and the tempo of operations potentially increase, that human role will likely but slowly diminish. Recognizing the inevitable but incremental evolution of these technologies is key to addressing the legal and ethical dilemmas associated with them — U.S. policy for resolving those dilemmas should be built on these assumptions. The certain yet gradual development and deployment of these systems, as well as the humanitarian advantages created by the precision of some systems, make some proposed responses — such as prohibitory treaties — unworkable as well as ethically questionable. 
Those features also make it imperative, though, that the United States resist its own impulses toward secrecy and reticence with respect to military technologies, recognizing that the interests those tendencies serve are counterbalanced here by interests in shaping the normative terrain — the contours of international law as well as international expectations about appropriate conduct — on which it and others will operate militarily as technology evolves. Just as development of autonomous weapon systems will be incremental, so too will development of norms about acceptable systems and uses be incremental. The United States must act, however, before international expectations about these technologies harden around the views of those who would impose unrealistic, ineffective or dangerous prohibitions — or those who would prefer few or no constraints at all.
Here's the opening of the article itself:
A lethal sentry robot designed for perimeter protection, able to detect shapes and motions, and combined with computational technologies to analyze and differentiate enemy threats from friendly or innocuous objects — and shoot at the hostiles. A drone aircraft, not only unmanned but programmed to independently rove and hunt prey, perhaps even tracking enemy fighters who have been previously “painted and marked” by military forces on the ground. Robots individually too small and mobile to be easily stopped, but capable of swarming and assembling themselves at the final moment of attack into a much larger weapon. These (and many more) are among the ripening fruits of automation in weapons design. Some are here or close at hand, such as the lethal sentry robot designed in South Korea. Others lie ahead in a future less and less distant.
Lethal autonomous machines will inevitably enter the future battlefield — but they will do so incrementally, one small step at a time. The combination of “inevitable” and “incremental” development raises not only complex strategic and operational questions but also profound legal and ethical ones. Inevitability comes from both supply-side and demand-side factors. Advances in sensor and computational technologies will supply “smarter” machines that can be programmed to kill or destroy, while the increasing tempo of military operations and political pressures to protect one’s own personnel and civilian persons and property will demand continuing research, development, and deployment. The process will be incremental because nonlethal robotic systems (already proliferating on the battlefield, after all) can be fitted in their successive generations with both self-defensive and offensive technologies.
As lethal systems are initially deployed, they may include humans in the decision-making loop, at least as a fail-safe — but as both the decision-making power of machines and the tempo of operations potentially increase, that human role will likely slowly diminish. Recognizing the inevitable but incremental evolution of these technologies is key to addressing the legal and ethical dilemmas associated with them; U.S. policy for resolving such dilemmas should be built upon these assumptions. The certain yet gradual development and deployment of these systems, as well as the humanitarian advantages created by the precision of some systems, make some proposed responses — such as prohibitory treaties — unworkable as well as ethically questionable. Those same features also make it imperative, though, that the United States resist its own impulses toward secrecy and reticence with respect to military technologies and recognize that the interests those tendencies serve are counterbalanced here by interests in shaping the normative terrain — i.e., the contours of international law as well as international expectations about appropriate conduct on which the United States government and others will operate militarily as technology evolves. Just as development of autonomous weapon systems will be incremental, so too will development of norms about acceptable systems and uses be incremental. The United States must act, however, before international expectations about these technologies harden around the views of those who would impose unrealistic, ineffective, or dangerous prohibitions — or those who would prefer few or no constraints at all.

Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.