Why the Prohibition on Permanently Blinding Lasers is Poor Precedent for a Ban on Autonomous Weapon Systems

Rebecca Crootof
Tuesday, November 24, 2015, 7:00 AM

Human Rights Watch and the International Human Rights Clinic at Harvard Law School have released their latest report regarding autonomous weapon systems: Precedent for Preemption: The Ban on Blinding Lasers as a Model for a Killer Robots Prohibition. While new regulation is needed, the report fails to address crucial distinctions between the successful ban on permanently blinding lasers and the proposed prohibition on autonomous weapon systems.

Notwithstanding the variety of definitions for “autonomous weapon systems,” most agree that they are weapon systems capable of independently selecting and engaging targets. They are not to be confused with drones, for example, which are remotely piloted by human operators who determine which targets to engage.

A ban on autonomous weaponry is intuitively appealing. Human beings have long found “killer robots” scary at a visceral level, as evidenced in popular culture by everything from Frankenstein’s monster to the Terminator. (Of course, human beings have also long seen robots’ potential as helpmates, as highlighted by the myth of the golem and an abiding affection for R2-D2 and Lt. Cmdr. Data.) And there is genuine cause for concern with regard to the use and unregulated proliferation of such weaponry, as discussed in an open letter on autonomous weapons, signed by Elon Musk, Stephen Hawking, and Steve Wozniak. Given these concerns and the success of weapons bans in stigmatizing the use of certain classes of weapons – including chemical weapons, biological weapons, and anti-personnel landmines – why not ban autonomous weapon systems?

The prohibition on permanently blinding lasers is particularly tempting precedent. First, it is by far the most successful ban on a new type of weaponry; no signatory has ever violated the treaty and no state has ever employed permanently blinding lasers in armed conflict. Second, it is one of the few attempts to preemptively ban a weapon (the other commonly cited one being the ban on projectiles diffusing asphyxiating gases). Ban advocates maintain that autonomous weapon systems do not yet exist, so they have a particular interest in highlighting this successful preemptive ban as a useful precedent.

However, autonomous weapon systems share few traits with permanently blinding lasers – or with other weapons that have been successfully banned. Based on a historical analysis, I have identified eight characteristics common to successful weapon bans; in a similar study, Sean Watts found seven factors relevant to whether a weapon was amenable to regulation. Our combined analyses suggest that a weapons ban is more likely to be successful where

  • The weapon is ineffective.
  • Other means exist for accomplishing a similar military objective.
  • The weapon is not novel: it is easily analogized to other weapons, and its usages and effects are well understood.
  • The weapon or similar weapons have been previously regulated.
  • The weapon is unlikely to cause social or military disruption.
  • The weapon has not already been integrated into a state’s armed forces.
  • The weapon causes superfluous injury or suffering in relation to prevailing standards of medical care.
  • The weapon is inherently indiscriminate.
  • The weapon is or is perceived to be sufficiently notorious to galvanize public concern and spur civil society activism.
  • There is sufficient state commitment to enacting regulations.
  • The scope of the ban is clear and narrowly tailored.
  • Violations can be identified.

Of these, only a single factor – civil society engagement – supports the likelihood of a successful ban on autonomous weapon systems; the others are irrelevant, inconclusive, or imply that autonomous weapon systems will resist regulation. Consider the following. Not only are such weapons in development and in use today, but they are also uniquely effective, particularly in situations where a human being could not survive or that require superhuman reaction time. They are novel, in that no other weapon is capable of similar independent action, and therefore they have no history of related regulation and significant potential for disruption. As they are primarily delivery systems, they need not cause superfluous injury, and they can be used in a discriminate manner. It would be extremely difficult to determine when a ban was violated, in part because of the hidden nature of autonomy and in part because the scope of a ban is unclear. As yet, there is no consensus as to what distinguishes the weapon systems widely deployed today from the “fully” autonomous weapon systems that pro-ban advocates find so disturbing. Accordingly, notwithstanding valiant efforts by civil society, most states currently developing and using autonomous weapon systems remain uninterested in or even explicitly opposed to an outright ban.

In contrast, the prohibition on the use of permanently blinding lasers has been successful precisely because so many of these traits weigh in its favor. The prohibition is clear and narrowly tailored: state parties cannot employ or transfer “laser weapons specifically designed, as their sole combat function or as one of their combat functions, to cause permanent blindness.” Although such weapons had not been used in combat, states knew precisely which capabilities they were forgoing, as the usage and effects of permanently blinding lasers were well understood, predictable, and limited. Nor were states sacrificing a uniquely effective weapon; states may use lasers in armed conflict for sighting or other purposes, even if they cause permanent blindness “as an incidental or collateral effect.” And though lasers intended to permanently blind enemy troops may be effective, they are not uniquely effective: temporarily blinding lasers, like the dazzlers used by U.S. forces in Iraq, can often accomplish the same military objective. Given this alternative, permanently blinding lasers will likely cause superfluous injury, galvanizing the public and civil society to advocate for a ban. The resulting prohibition permits the use of laser weapons intended to kill, but not those intended to permanently blind – a morally bizarre but, in light of the above factors, explainable distinction.

In short, autonomous weapon systems are far more akin to crossbows, submarines, and other weapons that were not successfully banned than they are to permanently blinding lasers. Unfortunately, Precedent for Preemption fails to wrestle with these crucial distinctions. The report does note that permanently blinding lasers are a type of weapon that “would not have been a groundbreaking addition to warfare,” while autonomous weapon systems “encompass a broad class” of weapons and “have the potential to revolutionize [warfare].” But rather than evaluating the impact of these differences on states’ willingness to voluntarily relinquish such weaponry, the report summarily concludes that “[i]nstead of undermining the calls for a ban, . . . the unique qualities of fully autonomous weapons make a preemptive prohibition even more pressing.” Yet these and the aforementioned distinctions are precisely what make the prohibition on permanently blinding lasers poor precedent for a ban on autonomous weapon systems.

Still, states may be willing to regulate what they will not prohibit, as occurred with nuclear weapons. There are numerous problematic moral, strategic, and legal implications of the unchecked development and transfer of autonomous weapon systems. These systems may be susceptible to certain kinds of misuse, by their original deployers or by hackers, and their lawful use may undermine fundamental humanitarian principles and protections. They may take actions for which no one can be held accountable under existing international criminal law. They may make entering into war politically easier (and thereby further concentrate the U.S. war power in the Executive branch). For these and other reasons, there is a real need to clarify the legal landscape.

States would ideally negotiate an entirely new framework convention on autonomous weapon systems, with additional protocols addressing issues of accountability, proliferation, and other specific subjects. Barring that, I agree with Precedent for Preemption’s recommendation that states build on momentum within the Convention on Certain Conventional Weapons (CCW) system to create “an open-ended Group of Governmental Experts or Working Group,” preferably composed of government, industry, and civil society representatives, to discuss what makes autonomous weapon systems unique and reach consensus on how best to regulate them.


Rebecca Crootof is an Assistant Professor of Law at the University of Richmond School of Law. Dr. Crootof's primary areas of research include technology law, international law, and torts; her written work explores questions stemming from the iterative relationship between law and technology, often in light of social changes sparked by increasingly autonomous systems, artificial intelligence, cyberspace, robotics, and the Internet of Things. Work available at www.crootof.com.
