
Reflections on the Chatham House Autonomy Conference

Paul Scharre
Monday, March 3, 2014, 5:34 PM


Chatham House recently held a conference on autonomous military technologies, the focus of which was really the current debate regarding autonomous weapon systems. Kudos to Chatham House for leaning forward in this critical area and for bringing together the right mix of people for an engaging and productive conference. The event was held under the Chatham House Rule, so of course I won’t identify speakers or affiliations, but I want to pass along some reflections from the event, both for those who weren’t able to attend and to continue the discussion among those who did.

Below are some of my key takeaways from the conference as well as some thoughts on how these might affect how we think about this debate. This post assumes significant prior knowledge about what an autonomous weapon is and the arguments for and against, so if you’re coming new to this, I would suggest first reading this, this, and this, or you may find yourself a little frustrated. Also, I will restrict my consideration to lethal weapons, since that’s where the discussion at Chatham House focused, but it is worth pointing out that fully autonomous non-lethal anti-personnel weapons also raise challenging issues.

Areas of agreement

Somewhat to my surprise, there was widespread agreement at the conference on a number of key issues:

Technology---There seemed to be general agreement about the capabilities of autonomous technologies today and for at least the foreseeable future. Most people seemed to agree that the automated defensive systems in existence today, like Aegis and Patriot, were necessary to defend against quick-reaction saturation attacks and that such systems were not of significant concern, since they are defensive in nature and target missiles and aircraft. Where there was concern about weapons in use today, it was that they might be precursors to something more dangerous to come, not that the technology today was necessarily problematic.

Per se illegality---Nearly all agreed that autonomous weapons are not illegal weapons prohibited per se under the laws of armed conflict (LOAC), but that most of the concerns raised about them involve possible illegal uses of these weapons. There were some, I would note, who argued that while there might be limited legal uses under the traditional principles of LOAC, autonomous weapons were nevertheless unethical and thus prohibited as offensive to the dictates of “public conscience,” citing the Martens Clause. This is an interesting argument that I’ll come back to later on, but for now it’s worth noting that no one seemed to argue that there was anything about autonomous weapons that would render them inherently illegal.

Limited lawful uses---Most also seemed to agree that there probably were limited circumstances in which a fully autonomous weapon that selected and engaged targets on its own could be used in a manner compliant with the laws of war (Martens Clause aside). Three illustrative examples:
  • First, in a situation where no civilians are present, such as underwater or when attacking armored formations in open terrain away from populated areas.
  • Second, in a situation where the degree of damage rendered was so minimal that any possible nearby civilians would not be harmed. An example might be a small robotic system that attached itself to an enemy vehicle and disabled it with a small shaped charge, such that the weapon’s effect would be more akin to sabotage than to a bomb.
  • Third, in a situation where the military necessity was so high that it vastly outweighed concerns about possible collateral damage, so that proportionality was an easy judgment for the human to make prior to launch. An example could be sending an autonomous weapon to hunt and destroy nuclear-tipped mobile missile launchers, where millions of lives were at stake and the alternative might be to use nuclear weapons.
There seemed to be widespread agreement, however, that satisfying the necessary criteria for the use of force under the laws of war, particularly the requirement for proportionality, would be very challenging if not impossible in situations where civilians might be nearby. Even if a weapon could accurately discriminate targets---and a good example here is the Israeli Harpy, an autonomous weapon that selects and engages enemy radars on its own---the view shared by many seemed to be that satisfying the criteria for proportionality would be very difficult in most situations.

Accountability---No one, mercifully, seemed to be arguing that machines are moral agents that should be held responsible under the laws of war. Human operators using machines are responsible---directly responsible, in fact---for the consequences of their use. No one seemed to dispute this, but some raised the concern that an accountability gap might nevertheless develop if a machine behaved in an unpredictable manner and did something counter to what the human user intended. In that case, who would be responsible? This could occur because the machine was hacked or tricked by an adversary, because it encountered an unanticipated situation on the battlefield, because of poor operator training, or simply because of a malfunction. Participants agreed that programmers would not be guilty of committing a war crime if a machine malfunctioned, only if they had intentionally programmed it to commit a war crime. Human operators, meanwhile, might be justified in saying they didn’t know the machine was going to take some action if it malfunctioned. This is, however, more about predictability and reliability than autonomy per se. A completely predictable and reliable system would not have this problem. The U.S. Department of Defense policy on autonomy in weapons attempts to address this issue by instituting a rigorous set of test and evaluation procedures and requirements for information assurance. It also requires that human operators be fully trained and understand how the system will function.

Human control---There was, as near as I could tell, universal agreement that humans should remain in control of decisions over the use of lethal force. There have been different formulations of this principle, such as that humans should have “meaningful” control or “appropriate” control, but since these terms are not yet defined, they seem to amount to the same sentiment: that judgment about the use of force should be retained by humans. Where parties disagreed was how to operationalize such a vague concept.

Areas of disagreement

The core disagreements, of which there was a healthy and robust number, seemed to come down to how to deal with a technology that, on the one hand, carries significant risks but, on the other, might have some limited lawful and even valuable uses. Those who focused on the risks were concerned that allowing any limited use could lead to a slippery slope in which wider, possibly unlawful, use occurred. Those who focused on the potential value of autonomous weapons in some circumstances, such as the examples I cited above, were opposed to prohibitions that might take a weapon that could potentially save lives, both military and civilian, out of the hands of military commanders. There was also a range of disagreements about what the problem with autonomy is, if there is one at all. I am going to cover briefly some of the objections raised and possible uses for autonomy, and explore how one’s views on the validity, or not, of those arguments seemed to affect what one believed should be done about autonomous weapons.

Objections to autonomy

A number of different objections to potential autonomous systems were raised, but the core of them seemed to fall into three categories:
  1. Civilian casualties---Concern over the potential for civilian casualties, partly because of the problem of distinction but primarily because an autonomous weapon might be unable to determine proportionality.
  2. Morality/ethics---Some argued that even if machines could be discriminating and proportionate, and even if they only killed enemy combatants, there was something fundamentally wrong about allowing machines to decide whom to kill.
  3. Strategic stability---Another important concern raised was about strategic stability, particularly the interaction between two nations’ autonomous systems and the possibility that they might lead to an accidental war or an uncontrollable war, even if the systems were intended to only be defensive in nature. Although many seemed to recognize this as a serious issue, it was not discussed in much depth. A number of people raised the fear that the increasing pace of war could force greater and greater automation, thereby potentially taking humans out of the decision to start a war.
I would point out that these are not just different concerns; they act on fundamentally different levels. Concerns over civilian casualties are primarily about the legality of autonomous weapons, or the possible legal or illegal uses of such weapons. Concerns over morality and ethics are a different issue entirely: there might be weapons that are legal but still unethical because they shock the human conscience. Concerns about strategic stability operate on still another level, which is really about policy and strategy. One could envision a weapon that is neither illegal nor unethical but, if it started an accidental war or made war uncontrollable, would be stupid.

It is also worth mentioning a fourth point that was not discussed but is a possible objection to autonomous weapons on purely practical grounds. A weapon that is uncontrollable or vulnerable to hacking is not very valuable to military commanders. In fact, such a weapon could be quite dangerous if it led to systemic fratricide. While humans make mistakes on the battlefield that can result in fratricide, those mistakes tend to be isolated, individual instances with relatively localized effects. What is different about autonomous weapons is that a failure in the system, either due to a malfunction, coding error, inappropriate use, or hacking by an adversary, could lead to mass systemic fratricide across the battlespace if all autonomous weapons of that type were affected. Any military system is vulnerable to these types of failures and attacks, and they are magnified when there is a large monoculture in the force, but the key difference with autonomous weapons is that a human in the loop provides a natural firebreak against large-scale fratricide. If a human has to authorize each engagement, then even if a weapon is discovered to have the potential to accidentally target friendly forces, that information can be passed along to other military personnel, who can adjust tactics and procedures for its use. A number of U.S. homing munitions are known to have the potential to lock on to friendly targets if fired at them, and so users compensate with tactics designed to prevent such incidents.

Rationale for autonomous weapons

The reasons why one might want an autonomous weapon system were given less attention, but came up briefly. In short, they are:
  1. If the speed of operations is so rapid that humans simply cannot keep pace and reactions must be automated. This sounds a little science-fictiony, but the reality is that this is why the Aegis ship-based defensive system and the Patriot air and missile defense system both have human-supervised autonomous modes. (Contrary to popular belief, the U.S. counter-rocket, artillery, and mortar (C-RAM) system does not have such a mode. A human is “in the loop” and must positively authorize every single engagement.)
  2. To allow the use of force against new, emergent targets---either defensively or offensively against targets of opportunity---while an unmanned vehicle is in a communications-denied environment and cannot reach back to human controllers in the time needed to make a decision.
In addition to the examples I gave above, others could include:
  • Hunting enemy submarines, where communications are challenging and surfacing to ask permission to fire would mean losing the target;
  • Self-defensive fire by unmanned aircraft operating in hostile territory without reliable communications links back to human controllers.
There are, of course, many other scenarios one could imagine. A key point, though, is that a major factor in whether autonomous weapons are militarily attractive or even necessary may be simply whether other nations develop them. If I had one criticism of the conference, it was that the discussion often seemed to assume that if responsible states and civil society could all agree to ban a certain weapon, then it would not be used. But, of course, that isn’t the case. Particularly because the underlying technology behind autonomy is driven by the commercial sector, the ability to build at least crude autonomous weapons will be available to many actors. One scientist even argued that the technology to build a simple and indiscriminate anti-personnel autonomous weapon existed today (and explained how one would do it). The point is that while nations do have a choice over what weapons to build, they don’t choose in a vacuum. The choices nations make will affect each other. The potential for an arms race in autonomous weapons merits further consideration.

The endgame   

There was, needless to say, widespread disagreement about what one should do to address some of these challenges as we move toward increasing autonomy in weapon systems. Views ranged from doing nothing (existing laws are sufficient) to implementing a comprehensive legally binding ban. There isn’t sufficient space here to argue all of the various possible permutations of restrictions, laws, or other regimes that could be implemented and the rationales for and against them. I would just observe that where one fell in terms of what the endgame should be seemed to depend partly on what one’s concern was. If one believed that any of the uses for autonomous weapons that might result in civilian casualties would be illegal under the laws of armed conflict anyway, that tended to push one toward believing that no new laws were necessary, though best practices or regulations could be valuable. If one had little faith that states would actually use autonomous weapons appropriately if left to their own devices, and believed that allowing any use at all was a slippery slope toward widespread use, that pushed one toward a ban. If one was primarily concerned that allowing a machine to kill a person under any circumstances was unethical, even if the person was a legitimate military target, then that also tended to lead one to argue for a complete ban.

One view that was probably under-represented at the conference, but which I have heard elsewhere, is that any restrictions, even legally binding ones, will be pointless because some states will ignore them and cheat anyway. This is of course a risk with any international treaty, but it is particularly acute in this instance because of two factors. The first is that it is not currently known how significant autonomous weapons might be on the battlefield. If they turn out to be a game-changer, it is hard to see how a country could be persuaded to give up such weapons unless it knew all other countries had given them up as well. In this respect, the problem becomes very similar to that of nuclear disarmament today. In many ways it is worse, in fact, since the number of actors with autonomous weapons could be much larger than the number with nuclear weapons today. The second problem, which amplifies the first, is that it is very hard to see how a ban could be verified. Whether a weapon is autonomous or semi-autonomous would be a feature of the software and would not be observable from the outside. It seems highly unlikely that states would release their source code, which means such a ban would really amount to “trust us.” If some states cheated, not only would enforcement be a problem, there would be no way even to catch them cheating. In that case, responsible states that respect the rule of law would find their hands tied, while the states with the least respect for the rule of law could potentially gain the upper hand in warfare. This might be one of the worst of all possible outcomes: allowing the least ethical and lawful states to have a game-changing technology, thereby tilting the playing field in their favor. In this respect, a non-legally binding norm against autonomous weapons, or even just a set of “best practices,” might be preferable, since it would at least put all states on an even playing field.

I should caveat all of the above, however, by saying that I am referring to autonomous weapons directed against enemy vehicles, radars, materiel, and the like, not personnel.
I am also assuming that any regulation or prohibition, even a non-legally binding one, would not include automated human-supervised defensive systems like Aegis and Patriot. I did not hear anyone at the conference argue in favor of fully autonomous lethal anti-personnel weapons, nor did I hear anyone argue against already-existing defensive systems like Aegis and Patriot. If there are other arguments out there, I would be interested in hearing them.

What is our model?

As a parting thought, I want to point out that however one views the challenges and advantages of autonomous weapons, there are historical examples that can help us come to grips with this new technology. Humanity has faced new and problematic weapons many times in the past, and prohibitions on certain types of weapons date back to antiquity. The Mahabharata of ancient India describes poisoned or barbed arrows as evil weapons. The Pope banned the crossbow in 1139 (for use against Christians, that is). Around the turn of the twentieth century, efforts to restrict or regulate submarine warfare, air-delivered projectiles, poison gas, and expanding bullets met with mixed results. Eventually, submarines and air-delivered weapons came to be “normalized” as weapons to be treated the same as any other. Norms against poison gas and expanding bullets, on the other hand, solidified. The arguments against these weapons were also different: for submarines and air-delivered projectiles, the concern was civilian casualties; for poison gas and expanding bullets, objections stemmed from the unnecessary suffering they caused combatants.

The twentieth century saw an explosion of prohibitions, regulations, and treaties on certain weapons in warfare. Blinding lasers and fragments not detectable by x-ray were prohibited because of their potential for unnecessary suffering. Campaigns were launched against land mines and cluster munitions because of concerns about civilian casualties. There were a host of Cold War-era treaties banning all sorts of weapons because of concerns about strategic stability: weapons on the moon, WMD in space, WMD on the seabed, intermediate-range ballistic missiles, and anti-ballistic missile systems. The U.S. has since withdrawn from the Anti-Ballistic Missile Treaty, but the other Cold War-era treaties are still in effect. Still other weapons, like herbicides, riot control agents, and incendiary weapons, are allowed in some cases and not others. Some weapons lack formal treaties, but norms have nevertheless emerged against their use, such as offensive weapons in space, anti-satellite weapons, and neutron bombs. Finally, nuclear weapons occupy their own special category: weapons that are, at least in principle, kept “in a box,” whose sole legitimate purpose should be to defend one’s nation from existential threats.

History provides many examples to draw from and many lessons for going forward. It is worth considering that different historical models might apply to different types of autonomous weapons. Even among the set of possible fully autonomous lethal weapons, there are human-supervised defensive systems like Aegis and Patriot, anti-vehicle or anti-materiel systems like the Harpy, and anti-personnel systems (which, thankfully, do not exist today). These have varying degrees of military utility and trigger, to varying degrees, concerns about civilian casualties, ethics and morality, and strategic stability.

Where is technology going?

These are tough issues, and Chatham House deserves credit for hosting a great conference that brought together a good mix of folks to discuss them. We live in a world of rapid technological change, and on issues from government surveillance to personal privacy, cybersecurity, and warfare, technology is moving faster than our policies can keep up. Grappling with the issues raised by the technology we have today is hard enough. Attempting to project forward to what may come can be overwhelming, and our instincts for envisioning future technology can often be quite wrong.

Midway through the conference, a reporter from a major news organization pulled me aside for a quick interview to help distill the main ideas of the conference for a lay audience. Inevitably, one of the first questions he asked was whether this was all leading to RoboCop and the Terminator. When we think about autonomous weapons, our minds go to science fiction because we have no other good models. But, in fact, these are terrible models for where technology is taking us. Not only will future machines not necessarily look like us, they won’t think like us either. Many of the things machines are good at, humans are terrible at, and many of the things we excel at, machines can’t do at all.

With respect to autonomous weapons, two key shortfalls relative to human cognition are often raised. The first is object recognition. Computers are notoriously bad at recognizing even simple objects, while humans have very powerful visual processing. For an autonomous weapon, this is a significant problem, since a computer cannot be expected to choose the right targets if it can’t even identify what it’s looking at. This isn’t true in all situations, since computers can also use sensors humans cannot, like non-visual parts of the electromagnetic spectrum, but it is a limitation for much automatic target recognition (ATR).

The second limitation that is often cited is the inability of computers to make value judgments about things like proportionality. This is a problem of an entirely different magnitude. It is conceivable that a computer with greater processing power and better algorithms could identify objects more accurately; whether an object is a rifle or a walking stick is an objective fact. But many decisions about the use of force do not have objective answers. Judgments about proportionality or hostile intent may be subjective and situation-dependent, and humans may differ significantly about what action is appropriate in any given situation. These types of decisions require understanding the broader political and social context of a situation, an appreciation of society’s values, and empathy. That is quite a high bar for machines to reach.

If the current pace of technology is any indication, we are likely to live in a future world of faster, smarter machines that can solve complex problems involving large amounts of data at high speeds in ways that humans simply cannot. But these machines are likely to continue to lack what we would call “common sense,” much less empathy, emotions, and values. Machines are likely to be extremely useful in military settings, perhaps including for autonomous targeting in some situations. That they will lack the reasoning required to make value judgments should make us wary of their use. That they will lack common sense should make us concerned about their implications for strategic stability and mass fratricide.
RoboCop and the Terminator are unlikely visions of the future, but we are likely to see the deployment of complex military systems that are increasingly opaque to their human operators. Even under the best of circumstances, barring glitches, bugs, or hacking, their interaction with the real world, as well as with competitive adversaries, brings the potential for unintended consequences. From Fukushima to flash crashes, we have examples of complex artificial systems that humans do not fully understand causing unintended and catastrophic accidents. Even if autonomous systems could be 100 percent free of collateral damage, a flash war would benefit no one. If there were easy answers to these issues, we would have settled them by now. The reality is that there are some circumstances, like defending military forces and civilian populations against high-speed saturation attacks from missiles, where autonomous systems are and will continue to be very valuable. There are other situations, and it is not hard to envision them, where autonomous systems would be illegal, unethical, or unwise. Discussions about where technology is going, and how we want to shape its development, are critical.

Paul Scharre is a Fellow and Director of the 20YY Warfare Initiative at the Center for a New American Security. From 2008 to 2013, he served in the Office of the Secretary of Defense (OSD), where he led the working group that drafted the U.S. Department of Defense policy on autonomy in weapons, DoD Directive 3000.09. Prior to joining OSD, he served in the U.S. Army's 75th Ranger Regiment as an infantryman and reconnaissance team leader and completed multiple tours in Iraq and Afghanistan.
