Too Early for a Ban: The U.S. and U.K. Positions on Lethal Autonomous Weapons Systems
As I discussed in a previous post, the Convention on Certain Conventional Weapons Group of Governmental Experts (GGE) on lethal autonomous weapons systems (LAWS) is meeting for the second time to discuss emerging issues in the area of LAWS. The discussions are grounded in four overarching issues:
- characterization of the systems under consideration in order to promote a common understanding on concepts and characteristics relevant to the objectives and purposes of the CCW;
- further consideration of the human element in the use of lethal force, aspects of human-machine interaction in the development, deployment and use of emerging technologies in the area of lethal autonomous weapons systems;
- review of potential military applications of related technologies in the context of the Group’s work;
- possible options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of LAWS in the context of the objectives and purposes of the CCW without prejudicing policy outcomes and taking into account past, present and future proposals.
As two of the countries reportedly investing in the development of LAWS, the U.S. and the U.K. are ones to watch at this week’s conference. For over a century, the U.S. and the U.K. have maintained a “special relationship,” one that “has done more for the defense and future of freedom than any other alliance in the world.” Given that one pillar of the special relationship is the closeness of the U.S. and U.K. militaries, it is unsurprising that the two countries hold much the same opinion when it comes to LAWS: It is too early for a prohibition. For the U.S., that position is grounded in concrete examples of how past “autonomy-related” technologies have advanced civilian protection. The U.K., by contrast, resists a ban based on skepticism that LAWS will ever exist.
This post will outline the U.S. and U.K. positions on LAWS, as advanced by working papers, statements and policies.
U.S. Position on LAWS
Ahead of the first GGE meeting on LAWS, the U.S. submitted two working papers: “Autonomy in Weapon Systems” and “Characteristics of Lethal Autonomous Weapons Systems.” The U.S. also made three publicly available statements at that meeting: an opening statement, a statement on appropriate levels of human judgment, and a statement on the way forward. Similarly, the U.S. published a third working paper, “Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems,” in advance of the second GGE. The U.S. position as conveyed in its published statements is that the technology behind LAWS offers potential humanitarian and military benefits, so it is premature to ban it.
Working Paper: “Autonomy in Weapon Systems”
In “Autonomy in Weapon Systems,” the U.S. sets out its assessment of the legal review of weapons with autonomous functions, ultimately concluding that rigorous testing and sound development of weapons, while not required by international humanitarian law (IHL), can support the implementation of IHL requirements. The U.S. takes care to highlight a distinction between targeting issues and questions of weapons being “inherently indiscriminate.” Weapons that are “inherently indiscriminate” are per se contrary to IHL, while most targeting issues arise on a case-by-case basis and can be resolved only against the background of a particular military operation. Because Defense Department policy requires that the department’s weapons and weapon systems be acquired and procured consistent with applicable domestic and international law, the department’s legal review of a weapon focuses on whether that weapon is illegal per se. While the use of autonomy to aid in the operation of weapons is not illegal per se, it may be appropriate for programmers of weapons that use autonomy in target selection and engagement to consider programming measures that reduce the likelihood of civilian casualties. To that end, practices like those espoused in Pentagon policy, requiring autonomous and semi-autonomous weapon systems to undergo “rigorous hardware and software verification and validation (V&V) and realistic system developmental and operational test and evaluation (T&E),” can help reduce the risk of unintended combat engagements.
Next, the U.S. explains how weapon systems with autonomous functions could comply with IHL principles in military operations, augmenting human assessments of IHL issues with a weapon’s own assessment of those issues (through computers, software and sensors). The law of war requires that individual human beings, through the mechanism of state responsibility, ensure compliance with the principles of distinction and proportionality, even when using autonomous or semi-autonomous weapon systems. By contrast, it does not require weapons, even autonomous ones, to make legal determinations; rather, weapons must be capable of being employed consistent with IHL principles. The U.S. then points to a best practice (found within Defense Department policy) for improving human-machine interfaces that assist operators in making accurate judgments: The interface between people and machines for LAWS should be readily understandable to trained operators; provide traceable feedback on system status; and provide clear procedures for trained operators to activate and deactivate system functions. The U.S. contends that, rather than replacing a human’s judgment of, and responsibility for, IHL issues, LAWS could actually improve humans’ ability to implement those legal requirements.
The U.S. then describes how autonomy in weapon systems can create more capabilities and enhance the way IHL principles are implemented. The U.S. argues that military and humanitarian interests can converge, ultimately reducing the risk weapons pose to civilians and civilian objects. For example, the U.S. discusses how autonomous functions could be used to create munitions that self-deactivate or self-destruct; create more precise bombs and missiles; and allow defensive systems to select and engage enemy projectiles. While the first two examples can reduce the risk of harm to the civilian population, the third can give commanders more time to respond to threats. Autonomous functions could thus allow for increased operational efficiency and a more precise application of force.
Finally, the U.S. characterizes the framework of legal accountability for weapons with autonomous functions. The U.S. first notes that states are responsible for the use of weapons with autonomous functions through the individuals in their armed forces, and that they can use investigations, individual criminal liability, civil liability, and internal disciplinary measures to ensure accountability. The U.S. next explains how persons are responsible for individual decisions to use weapons with autonomous functions, though issues normally present only in the use of weapon systems now also arise in the development of such systems. The upshot is that people who engage in wrongdoing in weapon development and testing could be held accountable. The U.S. states that the standard of care owed to civilians must be assessed based on general state practice and the common operational standards of the military profession. Finally, the U.S. observes that decision-makers must generally be judged based on the information available to them at the time; thus, training on, and rigorous testing of, those weapons, as described in Defense Department policy, can help promote good decision-making and accountability.
Working Paper: “Characteristics of Lethal Autonomous Weapons Systems”
In “Characteristics of Lethal Autonomous Weapons Systems,” the U.S. explains why it is unnecessary for the GGE to adopt a specific definition of LAWS. The U.S. contends that IHL provides an adequate system of regulation for weapon use and that the GGE can understand the issues LAWS pose with a mere understanding of LAWS’ characteristics, framed by a discussion of the CCW’s object and purpose. The U.S. further notes that the development of a definition—with a view toward describing the weapons to be banned—would be premature and counterproductive, diverting time that should be spent understanding issues to negotiating them.
In support of its view that identifying the characteristics of LAWS would promote a better understanding of them, the U.S. frames the way these characteristics should be set out. The U.S. asserts that the characteristics of LAWS should be intelligible to all relevant audiences; not identified based on specific technological assumptions, such that those characteristics could be rendered obsolete by technological development; and not defined based on the sophistication of the machine intelligence. The U.S. articulates how focusing on the sophistication of machine reasoning stimulates unwarranted fears. Instead, the U.S. stresses that these characteristics should focus on how humans will use the weapon and what they expect it to do.
Finally, the U.S. offers some internal Pentagon definitions to describe autonomy in weapon systems. Though these definitions were created after considering existing weapon systems, the U.S. offers them for the GGE’s consideration because they focus on what the U.S. believes is the most important issue posed by autonomy in weapon systems: the ability of the people who use the weapons to rely on them to select and engage targets. One of the definitions the U.S. offers is for “autonomous weapon system,” which it defines as “[a] weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”
Opening Statement at First GGE
In its opening statement delivered by State Department lawyer Charles Trumbull, the U.S. emphasized the importance of the weapon review process in the development and acquisition of new weapon systems. The U.S. also asserted that it continues to believe advances in autonomy could facilitate and enhance the implementation of IHL—particularly the issues of distinction and proportionality. Noting that “[i]t remains premature … to consider where these discussions might or should ultimately lead,” the U.S. stated that it does not support the negotiation of a political or legally binding document at this time.
Statement at First GGE: Intervention on Appropriate Levels of Human Judgment over the Use of Force
In the second statement, delivered by Lieutenant Colonel John Cherry, Judge Advocate, U.S. Marine Corps, the U.S. answered one of the questions Chairman Amandeep Singh Gill posed in his food-for-thought paper: Could potential LAWS be accommodated under existing chains of military command and control? The U.S. responded in the affirmative, detailing how commanders currently authorize the use of lethal force based on indicia like the commander’s understanding of the tactical situation, the weapon system’s performance, and the tactics, techniques and procedures employed for that weapon. Observing that states will not develop and field weapons they cannot control, the U.S. urged a focus on “appropriate levels of human judgment over the use of force,” rather than on the controllability of the weapon system. The U.S. contended that a focus on the level of human judgment is appropriate because it both centers on the human beings to whom IHL applies and reflects the fact that there is no fixed, one-size-fits-all level of human control that should be applied to every weapon system.
Statement at First GGE on the Way Forward
In its last statement delivered by State Department lawyer Joshua Dorosin, the U.S. reiterated its support of a continued discussion of LAWS within the CCW. It further cautioned that it is premature to negotiate a political document or code of conduct when states lack a shared understanding of the fundamental LAWS-related issues. As emphasized in its working paper, the U.S. urged states to establish a working understanding of the common characteristics of LAWS. Finally, the U.S. stated its support for further discussions on human control, supervision or judgment over decisions to use force, as well as state practice on weapons reviews.
Working Paper: “Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems”
In “Humanitarian benefits of emerging technologies in the area of lethal autonomous weapon systems,” the U.S. examines the ways in which emerging, autonomy-related technologies could enhance civilian protection during armed conflict. It contends that state practice shows five ways that civilian casualties might be reduced through the use of LAWS: incorporating autonomous self-destruct, self-deactivation, or self-neutralization mechanisms; increasing awareness of civilians and civilian objects on the battlefield; improving assessments of the likely effects of military operations; automating target identification, tracking, selection, and engagement; and reducing the need for immediate fires in self-defense. After discussing each way that emerging technologies in the area of LAWS have the potential to reduce civilian casualties and damage to civilian objects, the U.S. concludes that states should “encourage such innovation that furthers the objectives and purposes of the Convention,” instead of banning it.
The U.S. first discusses examples of weapons with electronic self-destruction mechanisms and electronic self-deactivating features that can help avoid indiscriminate area effects and unintended harm to civilians or civilian objects. The U.S. also points to weapon systems, such as anti-aircraft guns, that use self-destructing ammunition; this ammunition destroys the projectile after a certain period of time, diminishing the risk of inadvertently striking civilians and civilian objects. Though such mechanisms are not new, increasingly sophisticated versions of them often accompany advances in weaponry. Just as Defense Department policy dictates that measures be taken to ensure autonomous or semi-autonomous weapon systems “complete engagements in a timeframe consistent with commander and operator intentions and, if unable to do so … terminate engagements or seek additional human operator input before continuing the engagement,” so too could future LAWS be required to incorporate self-destruct, self-deactivation, or self-neutralization measures.
Second, the U.S. underscores the way that artificial intelligence (AI) could help commanders increase their awareness of the presence of civilians, civilian objects and objects under special protection on the battlefield, clearing the “fog of war” that sometimes causes commanders to misidentify civilians as combatants or to be unaware of the presence of civilians in or near a military objective. AI could enable commanders to sift through the overwhelming amount of information to which they might have access during military operations, like hours of intelligence video, more effectively and efficiently than humans could on their own. In fact, the Pentagon is currently using AI to identify objects of interest from imagery autonomously, allowing analysts to focus instead on more sophisticated tasks that require human judgment. The increased military awareness of civilians and civilian objects could not only help commanders better assess the totality of expected incidental loss of civilian life, injury to civilians, and damage to civilian objects from an attack, but could also help commanders identify and take additional precautions.
Third, the U.S. discusses how AI could improve the process of assessing the likely effects of weapon systems, with a view toward minimizing collateral damage. With the U.S. already using software tools to assist in these assessments, more sophisticated computer modeling could allow military planners to assess the presence of civilians or the effects of a weapon strike more quickly and more often. These improved assessments could, in turn, help commanders identify and take additional precautions while achieving the same or a superior military advantage in neutralizing or destroying a military objective.
Fourth, the U.S. points out how automated target identification, tracking, selection and engagement functions can reduce the risk weapons pose to civilians and allow weapons to strike military objectives more accurately. The U.S. lists a number of weapons, including the AIM-120 Advanced Medium-Range Air-to-Air Missile (AMRAAM), the GBU-53/B Small Diameter Bomb Increment II (SDB II) and the Common Remotely Operated Weapon Station (CROWS), that use autonomy-related technology to strike military objectives more accurately and with less risk of harm to civilians and civilian objects. The U.S. contends that these examples illustrate the potential of emerging technologies in LAWS to reduce the risk to civilians in applying force.
Finally, the U.S. engages with ways that emerging technologies could reduce the risk to civilians when military forces are in contact with the enemy and applying immediate use of force in self-defense. The U.S. contends that the use of autonomous systems can reduce human exposure to hostile fire, thereby reducing the need for immediate fires in self-defense. It points to the way that remotely piloted aircraft or ground robots can scout ahead of forces, enable greater standoff distance from enemy formations, and thus allow forces to exercise tactical patience, reducing the risk of civilian casualties. The U.S. also notes how technologies—like the Lightweight Counter Mortar Radar—can automatically detect and track shells and backtrack to the position of the weapon that fired the shell; this and similar technologies can be used to reduce the risk of misidentifying the location or source of enemy fire. The U.S. lastly asserts that defensive autonomous weapons—like the Counter-Rocket, Artillery, Mortar (C-RAM) Intercept Land-Based Phalanx Weapon System—can counter incoming rockets, mortars, and artillery, allowing additional time to respond to an enemy threat.
U.K. Position on LAWS
Unlike the U.S., the U.K. did not submit working papers in advance of either the first or the second GGE on LAWS. However, as I described in this Lawfare post, the U.K. Ministry of Defence (MoD) last fall published its doctrine on best practices for unmanned aircraft systems: Joint Doctrine Publication (JDP) 0-30.2, Unmanned Aircraft Systems (JDP 0-30.2). Within JDP 0-30.2, the MoD provides an updated policy on LAWS and acknowledges that its definition of a “lethal autonomous weapons system” differs from those of other states. The MoD defines an autonomous system as one “capable of understanding higher-level intent and direction … [i]t is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control.” As I noted previously, defining LAWS in such a futuristic way makes it difficult to discern the U.K. position on other, less sophisticated LAWS that are actually on the cusp of development.
The MoD states, “The UK does not possess fully autonomous weapon systems and has no intention of developing them. Such systems are not yet in existence and are not likely to be for many years, if at all.” Importantly, the MoD further asserts that “[t]he UK Government’s policy is clear that the operation of UK weapons will always be under human control as an absolute guarantee of human oversight, authority and accountability.” The position articulated in this joint doctrine publication echoes the U.K. position set forth at the 2016 informal meeting of experts on LAWS.
Prior to the November GGE on LAWS, U.K. NGOs Article 36 and the U.K. United Nations Association—both members of the Campaign to Stop Killer Robots—wrote to the U.K. Foreign and Commonwealth Office (FCO). Article 36 and the United Nations Association urged the FCO to change its position on LAWS and to address the range of legal and ethical issues LAWS pose. Concerned that the U.K.’s current definition of LAWS might indicate that “the UK is effectively giving a green light for the development of future systems with an unacceptably high degree of autonomy,” the organizations called on the government to increase the pace of CCW discussions so as to catalyze a global standard for the level of human control necessary in weapons systems. In its reply letter, the FCO stated that “it has been the UK’s consistent view that the GGE should focus on establishing working definitions of LAWS and Meaningful Human Control.”
Though the U.S. is signaling on the international stage that it does not want a political or legally binding instrument limiting the use or development of LAWS, its own policy seems to straddle the fence in the debate about the potential advantages and disadvantages of LAWS. For example, Defense Department Directive 3000.09 states that “[a]utonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” As this Human Rights Watch and International Human Rights Clinic memo points out, “[t]he Directive is in effect a moratorium on fully autonomous weapons with the possibility for certain waivers.” At the same time, the U.S. has very clearly demonstrated its belief, based on its past experience with weapons with autonomous functions, that LAWS can prevent much harm to civilians during armed conflict.
The U.K. has similarly equivocated with regard to its definition of LAWS. By defining LAWS as narrowly as it has (“machines with the ability to understand higher-level intent, being capable of deciding a course of action without depending on human oversight and control”), the U.K. can more easily state that it has not developed, and will not develop, such weapon systems.