Lethal Autonomous Weapons Systems at the First and Second U.N. GGE Meetings
Based on trends in advancing robotics technology, many experts believe autonomous—and even lethal autonomous—robots are an inevitable and imminent development. Although the capabilities of future technologies remain uncertain, given the complexity of the tasks to be executed and of the environments in which such technology would operate, it is fairly clear that exhibiting the human characteristics necessary to comply with the international humanitarian law principles of necessity and proportionality, such as making split-second judgment calls or showing concern for humanity, poses a formidable challenge for fully autonomous weapons.
As a result, many experts, states and interest groups—including the international coalition Campaign to Stop Killer Robots—have advocated for limits on the development of lethal autonomous weapons systems (LAWS). As of Nov. 16, 2017, twenty-two countries have called for a ban on LAWS; neither the U.S. nor the U.K. has joined this call. Last summer, tech companies followed suit, with CEOs including Elon Musk signing an Open Letter to the United Nations Convention on Certain Conventional Weapons. This letter warned that LAWS “[t]hreaten to become the third revolution in warfare.” Noting that “[w]e do not have long to act,” the artificial intelligence and robotics companies cautioned that “once developed, [LAWS] will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”
In Dec. 2016, the Fifth Review Conference of the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) decided to establish a Group of Governmental Experts (GGE) on LAWS. From Nov. 13 to 17, 2017, the United Nations held the first meeting of the CCW GGE on Lethal Autonomous Weapons Systems; the second meeting will take place from April 9 to 13. The GGE’s mandate is to examine emerging technologies in the area of LAWS, in the context of the objectives and purposes of the CCW, and with a view toward identifying the rules and principles applicable to such weapon systems.
This post covers the current status of LAWS, summarizing technical and legal developments, as well as the first GGE on LAWS, which focused on four dimensions of LAWS: technical; military; legal and ethical; and cross-cutting. The post concludes with what to look forward to in the second GGE on LAWS, beginning on April 9.
What are lethal autonomous weapons systems?
Though states have not agreed on a definition of LAWS, the International Committee of the Red Cross (ICRC) has defined “autonomous weapon systems” as “[a]ny weapon system with autonomy in its critical functions—that is, a weapon system that can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention.” When the U.S. Department of Defense’s Office of Technical Intelligence inquired into the current research and development opportunities surrounding an increase in technological autonomy, DoD similarly identified three multidisciplinary technical fields on which autonomy relies: perception, cognition and action.
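To make these definitions concrete, the sketch below illustrates, in simplified form, how the perception, cognition and action stages might fit together and where the ICRC’s “critical functions” of selecting and attacking targets sit. It is a hypothetical illustration only; every class and function name is invented for this post and does not describe any fielded system or DoD architecture.

```python
# Hypothetical sketch of a perception-cognition-action loop. All names are
# invented for illustration; this is not a description of any real system.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Track:
    identifier: str
    classified_as: str  # e.g., "military objective" or "civilian object"
    confidence: float

def perceive(sensor_detections) -> List[Track]:
    """Perception: turn raw detections into classified tracks."""
    return [Track(d["id"], d["label"], d["score"]) for d in sensor_detections]

def select_target(tracks: List[Track]) -> Optional[Track]:
    """Cognition: the first 'critical function' -- choosing what to engage."""
    candidates = [t for t in tracks if t.classified_as == "military objective"]
    return max(candidates, key=lambda t: t.confidence, default=None)

def engage(target: Track, human_authorized: bool) -> str:
    """Action: the second 'critical function' -- applying force."""
    if not human_authorized:
        # Human-in-the-loop design: stop and wait for an operator's decision.
        return f"holding on {target.identifier}, awaiting human authorization"
    return f"engaging {target.identifier}"
```

On the ICRC’s definition, a system would have autonomy in its critical functions only if both the selection and engagement steps could run without the human-authorization gate, which is precisely the design boundary the GGE debate turns on.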
Current status of development
A 2014 ICRC expert meeting report, Autonomous weapon systems: Technical, Military, Legal and Humanitarian Aspects, identifies ship and land-based defensive weapon systems and fixed gun systems as examples of autonomous weapons systems currently in use. These weapon systems operate in stationary roles, with differing degrees of human oversight. According to Nicholas Weaver, a Lawfare contributing editor and senior staff researcher at the International Computer Science Institute, autonomous weapons are not a “hypothetical threat tomorrow,” but rather “a real threat today.” In addition to discussing what it would look like to design and manufacture such weapons, Weaver points to the ways in which drug cartels have begun to produce armed unmanned aerial vehicles (UAVs) and the Islamic State has begun to adapt off-the-shelf drones into bombers. The ICRC notes that “existing weapon systems, including manned aircraft and defensive weapon systems, are fitted with rudimentary capabilities to distinguishing [sic] simple objects in ‘low clutter,’ relatively predictable and static environments.”
The U.S. Third Offset strategy is central to the U.S. approach to the development of LAWS. The Third Offset strategy is an effort to focus DoD innovation on the means to offset near-peer competitor advances, allowing the U.S. to maintain its military dominance. While the first and second offset strategies were developed to counter Soviet conventional superiority (President Eisenhower’s New Look Strategy in the early 1950s) and Soviet nuclear superiority (Defense Secretary Harold Brown and Deputy Defense Secretary William Perry’s Offset Strategy in the 1970s), this third offset strategy is more focused on maintaining the U.S. and its military allies’ current competitive advantage. Former Deputy Secretary of Defense, Robert Work, has said this of DoD’s Third Offset Strategy: “It basically hypothesizes that the advances in artificial intelligence and autonomy—autonomous systems—is [sic] going to lead to a new era of human-machine collaboration and combat teaming.” Work has further identified five key areas of development on which the Third Offset Strategy focuses: 1) autonomous learning systems (“learning machines”); 2) human-machine collaborative decision-making; 3) assisted human operations; 4) advanced manned-unmanned systems operations (“human-machine combat teaming”); and 5) network-enabled autonomous weapons and high-speed projectiles. In March 2017, the Center for Strategic and International Studies put together this report assessing the Third Offset Strategy.
Legal Framework
While no existing law specifically addresses LAWS, the International Court of Justice made clear in its 1996 Advisory Opinion on the Legality of the Threat or Use of Nuclear Weapons that international humanitarian law “[p]ermeates the entire law of armed conflict and applies to all forms of warfare and to all kinds of weapons, those of the past, those of the present and those of the future.” As such, the UN Convention on Certain Conventional Weapons is an appropriate forum in which states may seek to change the legal framework on LAWS.
The CCW, known officially as The Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, concluded at Geneva in Oct. 1980 and entered into force in Dec. 1983. The CCW was adopted in order to restrict weapons “considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.” While the CCW’s Protocols originally only applied to international armed conflicts, since the Second Review Conference in 2001, the CCW’s scope of application has been broadened to apply in non-international armed conflicts as well. In addition to regulating behavior during armed conflict, Protocols II and V, specifically, require parties to a conflict to take specific measures post-conflict to reduce the dangers posed by mines, booby traps, and other forms of unexploded and abandoned ordnance.
One of the CCW’s most important features is its ability to be expanded in response to the development of new weapons by allowing states parties to append protocols to the Convention. Five protocols—Protocol I (Non-detectable fragments); Amended Protocol II (Mines, booby-traps, other devices); Protocol III (Incendiary weapons); Protocol IV (Blinding laser weapons); and Protocol V (Explosive remnants of war)—are currently annexed to the CCW. The addition of these protocols means that the CCW covers non-detectable fragments, landmines, booby traps, incendiary weapons, blinding laser weapons, and the clearance of explosive remnants of war.
Notably, the U.S. is not a party to Protocol II, and submitted a 2009 reservation to Protocol III, reserving “the right to use incendiary weapons against military objectives located in concentrations of civilians where it is judged that such use would cause fewer casualties and/or less collateral damage than alternate weapons.” According to Campaign to Stop Killer Robots coordinator, Mary Wareham, many rejected this reservation as being incompatible with the CCW’s object and purpose.
One proposed approach to how policymakers, lawyers, and armed forces should conceptualize accountability for technical autonomy in war is the concept of “war algorithms.” Dustin Lewis, Naz Modirzadeh and Gabriella Blum, of the Harvard Law School Program on International Law and Armed Conflict, define a war algorithm as “[a]ny algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict.” Lewis, Modirzadeh and Blum anchor war-algorithm accountability in state responsibility for an internationally wrongful act, in individual responsibility under international law for international crimes, and in wider governmental, industry, and other forms of scrutiny, monitoring, and regulation. Complex algorithmic systems underpin more and more aspects of war-fighting, including navigating and utilizing unmanned maritime, aerial and terrain vehicles; producing collateral-damage estimations; deploying “fire-and-forget” missile systems; using stationary systems (e.g., close-in weapon systems); and designing cyber operations capable of producing kinetic effects. But the war-algorithm accountability approach also folds in how computational procedures are likely to shape many other elements of war, such as the allocation of humanitarian relief; the determination of who may be deprived of liberty (and for how long); and even the provision of legal advice to armed forces. To learn more about war algorithms, see their report, as well as Lawfare posts on the topic here, here and here. The last link includes a discussion of the Pentagon’s “Algorithmic Warfare Cross-Functional Team.”
This Lawfare post provides a brief extract from Lawfare contributing editors Kenneth Anderson and Matthew Waxman’s more in-depth examination of the legal and ethical debates surrounding LAWS. The ICRC recently published a report discussing whether the principles of humanity and dictates of public conscience—drawn from the Martens Clause of the Second Hague Convention—can allow human decision-making on the use of force to be effectively substituted with computer-controlled processes. Human Rights Watch—in collaboration with the Harvard International Human Rights Clinic—has published three reports arguing that LAWS should be preemptively banned because of the threat they pose to civilians during wartime, that they threaten to contravene certain elements of human rights law, and that they would create an accountability gap were they manufactured and deployed.
First Meeting of CCW GGE on LAWS
The UN held its first Group of Governmental Experts on Lethal Autonomous Weapons Systems meeting from Nov. 13 to 17, building upon the discussions of three prior informal expert meetings. Ambassador Amandeep Singh Gill of India chaired the meeting, in advance of which he submitted this food-for-thought paper. The GGE heard presentations in panels relating to the different dimensions of the LAWS debate: 1) Technology dimension; 2) Military effects dimension; 3) Legal/ethical dimensions; 4) Cross-cutting issues. The UN published a report of the proceedings on Dec. 22, 2017. The below summary of the meeting contains information found within the UN report, the meeting’s agenda, and reports produced by the Women’s International League for Peace and Freedom’s (WILPF) Reaching Critical Will program.
General Exchange of Views
On Nov. 13 and Nov. 15, the GGE held meetings including a “general exchange of views.” Reporting on the Nov. 13 meeting, the NGO Reaching Critical Will stresses the discrepancy between the near-universal expression of concern about LAWS and the fact that only twenty-two states have thus far expressed a willingness to preemptively prohibit their development and deployment. As France and Germany suggested in their working paper, one way that states could deal with the challenges posed by the potential development of LAWS would be to create a political declaration affirming states parties’ desire to maintain human control with regard to lethal force. According to Reaching Critical Will’s CCW report, the EU, Belgium, Spain, and Switzerland indicated preliminary support for such a political declaration. Even so, the EU, along with Australia and Cambodia, prefers encouraging transparent national weapons reviews, which could in turn allow states to exchange information and best practices stemming from those reviews.
One common understanding amongst the majority of states participating in the Nov. 13 exchange of views was the importance of retaining human control over weapon systems, including control over both the selection and engagement of targets. A number of countries also highlighted the risk of proliferation, and perhaps even of an autonomous weapons arms race.
Covering the Nov. 15 meeting, Reaching Critical Will notes that views on definitions of the technology or systems continued to differ. For example, Argentina urged the adoption of a precise and unequivocal definition, while the International Committee for Robot Arms Control suggested defining LAWS as “weapon systems that once launched can select targets and apply violent force without meaningful human control.” While all states seem to agree that IHL provides the most appropriate framework for guiding legal decisions about the development or use of LAWS, concerns about IHL compliance led to a discussion of levels of human control. Some states have begun to outline policy on the nature of human control, with the U.K. asserting that human authority is always required for the decision to strike, and Canada stating that it has “committed to maintaining appropriate human involvement in use of military capabilities that can exert lethal force.”
States like Algeria, Egypt and China, along with organizations like the Campaign to Stop Killer Robots, Human Rights Watch, and Mines Action Canada see meaningful human control as incompatible with LAWS. They thus supported a prohibition on the development and use of LAWS. States like Israel, Russia, Turkey, the U.K., and the U.S. oppose such a ban, with the U.S. and the U.K. indicating that it is “too early” to support a prohibition. States like Finland, Israel, Sweden, the U.K., and the U.S. argued that national weapon reviews are the best way to deal with LAWS. By contrast, China contended that, while national reviews are advantageous, multilateral work is necessary.
Technical Aspects
On Nov. 13, the GGE held its panel on the technical aspects of LAWS. While the UN summary of the panel emphasizes that “[t]he achievement of ‘strong’ or general AI is not as close as many believe,” Reaching Critical Will underlines the panel’s differentiation between the engineering of autonomous weapons and the actual coding of those weapons with human judgments. In essence, it is still too technologically difficult to couple artificial intelligence (AI) and autonomous physical systems; thus, for now the only autonomous weapons systems that can be built are “stupid” ones. As there are more risks from “dumb machines” and failures in human-machine interaction than from “smart machines” that can outthink humans, human participation is key to addressing the risks posed by LAWS.
The session on technological dimensions of LAWS featured six experts from industry and academia. Professor Margaret Boden, of the University of Sussex, emphasized that—even 50 years from now—robots will be unable to be moral or ethical beings. Professor Gary Marcus, of New York University, underlined how AI systems cannot yet deal with anomalous situations or common sense. Mr. Gautam Shroff, of Tata Consultancy Services, India, expressed concern about the conflation of AI and IT systems, noting that AI systems cannot be “debugged” in the same way. Mr. Harmony Mothibe, of BotsZa, South Africa, spoke of the challenges involved with the degree of interpretation in processing languages. Professor Stuart Russell, of the University of California, Berkeley, highlighted the potential benefits of AI while warning against the development of LAWS, especially with regard to the feasibility of their compliance with international law. Mr. Sean Legassick, of DeepMind, noted that the level of human control over LAWS must be high enough to match the potential for harm caused by LAWS.
Delegations also discussed the technology dimension of LAWS during an interactive discussion on Nov. 15. During this discussion, states expressed different views on whether LAWS exist or could exist in the foreseeable future.
Military effects
The GGE held its panel on the military effects of LAWS on Nov. 14. The UN summary of the panel listed a number of military applications of technologies related to LAWS. These include: enhancement of combat efficiency, reduction of the physical and cognitive load for soldiers and commanders, cost reduction, and enlargement of the area and depth of combat operations. Such military applications could be desirable from an IHL perspective, possibly allowing for less collateral damage, the use of non-lethal force for force protection, and better discrimination between civilians and combatants. Even so, there are limits to the applications of AI to the military domain, including the way that certain tasks, like that of the infantry soldier, cannot be replaced by automation. Reaching Critical Will notes the panel’s discussion of one of the most crucial issues under debate: the degree to which LAWS operating outside of “meaningful human control” can comply with IHL.
The panel on military effects of LAWS featured six experts on that dimension of weapons development. Brigadier Patrick Bezombes, of France, divided automated systems into four levels, with each subsequent level moving further from human command and control: 1) teleoperated, 2) supervised, 3) semiautonomous, and 4) autonomous. Explaining that technologies from the first three levels are already developed and deployed by many countries, Bezombes recommended that the Group focus on fully autonomous weapons. Professor Heigo Sato, of Takushoku University, Tokyo, argued that, regardless of how states decide to deal with the development or prohibition of LAWS on an individual basis, all will have to contend with the interdependent political and military aspects of LAWS, like the differences between objectives and military doctrines and dual-use control and proliferation management. Lieutenant Colonel Alan Brown, of the U.K. Ministry of Defence, stated that the “moral horror” over an accountability gap based on delegation of agency to machines is inaccurate; humans are held accountable in the current context, and would be held accountable if improper choices are made. He also asserted that a weapons review process is the best way to conduct an assessment of LAWS. Dr. David Shim, of the KAIST Institute for Robotics, Republic of Korea, suggested programming a process into LAWS by which humans can determine why certain decisions were made. Lieutenant Colonel Christopher Korpela, of the Military Academy at West Point, asserted both the operational and humanitarian benefits of autonomous systems. Ms. Lydia Kostopoulos, of the National Defense University, set forth a three-part approach to the drivers of autonomous weapons: 1) trust of the technology, 2) cultural acceptance, and 3) availability.
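Bezombes’s four levels are easiest to read as an ordering of who holds the engagement decision. The snippet below is a hypothetical rendering of that taxonomy, offered only to fix the terminology; the level names and the authorization rule are assumptions made for illustration, not an official classification scheme.

```python
# Hypothetical rendering of the four-level taxonomy discussed on the panel.
# Level names and the authorization rule are illustrative assumptions only.
from enum import IntEnum

class ControlLevel(IntEnum):
    TELEOPERATED = 1    # a human performs the task remotely
    SUPERVISED = 2      # the system acts while a human monitors and can intervene
    SEMIAUTONOMOUS = 3  # the system executes bounded tasks set by a human
    AUTONOMOUS = 4      # the system selects and engages targets on its own

def engagement_requires_human(level: ControlLevel) -> bool:
    """Only the fourth level removes the human from the engagement decision."""
    return level < ControlLevel.AUTONOMOUS

assert engagement_requires_human(ControlLevel.SUPERVISED)
assert not engagement_requires_human(ControlLevel.AUTONOMOUS)
```

Framed this way, Bezombes’s recommendation that the Group focus on fully autonomous weapons amounts to concentrating on the only level at which the human drops out of the engagement decision.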
Delegations also discussed the military dimension of LAWS during an interactive discussion on Nov. 15. On the one hand, there was discourse about the potential military advantages that may result from semi-autonomous technologies and human-machine teaming. On the other hand, there was discussion of the potential negative security implications of LAWS, as well as the military undesirability of weapons beyond human control.
Legal and ethical dimensions
Also on Nov. 14, the GGE held its panel on the legal and ethical dimensions of LAWS. The UN summary relays that respecting IHL principles like distinction, proportionality, and precaution requires a minimum level of human control and supervision, including an ability to intervene after weapon activation. The GGE also explored the potential need for standards of predictability and reliability for weapons with autonomy in their critical functions. The ICRC’s working definition of autonomy focuses on the degree of human involvement—specifically on autonomy in the critical functions of selection and targeting. The ICRC plans to update its guide on Article 36 reviews (Article 36 of Additional Protocol I to the Geneva Conventions obliges states to review the legality of new weapons); the updated guide will be available in 2018.
The session on legal and ethical dimensions of LAWS featured six experts from industry and academia. Ms. Kathleen Lawand, of the ICRC, noted that national weapon reviews are crucial, but not necessarily sufficient, for dealing with the legal issues posed by LAWS. She also set forth the ICRC’s proposed working definition of autonomous weapon systems: weapons with autonomy in their critical functions of selecting and attacking targets. Lawand emphasized that compliance with IHL requires a direct link between the human operator’s decision and the outcome of the attack. Ms. Marie-Hélène Parizeau, of the World Commission on the Ethics of Scientific Knowledge and Technology, UNESCO, communicated that the moral responsibility of taking human life cannot be delegated to a weapon, as that would both violate human dignity and devalue human life. Professor Xavier Oberson, of the University of Geneva, argued that, as LAWS have no legal personality, a “chain of liability” would need to be developed from production to use of the weapon in order to prosecute its misuse. Mr. Lucas Bento, attorney at Quinn Emanuel Urquhart & Sullivan and president of the Brazilian-American Lawyers Association, argued that the conversation should shift from the terms “autonomy” and “lethality” to “intelligence” and “violence.” Professor Bakhtiyar Tuzmukhamedov, of the Diplomatic Academy, Russian Federation, argued that without a definition of LAWS, states are not ready for a legally-binding instrument. Professor Dominique Lambert, of the University of Namur, argued that, in order to preserve a “principle of anthropological self-consistency,” states must prevent the development of unpredictable systems that allow humans to lose responsibility for their actions.
Delegations also discussed the legal and ethical dimensions of LAWS during an interactive discussion on Nov. 15. During this discussion, delegations reaffirmed that IHL applies to LAWS, and that states and humans have the ultimate legal responsibility for their use.
Cross-cutting dimensions
Finally, on Nov. 17, the GGE held its panel on the cross-cutting dimensions of LAWS. The UN summary explains that true AI has three components: 1) machine learning; 2) the ability to understand natural language; and 3) the ability to interact with human beings in a human-like manner. Relatedly, the Institute of Electrical and Electronics Engineers (IEEE) considers the term “AI” to be misleading and prefers “intelligent autonomous systems.” Practitioners have sought to introduce regulation, including through IEEE standards centered on the concept of ethical design. The UN summary further emphasizes the need to examine whether a vulnerability in an autonomous weapon could be patched remotely, whether the weapon would need to be recalled, or whether an operator could rely on some fail-safe mechanism. The UN summary underscores that states should avoid certain pitfalls when discussing autonomy, such as viewing autonomy as a general attribute of a system instead of one applying to its various functions, attempting to distinguish autonomous from automated systems, and focusing solely on full autonomy.
The Way Ahead
The GGE held its discussion on the way ahead on Nov. 16. Delegations supported continuation of the Group in 2018, and emphasized that future work should focus on building a shared understanding of characteristics related to LAWS, as well as practical measures for improving compliance with international law. Reaching Critical Will notes that China, Japan, Latvia, the Republic of Korea, Russia, and the U.S. did not want to consider tangible outcomes at the time of the conference. Delegations expressed divergent views on the pursuit of a politically-binding declaration, a code of conduct, or a technical group of experts on LAWS. There were similarly divergent views on the pursuit of a legally-binding instrument. The Non-Aligned Movement, Algeria, Argentina, Brazil, Chile, Costa Rica, Cuba, Nicaragua, Pakistan, Palestine, Panama, Peru, Sri Lanka, and Uganda all support a legally binding instrument on LAWS.
Conclusions and Recommendations
The CCW GGE reaffirmed that IHL applies fully to all weapons systems, and that the responsibility for the deployment of any weapons system in armed conflict remains with states. Noting that intelligent autonomous systems are developed for civilian, as well as military, purposes, the Group affirmed that its mandate should not hamper progress in or access to civilian research and development and use of intelligent autonomous systems. The GGE suggested that the next meeting focus on the characterization of the systems under consideration in order to promote a common understanding of the concepts necessary to the CCW’s object and purpose. It suggested a further focus on the aspects of human-machine interaction in the development, deployment, and use of LAWS.
Calls to limit development
Although reasonable minds can differ as to whether an increase in lethal weapon autonomy will have a positive effect on warfare, including—potentially—a reduction in civilian casualties, the majority of interest groups seem to favor discontinuing the development of this sort of technology. Exemplifying the polarization over development was a May 2014 debate between Georgia Tech professor and roboticist Ronald Arkin and the chair of the International Committee for Robot Arms Control, Noel Sharkey, at an informal UN meeting of experts on LAWS. While Sharkey has expressed concern about the ability of LAWS to comply with IHL and argued that weapons need to remain under meaningful human control, Arkin contends that, if suitably deployed, LAWS “may assist with the plight of the innocent noncombatant caught in the battlefield.” For more information on both the perceived advantages and disadvantages of LAWS, see this Military Review report.
In April 2013, concerns about the unlawful harm that LAWS pose led a number of NGOs to launch the Campaign to Stop Killer Robots. This international coalition works to preemptively ban the development, production, and use of fully autonomous weapons. To achieve such a preemptive ban, the Campaign to Stop Killer Robots advocates a number of possible solutions, including an international treaty, national laws and other measures, and implementation of the recommendations made by Christof Heyns, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, in his 2013 report on lethal autonomous robots. While weapons reviews are required under Article 36 of Additional Protocol I to the Geneva Conventions, the Campaign to Stop Killer Robots believes such reviews are insufficient to address all of the challenges raised by LAWS. Other notable NGOs that campaign for a ban are Human Rights Watch, the International Committee for Robot Arms Control (ICRAC) and Article 36.
See this Lawfare post on the possibility of a ban of “killer robots” and an opportunity to reframe the conversation around alternative approaches to addressing the regulatory challenges posed by LAWS.
Second Meeting of CCW GGE on LAWS
The first meeting of 2018 will be held from April 9 to 13, and the second meeting of 2018 will be held from August 27 to 31. The issues that will be addressed at the first meeting include:
- Characterization of the systems under consideration in order to promote a common understanding on concepts and characteristics relevant to the objectives and purposes of the Convention;
- Further consideration of the human element in the use of lethal force; aspects of human-machine interaction in the development, deployment and use of emerging technologies in the area of lethal autonomous weapons systems;
- Review of potential military applications of related technologies in the context of the Group’s work; and
- Possible options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of lethal autonomous weapons systems in the context of the objectives and purposes of the Convention without prejudicing policy outcomes and taking into account past, present and future proposals.
Six working papers have been submitted in advance of the 2018 GGE on LAWS, from the Non-Aligned Movement and Other States Parties to the CCW, Poland, the U.S., the ICRC, Argentina and the Russian Federation.
Regardless of whether states come to some agreement on the regulation of LAWS, or whether they continue to greatly disagree about their potential for abuse, this week’s discussions are well worth the attention of those involved in the development of international security and humanitarian policy.