
Considering a Legally Binding Instrument on Autonomous Weapons

Charlie Trumbull
Monday, October 28, 2024, 2:00 PM
The UN secretary-general’s call for a legally binding instrument on autonomous weapons presents challenges and opportunities for states. 
UN Secretary-General António Guterres (Photo: UN Geneva/Flickr, https://www.flickr.com/photos/unisgeneva/51146210463, CC BY-NC-ND 2.0)


Autonomous weapons systems have taken center stage in the field of disarmament. Discussions on these weapons began over a decade ago with the Group of Governmental Experts (GGE) in Geneva, Switzerland, in the context of the Convention on Certain Conventional Weapons (CCW), and have recently expanded to the Human Rights Council and the UN General Assembly. In December 2023, the General Assembly stressed “the urgent need for the international community to address the challenges and concerns raised by autonomous weapons systems.” It requested that the secretary-general “seek the views of Member States” on ways to address these challenges and “submit a substantive report” to the General Assembly at its 79th session. 

In response, the UN secretary-general submitted a report in July that included submissions from over 70 member states. Although states presented a variety of views on the path forward, the secretary-general “call[ed] for the conclusion, by 2026, of a legally binding instrument to prohibit lethal autonomous weapons systems that function without human control or oversight and that cannot be used in compliance with international humanitarian law, and to regulate all other types of autonomous weapons systems.” Negotiating such an instrument poses both challenges and opportunities.

Background

The GGE has affirmed that international humanitarian law (IHL), the legal framework applicable in armed conflict, applies to any use of autonomous weapons. It has not reached consensus, however, as to whether new international law is needed. A minority of states argue that existing IHL—including the rules on distinction, proportionality, and feasible precautions—can appropriately regulate the use of these weapons. These states also note that “smart” weapons generally pose less risk to civilians than “dumb” weapons. The international community, they argue, should not stigmatize technology that can make weapons more precise and accurate, thus reducing risks to civilians. 

A majority of states, the International Committee of the Red Cross (ICRC), and numerous nongovernmental organizations (NGOs) claim that IHL is insufficient. They argue that weapons that autonomously seek and engage targets may operate in a manner that cannot be adequately predicted, understood, or explained. This unpredictability makes it difficult for humans to make the contextualized assessments that are required by IHL and can frustrate efforts to promote accountability. These critics also contend that autonomous weapons undermine human dignity, as machine algorithms “should not determine who lives or dies.” 

Despite the differences in opinion among member states as to whether existing IHL can appropriately regulate the use of autonomous weapons, the UN secretary-general called for a legally binding instrument (LBI) to prohibit autonomous weapons that cannot be used in compliance with IHL, meaning autonomous weapons that are inherently indiscriminate or of a nature to cause superfluous injury. It appears inevitable that negotiations on an LBI, such as a multilateral treaty or CCW Protocol, will take place in some forum. 

According to Human Rights Watch, “[m]ore than 100 countries regard a new treaty on autonomous weapons systems with prohibitions and restrictions as necessary, urgent, and achievable.” Even the slow-moving, consensus-based CCW adopted a mandate in 2023 for the GGE to “consider and formulate, by consensus, a set of elements of an instrument, without prejudging its nature, and other possible measures to address emerging technologies in the area of lethal autonomous weapons systems[.]” This “negotiation mandate” adopted by CCW member states suggests military powers may be more open to taking steps that could lead to an LBI. Although some states (such as the U.S.) will almost certainly insist that the instrument be nonbinding, it could nevertheless form the foundation for a new treaty or protocol in the CCW, the UN, or elsewhere. It is therefore worth considering what an LBI might look like. 

Three Approaches to a Legally Binding Instrument

States could pursue one, or a combination, of three approaches. First, an LBI could seek to prohibit certain types of autonomous weapons, such as those designed to target humans. Second, an LBI could seek to impose a positive obligation regarding the use of autonomous weapons, such as a requirement to maintain “meaningful human control.” Third, an LBI could mandate due diligence or procedural requirements prior to any deployment of autonomous weapons. While the first two options would face significant obstacles, the third approach could present a meaningful path forward. 

Prohibiting a Category of Autonomous Weapon

An LBI could seek to prohibit the use of some class or category of autonomous weapon. This is the model for most disarmament treaties—the Anti-Personnel Mine Ban Convention and the Convention on Cluster Munitions being prominent examples. This approach is unlikely to succeed in the autonomous weapons context, however, for several reasons. For one, unlike landmines or cluster munitions, there is no generally accepted definition of autonomous weapons despite years of multilateral discussions. Some countries (e.g., France) define an autonomous weapon as a “fully autonomous” weapon system, meaning that it is “capable of changing its own rules of operation particularly as regards target engagement, beyond a determined framework of use.” Most countries and the ICRC define autonomous weapons more broadly to include weapons that can select and engage targets after activation without the intervention of a human operator. This definition could include existing weapons systems, such as the Israeli Harpy or various anti-missile defense systems. I use this broader definition below when referring to autonomous weapons. 

Setting aside definitional differences, the problem with seeking to prohibit some category of autonomous weapon is that autonomy is a capability rather than a specific classification of weapon. Weapons may have varying degrees of autonomy or have autonomous capabilities in different functions (e.g., navigation, target identification). Yet there is nothing inherently problematic about autonomy itself. In other domains, autonomous capabilities such as autopilot have made air travel safer. Autonomy could similarly make weapons more precise and discriminate. An autonomous weapon that is capable of consistently distinguishing between military objectives and civilian objects, or combatants and civilians, would present little concern under IHL. 

This does not mean the international community should blindly embrace autonomy in weapons systems or ignore the serious risks that some may pose. The real concerns with autonomy relate primarily to the quality of the technology and how it is used. This makes it exceedingly difficult to define in advance a particular class of weapons that should be prohibited per se. Even weapons that are incapable of distinguishing between military objectives and civilian objects in all circumstances could be used lawfully in certain operational environments, such as those outside of densely populated areas. 

Positive Obligation

States could also seek to impose an affirmative obligation when developing or using autonomous weapons. This approach minimizes the definitional or categorization problem, as the positive obligation could apply to weapons with varying degrees of autonomy. The proposal supported by a majority of the GGE is a requirement that states exercise “meaningful human control” over autonomous weapons systems. (This positive obligation can also be framed as a prohibition such that weapons systems designed to operate without meaningful human control are prohibited.) A minority of states, including the U.S., have strongly objected to such a requirement. Human control is one means of ensuring compliance with IHL, they argue, but not the only means. These states have also argued that “meaningful human control” is an excessively vague standard, is potentially misleading to the extent that it suggests physical control, and is not reflective of how states use existing weapons systems. Humans arguably do not “control” an anti-vehicle mine, for example, once it is emplaced. 

Some form of positive obligation, however, is intuitively appealing. Uncontrollable weapons pose significant risks to civilians and serve no military purpose if they cannot be directed at military objectives. While states may disagree on the precise wording of such a requirement, there is broad consensus on the need for human involvement in decisions to use force. In 2021, a large cross-section of states (including the U.S.) signed a joint statement at the UN General Assembly affirming that “the human element is and must remain central in the use of force” and emphasizing the “necessity for human beings to exert control, judgement and involvement in relation to the use of weapons systems in order to ensure any use is in compliance with International Law.” The U.S. and nine other countries have also proposed similar language in a working paper to the GGE, asserting that “[a]utonomous weapons systems may only be developed such that their effects in attacks are capable of being anticipated and controlled as required in the circumstances of their use by the principles of distinction and proportionality.” 

The ambiguity of “meaningful human control,” or some similar standard, is not necessarily contrary to the interests of states with advanced militaries. Many IHL rules incorporate ambiguous standards, such as the requirement to take “all feasible precautions” or the prohibition on weapons of a nature to cause “superfluous injury.” The ambiguity in these rules affords states some margin of discretion in implementing their obligations while also allowing the meaning of these ambiguous terms to be clarified through subsequent state practice. Nevertheless, seeking to identify the degree and form of human involvement required in the use of force has proved exceedingly difficult within the GGE. 

Due Diligence Requirements

States could also impose procedural or due diligence requirements during the development and use of autonomous weapons. One such requirement could build on the “Article 36 weapons review,” which requires parties to Additional Protocol I to determine whether the employment of a new weapon, means, or method of warfare “would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.” A requirement to conduct weapons reviews could be tailored and enhanced to account for the unique challenges presented by autonomous features in weapons systems. Reviews might be required, for example, throughout the life cycle of the weapon or before use in different operating environments. Other due diligence requirements—such as rigorous testing, evaluation, and risk assessments, as well as adequate training for personnel using such weapons—could also be part of any LBI negotiation. 

There is consensus within the GGE on the importance of these due diligence safeguards. The 2019 “Guiding Principles” reaffirm the need for weapons reviews and risk assessments. Adequate training is often referenced in the GGE’s annual reports and in states’ working papers. Codifying such due diligence requirements in an LBI would be a significant advancement. Despite the critical importance of weapons reviews, very few states conduct them, and numerous military powers (including the U.S., India, Pakistan, Turkey, and Israel) lack such an obligation, as they are not parties to Additional Protocol I. (The U.S. conducts weapons reviews as a matter of policy.) Thus, an LBI focused on these more procedural requirements would impose new legal obligations on key states that are developing autonomous weapons and reinforce and strengthen the obligation for states that are parties to Additional Protocol I. 

The primary obstacle to such an approach is political. Many states (especially in the Global South but also in Europe) have thus far refused to focus on due diligence measures to mitigate the risks posed by autonomous weapons for fear that those measures will legitimize such weapons. Such measures, they argue, may complement, but cannot substitute for, new prohibitions or obligations. These states would almost certainly oppose any instrument that contained only procedural or due diligence requirements—even if such requirements would improve protections for civilians—absent some sort of prohibition or positive obligation. 

The Path Forward 

Negotiations on an LBI on autonomous weapons are likely to commence within the next few years. Autonomous weapons are on the agenda for the UN General Assembly this year, and numerous regional groups have called for the “urgent negotiation” of an LBI. Proponents of an LBI will need to decide whether to proceed in a maximalist fashion tailored to states that are unlikely to develop this technology in the near future (i.e., the approach adopted by proponents of the Treaty on the Prohibition of Nuclear Weapons) or whether to seek a more modest instrument that might attract a greater number of states that are already developing autonomous technologies. The former approach could be a step in developing a norm against autonomous weapons (similar to the normative effect of the Anti-Personnel Mine Ban Convention), but for the foreseeable future it would do little to deter military powers from developing such weapons. Unlike anti-personnel landmines, which have limited military utility and pose significant risks to civilians, autonomous weapons can provide substantial military advantages over an adversary due to their ability to process data and make decisions at speeds far exceeding human capabilities. States are thus investing huge sums in pursuing these weapons. Further, a maximalist approach to an LBI could have negative consequences, such as ending discussions and debate at the GGE and reducing transparency among militaries that do not ratify the treaty. 

Skeptics of an LBI will likewise need to consider whether they are willing to accept obligations regarding autonomous weapons that arguably exceed what is required under existing IHL—particularly a positive obligation regarding human control or judgment. The U.S., specifically, will need to weigh the costs of negotiating a treaty that could theoretically reduce its operational flexibility in the future (although a carefully negotiated treaty with U.S. input would be unlikely to prohibit weapons that have significant military utility) against the costs of a more expansive treaty negotiated without its input. While the U.S. would not be bound by such a treaty unless it ratified it, an expansive treaty could still generate disproportionate reputational costs for the U.S., create interoperability issues in partnered operations, and lead to further fragmentation in the application of IHL. Such a treaty could also stigmatize a broad array of weapons with autonomous functions, including those not prohibited by the treaty, given that national security secrecy would make it difficult for states to demonstrate that any given weapons system complies with the treaty. 

In my view, the U.S. should seek an active role in shaping a seemingly inevitable LBI on autonomous weapons. Ideally, the GGE would proceed in a more pragmatic manner—creating shared understandings about the benefits and risks posed by these weapons, promoting good practices and norms, and identifying gaps in the law—before moving to negotiations on new international law. While the U.S. may reasonably argue that full compliance with existing IHL should mitigate international concerns about these weapons, this position ignores both the domestic political dynamics around autonomous weapons in many countries and the expressive function of international law. If states are going to negotiate an LBI, it is in the United States’ interest to be at the table to shape the negotiations, even if it ultimately decides not to become a party to any resulting treaty. And it is in other states’ interest to negotiate an LBI that major military powers, like the U.S., Russia, and China, could realistically join. 

Assuming negotiations commence on an LBI, the U.S. should advocate for a limited agreement that reflects military realities and actual challenges rather than speculation about future technology, focusing on achievable outcomes like procedural and due diligence requirements for weapons reviews, testing, and training. It could also encourage states to follow the model of Protocol V to the CCW on explosive remnants of war, which includes a nonbinding “technical annex” with “suggested best practice for achieving the objectives” of the Protocol. This technical annex could include good practices on limiting the geographic and temporal scope of autonomous weapons, self-deactivation or destruction measures, and other precautionary measures to minimize civilian harm. 

Developing a positive obligation will be a political imperative for most states. This is an opportunity for the U.S. to build greater trust in its own weapons development by acknowledging that new rules may be warranted for new means and methods of warfare that were not contemplated when Additional Protocol I was adopted in 1977. The U.S. can help ensure that any such obligation is consistent with responsible military practice for sophisticated weapons systems, building on its efforts to develop transparent procedures and policies on autonomous weapons.


Charlie Trumbull is an Assistant Professor of Law at the University of South Carolina Joseph F. Rice School of Law. Before entering academia in 2024, he served as an Attorney-Adviser in the Office of the Legal Adviser at the U.S. Department of State.
