Making Unilateral Norms for Military AI Multilateral
The State Department’s political declaration on military AI is a good start to building global norms, but the U.S. needs to work with allies to make it a reality.
The speed and pitfalls of artificial intelligence (AI) development are on public display in the race for dominance among leading AI firms following the public release of ChatGPT. One area where this "arms race" mentality could have grave consequences is the military use of AI, where even simple mistakes could cause escalation, instability, and destruction. In an attempt to mitigate these risks, the State Department released the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. The declaration is a good step toward improving the global conversation around AI in military systems. The United States can work with its closest allies to turn this unilateral statement into a multilateral commitment to promote norms for military AI use around the globe.
The United States, specifically the Defense Department, has already released policy documents on AI in military affairs, including the Ethical Principles for Artificial Intelligence, the Responsible Artificial Intelligence Strategy and Implementation Pathway, and Directive 3000.09, which lay out principles and frameworks for developing autonomous weapons systems. The State Department's political declaration builds on these documents. After a short statement of purpose about the need for ethical and safe AI and the dangers of poorly designed systems, the declaration lays out best practices for responsible AI development. More specifically, it urges states to review AI systems to ensure they comply with international law, build auditable AI systems, work to reduce unintended bias in the technology, maintain acceptable levels of human judgment and training, and test for safety and alignment. For the most part, the best practices are outlined broadly. While some observers may argue for a narrower approach, breadth is a strength for a declaration designed to build a normative framework: many countries should be able to agree to these practices easily.
One particularly important inclusion in the declaration, especially because it is often left out of AI principles, concerns nuclear weapons. The declaration notes, "States should maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons deployment." This language has quickly emerged as the U.S. stock formulation for autonomy in nuclear weapons systems, as it almost directly matches the latest Defense Department Nuclear Posture Review. The review states, "In all cases, the United States will maintain a human 'in the loop' for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment." The near-verbatim repetition matters given the gravity of setting limits on AI in nuclear weapons: on something as consequential as maintaining human control of nuclear weapons, the United States has signaled a unified, government-wide policy. It is also a particularly powerful norm that other states should find relatively easy to accept, given that no country has stated a desire to hand control of nuclear weapon launches to AI.
Even as it charts a course for a normative framework, the declaration has drawn its share of criticism. Members of the Campaign to Stop Killer Robots have been particularly vocal, noting that the document "falls drastically short of the international framework that the majority of states within the UN discussions have called for." Notably, treaty discussions for a general prohibition on AI weapons systems have stalled while the field they seek to regulate continues to advance. Previous efforts to facilitate these negotiations were stymied by the three states most needed to shape and implement any such treaty: the United States, Russia, and China. In light of these challenges, it is important to recognize that a normative framework does not preclude a future treaty and, for now, is likely to be more effective at constraining troubling behavior than a stalled treaty process. A normative framework with a large degree of international buy-in could certainly become a blueprint for a future treaty. In the meantime, such a framework would be far more responsive to the speed of change in AI technology.
Maj. Gen. (ret.) Charlie Dunlap, former deputy judge advocate general of the United States Air Force, has offered a critique from the opposite direction, claiming that the declaration unnecessarily limits the ability of the United States to develop AI systems it may need in the future. Dunlap argues that "the U.S. needs to avoid imposing restrictions on itself and its allies that are not required by international law, but which could hobble the ability of commanders to exploit the battlefield potential of AI." This argument, however, actually presents a strong case for the declaration. By releasing it, the United States is publicly committing to the norms it intends to follow before AI weapons systems are developed. The norms needed to build ethical and safe AI systems, such as preventing unintended bias, must be implemented at the very beginning of the development process; outlining them in a public declaration ensures they will not be an afterthought when these plans come to fruition. These practices are also necessary components of effective weapons: a system that exhibits unintended behaviors or undue bias is not a useful weapon.
The political declaration attempts to stake out a U.S. position on military AI systems that goes beyond Defense Department documents. But for the principles it lays out to become normative, other countries must make similar declarations and adopt similar practices. It is in this respect that the declaration is lacking. The document's stated aim is "to build international consensus around how militaries can responsibly incorporate AI and autonomy into their operations[.]" Despite this goal, the United States released this supposedly multilateral document with only itself as a signatory, even as numerous countries signed a "Call to Action," released at the same conference, that covered far less ground. Much work remains to make the multilateral vision a reality rather than a mirage.
To build that consensus, the United States should seek support from other states through as many international venues as possible. Agreement within NATO, or with other close allies such as Australia and Japan, is likely the best first step, given already-close military cooperation, similar political climates, and the potential to assuage European fears over America's allegedly permissive AI ecosystem. These principles are a good start for a normative framework, but crossing the finish line will require a great deal of work fostering international conversations.
The political declaration does not make AI safe with the stroke of a pen or the shake of a hand. It does, however, represent a broadening of U.S. policy and a continuation of international conversations on AI. Criticisms of the document mistake the nature of a normative framework: it does not aspire to be a treaty, but it does introduce important limitations. Without significant effort from the U.S., the political declaration could easily die on the vine, and with it a structure for building AI technology responsibly.