Verifying Who Pulled the Trigger

Editor’s Note: For those eager to restrict the use of lethal autonomous weapons, one of the biggest challenges is verification—a difficult-to-detect variation in software can mean the difference between full autonomy and complete human control. Drawing on lessons learned from drone forensics, King’s College’s Zachary Kallenborn calls for the use of post-hoc verification to determine whether states are violating restrictions.
Daniel Byman
***
For the past decade and a half or so, states and arms control activists have discussed the potential for new international restrictions on lethal autonomous weapons, which select and engage targets without human input. Critics argue that such weapons are prone to errors and make decisions too quickly for humans to manage effectively, risking inadvertent escalation and increasing the number of dead and injured civilians. Such technology is not science fiction: A March 2021 United Nations report alleges a Turkish-made Kargu-2 drone in Libya “hunted down and remotely engaged” retreating soldiers. (Although Ismail Demir, the president of Turkey’s defense industries agency, denied the Kargu-2 was used this way, he acknowledged Turkish platforms can field the technology.)
One major impediment to restrictions on autonomous weapons systems is the difficulty of verifying how autonomous systems are being used. Verification is valuable because it can help the international community trust that restrictions are being followed. If a state builds and uses, say, a mobile autonomous drone that targets human beings in violation of an international ban, verification measures can identify that. The international community can then issue economic or diplomatic sanctions or other punitive measures, which would hopefully also help deter future violations. Whereas violations of international limits on nuclear weapons proliferation can have big, obvious indicators, like the presence of centrifuge cascades for uranium enrichment, autonomy resides primarily in software. A lethal autonomous weapon may look just the same from the outside as a remotely operated weapon system, and sometimes “lethal autonomy” might be just a setting on an otherwise remotely operated weapon.
Even within autonomous software, there are significant nuances. Autonomy is a matter of code: an autonomous drone contains programming that allows it to select and engage targets without human control. However, a drone equipped with machine vision to support terminal guidance and specialized artificial intelligence (AI) chips to improve data processing might still keep a human in the loop (the human selects and engages targets) or on the loop (the system selects targets on its own, but a human supervises it and can override its decisions). Different code might not have a human in or on the loop at all.
A paradigm shift is needed to ensure accurate verification for regulating these new systems. Traditional arms control verification is production and acquisition oriented, which makes sense because nuclear, biological, and chemical weapons all require specialized material, equipment, and skills. But a simple autonomous weapon just requires basic programming skills; hobbyists have even built their own at-home autonomous paintball sentry turrets. Instead, arms control approaches should emphasize in-situ and post-hoc verification: determining whether a controlled weapon is being or was used in combat, and inflicting appropriate levels of punishment in response. Although such an approach will not deter autonomous weapons use completely, it may help encourage global norms around responsible use of autonomous weapons and deter particularly egregious use.
The Opportunity of Post-Hoc Verification
Drones provide precise mass: their affordability and expendability allow many drones to be mobilized at once to overwhelm adversaries, coupling mass with extreme accuracy. Even if most are shot down, a few may still get through to cause harm. And even when little direct harm is done, the drones can deplete adversary air defenses while fixing those defenses in place so that they cannot be used in other areas. Consequently, both Ukraine and Russia have built and fielded more than a million drones.
One touted value of autonomous weapons is their responsiveness and speed. Military leaders have long talked, with anticipation and trepidation, about a future of warfare that moves at “machine speed.” In a fast-moving fight, when seconds matter, a computer’s ability to sense, process, and decide on a course of action more quickly than a human can would provide a real advantage. That scenario also provides an opportunity for arms control verification. If autonomous weapons respond far more quickly than humans can, that difference may be observable with appropriate sensors and data processing capabilities. Collected sensor data may provide additional evidence of lethal autonomy.
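To illustrate the idea, here is a minimal sketch of how such sensor data might be screened. It assumes synchronized timestamps for when a target became visible and when the weapon engaged, and it uses an illustrative half-second floor on human reaction time; the threshold, data structures, and example events are hypothetical rather than drawn from this essay’s sources or any fielded system.

from dataclasses import dataclass

HUMAN_REACTION_FLOOR_S = 0.5  # assumed lower bound for a remote human decision cycle (illustrative)

@dataclass
class EngagementEvent:
    target_detected_at: float  # seconds, from synchronized sensor logs
    weapon_engaged_at: float   # seconds, on the same timeline

def looks_machine_speed(event: EngagementEvent) -> bool:
    """Return True if the observed reaction time is implausibly fast for a human operator."""
    reaction_time = event.weapon_engaged_at - event.target_detected_at
    return 0 <= reaction_time < HUMAN_REACTION_FLOOR_S

# Illustrative use: three observed engagements, two faster than the assumed floor.
events = [
    EngagementEvent(10.0, 10.2),
    EngagementEvent(42.0, 45.5),
    EngagementEvent(60.0, 60.3),
]
flagged = [e for e in events if looks_machine_speed(e)]
print(f"{len(flagged)} of {len(events)} engagements flagged as machine-speed")

Any real analysis would also have to account for sensor error, network latency, and engagements pre-authorized by a human, so a fast reaction time would be one clue among many rather than proof of autonomy.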
Similarly, the scale and nature of drone use may provide useful indicators of autonomy. When drones are used en masse or in a true, interconnected drone swarm, it is unlikely that individual operators are controlling each drone, especially if hundreds of drones are involved. Humans are unlikely to have the cognitive capacity to control or coordinate so many drones simultaneously, though a similar effect could be achieved if the drones have been pre-programmed to follow a particular GPS route, a form of navigational autonomy about which arms controllers have not evinced particular concern. Nonetheless, observed adaptability and behavior changes in response to external events during mass drone use may provide useful insights.
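As a toy illustration of that distinction, the sketch below compares an observed flight track against a pre-declared route and flags large deviations that begin after an external event, something a drone merely following pre-programmed GPS waypoints would not produce. The tracks, the 50-meter threshold, and the event timing are all hypothetical assumptions made for illustration.

def max_deviation_after(track, planned, event_time):
    """Largest distance between observed and planned positions after event_time.

    track and planned are lists of (t, x, y) tuples sampled at the same times.
    """
    deviations = [
        ((ox - px) ** 2 + (oy - py) ** 2) ** 0.5
        for (t, ox, oy), (_, px, py) in zip(track, planned)
        if t >= event_time
    ]
    return max(deviations, default=0.0)

ADAPTIVE_THRESHOLD_M = 50.0  # assumed: deviations beyond this suggest re-tasking, not a fixed route

# Hypothetical data: the drone departs sharply from its planned route after an event at t = 30 s.
planned = [(t, t * 10.0, 0.0) for t in range(0, 60, 5)]
observed = [(t, t * 10.0, 0.0 if t < 30 else (t - 30) * 8.0) for t in range(0, 60, 5)]

deviation = max_deviation_after(observed, planned, event_time=30)
print(f"max post-event deviation: {deviation:.0f} m ->",
      "adaptive behavior" if deviation > ADAPTIVE_THRESHOLD_M else "consistent with pre-programmed route")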
The mass production of drones suggests an expectation of mass loss, and when these losses occur, they create another verification opportunity. Ukraine and Russia need so many drones because they lose so many. These downed drones can be collected and analyzed to determine whether they meet the requirements of any arms control agreements.
Recovered data from drones may include flight logs, altitude data, images, and video that characterize drone operations. In addition, organizations like Conflict Armament Research have practiced hardware forensics to identify, analyze, and trace the origins of physical drone components. Reverse engineering is a related discipline focused on characterizing the hardware and software in a drone system in order to build tools for hardware and software forensics. Analysis of drones’ hardware and software architecture might also provide relevant clues. If a collected drone seems to lack the capacity to receive remote input, that would be strong evidence that it operated autonomously. Conversely, although the presence of chips specially designed for AI applications is not conclusive evidence of an autonomous weapon, the absence of such a chip would provide strong evidence that the system lacks autonomous capabilities. These forensic techniques could be used to analyze collected drones to assess the likelihood that they were used as autonomous weapons.
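As a rough illustration of how such indicators might be weighed together, the sketch below combines a handful of hypothetical forensic findings into a simple score. The indicator names, weights, and example findings are assumptions made for illustration; they are not drawn from the essay’s sources or from any published forensic methodology.

INDICATOR_WEIGHTS = {
    "no_remote_receiver": 0.5,             # no hardware capable of receiving remote input
    "onboard_ai_accelerator": 0.2,         # chip specialized for AI workloads present
    "onboard_target_model": 0.2,           # target-recognition model found in recovered software
    "logs_show_unlinked_engagement": 0.1,  # flight logs record an engagement with no uplink active
}

def autonomy_likelihood(evidence):
    """Return a 0-1 score summing the weights of indicators observed on the drone."""
    return sum(weight for name, weight in INDICATOR_WEIGHTS.items() if evidence.get(name, False))

# Illustrative use with hypothetical findings from a recovered airframe.
findings = {
    "no_remote_receiver": True,
    "onboard_ai_accelerator": True,
    "onboard_target_model": False,
    "logs_show_unlinked_engagement": True,
}
print(f"autonomy likelihood score: {autonomy_likelihood(findings):.2f}")

In practice, investigators would weigh such evidence qualitatively and in combination with contextual information like that discussed below, rather than relying on a single numeric score.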
Drone forensics is unlikely to be a verification panacea. The difficulty of forensic analysis can vary greatly with the type and sophistication of the drone. Whereas commercial off-the-shelf drones may have readily available forensic images with known components and software, military drones do not. Drones, especially sophisticated military drones, may have self-destruct functions to deny investigators or rival militaries insights into their systems. They also may have sophisticated encryption and other protections to inhibit analysis. Nonetheless, forensic analysis might provide useful clues, especially in combination with other information.
Additional technical data exists in the larger military and governmental ecosystem, which also may provide supporting (or refuting) evidence of autonomous weapons use. States may collect signals intelligence regarding orders or discussions about plans to employ lethal autonomy. Policies, laws, and public statements regarding the use of autonomy will provide indicators of governments’ larger stance toward those weapons and how they view the associated risks. Any autonomous weapon must also be integrated into the larger doctrine, organization, training, materiel, leadership and education, personnel, and facilities systems of the country’s military and national security agencies. Of course, those considerations are useful only to the degree states are transparent about them, but they provide useful contextual information regarding the basic plausibility of a claim of lethal autonomous weapons use.
When it comes to autonomous weapons, typical means of verification are difficult to apply. But that doesn’t mean verification is impossible. International governmental and nongovernmental organizations concerned about autonomous weapons proliferation should invest in technical research on drone forensics and reverse engineering, with a particular focus on approaches for determining whether a given weapon system can operate autonomously.
This essay is derived from “Sherlock Drone: Drone Forensics and Global Security,” by Zachary Kallenborn and Marcel Plichta, in the RUSI Journal.