
(Un)Dignified Killer Robots? The Problem with the Human Dignity Argument

Adam Saxton
Sunday, March 20, 2016, 10:28 AM

Editor's Note: Autonomous weapons systems are often vilified as “killer robots” that will slay thousands without compunction – arguments that the systems’ proponents often dismiss with a wave of their hands. Adam Saxton, a research intern at the Center for Strategic and International Studies, argues that the picture is neither black nor white. Autonomous weapons do pose ethical issues in the conduct of warfare, but often the arguments for or against them caricature the weapons and misunderstand their actual use.


Autonomous weapons, referred to by their critics as “killer robots,” have gotten an increasingly bad rap of late. Since the Future of Life Institute’s open letter began collecting thousands of signatures this past summer, public support for an international treaty to ban such weapons has grown.

Among the most frequently raised arguments against autonomous weapons is that they violate human dignity: they “select and engage targets without human intervention” and are incapable of thinking qualitatively, lacking empathy or compassion. This argument was raised at the most recent meeting of the Convention on Certain Conventional Weapons and is likely to resurface at the Lethal Autonomous Weapons Systems (LAWS) Meeting of Experts in April 2016. As UN Special Rapporteur Christof Heyns has argued, “A machine, bloodless and without morality or mortality, cannot fathom the significance of the killing or maiming of a human being.”

These arguments are not false, but they often fail to grasp the complexity of evaluating human dignity in warfare. Although the net effects of autonomous weapons on the battlefield may yet prove disastrous, the de facto use of these weapons should not be viewed as an inherent violation of human dignity due solely to the weapon’s autonomy. The development of autonomous weapons merits caution, but not a blanket prohibition based on a misunderstood notion of human dignity that forgoes the potential benefits automation may bring to warfare.

To begin with, human dignity is difficult to measure in war. Is dying before the brutal machinery of modern warfare, with its artillery rounds, improvised explosive devices, and machine guns, dignified? Were the deaths of infantrymen mowed down by machine gun fire in the muddy trenches of World War I substantially more dignified than the deaths of soldiers killed by autonomous machine guns in a potential future war? War itself takes a toll on human dignity through the intentional sacrifice of lives to achieve military objectives.


Human dignity is further complicated because soldiers can be considered part of a “war convention,” a sort of implicit or explicit social contract that establishes the rules of war and involves surrendering certain rights while enabling others. As articulated by Michael Walzer in his work reviving just war theory, military servicemen adopt a different set of rights than those that govern civilian life. Exceptional acts of humanity, compassion, and empathy are at times welcomed – but only at times. Indeed, the grim realities of war frequently cause one side to dehumanize the other, often generating grotesque stereotypes to alienate opponents. Although humans are capable of overcoming these characterizations through acts of compassion (Walzer recounts several examples of such actions), these acts go above and beyond what is morally required in war.

Human dignity and autonomy can then be considered in two different lights. From a fundamental, intrinsic perspective, the loss of empathy, while tragic, does not necessarily render a decision inherently unjust or immoral. Decisions to do less harm than is morally permissible can be acts of kindness and should be praised, but they are not always required, especially if they significantly increase the risk to one’s own troops. As Walzer elaborated, “soldiers need not risk their lives for the sake of their enemies, for both they and their enemies have exposed themselves to the coerciveness of war.” From a more consequentialist perspective, however, which evaluates autonomous weapons by their practical effects on the battlefield, the loss of empathy may generate greater violence because compassion no longer acts as a restraint. The use of autonomous weapons may thus result in more killing and suffering in war than would otherwise be necessary. This is a legitimate concern, and it hinges significantly on the degree to which the humans who launch autonomous weapons feel morally responsible for the ensuing destruction. If greater autonomy increases the moral distance soldiers feel from killing, troubling consequences may follow.

Autonomy, then, presents its greatest threat to human dignity by potentially changing the dynamic between weapons and their operators. However, autonomous weapons would not necessarily lead to a dramatic end of human control, as future weapons will likely require a human to make a qualitative decision to deploy and, to an extent, supervise the operation of the weapon.

For example, over 30 countries employ human-supervised autonomous weapons, such as the Phalanx Close-In Weapon System (CIWS), as a last line of defense against missiles and rockets. Further, even South Korea’s Super aEgis II sentry gun, which can potentially be placed in a fully autonomous mode to select and engage targets, would still require a qualitative decision by a commander to field the weapon.


The introduction of truly artificially intelligent weapons capable of deploying themselves may begin to change this dynamic. Humans may theoretically be placed so far out of the loop in deciding where, when, and how an autonomous weapon functions as to effectively erode any dominant relationship between soldiers and their weapons. Until that point is reached, however, autonomous weapons will remain partially dependent on the qualitative decisions of their human operators. It is also important to note that these ethical dilemmas involving life and death decisions are not unique to autonomous weapons; similar dilemmas are posed by autonomous systems used for purely civilian purposes.

Autonomous weapons may not blatantly violate human dignity, but many unresolved issues regarding their use remain. I would suggest that future human dignity arguments proceed along two lines. On an individual level, further research into the potential impact of autonomous weapons on the human-machine relationship is needed to determine the degree of autonomy that is acceptable in the kill chain. This may begin with testing specific weapons as they are developed, to evaluate whether they fundamentally alter the relationship between human and machine, for the sake of preserving accountability. In other words, it should be possible to hold a living, breathing human being culpable before the current, or adapted, laws of war for actions taken by an autonomous weapon on the battlefield. If an autonomous weapon went on an indiscriminate rampage, there should be some way to trace the consequences back to a human operator’s intention, error, or negligence.

Autonomous weapons should also be considered on an aggregate level, as the reckless deployment of such weapons en masse may yet produce wanton destruction. But because autonomous technology is still nascent, this is far from an inevitable conclusion, and such weapons may still have the potential to reduce civilian casualties by improving judgment and reducing human errors in the heat of battle. Autonomous weapons do not panic or seek revenge, and they are otherwise immune from several human faults that can lead to unnecessary deaths on the battlefield.

Therefore, instead of implementing an absolute ban, we should exercise caution in evaluating specific autonomous weapons systems as they are developed, weighing their potential individual and aggregate impact on human dignity.

Preserving what remains of humanity in war is a serious concern and demands vigilance and caution in testing future autonomous weapons to ensure they can comply with the laws of war. Yet, as the technology is still immature, the verdict on the humaneness of autonomous weapons should not yet be delivered.


Adam Saxton is a research intern in the International Security Program at the Center for Strategic and International Studies, where his research focuses on U.S. force structure in Europe, defense reform, and the impact of emerging technology on warfare. He was previously a Joseph S. Nye, Jr. research intern in the Defense Strategies and Assessments Program at the Center for a New American Security.
