
A Reply to Eugene Robinson

Benjamin Wittes
Sunday, July 3, 2011, 8:33 AM

My former colleague Eugene Robinson has a column in the Washington Post entitled "Assassination by Robot," which seems to me to warrant a brief response. Robinson begins by saying that, "The skies over at least six countries are patrolled by robotic aircraft, operated by the U.S. military or the CIA, that fire missiles to carry out targeted assassinations. I am convinced that this method of waging war is cost-effective but not that it is moral." And he complains that "There has been virtually no public debate about the expanding use of unmanned drone aircraft as killing machines — not domestically, at least." Robinson's complaint about the absence of debate is false, at least in my view; there has been a significant public debate on the subject. But his moral anxiety, about which I have a few thoughts, is interesting. Robinson writes:
Why should officials even think twice about using technology that can kill our enemies without putting American lives in harm’s way? Plenty of reasons. First, there’s the practical question of whether killing terrorists in this manner creates new ones. And in Pakistan, for example, the government has responded to public outrage by banning drone flights from an airfield that previously had been an operational hub, according to the Financial Times. There is also a legal question. The Obama administration asserts that international law clearly permits the targeting of individuals who are planning attacks against the United States. But this standard requires near-perfect intelligence — that we have identified the right target, that we are certain of the target’s nefarious intentions, that the target is inside the house or car that the drone has in its sights. Mistakes are inevitable; accountability is doubtful at best. Most troubling of all, perhaps, are the moral and philosophical questions. This is a program not of war but of assassination. Clearly, someone like Ayman al-Zawahiri — formerly Osama bin Laden’s second-in-command, now the leader of al-Qaeda — is a legitimate target. But what about others such as the Somali “militants” who may wish to do us harm but have not actually done so? Are we certain that they have the capability of mounting some kind of attack? Absent any overt act, is there a point at which antipathy toward the United States, even hatred, becomes a capital offense?
My response to each of these arguments is the same: Nothing about any of them depends on the fact that the weapon in question is a drone. If we were conducting precision air strikes from manned bombers, shooting people with rifles, or poisoning their soup, Robinson could still raise all of the same concerns. One might still worry, as he does, that our targeted killings were creating more terrorists than they were taking out. One could still worry that our targeting standards were insufficiently rigorous (though Robinson here rather mangles the requirements of the law of targeting). And one could still worry that, as a moral matter, we were suffering from a mission creep that was pulling us away from the Al Qaeda leadership and toward people in faraway lands who may be threatening but with whom we were not properly at war. These questions are completely technology-neutral. We would face them if we were attacking Al Qaeda with our bare hands.

Drones are a weapon. Their use raises some novel issues, but in many ways, those issues are more the logical extension of the issues raised by previous weapons technologies than departures from them. Ever since, once upon a primitive time, some Neolithic fellow figured out that he would be safer if he threw his spear at the other guy from a distance, rather than running up to him and trying to jab him with it, people have been looking for ways to fight from more stand-off platforms--in other words, to assume less risk in going into combat. Guns and arrows are technological efforts to kill accurately from a distance. Air power and artillery are both efforts to deliver explosions to places one doesn't want to risk sending people. Drones are merely the extension of this logic--a means of protecting one's people almost absolutely while they fight a nation's battles. I don't see that as intrinsically problematic, morally or legally. I see it, rather, as consistent with the entire history of the development of weaponry, which one should understand as a technological trend toward greater lethality from positions of ever-lessening exposure.

Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.
