
Tom Malinowski Responds on Autonomous Lethal Systems

Kenneth Anderson, Matthew Waxman
Wednesday, December 5, 2012, 6:36 AM


Tom Malinowski of Human Rights Watch has responded to our critique of their recent report on autonomous lethal systems. We will post more detailed thoughts on this later, but in the meantime we'd like to share Tom's response, which is printed below in its entirety. For now we will make just one quick comment.

Some of the arguments below (and by others in favor of a treaty ban on autonomous lethal systems) assume the likelihood of an effective international prohibition. We don't share that rosy assumption, especially with regard to compliance by the states we're most worried about. Take Tom's point below that an international ban on autonomous lethal systems will help ensure that they're never used by brutal dictators like Syria's Assad on their own people. As we write this, we're also reading U.S. government warnings to Assad that he'll probably face direct U.S. action if he uses his vast caches of chemical weapons against the rebels. We don't think it's the Chemical Weapons Convention that's stopping him: the CWC didn't stop him from gaining this capability in the first place, and we're dubious that it serves as a direct barrier to his using chemical weapons (though international law here probably does serve as an important background marker legitimating action against him by the U.S. and others). Nor do we think it's the moral code of his commanders that will determine whether he uses these weapons. Nor do we think he feels he needs sophisticated robotic systems to wipe out civilians and their property. This is a case where only a strong combination of power and principles will stop Assad, and that's what we are trying to promote in our proposals.

Here's Tom's response to our critique:
Having known Matt Waxman for many years, I want very much to believe that neither he nor his colleague Ken Anderson is a cyborg sent from the future to dissuade Lawfare readers from supporting a ban on fully autonomous war-fighting robots. In that spirit, let me address at face value some of their objections to saving humanity from this threat. I'll reply to Ben's additional critique of Human Rights Watch's report on lethal autonomy in a separate post.

First, readers should understand why HRW has joined many others in raising alarm about allowing robots to make targeting decisions, and in insisting that militaries always keep a "man in the loop" when deploying robots to the battlefield. No one needs to tell us that human beings are capable of committing the most terrible crimes. But we don't believe that automating warfare is the answer to that problem. The decisions an ethical soldier and commander must make, against adaptive enemies in unpredictable battlefield situations, are too subtle, and too human, for any computer program to replicate.

Even if, in some distant future, advances in artificial intelligence give robots an artificial conscience (something most experts we consulted doubt), many nations, including US adversaries, will be able to deploy effective autonomous robots far sooner. Could such machines (particularly the first generations to be deployed) distinguish between combatants and civilians in societies where combatants don't wear uniforms, and civilians are often armed? Will we ever be comfortable letting machines make calculations about proportionality, which would require them to weigh the value of a human life? Could robots recognize surrender? Show mercy? Interpret the behavior of, and listen to appeals from, the civilians they encounter, and exercise the wisdom and judgment to adjust their behavior accordingly?

To me, perhaps the most irreplaceable quality human soldiers possess is the capacity to refuse orders. Even the worst dictators, who commit atrocities with little fear of being held accountable, must weigh the possibility of their troops rebelling, or simply failing to carry out their duty. That check on tyranny would be removed if the Qaddafis and Assads of the world could command autonomous robots programmed to search out and kill protest leaders or strike at groups of congregating civilians in times of unrest. The development and proliferation of autonomous offensive weapons would give them something no tyrant in history has had: an army that would never refuse an order, no matter how immoral.

Some of these arguments rest on the belief that autonomous weapons cannot meet the requirements of international law. As Matt and Ken acknowledge, others rest on more fundamental philosophical objections to giving machines the power to decide whether human beings live or die. As this debate continues, we may find that the strongest objections to lethal autonomy will come not from lawyers, but from lay people, including professional soldiers, who place moral concerns about loss of humanity above all else.

For those who share such concerns, there are still questions about what should be done. Some believe, as do I, that preemptive prohibition of lethal autonomy is possible and desirable. Others argue that such systems are inevitable, and that we should therefore construct a legal regime that regulates their use, perhaps through a treaty requiring them to be programmed to respect the laws of war.
Matt and Ken seem to be proposing a third, and weaker, alternative: "to embed evolving internal state standards into incrementally advancing automation." If I read that correctly, they are hoping that the Pentagon will develop its own internal guidelines that incorporate legal requirements into autonomous systems; that other advanced militaries will do the same; and that everyone will talk to each other and cooperate to avoid the nightmare future of Terminator drones that critics fear.

If that's their position, I admire their idealism. But in the real world, when Russia and China start developing autonomous systems, I doubt their generals will be eager to share codes and trade notes with the US and NATO about how to build in "the usual requirements to be met on the law of distinction and proportionality." Far more likely is the scenario Matt and Ken describe in their longer paper in Policy Review: The U.S. "would find itself . . . facing a weapons system on the battlefield that conveys significant advantages to its user, but which the United States would not deploy itself because it does not believe it is a legal weapon." That happens to be the classic argument for when it is in the U.S. interest to propose a prohibitive treaty!

Devising a prohibitive regime will not be easy, but I don't agree with Matt and Ken that it would be infeasible. They are probably correct that autonomous systems will advance incrementally, but that does not mean we won't be able to identify an "ascertainable break point between the human-controlled system and the machine-controlled one." I'm sure that the US military and others understand the difference between having a human being make life-or-death decisions and having a machine do so. And while it's true that the technologies at the heart of autonomous robotic systems will be similar to those used in perfectly acceptable civilian applications, that's also true of, say, the technology for blinding lasers, which are banned internationally.

Fortunately, the Pentagon has begun to grapple with the challenge. Two days after Human Rights Watch released its report on lethal autonomy, DOD issued its first directive on the subject, which prohibits, for now, the use of fully autonomous systems to conduct lethal and other kinetic operations. The directive is not the final answer to any of the questions we are debating here. But it does reflect the discomfort that military leaders feel with the notion of delegating war to machines. Indeed, this is one issue (and far from the only one) on which I expect that Human Rights Watch and many members of the U.S. military will agree.

Kenneth Anderson is a professor at Washington College of Law, American University; a visiting fellow of the Hoover Institution; and a non-resident senior fellow of the Brookings Institution. He writes on international law, the laws of war, weapons and technology, and national security; his most recent book, with Benjamin Wittes, is "Speaking the Law: The Obama Administration's Addresses on National Security Law."
Matthew Waxman is a law professor at Columbia Law School, where he chairs the National Security Law Program. He also previously co-chaired the Cybersecurity Center at Columbia University's Data Science Institute, and he is Adjunct Senior Fellow for Law and Foreign Policy at the Council on Foreign Relations. He previously served in senior policy positions at the State Department, Defense Department, and National Security Council. After graduating from Yale Law School, he clerked for Judge Joel M. Flaum of the U.S. Court of Appeals and Supreme Court Justice David H. Souter.
