Tom Malinowski Ups the Game in Lawfare's Discussion of Killer Robots
Ben, Matt, and Ken have posted additional thoughts on our exchange on autonomous lethal robots, and John Dehn has weighed in with new commentary. I’d like to continue the debate.
In his post, John imagines several distinct roles that autonomous robots could play on the battlefield, such as targeting incoming munitions, machines (like tanks and planes), fixed objects, and different categories of human beings. He argues that “one’s view of the morality and legality of ‘fully autonomous weapons’ depends very much upon what function(s) they believe those weapons will perform. Without precision as to those functions, however, it is hard to have a meaningful discussion.”

This is true, but I’m not sure it resolves the problem. Each of us can conceive of situations in which fully autonomous robots might use force ethically and lawfully (certainly against “things,” like incoming missiles, and even, in discrete cases, against people). Each of us, I hope, can also predict scenarios in which allowing them to kill would lead to immoral or unlawful results. We can and should try to describe and distinguish between these categories as clearly as possible. But we must still then decide: Will the world be better off if we try to regulate full lethal autonomy, or would we be wiser, on balance, to preclude it? Deciding what to do is the important part of this debate. Defining terms is just the beginning.

For the United States, it seems to me that there are two possible choices (broadly speaking). The first would be to develop a complex set of internal policies allowing full lethal autonomy under some, but not all, circumstances, and then to try to persuade other countries to adopt similar policies. I’ve argued in other posts that, absent a binding legal regime, it will be virtually impossible to convince other nations to adopt, voluntarily, the same restraints as the U.S., especially if they know that the U.S. could change its policies at any time. But setting that aside, merely writing those policies (if we choose to regulate, not prohibit, lethal autonomy) would be extraordinarily hard.

The difficulty we are having here in defining the functions a robot might perform in combat helps prove my point: if four thoughtful people struggle to be clear in an academic debate on this subject, when we have the luxury of deconstructing complex ideas into simple categories, imagine how much tougher it will be to write precise rules, in advance, for when robots can kill in every situation they might encounter on a dynamic battlefield.

As John says, a future robot might be able to identify a lawful human target “based upon reasonably definable and detectable criteria,” like a military uniform and insignia. He suggests that the Pentagon would approve lethal autonomy in such a case because the robot would not have to “interpret important matters of context.” But soldiers in the real world confront contexts that change, and categories that morph, unpredictably. A robot programmed to expect enemy troops to wear uniforms might instead encounter what to the human eye are children wearing jackets they have scavenged from dead soldiers. An enemy expected to ride around in recognizable tanks and APCs might, in the course of battle, commandeer some pickup trucks, even as civilians flee the scene in pickups of their own, forcing robot soldiers to shut down or to improvise. And those are just some scenarios I can imagine sitting in my living room. Human beings can adapt to such contingencies, applying complex ethical reasoning and learning from experience. We are not nearly as good at foreseeing contingencies, which is a necessary condition for programming machines to meet them alone.

The second possible choice would be to decide that allowing the technology for full lethal autonomy to spread would likely do more harm than good, and thus to try to ban such weapons. There is no guarantee that we would succeed in winning support for a treaty, and a treaty would not guarantee universal compliance. But we have limited the spread of other technologies in this way. And prohibition would be simpler than regulation: we would know where to draw the line---precluding lethal force unless there is a “man in the loop.” The Pentagon has just adopted such a rule. I say, let’s keep it, and make it binding on others.

Note that throughout this discussion, I have emphasized ethical and philosophical considerations in addition to legal issues. This debate is partly about whether fully autonomous robots will meet the Geneva Convention requirements of proportionality and distinction, but it is not solely about that. Consider the scenario (which Ben and John discussed) in which a human being selects an enemy target who can lawfully be killed, and then sends a robot to conduct the strike, allowing it to select, autonomously, the precise time and place, taking into account possible harm to civilians nearby. Such a strike, even if it killed civilians, might meet the proportionality requirements of IHL. The robot, in other words, might reach the same “legal” conclusion in such a scenario as a JAG officer. But let’s remember: proportionality decisions require weighing the relative value of a human life. Would we be comfortable letting robots do that? How would we feel if someone we loved were killed in a military strike, and we were informed that a computer, rather than a person, had decided that their loss was proportionate to the value of a military target? Current law alone may not tell us whether it is right to give machines the power to decide if human beings live or die; that is a moral and political judgment that policymakers will have to make.

We must also consider who is likely to make use of fully autonomous robots that can target specific individuals. Of course, the United States might use them against suspected terrorists. But as I’ve suggested, dictators might see them as a perfect weapon for killing their political enemies, since, unlike with human soldiers, they would never have to worry about robots refusing orders or defecting. Ben is right that international law already would prohibit such nefarious use. But as Matt pointed out, the Assads of this world aren’t deterred by existing law. If we want to avert a future in which terminator drones police dictatorships, we may have to make sure such weapons are not mass-produced and available for export. I think this is possible. For the foreseeable future, only the major industrialized countries will likely have the ability to produce in large numbers the sophisticated weapons platforms we are talking about here---and even Russia and China generally respect binding regimes governing the production and export of weapons.

A final point: we are all striving in this debate to predict how human beings will interact with machines in the very distant future---territory once occupied chiefly by the authors of science fiction. The movies we grew up on portrayed robots in fanciful, sometimes silly ways, and we shouldn’t make them our main frame of reference in this debate.
But we also shouldn’t dismiss science fiction entirely; it is, after all, a medium through which creative people have tried to imagine the moral dilemmas we will face in years to come. It’s interesting that for the last half century, perhaps the chief theme of science fiction has been the danger of giving machines too much autonomy, including in matters of war---from Star Wars, with its droids and clone warriors, to Battlestar Galactica, with its endlessly repeating cycles of humans creating robots who rebel against their creators, to the Terminator films.

Now, to my friends in this debate: I am interested in its policy ramifications. But I am also concerned about you. Because you know how this movie ends. If you don’t listen to reason, and autonomous fighting robots are built, the movies made about our times will assign you one of two possible roles. You will either be straight villains, killed by the robots in an act of poetic justice, or the naïve characters who dismiss warnings about the threat, only to see the error of your ways at the last moment, becoming, at best, sidekicks to the heroes who save humanity. Search your feelings, Ben. You know it to be true.

In fact, I’m surprised you haven’t seen the obvious warnings in the classic films already made about this issue. There are a couple of relevant sequences in Terminator 2, for example. Unfortunately, I could only find them online in a Hindi-dubbed version, but the English subtitles are pretty accurate. Check it out:
All kidding aside, I hope this conversation will continue. It is about many weighty things---the law (about which reasonable people may disagree), but also the proper relationship between humans and machines in a world where fantastical advances in artificial intelligence will give computers the ability to make complex decisions for us. What choices will we reserve for ourselves in such a world, and which ones will we be comfortable delegating? We have time to answer that question. But we had better get it right.