Readings: Bryant Walker Smith, "Automated Vehicles are Probably Legal in the United States"
Bryant Walker Smith (a fellow at Stanford's Center for Internet and Society) has authored a new CIS White Paper on whether self-driving cars are, or can be, legal in the United States. His answer is ... Automated Vehicles are Probably Legal in the United States.
This is less obvious and more legally complicated than you might have thought, particularly once you break down "driving" into a series of discrete activities that can be apportioned between human and machine (e.g., braking the vehicle, setting the speed, maintaining distance to the vehicle ahead, parking, etc.). This promises to be the baseline paper on the evolution of self-driving cars as a matter of regulation and law in the United States, and it deserves wide reading by anyone seeking to understand just how far "into the weeds" regulation of robotic technology might have to go in order to offer not just lofty principle in the abstract, but concrete guidance for highly particular activities. (Smith offers an excellent summary of his work in a new article in Slate, here.)
Why care about this from a national security perspective? I've posted the report as a Lawfare Reading because the issues raised by self-driving cars turn out to bear a surprising number of similarities (or perhaps better, parallels or family resemblances) to those raised by autonomous weapons. This is not to underplay the profound differences; weapons are intended to kill and cars are not. But it is hard not to be struck by the ways in which the actual regulation of actual machine systems rapidly moves, in each case, from abstract principle to granular and discrete decisions. These decisions with respect to highly particular activities - setting the parameters of sensors to distinguish a pedestrian from a vehicle, for example, or an RPG-carrying fighter from a child, for another - establish the decision parameters for the system as a whole. (Over at Volokh Conspiracy, I discuss some of these comparisons at greater length. The granular, incremental regulatory approach to autonomous weapon systems is one that Matthew Waxman and I have urged in an essay in Policy Review, "Law and Ethics for Robot Soldiers," in a brief summation of that argument in the Hoover Institution's Defining Ideas, and here at Lawfare; it is also the approach embraced by the Department of Defense's recent Directive, "Autonomy in Weapon Systems.")
Smith's discussion in the White Paper of how some of these discrete, granular activities in the case of self-driving vehicles can be characterized as a matter of vehicle code law helps one see the level of granularity at which decisions about the automation of weapons will have to be made. It's not to say, of course, that one is the other, or that the way in which we solve these problems for automobiles is how we will solve them for weapons. But it is to say that it's a useful and bracing exercise to see how these things are getting worked out in an adjacent field of robotics. (The report abstract is below the fold; the report is available in PDF, Kindle, or hardcopy.)
This paper provides the most comprehensive discussion to date of whether so-called automated, autonomous, self-driving, or driverless vehicles can be lawfully sold and used on public roads in the United States. The short answer is that the computer direction of a motor vehicle’s steering, braking, and accelerating without real-time human input is probably legal. The long answer, contained in the paper, provides a foundation for tailoring regulations and understanding liability issues related to these vehicles.

The paper’s largely descriptive analysis, which begins with the principle that everything is permitted unless prohibited, covers three key legal regimes: the 1949 Geneva Convention on Road Traffic, regulations enacted by the National Highway Traffic Safety Administration (NHTSA), and the vehicle codes of all fifty US states.

The Geneva Convention, to which the United States is a party, probably does not prohibit automated driving. The treaty promotes road safety by establishing uniform rules, one of which requires every vehicle or combination thereof to have a driver who is “at all times ... able to control” it. However, this requirement is likely satisfied if a human is able to intervene in the automated vehicle’s operation.

NHTSA’s regulations, which include the Federal Motor Vehicle Safety Standards to which new vehicles must be certified, do not generally prohibit or uniquely burden automated vehicles, with the possible exception of one rule regarding emergency flashers.

State vehicle codes probably do not prohibit—but may complicate—automated driving. These codes assume the presence of licensed human drivers who are able to exercise human judgment, and particular rules may functionally require that presence. New York somewhat uniquely directs a driver to keep one hand on the wheel at all times. In addition, far more common rules mandating reasonable, prudent, practicable, and safe driving have uncertain application to automated vehicles and their users. Following distance requirements may also restrict the lawful operation of tightly spaced vehicle platoons. Many of these issues arise even in the three states that expressly regulate automated vehicles.

The primary purpose of this paper is to assess the current legal status of automated vehicles. However, the paper includes draft language for US states that wish to clarify this status. It also recommends five near-term measures that may help increase legal certainty without producing premature regulation. First, regulators and standards organizations should develop common vocabularies and definitions that are useful in the legal, technical, and public realms. Second, the United States should closely monitor efforts to amend or interpret the 1969 Vienna Convention, which contains language similar to the Geneva Convention but does not bind the United States. Third, NHTSA should indicate the likely scope and schedule of potential regulatory action. Fourth, US states should analyze how their vehicle codes would or should apply to automated vehicles, including those that have an identifiable human operator and those that do not. Finally, additional research on laws applicable to trucks, buses, taxis, low-speed vehicles, and other specialty vehicles may be useful. This is in addition to ongoing research into the other legal aspects of vehicle automation.
Kenneth Anderson is a professor at Washington College of Law, American University; a visiting fellow of the Hoover Institution; and a non-resident senior fellow of the Brookings Institution. He writes on international law, the laws of war, weapons and technology, and national security; his most recent book, with Benjamin Wittes, is "Speaking the Law: The Obama Administration's Addresses on National Security Law."