Thinking About Risks From AI: Accidents, Misuse and Structure
World leaders have woken up to the potential of artificial intelligence (AI) over the past year. Billions of dollars in governmental funding have been announced, dozens of hearings have been held, and nearly 20 national plans have been adopted. In the past couple of months alone, Canada and France launched the International Panel on AI (IPAI), modeled on the Intergovernmental Panel on Climate Change, to examine the global impacts of AI; the U.S. Congress created a National Security Commission on the subject; and the Pentagon tasked one of its top advisory bodies with devising ethical principles for its use of AI.
“AI” broadly refers to the science and technology of machines capable of sophisticated information processing. Current applications include face recognition, image analysis, language translation and processing, autonomous vehicles, robotics, game-playing, and recommendation engines. Many more applications are likely to emerge in the coming years and decades. These advances in AI could have profound benefits. To take just a few examples, they could save lives through advances in early disease diagnosis and drug discovery, or help protect the environment by enhancing the monitoring of ecosystems and optimizing the design and use of energy systems.
But any technology as potent as AI will also bring new risks, and it is encouraging that many of today’s AI policy initiatives include risk mitigation as part of their mandate. Before risks can be mitigated, though, they must first be understood—and we are only just beginning to understand the contours of risks from AI.
So far, analysts have done a good job outlining how AI might cause harm through either intentional misuse or accidental system failures. But other kinds of risk, like AI’s potentially destabilizing effects in important strategic domains such as nuclear deterrence or cyber, do not fit neatly into this misuse-accident dichotomy. Analysts should therefore complement their focus on misuse and accidents with what we call a structural perspective on risk, one that focuses explicitly on how AI technologies will both shape and be shaped by the (often competitive) environments in which they are developed and deployed. Without such a perspective, today’s AI policy initiatives are in danger of focusing on both too narrow a range of problems and too limited a set of solutions.
Misuse Risk and Accident Risk From AI
Dividing AI risks into misuse risks and accident risks has become a prevailing approach in the field. This is evident in introductory discussions of AI, as well as in comments by thoughtful scholars and journalists, which have offered useful perspectives on potential harms from AI.
Misuse risks entail the possibility that people use AI in an unethical manner, with the clearest cases being those involving malicious motivation. Advances in speech, text and image generation, for example, have enabled the creation of “deepfakes”—realistic but synthetic photos, videos or audio—that could be used for activities like political disruption or automated phishing attacks. Advances in drone hardware, autonomous navigation and target recognition have stimulated fears of a new kind of mobile improvised explosive device (IED). Those who focus on such misuse risks emphasize AI’s dual-use nature—the appreciation of which has started conversations about prepublication review in AI research and other measures that could prevent unethical actors from obtaining or using powerful AI capabilities.
Accident risks, in contrast, involve harms arising from AI systems behaving in unintended ways. A prototypical example might be a self-driving car collision arising from the AI misunderstanding its environment. As AI scales in power, analysts worry about the potential costs of such failures—AI is increasingly being embedded in safety-critical systems such as vehicles and energy systems—and about the difficulty of anticipating the failure modes of complex, opaque learning systems. To better anticipate and prevent such failures, those working on accident risks have placed special emphasis on making algorithms and systems more interpretable (meaning, roughly, that humans can gain insight into why an algorithm behaves as it does) and more robust, so that they do not fail catastrophically when errors or new circumstances arise.
The Need for a Structural Perspective on Technological Risk
While discussions of misuse and accident risks have been useful in spurring discussion and efforts to counter potential downsides from AI, this basic framework also misses a great deal. The misuse and accident perspectives tend to focus only on the last step in a causal chain leading up to a harm: that is, the person who misused the technology, or the system that behaved in unintended ways. This, in turn, places the policy spotlight on measures that focus on this last causal step: for example, ethical guidelines for users and engineers, restrictions on obviously dangerous technology, and punishing culpable individuals to deter future misuse. Often, though, the relevant causal chain is much longer—and the opportunities for policy intervention much greater—than these perspectives suggest.
Crucially, the accident-misuse dichotomy obscures how technologies, including AI, often create risk by shaping the environment and incentives (the “structure” of a situation) in subtle ways. For illustration, consider the question of how technology contributed to the harms of World War I. A prominent interpretation of the origins of WWI holds that the European railroad system—which required speedy and all-or-nothing mobilization decisions due to interlocking schedules—was a contributing factor in the outbreak and scope of a war that, many argue, was largely the result of defensive decisions and uncertainty. While the importance of railroads as a cause of WWI continues to be debated among historians and political scientists, the example illustrates the broader point: Technologies such as railroads, even when they are not deliberately misused and behave just as intended, can have far-reaching negative effects.
To make sure these more complex and indirect effects of technology are not neglected, discussions of AI risk should complement the misuse and accident perspectives with a structural perspective. This perspective considers not only how a technological system may be misused or behave in unintended ways, but also how technology shapes the broader environment in ways that could be disruptive or harmful. For example, does it create overlap between defensive and offensive actions, thereby making it more difficult to distinguish aggressive actors from defensive ones? Does it produce dual-use capabilities that could easily diffuse? Does it lead to greater uncertainty or misunderstanding? Does it open up new trade-offs between private gain and public harm, or between the safety and performance of a system? Does it make competition appear to be more of a winner-take-all situation? We call this perspective “structural” because it focuses on what social scientists often refer to as “structure,” in contrast to the “agency” focus of the other perspectives.
This distinction between structure and agency is most clearly illustrated by the implicit policy counterfactuals on which the different perspectives focus. The misuse perspective, as noted earlier, directs attention to changing the motivations, incentives or access of a malicious individual, while the accident perspective points to improving the patience, competence or caution of an engineer. Both home in on a single “agent” whose behavior, if changed in some way, could significantly reduce the chances of harm. The structural perspective, however, starts from the assumption that in many situations the level of risk would remain largely unchanged even if any one agent behaved differently. As with an avalanche, it may be more useful to ask what caused the slope to become so steep, rather than what specific event set it off.
In short, the potential risks from AI cannot be fully understood or addressed without asking the questions that a structural perspective emphasizes: first, how AI systems can affect structural environments and incentives, and second, how these environments and incentives can affect decision-making around AI systems.
AI’s Effect on Structure
The first question to ask is whether AI could shift political, social and economic structures in a direction that puts pressure on decision-makers—even well-intentioned and competent ones—to make costly or risky choices.
This type of concern about AI’s effects has, so far, been most clearly articulated in the area of nuclear strategic stability. Deterrence depends on states retaining secure second-strike capabilities, but some analysts have noted that AI—combined with other emerging technologies—might render second-strike capabilities insecure. It could do so by improving data collection and processing capabilities, allowing certain states to much more closely track and potentially take out previously secure missile, submarine, and command and control systems. The fear that nuclear systems could be insecure would, in turn, create pressures for states—including defensively motivated ones—to pre-emptively escalate during a crisis. If such escalation were to occur, it might not directly involve AI systems at all (fighting need not, for example, involve any kind of autonomous systems). Yet it would still be correct to say that AI, by affecting the strategic environment, elevated the risk of nuclear war.
It remains to be seen how plausible this scenario really is, though it is illustrative and warrants careful attention. Other illustrations of AI’s potential structural effects are not hard to come by. For instance, analysts and policymakers agree that AI will become increasingly important to cyber operations, and many worry that the technology will strengthen offensive capabilities more than defensive ones. This could exacerbate what scholars have dubbed the “cybersecurity dilemma” by further increasing the already-significant pressure for actors to pre-emptively penetrate each other’s networks and quickly escalate in the event of a crisis, as in the nuclear case. Looking beyond the security realm, researchers have also cited what we would identify as structural mechanisms in linking AI to potential negative socioeconomic outcomes, such as monopolistic markets (if AI leads to increasing returns to scale and thereby favors big companies), labor displacement (if AI makes it increasingly attractive to substitute capital for labor), and privacy erosion (if AI increases the ease of collecting, distributing and monetizing data).
In each of these examples, the development and deployment of AI could harm society even if no accidents take place and no one obviously misuses the technology (which is not to say that outcomes like crisis escalation or privacy erosion could not also be malicious in nature). Instead, AI’s negative impacts would come from the way it changes the nature of the environment in which people interact and compete.
Structure’s Effects on AI
The second question raised by the structural perspective is whether, conversely, existing political, social and economic structures are important causes of risks from AI, including risks that might look initially like straightforward cases of accidents or misuse.
A recent accident involving an AI system—Uber’s fatal self-driving car crash in early 2018—offers a good illustration of why the structural perspective’s emphasis on context and incentives is necessary. When the crash happened, commentators initially pointed to self-driving vehicles’ “incredibly brittle” vision systems as the culprit, as would make sense from a technical accident perspective. But later investigations showed that the vehicle in fact detected the victim early enough for the emergency braking system to prevent a crash. What, then, had gone wrong? The problem was that the emergency brake had purposely been turned off by engineers who were afraid that an overly sensitive braking system would make their vehicle look bad relative to competitors. They opted for this and other “dangerous trade-offs with safety” because they felt pressured to impress the new CEO with the progress of the self-driving unit, which the CEO was reportedly considering cutting due to poor market prospects.
To understand this incident and to prevent similar ones, it is important to focus not just on technical difficulties but also on the pattern of incentives that was present in the situation. While increasing the number and capability of the engineers at Uber might have helped, the risk of an accident was also heightened by the internal (career) and external (market) pressures that led those involved to cut corners on safety. Technical investments and changes, in other words, are not sufficient by themselves—reducing safety risk also requires altering structural pressures. At the domestic level, this is often done through institutions (legislatures, agencies, courts) that create regulation and specify legal liability in ways that all actors are aware of and, even if begrudgingly, agree upon. In practice, the biggest obstacle to such structural interventions tends to be a lack of resources and competence on the part of regulatory bodies.
At the international level, however, the problem is harder still. Not only would countries need a sufficiently competent regulatory body, but, unlike in the domestic case, they also lack an overarching legitimate authority that could help implement some (hypothetical) optimal regulatory scheme. For example, in thinking about whether to embed degrees of autonomy in military systems, policymakers such as Bob Work are well aware that AI systems carry significant accident risk. But these systems also come with certain performance gains, such as speed, and in highly competitive environments those performance gains could feel essential. As Work put it: “[T]he only way that we go down [the autonomous weapons] path, I think, is if it turns out our adversaries do and it turns out that we are at an operational disadvantage because they’re operating at machine speed.” Safety, in short, is not just a technical challenge but also a political one.
Implications for AI Policy Initiatives
As current and future AI policy initiatives attempt to understand and balance AI’s potential benefits and downsides, questions about structural dynamics should be a central part of the agenda. They will by no means be easy to answer: The impact of technological change, much like its direction and pace, is very hard to predict. And even though we have focused on risk in this post, the structural perspective also opens up a new category for thinking about potential benefits from AI that scholars and practitioners should explore.
It will take time and effort to tackle these kinds of questions, but that is all the more reason to start thinking now. Two main things can be done today to help speed up this process.
First, the community of people involved in thinking about AI policy should be expanded. Currently, those focused on risks from misuse emphasize the need to draw lessons from experts in other dual-use fields such as biotechnology, whereas those focused on accident risks look toward machine learning scientists and engineers. Many initiatives also include ethicists, given the ethical considerations that frequently arise when decisions are made about and by AI systems.
A structural perspective suggests these groups should be joined by social scientists and historians, many of whom spend much of their careers thinking about how bad outcomes—from climate change to segregation to war—can come about without anyone necessarily wanting or intending them to. Structural causes of risk cannot be understood, or addressed, without this expertise. Any increase in demand, though, also needs to be matched by an increase in supply. With some notable exceptions, especially within economics, social scientists have been slow to pay attention to AI and other emerging technologies. This is unfortunate, because they clearly have much to contribute, and also to learn. They will need to collaborate closely with technical experts, for example, to understand the strategic properties and consequences of AI systems.
Second, more time should be spent thinking about the possibility of creating or adapting collective norms and institutions for AI. Many current “AI for good” efforts focus on single companies or countries, as is appropriate for risks such as accidental algorithmic bias. Many other significant risks from AI, though, cannot be addressed through unilateral action. When the structure of a situation encourages or impels risky action, focusing on structure—shifting people’s behavior in a coordinated way—is often the most, and sometimes the only, effective policy direction.
The creation of norms and institutions is, of course, no easy feat. This work requires a sufficiently common definition of a problem, consensus on where to draw lines, technical means to monitor those lines and the political means to credibly punish noncompliance. Even if these conditions are met, moreover, success is not guaranteed. But the fact that there are so many difficulties is all the more reason to start thinking about these problems today, at a time of relative calm and stability. It would be most unfortunate if, once risks become more imminent, it is necessary to deliberate not only about solutions but also about the process of deliberation itself.
***
The idea that many of the risks from AI have structural causes is a sobering one: It implies that solving these problems will require collective action both domestically and internationally, which has always been a difficult problem—especially on the international stage. Yet at several points in history, even tense ones, nations managed to find ways to stave off (at least for a while) the unintended and destabilizing effects of emerging technologies, from the Anti-Ballistic Missile Treaty to the Montreal Protocol. Such cooperation becomes possible when leaders realize that structural risks are also collective risks, and that there are therefore mutual gains to be had from working hard to understand and address them—even if those involved otherwise see each other as competitors. The fact that nations’ fates are fundamentally interlocked is a source of complexity in the governance of AI, but also a source of hope.
We thank our many colleagues who contributed to these ideas, including helpful input from Emefa Agawu, Amanda Askell, Miles Brundage, Carrick Flynn, Ben Garfinkel, Jade Leung, and Michael Page, and OpenAI and the Future of Humanity Institute for institutional support.