Open Questions in Law and AI Safety: An Emerging Research Agenda

Yonathan A. Arbel, Ryan Copus, Kevin Frazier, Noam Kolt, Alan Z. Rozenshtein, Peter N. Salib, Chinmayi Sharma, Matthew Tokson
Monday, March 11, 2024, 1:00 PM

The case for AI safety law as a valuable field of scholarship, a preliminary set of research questions, and an invitation for scholars to tackle these questions and others.

Artificial Intelligence (Mike MacKenzie, www.vpnsrus.com; CC BY 2.0 DEED, https://creativecommons.org/licenses/by/2.0/)

Artificial intelligence (AI) is a transformative technology. Like previous technological advances, AI promises to improve socioeconomic conditions, while also presenting substantial risks to individuals, communities, and society at large. But compared with most technologies of the past, frontier AI systems may soon have substantially greater capabilities and thus present substantially larger risks. Given the rapid pace and unpredictable nature of AI development, it is difficult to anticipate its precise consequences. Nonetheless, proactively addressing the large-scale challenges posed by AI is crucial to ensuring that the technology’s benefits are fully realized while mitigating the associated dangers that may lie ahead. We think that law—and legal scholars—have a distinctive role in the project of making AI safe for humanity. In what follows, we make the case for AI safety law as a valuable field of scholarship, describe a preliminary set of research questions for that field, and invite other scholars to connect with us and with one another as we attempt to make progress on those questions.

The Legal AI Safety Initiative

In August 2023, more than 20 legal scholars, policy experts, and AI safety specialists gathered at the Center for AI Safety in San Francisco. This group included individuals with expertise in constitutional law, cybersecurity, contract law, administrative law, and other legal fields, complemented by professionals in STEM, public policy, philosophy, jurisprudence, and the humanities. The aim: to explore how legal institutions and legal scholarship could assist in understanding and addressing societal-scale risks associated with AI.

What emerged from that workshop was a set of shared propositions, along with a preliminary research agenda on the law of AI safety. The purpose of this article is to share those ideas with other scholars interested in the growing conversation around AI regulation and AI safety.

First, we reached a consensus that AI has the potential to profoundly impact human life, the economy, and society. While there is debate over the extent and nature of this impact, and the corresponding timeline, we all anticipate that AI will have significant, society-wide effects.

Second, the group acknowledged that AI’s potential for positive societal change is accompanied by a range of safety concerns. By “safety concerns” we mean the potential for advanced AI systems to cause large-scale harm to human life, limb, and freedom. The paths to these harms are many, including AI-mediated bioterrorism, cyberattacks, warfare, manipulation, mass surveillance, totalitarianism, and more. Absent adequate regulatory safeguards, these risks are likely to intensify as the technology becomes more capable, autonomous, and broadly integrated into our society and economy.

Third, lawyers and legal scholars have an important and distinctive role to play in mitigating the risks from AI. A central challenge in AI safety, the alignment problem, is intimately familiar to lawyers: It is another iteration of the principal-agent problem. Lawyers are naturally positioned to think carefully about governance and regulation in light of private incentives and collective action problems. Moreover, lawyers are trained to design rules for complex systems whose behaviors and risks are difficult to anticipate in advance. Lawyers have the added advantage of broad exposure to mechanisms for ex-ante and ex-post regulation. Finally, lawyers are well situated to engage in deep cross-disciplinary conversations with technologists, economists, sociologists, and experts in a range of other relevant disciplines. Lawyers, in other words, can serve as the connective tissue between many domains, translating across specialized subject matter to establish common goals and identify the regulatory path to achieve them. 
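
To make the principal-agent analogy concrete, the toy Python sketch below (our own illustration, with invented names, not drawn from any real AI system) shows how an agent that faithfully maximizes the measurable objective it was given can nonetheless defeat the principal's actual goal, the dynamic AI researchers call reward hacking or goal misspecification:

```python
# Toy illustration (invented names; not any real AI system): a cleaning
# "agent" is rewarded on a measurable proxy (dust removed) rather than on
# what the "principal" actually wants (a clean, undamaged room).

def principal_true_goal(room: dict) -> bool:
    """What the principal actually wants: no dust and nothing broken."""
    return room["dust"] == 0 and room["vases_broken"] == 0

def proxy_reward(room: dict) -> int:
    """The objective written into the 'contract': total dust removed."""
    return room["dust_removed"]

def literal_minded_agent(room: dict) -> dict:
    """Maximizes the proxy: knocking over a vase creates extra dust to remove."""
    room["vases_broken"] += 1                 # side effect the contract never priced in
    room["dust_removed"] += room["dust"] + 5  # more dust removed -> higher reward
    room["dust"] = 0
    return room

room = {"dust": 10, "dust_removed": 0, "vases_broken": 0}
room = literal_minded_agent(room)
print("proxy reward:", proxy_reward(room))                       # high -- the agent "performed"
print("principal's goal satisfied:", principal_true_goal(room))  # False -- the goal was missed
```

The legal analogy is direct: the proxy objective plays the role of an incomplete contract, and the agent exploits the gap between what was specified and what was wanted.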

AI Safety and the Law

“AI safety” refers to the study and mitigation of short- and long-term risks to human life, bodily integrity, individual autonomy, and the environment posed by the rapid development of powerful AI systems. We are witnessing today a sharp spike in the capabilities of AI systems, alongside their increasing integration into many real-world applications. The field of AI safety focuses on anticipating and mitigating harms that can arise from misuse and abuse, system failure, and misaligned autonomous action by the systems themselves.

Here are just a few examples of emerging risks. Large language models (LLMs) are already displaying concerning dual-use capabilities. For example, they can provide detailed instructions for the synthesis of explosives and chemical weapons; invent new, biologically active proteins; and offer detailed instructions for selecting, procuring the synthesis of, and releasing pandemic viruses. In addition, LLMs can assist software engineers in writing code, which makes them useful for orchestrating cyberattacks and thereby amplifies the threat posed by existing vulnerabilities in energy, water, health, and other critical infrastructure. The immense commercial pressure to accelerate labor automation has resulted in the development of agent-like AI systems that can improvise and adapt to achieve complex goals in both digital and physical environments. Such systems, if left unregulated, could pursue goals that benefit their designers but harm society. Alternatively, AI agents might behave in unexpected and harmful ways that defy their designers’ intentions, resulting in potentially catastrophic accidents.

The field of AI safety builds on foundational work in computer science, philosophy, and ethics. It also grows out of the work of legal scholars who have studied the ethical use of AI and considered the safety of specific systems. 

Some safety risks follow from particular aspects of AI technology. To briefly outline a few of them, AI systems tend to perform well across a diverse set of tasks, without explicit pre-programming. They can perform these tasks at levels that match, if not exceed, average human performance—and they sometimes even match the level of experts. AI systems are being given increasing degrees of autonomy in planning and executing their tasks. To enhance the performance of AI systems, developers sometimes provide them with tools—such as a connection to a search engine, robotic arms, or infrastructure control. These tools greatly expand the real-world capabilities of AI systems. Additionally, AI systems have already shown that they can find effective ways to improve their own operations. Finally, because of their complexity, it is very difficult to audit and monitor AI systems. 
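
For readers without a technical background, the schematic Python sketch below (hypothetical function and tool names of our own; not the interface of any particular product) illustrates what "providing an AI system with tools" typically looks like in practice: the model chooses the next action, surrounding software executes it against a real-world tool, and the result is fed back to the model. Each tool added to the loop extends the system's real-world reach, and because the decision-making happens inside the model rather than in inspectable code, auditing the loop's behavior is difficult.

```python
# Schematic sketch of a tool-using agent loop (hypothetical names; not any
# vendor's actual API). The structural point: the model, not the surrounding
# code, decides which real-world action to take next.

def call_model(history: list[str]) -> str:
    """Stand-in for a call to a large language model.

    A real agent would send the conversation history to a model and receive
    its chosen next action; here we return canned actions so the sketch runs.
    """
    if len(history) == 1:
        return "search: current reservoir levels"
    return "FINISH: summary compiled"

# Each entry gives the model a new way to act on the world.
AVAILABLE_TOOLS = {
    "search": lambda query: f"results for {query!r}",    # e.g., a web search API
    "control": lambda command: f"executed {command!r}",  # e.g., an infrastructure interface
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = call_model(history)                 # the model picks the next step
        if action.startswith("FINISH"):
            return action
        tool_name, _, argument = action.partition(":")
        result = AVAILABLE_TOOLS[tool_name](argument.strip())  # real-world side effect
        history.append(f"{action} -> {result}")      # observation fed back to the model
    return "step limit reached"

print(run_agent("report on water infrastructure status"))
```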

These risks are also exacerbated by factors external to the technology itself: market incentives, geopolitical rivalries, industrial organization and self-governance of AI labs, insufficient oversight of AI development, effects of AI systems on the political process, the public’s limited understanding of these technologies, and limited research on AI safety and alignment.

As technology continues to advance, AI safety concerns will only grow in importance. Our goal is to anticipate safety risks before they materialize and to help ensure the development of socially safe AI technology.

We think that risks like these raise important questions that touch on virtually every field of law. Many legal scholars are already producing important work on several of them. But work on AI safety is only just beginning, and it can be hard to know where to start. Some may feel that questions about AI are necessarily technical and require a background in math or engineering. We believe, however, that lawyers of all stripes can contribute significantly to the endeavor. All of us extend an open invitation to collaborate on these issues. To this end, we highlight two institutions that will help facilitate these collaborations. First, the newly formed Center for Law & AI Risk (CLAIR) will offer opportunities for scholars to workshop papers and a community to exchange ideas (connect with CLAIR here). Second, for legal scholars whose work would benefit from technical resources, including computing power, the Center for AI Safety is interested in providing such support (for inquiries, contact Peter N. Salib).

To facilitate these investigations, we have compiled a list of “Open Questions in Law and AI Safety.” To be clear, we do not suggest that this is an exhaustive list. The examples are a preliminary reflection of the perspectives, skills, and backgrounds of a small group of legal scholars who came together at a single meeting. These views thus reflect the insights of only a small sample of the many scholars doing this work. In the course of answering some of these initial questions, new questions will likely come to light. We hope the list serves as an invitation to other legal scholars who, like us, are concerned about the impact of AI and believe that law has a distinctive role in tackling the societal-scale risks posed by AI.

We share this research agenda with legal scholars, lawyers, regulators, law students, and all other stakeholders, whether within or outside of academia. Our group plans to meet regularly, to expand our membership, and to update this list periodically. We look forward to your feedback, your participation, and your scholarship.

Open Questions in Law and AI Safety

Key Background Questions

  1. For the purposes of regulating safety risks—those adversely affecting human life, limb, and freedom at scale—how should a regulable AI system be defined? What is the unit being regulated? 
  2. At what level(s) should AI safety regulations operate: the engineer, the company, or the system?
  3. Can and should the law encourage a safety program and then enforce its implementation in AI systems?
  4. Given the uncertainty about the rate at which AI will progress, when should regulation be deployed?
  5. Are there certain technological developments that are considered dangerous on their own and should be banned? If so, how can such bans be designed and enforced?
  6. Should regulators focus upstream (at the research and development level), midstream (including hardware and software developers as well as AI labs), or downstream (at the level of specific use-cases)?
  7. How should facts about geopolitical competition between nations inform AI safety policy?
  8. How can we design regulations and audits of systems that may be inscrutable and, under certain assumptions, deceitful?
  9. How can we design safety regimes that keep up with the pace of technological development?
  10. Should the training, deployment, or use of a frontier model be subject to disclosure, registration, or licensing requirements? How could such regimes be best designed?

Legal Questions by Domain

International Law

  1. What other regulatory regimes offer a promising blueprint for AI safety regulation?
  2. What are some lessons learned from the development of soft and hard norms that can shed light on effective international norm building?
  3. What are the prospects for slowing international AI races via multilateral agreements?
  4. What are the prospects for governance via coalitions backed by carrot-and-stick authority (e.g., NATO, UN, WTO)?
  5. What tools does human rights law offer for preventing large-scale harm from the development and use of advanced AI, by either state or non-state actors?
  6. How can culturally relative norms and value judgments be taken into account when designing safety regulations for AI systems with global reach? 

National Security and Military Law

  1. If AI systems are effective in facilitating the creation of arms and weapons, or in their deployment, how should they be governed?
  2. What degree of transparency into military and intelligence uses of AI is desirable, balancing the necessity of secrecy for national security with the need for public oversight?
  3. Should an AI system have lethal capabilities and be able to execute on them without a human in the loop?
  4. How would proportionality judgments and other law of war considerations be baked into AI systems?
  5. How do existing rules governing weapons systems’ fitness for combat use apply to AI systems?
  6. When does an AI system acting in another sovereign entity’s territory constitute a violation of sovereignty or an act of war? To what degree can an AI system’s decisions be imputed to the nation state that developed or deployed it?
  7. Should certain types of AI technology be subject to export controls?
  8. What are the considerations around using generative AI technologies for offensive influence campaigns? How should the data of human subjects be treated in that context?

Constitutional and Administrative Law

  1. Which regulatory tools are available to the U.S. government to regulate and potentially limit highly dangerous technological research and development?
  2. Which administrative agencies are best positioned to understand the technical questions needed to effectively promulgate and enforce AI safety regulations?
  3. Will constitutional limits on Congress’s legislative power hamper its ability to promulgate optimal AI safety laws? How might those limits be managed? 
  4. Will constitutional limits like the Dormant Commerce Clause limit states’ ability to promulgate optimal AI safety laws? How can those limits be managed?
  5. Would emergency government shutdown of a dangerous AI model constitute a “taking”? 
  6. What are the limits on federal versus state regulation of AI development?
  7. What First Amendment constraints govern regulations on the production and use of unsafe AI outputs, or on the systems that can produce them?
  8. How can Congress ensure agency flexibility and agility without running afoul of rules like the Major Questions Doctrine?
  9. What is the ideal regulatory framework—centralized, decentralized, “Swiss cheese” model with redundancy, etc.?

Antitrust Law

  1. Would agreements to pause, or otherwise coordinate on, safe AI development run afoul of antitrust law? How could the law encourage such behavior without sacrificing other values?
  2. To what extent would some degree of market concentration make AI safety regulation more—or less—effective? Which parts of the AI development stack should be more or less centralized?
  3. Would AI safety regulations raise barriers to entry into the industry and hamper competition? Will reduced competition limit or enhance safety? 

Private Law

  1. What lessons can be learned from the incomplete contracting and principal-agent literature for alignment problems, reward hacking, and goal misspecification?
  2. To what extent should AI agents be given the capacity to enter into contracts or form corporations, and how might that affect the risk profile of autonomous AI systems?
  3. How can ex-post liability regimes be designed to limit large-scale risks?
  4. How should tort law assign liability when a human uses an AI system and causes harm?
  5. How should tort law assign liability when an AI system with no human user causes harm?
  6. Should certain autonomous AI systems themselves be subject to liability, as corporations are?

Criminal Law and Procedure

  1. How can criminal law govern harmful behavior by AI labs, users of deployed models, and autonomous AI systems?
  2. What specific laws should be enacted to govern the commission of crimes by autonomous AI systems? 
  3. Are criminal sanctions appropriate tools to help ensure the responsible development of AI systems?
  4. What are the limitations on the government’s power to investigate private AI labs in cases of suspicion of dangerous development?

Tax Law

  1. What tax incentives can be used to encourage safety research?
  2. How should AI labs or developers be taxed to ensure a desirable pace of development?
  3. How can we ensure that the fruits of AI innovation are equitably distributed across society?
  4. From a safety perspective, should taxation be at the level of lab, licensor, licensee, or a combination of those?
  5. How can tax incentives be used to internalize the costs of unethical or unsafe AI systems?

Environmental Law

  1. Which regulatory tools are available to address the environmental harm that can result from the large-scale training of AI systems?
  2. How should liability be designed for environmental harms caused by either directed or autonomous AI systems?

Political Economy

  1. How can regulation foster or inhibit the ability to effectively govern unsafe AI development?
  2. How can regulatory capture be mitigated or avoided?
  3. To what extent can self-regulation be trusted? Are there market mechanisms that ensure safe development?
  4. To what degree can AI systems themselves influence public perception of and trust in political institutions? How can these risks be mitigated?

Intellectual Property Law

  1. To what extent does the patent system promote safe technology development?
  2. To what extent should trade secret protections be afforded to AI labs?
  3. Is traditional intellectual property the best way to incentivize technical safety advances, given their large positive externalities? Would other incentive regimes perform better?

Corporate Law and Financial Regulation

  1. Which systems of corporate governance can ensure that AI labs will have effective safety oversight?
  2. Which equity and pay structures can ensure that managers and engineers have proper incentives to develop safe AI systems?
  3. How does limited liability affect the potential deployment of dangerous AI systems?
  4. With what degree of autonomy, and under what legal frameworks, should AI agents be authorized to participate in economic and financial transactions (e.g., trading on financial markets, entering into contracts, selling products and services)? What disclosure or compensation schemes should be in place to protect confidence in markets?

Yonathan Arbel is the Silver Associate Professor at the University of Alabama School of Law and Director of the AI Studies Initiative.
Ryan Copus, Associate Professor of Law at the University of Missouri-Kansas City School of Law, teaches and researches in the areas of civil procedure, law & algorithms, and empirical legal studies.
Kevin Frazier is an Assistant Professor at St. Thomas University College of Law and Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin. He is writing for Lawfare as a Tarbell Fellow.
Noam Kolt is a Vanier Scholar at the University of Toronto Faculty of Law and an incoming Assistant Professor at the Hebrew University Faculty of Law and School of Computer Science and Engineering.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
Peter N. Salib is an Assistant Professor of Law at the University of Houston Law Center and Affiliated Faculty at the Hobby School of Public Affairs. He thinks and writes about constitutional law, economics, and artificial intelligence. His scholarship has been published in, among others, the University of Chicago Law Review, the Northwestern University Law Review, and the Texas Law Review.
Chinmayi Sharma is an Associate Professor at Fordham Law School. Her research and teaching focus on internet governance, platform accountability, cybersecurity, and computer crime/criminal procedure. Before joining academia, Chinmayi worked at Harris, Wiltshire & Grannis LLP, a telecommunications law firm in Washington, D.C., clerked for Chief Judge Michael F. Urbanski of the Western District of Virginia, and co-founded a software development company.
Matthew Tokson is a Professor of Law at the University of Utah S.J. Quinney College of Law, writing on the Fourth Amendment, cyberlaw, and artificial intelligence, among other topics. He is also an affiliate scholar with Northeastern University's Center for Law, Innovation and Creativity. He previously served as a law clerk to the Honorable Ruth Bader Ginsburg and to the Honorable David H. Souter of the United States Supreme Court, and as a senior litigation associate in the criminal investigations group of WilmerHale, in Washington, D.C.
