
What AI Labs Can Learn From Independent Agencies About Self-Regulation

Nicholas A. Caputo
Monday, October 28, 2024, 10:48 AM
Frontier AI labs have teams dedicated to the public good, but unless those teams are independent, they will be largely ineffective.


Nine years ago, OpenAI was founded as a nonprofit for the “benefit of all humanity.” The organization installed a board of directors to safeguard this mission, even after the lab shifted to a “capped profit” structure in 2019 to attract capital. On Nov. 17, 2023, OpenAI’s board of directors voted to remove then-CEO Sam Altman from his position on the basis that he was “not consistently candid” in his communications with the board, which—in their view—threatened the lab’s fundamental mission. But just days later, Altman was back as CEO and that board was out, replaced in time by a new set of directors who seem less worried about catastrophic harms from artificial intelligence (AI). Now, as it seeks to raise $6.5 billion at a valuation of $150 billion, OpenAI is aiming to do away with the unique institutional structure that allowed its first board to remove Altman in favor of a more conventional corporate form (likely a public benefit corporation) that will allow it to raise more money and give Altman equity.

OpenAI and others in Silicon Valley have argued that self-regulation (or waiting for regulation by Congress, likely the same thing) is the best way forward for the industry. These arguments were especially prominent in their response to California’s SB 1047, the first major bill in the country aimed at regulating frontier AI, which Gov. Gavin Newsom (D) recently vetoed based in part on such claims. But if advocates of self-regulation want to be taken seriously, they must learn from the failure of the nonprofit board model at OpenAI and do more to insulate those charged with ensuring that AI benefits everyone from the pressures of the profit motive and of a company’s ordinary hierarchy. Removing internal governance is not the path to effective self-regulation.

Instead, relatively simple modifications of existing internal structures in labs such as OpenAI, modeled on the protections given to independent agencies in American government (such as removal protections and incentive realignment), could go a long way toward that goal. Shortly after OpenAI’s plan to change to a for-profit public benefit corporation was initially reported, OpenAI established an independent board oversight committee for safety and security. But independent safety functions, such as independent oversight committees, have proved ineffective when they operate solely at the board level, and the shift to a for-profit structure would likely undermine the oversight that even an independent committee of OpenAI’s nonprofit board could exercise.

Internal governance must happen at two levels: the board level and the team level within the company. Any significant self-regulation begins at the top and requires vesting power in some branch that can stand up to or override the CEO of a company—though the OpenAI example shows even that might not be enough. A nonprofit board given power over for-profit functions is one example of such a setup, but there are others. For example, Anthropic, another leading lab that has published cutting-edge models in its Claude series, hired lawyers and law professors to design its Long-Term Benefit Trust, which sits atop the company’s public benefit corporation structure with the goal of “developing and maintaining advanced AI for the long-term benefit of humanity.” Over time, the trust—whose members have no financial interest in the company—will take on increasing power until it is able to determine the trajectory of the lab as a whole.

Harvard Law School’s Roberto Tallarita, discussing OpenAI’s structure and Anthropic’s Long-Term Benefit Trust, writes that “[b]oth structures are highly unusual for cutting-edge tech companies. Their purpose is to isolate corporate governance from the pressures of profit maximization and to constrain the power of the CEO.” Tallarita is skeptical that these changes will successfully tame the profit motive, and his skepticism seems to have been borne out in OpenAI’s nonprofit board fight. He explains that unless there is a power rooted in the lab’s fundamental structure that can counterbalance the CEO, any internal team or structure will be swept away. Since the replacement of OpenAI’s original board, almost half of the researchers working on AI safety have left, and the teams they worked on have been significantly dismantled.

So, given the checkered history of existing self-regulatory structures in AI labs, what should be done? Top-level governance, whether in the form of a board or a trust or some other structure, must remain intact. But it needs to be supplemented by internal safety structures that are insulated from the direct control of the CEO and that report directly to the board or similar entity. The independent agency model, developed in administrative law, provides a useful example of how internal corporate governance at AI labs could be reformed to ensure that safety is put first, as the charters of the leading labs indicate it should be.

Independent agencies such as the Federal Reserve and the Federal Trade Commission serve the public benefit while maintaining a degree of insulation from the executive branch that allows them to prioritize their mission over the executive’s short-term gain. If the Federal Reserve were under the direct control of the president, for example, it might be used to run the economy hot during every election cycle to generate good headlines, even if doing so damaged the country’s long-term economic health. The independence of these agencies derives from a combination of appointment and removal protections, incentive structures that lead them to prioritize goals other than the benefit of the party in power, and a culture of professional mission. The Federal Reserve, for example, is funded not by congressional appropriations but by interest on government securities it has acquired; its leadership enjoys for-cause removal protections and staggered 14-year terms that prevent any individual president from reshaping its composition to bring it into line with the president’s wishes. Protections of this kind, along with informal or professional norms and conventions of independence, give these agencies the wherewithal to pursue their goals even when the executive puts pressure on them to act differently.

AI labs have internal teams that could be converted into some sort of “independent within the lab” structure based on the independent agency model. OpenAI’s Preparedness Team and Safety Advisory Group (SAG), which are integrated into its organizational chart and interact closely with other functions within the lab, present a useful example of how such modifications could occur. The Preparedness Team is charged with researching and monitoring the increasing capabilities of models across a set of potentially dangerous fields, such as cybersecurity and biosecurity, as the lab moves to develop and release new AI systems. The team reports to the SAG as it performs its evaluations and can fast-track a report to the SAG chair if it discovers a dangerous capability. The SAG then decides whether to send these cases to OpenAI leadership and the board of directors and can recommend measures to mitigate whatever risks are presented. Leadership then chooses how to respond, subject to veto and alternative mandates from the board. The new independent safety and security committee of the board likely slots into the top of this structure, providing dedicated monitoring and expert recommendation capacity to support safety in the lab.

So far, so good. But the Preparedness Team framework implicitly relies on a degree of independence between the Preparedness Team/SAG and the leadership of OpenAI in order to function. The structure of these groups, however, does not guarantee such independence. Currently, the SAG’s members and chair are appointed by OpenAI leadership (which the Preparedness Framework indicates means the CEO), in consultation with the board. Membership in the SAG rotates yearly, leaving little room for institutional memory to build up. The framework enumerates no removal protections for members of the SAG or the Preparedness Team. The board of directors relies on information from the Preparedness Team and the SAG to know whether to take any mitigation measures. If the Preparedness Team decides there is a risk, but the SAG decides it is not a real problem, the SAG can simply recommend that no steps be taken to mitigate the issue. In other words, the CEO has full control over the composition of the groups charged with overseeing model safety, and the board relies on these groups that the CEO controls for the information it needs to counterbalance the CEO.

Despite their current shortcomings, these teams represent a promising foundation that could be further developed by incorporating lessons from independent agencies. The Preparedness Team and the SAG could be modified to include removal protections for their members and appointment through processes such as board ratification rather than simply consultation. Furthermore, a pot of funds that does not depend on the continued growth of the company could be set aside to pay for the Preparedness Team and SAG functions to ensure that members of these groups do not feel that standing up to the CEO and other company leadership might threaten their own jobs and livelihoods. The board would represent a kind of authorizing source of power for these newly independent structures within OpenAI and continue to ensure that they are performing their duties effectively. Other labs, such as Anthropic and Google DeepMind, could create similar structures to ensure that their own advances are not unduly dangerous.

Such independent structures within companies would add a significant set of complications to the internal operations of frontier AI labs. In the parlance of administrative law, they might reduce “energy in the executive” if decision-makers face resistance or obstacles put in place by the safety teams. Every extra bit of friction, and every resource spent on safety, reduces what can be expended elsewhere on delivering the enormous potential benefits of AI. At least since Seila Law v. CFPB, the trend in the law has been toward the rejection of independent agencies as legitimate constraints on executive power on essentially these grounds, and the administrative state as a whole is increasingly under fire across a variety of fronts.

But these are the costs of regulation. The executive branch of the government still serves the greater good of the American people, and decisions about how to allocate power among the branches are about how to most effectively serve that goal. In AI labs, by contrast, the aims of the good-of-all functions and the profit-motivated functions diverge, and protecting the former best serves the public good. Even if the institutions outlined above were put in place in frontier labs, some form of external regulation would still be necessary. Democratic oversight through responsible government remains a better way to promote the “benefit of humanity” than relying on internal regulation. However, the independent agency model still presents a useful example of how AI labs could better ensure that the general interest of humanity does come first, if in fact they are as dedicated to putting it first as their various charters and public statements suggest.

AI labs and the people who run them should be applauded for their innovative attempts to improve corporate governance for the broad public good. Other tech companies have been much more resistant to any kind of regulation, self- or otherwise, that reduces the extent to which they can pursue profit and growth. For these AI labs to have taken such significant steps toward the public benefit before they even had products suggests real seriousness about what AI is and how to ensure that it is developed and deployed for the benefit of all. But as the technology has proved to be increasingly powerful and valuable, the allure of rising valuations seems to have undermined the spirit of serious self-regulation that was so promising. The point of real governance institutions is to stand up to exactly this kind of test, but right now they may be failing.

If frontier AI labs, working on new technologies that they themselves have often loudly claimed (though less often recently) to be dangerous, want to be left alone to use self-regulation as the tool for the governance of AI, they must show that they have built strong enough internal institutions to protect the public good—even when those institutions seem to stand in the way of clear benefits and massive profits. Drawing on the experience of independent agencies, it is possible to pick out a path forward that would give internal corporate governance enough power to ensure that safety and the public good come first, but significant internal protections would have to be put in place. Even then, institutions alone might not be enough, as the OpenAI board fight showed. It will be interesting to see whether the labs choose to take the kinds of measures necessary to live up to their great promise.


Nicholas A. Caputo is a legal researcher at the Oxford Martin AI Governance Initiative where he works on domestic and international regulation of AI as well as how AI and the law can inform and shape each other. Nicholas works on AI regulation across different areas of the law, with particular focus on legislative and administrative regulation, corporate governance, and the ways in which advanced technology will reshape fundamental rights. He has experience with strategic litigation and policy advocacy in the United States and in the European Union. Prior to taking up his position at the Initiative, Caputo graduated with honors from Harvard Law School where he focused on law and technology, constitutional law, and public international law. Before law school, he taught debating in China for two years during the height of the COVID-19 pandemic.
