
The Promise and Perils of Nontraditional Corporate AI Governance

Brett McDonnell, Alan Z. Rozenshtein
Tuesday, October 1, 2024, 8:00 AM
Can nonprofit corporate forms help AI labs balance innovation with safety, and, if so, what can the government do to encourage that?
OpenAI Co-Founder & CEO Sam Altman speaks onstage during TechCrunch Disrupt, San Francisco, 2019 (Photo: TechCrunch/Flickr, https://www.flickr.com/photos/techcrunch/48838377432/, CC BY 2.0)


The November 2023 boardroom coup that briefly deposed OpenAI CEO Sam Altman illustrated both the promise and the limits of OpenAI’s unusual governance structure, by which the leading artificial intelligence (AI) laboratory was (and at least for now still is) controlled by a nonprofit board of directors that could (and briefly did) act in ways that threatened the company’s existence, not to mention its bottom line. But the board’s attempt to assert its authority was short-lived. Altman returned as CEO a week after being fired, and those board members who voted for his ouster, including OpenAI co-founder and chief scientist Ilya Sutskever, ultimately left the company.

The Altman saga raises a number of questions about the role of nontraditional governance models—that is, those that depart from normal for-profit corporate governance—in the development of AI. Given the traditional dominance of for-profit corporate forms in the technology industry, the question of whether commercial AI would be better developed under a nonprofit structure would be an academic one but for the striking fact that two of the leading AI labs—OpenAI and Anthropic—have chosen to forgo the typical for-profit corporate route (leading in turn to a growing body of excellent academic commentary). Both have done so because of explicit concerns over AI safety, on the theory that an exclusive focus on profits will cause AI developers to make unsafe choices if doing so brings in more money. Thus, it is worth exploring whether nontraditional corporate governance can achieve the goals it is being asked to serve.

Here we aim to describe the landscape of corporate governance in the AI sector, critically evaluate whether nontraditional governance is a viable solution for the unique risks that AI poses, and offer policy prescriptions that will help nontraditional governance models align AI development with the broader social interest.

The Risks of AI

By their own admission, the two leading AI labs that have chosen not to operate as traditional, for-profit companies have made that decision largely due to concerns over AI safety.

The structure of AI labs is an important policy issue because the development of ever-more sophisticated AI systems has substantial externalities, both positive and negative. On the positive side, AI promises to increase productivity and spur technological innovation. In the most utopian forecasts, it could usher in an era of post-material abundance. Society should thus want to encourage those positive innovation effects as much as possible.

On the negative side, AI risks substantial social costs. Some of these are relatively small and localized, such as the harms that a particular AI system could cause an individual—for example, an AI system that provides bad health advice to a user or defames a third party. Some are medium scale, such as the risk that AI will be used to spread disinformation and propaganda at scale, to hypercharge surveillance, or to accelerate job losses. And at the extreme end, AI poses a range of “existential” threats, whether by enabling bad actors to develop weapons of mass destruction or by autonomous AI agents themselves acting to harm humanity as a whole.

Traditional regulation may struggle to address the threats that AI poses. The usual expertise gap between regulators and the regulated may be even greater in this new, rapidly evolving field than in other areas. The capture problem may be especially serious insofar as persons outside the field itself do not understand the risks or take them seriously. Since AI research can be done around the world, national regulators may find that AI companies are able to elude their grasp. Perhaps worst of all, governments themselves may become the most dangerous actors if they enter a literal arms race, given AI’s obvious military implications. Governments hungry to harness the power of AI may have little incentive to regulate its more destructive applications.

What Nontraditional Corporate Governance Can Accomplish

Because the unique and potentially catastrophic risks of AI make conventional regulation hard, we might hope that self-regulation by the companies developing AI can help address those risks. The goal is to align the interests of companies and their managers with the social goals of realizing AI’s potentially amazing benefits while avoiding its potentially catastrophic risks.

Unfortunately, traditional for-profit corporations seem ill-suited to engage in adequate self-restraint in avoiding social risks. If a decision involves a trade-off between safety and profit, the shareholder wealth maximization norm of U.S. corporate law (at least in some states, above all Delaware, where a majority of large U.S. companies are incorporated) dictates that financial returns to shareholders should prevail. To be sure, the business judgment rule and other doctrines give safety-minded managers significant leeway to consider social risks. But various legal and nonlegal norms and practices nonetheless encourage managers to focus on profits.

Nonprofit corporations, as the name suggests, provide a way to avoid that focus on profits. Instead, they prioritize mission-driven goals, such as advancing social, educational, or charitable causes. To maintain nonprofit status, these organizations must adhere to specific legal requirements, such as refraining from distributing profits to private individuals or shareholders and ensuring that their activities primarily serve a public benefit. Any surplus revenue must be reinvested into the organization’s objectives, reinforcing the focus on long-term societal benefits rather than short-term financial gain.

However, nonprofits have their own limits as an organizational form for companies developing AI. The exclusion of equity investors will put them at a big disadvantage in attracting the large amount of capital required to fund AI research and development, and may also make it harder to attract the best researchers. They could be overly cautious, slowing down the achievement of potentially huge benefits from innovations in AI. Nonprofits also may have a severe accountability problem, as their boards are typically self-perpetuating, with the current board choosing its successors, and they lack the mechanisms of shareholder voting and lawsuits, which provide at least some limits on for-profit boards.

Much attention has recently focused on hybrid legal forms for social enterprises, situated between for-profits and nonprofits. Benefit corporations are the leading new legal form designed to achieve some of the advantages of each type. However, benefit corporations lack strong governance mechanisms to ensure that the profit motive does not take priority over social goals (like avoiding human extinction). They rely on statements of purpose, fiduciary duties, and disclosure to create a commitment to consider public interests, not just profit. But as designed, companies can treat the public benefit as a mere pretext while profit remains their true motive, and none of those mechanisms will do much to stop them, or even slow them down.

Against that backdrop, both OpenAI and Anthropic have experimented with somewhat complicated, bespoke hybrid forms that seem more promising than benefit corporations. Each company has created a for-profit entity that can accept equity investors, but with a nonprofit entity in ultimate control. The OpenAI structure is particularly complicated. The company began as a nonprofit and hoped that donations would provide the capital it needed, but the amount raised wasn’t enough. In response, OpenAI created a for-profit LLC formed under Delaware law in which investors could put in money and receive a financial return, though that return is capped. There are several layers of companies between the nonprofit and the for-profit LLC, including a holding company and a management company. But ultimately, the nonprofit corporation’s board is the final authority over the for-profit LLC, and that board is self-perpetuating.

Anthropic’s structure is not identical, but it is similar and aims at the same basic goal. Anthropic is a Delaware public benefit corporation, which on its own, as we argued above, has little impact. Much more interestingly, it has established a long-term benefit trust with five independent trustees who have “expertise in AI safety, national security, public policy, and social enterprise.” The trust owns a special class of Anthropic shares, with the power to appoint some of the directors of Anthropic’s board. Within four years the trust will elect a majority of the Anthropic board. The trust’s purpose is the same as the benefit corporation’s purpose, namely to responsibly develop and maintain AI for the benefit of humanity.

For both companies, the hope is that the controlling nonprofit can shield the business from a focus on profit that undermines the crucial goal of ensuring its product is safe, while still attracting enough capital to allow the company to be a leader in AI development. This structure shields the nonprofit board, which holds ultimate authority, from pressure from shareholders clamoring for financial returns. Unlike in for-profit corporations, shareholders do not elect the nonprofit directors or trustees, and there are no shareholders to sue for breaches of fiduciary duty. Unlike benefit corporation statutes, this goes to the heart of governance: who has power over decisions and over who those decision-makers are.

Though unusual, the OpenAI and Anthropic governance structures are not completely sui generis. They have analogs with a track record. For instance, nonprofit foundations have owned and operated for-profit enterprises with some frequency in a number of countries. Foundation enterprises are uncommon in the U.S. because tax regulations discourage them, but they are popular in parts of Europe, especially Denmark, where regulations are more encouraging. The empirical evidence on how foundation enterprises function is mixed but fairly positive. Looking at profit and other measures of financial and economic performance, studies generally (though not always) show that they perform as well as comparable ordinary for-profit companies, often with somewhat lower risk and more long-term stability. The evidence on social performance is scarcer but suggests that foundation enterprises do as well as or better than ordinary for-profits in creating social benefits and avoiding harms.

Those who have studied foundation enterprises note that this evidence goes against the conventional understanding among corporate governance academics about the merits of for-profit organizational forms in focusing incentives and ensuring accountability. The directors or managers of foundation enterprises are insulated from shareholders and donors. Their boards are self-perpetuating, and there are no shareholders (or others taking their place) to sue when managers violate their fiduciary duties. This insulation from accountability mechanisms might lead one to think that foundation enterprises will be less efficient and less financially successful, yet that is not what the evidence seems to show. Scholars theorize that the insulation from accountability may allow managers to consider long-term prospects more fully as well as consider stakeholder interests even when doing so may threaten profits. But this insulation may also make it hard to hold boards accountable if they swerve from pursuing their mission due to self-dealing, incompetence, or a misguided understanding of that mission.

OpenAI, Anthropic, and foundation enterprises thus focus on the board and who controls it, and the solution they have come up with is that no one controls the board but the board itself. In this focus on who controls the board, they resemble other potential variations in corporate governance. Forms of stakeholder governance empower stakeholders other than shareholders to choose some or all directors. That could be employees, as in worker cooperatives. Or it could be customers, as in credit unions and insurance mutuals. Or it could be providers of supply inputs, as in agricultural cooperatives. For AI developers, one could imagine AI safety organizations being empowered to appoint some directors. Like OpenAI and Anthropic, these forms remove shareholders’ power to elect (some or all) directors. But whereas these alternatives hand that power to other sets of stakeholders, OpenAI and Anthropic vest it in no one but the board itself, which becomes self-perpetuating.

The Problem of For-Profit Competitors

There is thus some reason to believe that the hybrid structures of OpenAI and Anthropic could achieve a better balance of attracting capital while still retaining some significant focus on safe and responsible development. But even if, for any particular lab, the benefits of nontraditional AI governance outweigh its costs, that’s no guarantee that nontraditional AI labs will be able to make good on their AI safety promises in a world of for-profit competitors. From the perspective of existential—or even broad social—risk, it does no good for OpenAI or Anthropic to move cautiously if peer competitors like Microsoft or Google plow ahead at breakneck speed. AI safety is such a difficult problem because it is one giant negative externality—if one company produces the proverbial superintelligent paperclip maximizer, it will threaten not only that company but humanity as a whole.

Nor is the playing field a level one. For-profit AI companies—with their promises of higher profits and thus greater share prices and dividends—will likely be able to raise more capital, an important ingredient for success given the astronomical costs of data and compute. To be sure, the nonprofit AI labs have raised plenty of money, and OpenAI’s current funding round—a massive, oversubscribed $6.5 billion ask that is one of the largest in history and would value the company at a whopping $150 billion—suggests that investors have appetite even for nonprofit-controlled companies. At the same time, even the current investments OpenAI is raising may be insufficient for future compute costs.

For-profit AI companies may also be able to lure away talented engineers from nonprofit competitors, either through higher compensation or just the promise of being able to develop bigger and better systems faster than anyone else. Even engineers who are not primarily motivated by money and who are concerned about AI risks will be naturally attracted to environments in which they can work on the cutting edge of “technical sweetness,” to borrow a phrase from J. Robert Oppenheimer, the father of the atomic bomb.

Nonprofits are not powerless to fight back, but doing so will likely require them to become more like their for-profit competitors, thus undermining the case for their special corporate structure. An example of these dynamics is OpenAI itself. After Altman was fired from the company, Microsoft quickly swooped in and hired him and OpenAI co-founder Greg Brockman to essentially rebuild OpenAI within Microsoft itself—had Altman stayed at Microsoft, no doubt many of OpenAI’s top researchers and engineers would have followed him there. After Altman returned to OpenAI and the board was reshuffled, Microsoft secured a nonvoting seat on OpenAI’s board (although it has since given it up), demonstrating a shift in the balance of power in favor of the for-profit AI industry. 

Over the past year, as Altman has consolidated his power at OpenAI, the company has increasingly come to look like a traditional Silicon Valley tech company, trying to develop products as quickly as possible and shortchanging its stated commitments to AI safety in the eyes of many insiders. 

Perhaps most dramatically, reporting suggests that OpenAI is planning to abandon its nonprofit form altogether and become a for-profit public benefit corporation in which Altman will have a substantial 7 percent equity stake, despite his previous repeated assurances, including to the U.S. Senate, that he had no ownership of OpenAI. (Altman has since denied the reports of an equity stake for himself, calling the 7 percent figure “ludicrous.”) If OpenAI ultimately does become a for-profit company, it will be a dramatic example of the difficulty nonprofit frontier AI labs face in staying true to their original missions. The benefit corporation status would be a fig leaf—providing little protection against the profit motive overcoming OpenAI’s mission.

Government “Subsidies” for Nontraditional Corporate Governance

Given the obstacles to both conventional regulation and corporate governance, perhaps a combination of the two would be best. Corporate governance could supplement regulation, and regulation could encourage forms of governance that reduce incentives to ignore safety or to abandon nonprofit status. This could involve a variant of responsive regulation, an approach in which government regulators include businesses and stakeholders in a more flexible and dynamic process than traditional rulemaking allows.

Regulators could encourage entities with stronger corporate governance in several ways. Entities with a preferred governance structure could receive lighter-touch regulation. Certain regulatory requirements could be waived or loosened for entities with stronger governance. For instance, if a jurisdiction requires companies to test their products for safety, it could give preferred companies more leeway in designing those tests, or scrutinize their tests less frequently. 

An extreme version of this strategy would be to allow only entities with a preferred structure to develop AI, while those preferred types of entities would still be regulated (i.e., one would not rely entirely, or even mainly, on internal governance as a solution). The suggestion of a federal charter for AI developers provides one way of accomplishing this. If all AI developers were required to be chartered by a federal regulator, that regulator could impose whatever governance requirements it believes would be helpful and also monitor chartered companies and potentially revoke a charter if a company behaves in sufficiently unsafe ways.

Alternatively, companies with stronger governance could get a preference in receiving government contracts to fund development or deployment of AI. Besides such contracts or subsidies, another way governments could guide the private development of AI is through setting up a nongovernmental organization that has intellectual property (trade secrets, copyrights, patents) that companies can access, but only if the companies have proper governance and commit to safety safeguards.

Lighter regulation or subsidization through contracts or access to intellectual property for nontraditional entity types would help somewhat in addressing the problem of for-profit competitors discussed above. Lighter regulation and subsidies could at least level the playing field against competitors with access to more capital and, if strong enough, could even tilt the field in favor of companies with safer governance. At the extreme, if only entities with the appropriate governance structure were allowed to develop AI, then the problem of competition from more profit-oriented companies would be eliminated (at least within the jurisdiction imposing such a restriction—avoidance through moving outside the jurisdiction would remain a problem).

If regulators were to try such a strategy, an important question would be what governance structures would be treated as preferred. The strategy makes sense only if one thinks that a governance structure has some notable effect in discouraging bad risk-taking. At best, the jury remains out on the nonprofit/for-profit mixed structure that OpenAI and Anthropic have experimented with. Indeed, perhaps the greatest risk of nontraditional corporate governance for AI labs is that it will lull regulators into a false sense of security and lead to less government oversight than would be optimal.

Yet, despite the problems illustrated by the Altman fiasco, the structure could still have merit, either as is or perhaps with further experimentation to improve on the weaknesses that have been revealed.

To that point, a governmental role in vetting structures may introduce new possibilities for strengthening accountability and insulating against pressure to weaken safety in pursuit of profit, thereby addressing the concern that alternative governance structures do not really help achieve the safety benefits for which they are designed and hyped. For instance, regulators could insist on government-appointed directors or board observers. This could help improve the internal safety benefits of alternative governance forms, if one believes that as currently structured they are not yet delivering on that promise. As noted above in the discussion of the promise of nontraditional governance, the nonprofit approach relies on self-perpetuating boards, hoping that insulation from profit-seeking shareholders and investors will steel the will of those in control.

Other forms of stakeholder governance look instead to have noninvestor stakeholders play a role in choosing who is on the governing body. Government-appointed directors are one way of doing this, and they address the question of who decides who will represent the public interest. The state is the ultimate body charged with protecting the public, so it is a plausible candidate, though there are plenty of issues with state control of private enterprise. We would not advocate having government regulators appoint a majority of the board for AI companies, just one or a few seats. This could give regulators valuable information and some voice in decisions without giving them full control. This resembles the proposal for giving banking regulators a golden share in systemically significant banks, though that proposal is certainly not without controversy. Instead of government-appointed directors, regulators might look to the inclusion of other stakeholder-chosen directors, such as employee directors or directors chosen by AI safety organizations.

Both discouraging for-profit competitors and possibly adding internal safety governance mechanisms such as government-appointed directors or observers may increase the risk of reduced innovation. That’s a real concern. However, a slower path to the potential utopia that some envision may be a worthwhile price to pay to reduce the possibility of truly existential risk.


Professor Brett McDonnell is the Dorsey & Whitney Chair in Law at the University of Minnesota Law School, where he teaches and writes in the areas of business associations, corporate finance, law and economics, securities regulations, mergers and acquisitions, contracts, and legislation.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, a senior editor at Lawfare, and a term member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
