
The Case for a Joint U.S.-China AI Lab

Simon Goldstein, Peter N. Salib
Wednesday, April 23, 2025, 9:54 AM

The best AI development path for both the U.S. and China is to cooperate—by jointly backing the world’s best AI lab.

USA-China flag photo (https://commons.wikimedia.org/wiki/File:Potential-USA-sectors-China.jpg, CC BY-SA 4.0)

Published by The Lawfare Institute
in Cooperation With
Brookings

As AI capabilities advance rapidly, the U.S. and China are locked in a race for AI supremacy. Soon, many expect AI to become the most important economic and military technology in the world. The participants in the race believe the stakes are existential: Whoever wins will “rule the world.”

In recent essays, we’ve argued that a geopolitical race for AI supremacy is misguided. As with historic arms races—including the nuclear race—participating in an AI race has serious downsides. Winning may bring huge rewards, such as absolute, unassailable military dominance. But coming in second place could mean annihilation, should a war break out against an opponent who has achieved unassailable, AI-powered military dominance. 

Races for decisive military technology can generate violent conflict even before they are over. If either party falls behind mid-race, it then has a strong incentive to launch a preventive attack against the leader. Such a strategy may be the laggard’s last hope of stopping its rival from attaining permanent military supremacy. While some commentators have suggested that a trailing country could use force to disable the leader’s AI without triggering a full-scale war, the issue is hotly contested.

Thus, both the U.S. and China would likely be better off if they could call off the AI race, cooperate, and obtain transformative AI—systems that could guarantee military and economic dominance—at the same time. With both superpowers in possession of powerful AI systems, the status quo balance of power would be preserved, and the risk of conflict, existential or otherwise, would be significantly reduced.

The difficult question is how. Even if the U.S. and China wake up to the foolishness of an AI race, coordination will remain difficult. Here, we argue for one concrete path to U.S.-China cooperation on AI: the joint establishment and operation of the best AI research lab in the world.

Overcoming the Challenges of an AI Race 

Why this strategy, in particular? The history of nuclear weapons might instead suggest cooperation focused on disarmament. In the New START treaty, the U.S. and Russia agreed to limit the production of nuclear weapons, for example, by halving the number of strategic nuclear missile launchers.

Unfortunately, disarmament is a much harder strategy to apply in an AI race than in a nuclear race. This is because the production of traditional weapons involves a “guns versus butter” trade-off. The resources used to produce guns could instead be used to produce other goods—food, housing, education, health care, and so on. Thus, in the nuclear context, cooperative disarmament allows disarming nations to reallocate spending from destructive goods to welfare-enhancing ones.

The AI race is different. AI is a general purpose technology—one that will have impacts in essentially every sector. The most important advances come from frontier models, massive systems that can accomplish a wide range of tasks. These frontier models will end up having powerful military applications—but their civilian applications will be even more important. Powerful AI systems will accelerate science, cure diseases, invent new technologies, and generate immense wealth for the nations that have them. Thus, AI disarmament is a nonstarter. Even if it would mitigate the downside risks of AI warfare, it would also mean missing out on the massive benefits of advanced AI. 

If racing is too dangerous, and disarmament is too costly, what options remain?

Consider another important element contributing to the disanalogy between nuclear weapons and AI. A frontier AI model, like all software, is non-rivalrous: One person’s use of the model imposes no barrier on another’s use of it. After all, the model is, at bottom, just computer code that can be copied infinitely and without cost. Thus, whenever the U.S. develops a new, world-beating model, sharing with China would be effectively free. And vice versa. Here, we have a race in which either party can instantly catch up with the other, provided that the leader shares its model with the laggard. In a world like that, there is no AI race for global domination. No one gains a decisive military advantage from having the best AI.

One naive way to avoid an AI race, then, would be for both the U.S. and China to commit to sharing their frontier models with the other party. But the challenge here is credibility. What would stop one nation from developing a better model in secret and sharing an inferior version with its rival? 

To overcome the credibility problem, the U.S. and China should create a joint AI lab. Not just any AI lab—the best one in the world. So long as the joint lab remains at the bleeding edge of AI progress, both countries can be secure in the knowledge that they, by accessing the lab’s products, are likewise at the frontier.

What Would a Joint AI Lab Achieve?

The joint lab would have a few important features. First, the lab would be a merger—in terms of talent, knowledge, and intellectual property—of two of the most successful U.S. and Chinese AI labs. Consider, for example, the top talent at OpenAI and DeepSeek joining forces. The resulting lab might, for example, be called OpenSeek.

OpenSeek would have a number of advantages over its global competitors. First, it would have top-tier talent. The lab would recruit the very best AI researchers from the U.S. and China. To attract them, it would need to offer the highest pay, the most impressive colleagues, and access to the most extensive resources.

Bringing together top talent from the two rival countries would allow for efficient exchange of ideas between competitors. Silicon Valley house parties are famous sites of knowledge diffusion. When the diffusion is between competing private companies, those leaks count as a bug. But when the diffusion is between geopolitical rivals whose best interests require joint progress, leaks are a feature. This diffusion of information would be positive-sum: Each researcher’s shared ideas would make the overall “pot” of AI development bigger.

Second, and just as important, OpenSeek would need to be incredibly well resourced. Frontier AI development is famously expensive. The compute and energy costs are immense. To make sure that OpenSeek remains the best lab in the world, both the U.S. and the Chinese government would channel their AI investments into the project. With government funding—and regulatory support—the joint lab could quickly amass the world’s largest stockpile of compute. Indeed, given the massive amount of capital required for frontier AI development, state-level investors might eventually be necessary, joint lab notwithstanding.

Third, the lab would allow two different distribution streams for the U.S. and China. The lab would start with one base model and create two fine-tuned versions. Each of the Chinese and U.S. versions of the model would be tailored to the particular political context and values of the two countries.

Finally, and perhaps most ambitiously, the U.S. and China would have to agree to split inference compute roughly equally. Inference compute is the computing power used to actually run AI models, rather than to train them. Here, the intuition is straightforward. It is no good having the world’s best AI model if you have no compute to run it. Nor is it much better if you have 10,000 times less compute—and, by implication, 10,000 times fewer copies of the model—than your geopolitical rival.

The joint lab avoids the risk of an AI race. Both the U.S. and China could expect to receive their fair share of the benefits of AI development, without being left behind. Neither superpower would have the opportunity to dominate the other. The joint lab would also avoid the downside of disarmament. The U.S. and China would be able to enjoy all of the advantages of AI development without losing momentum. 

Finally, a joint lab would help to manage the catastrophic risks of rogue AI. New AI systems are stunningly capable and are developing at an alarming rate. A headlong race to develop AI risks losing control; we may not be able to align powerful AI to humanity’s goals and values. AI systems pursuing their own misaligned goals would be exceedingly difficult to stop—indeed, if that AI system were also the most militarily powerful technology in the world, the AI itself could have a decisive military advantage over humanity as a whole.

If the U.S. and China race with one another to develop AI, each side will have an incentive to lower its investments in aligning and maintaining control over powerful AI systems. Each will worry that caution could cause it to fall behind in the race. By contrast, a joint AI lab could more responsibly manage the task of developing new AIs while monitoring for safety. 

***

In 1985, at the height of the Cold War, Reagan and Gorbachev met in Geneva for negotiations. During a pause in the meeting, they took a walk. Reagan, an avid science-fiction reader, had a question for Gorbachev. If the Earth were invaded by technologically advanced aliens, could the U.S. and the Soviet Union agree to put aside their differences in order to defend humanity? After some initial confusion about the question, Gorbachev agreed that under these conditions, the two superpowers could make peace.

The U.S. and China find themselves in a remarkably similar position today. Humanity has stumbled into creating AIs that may well turn out to be powerful, alien, and hard to control. But even if the AI systems themselves are controllable, the two rival superpowers now both face the existential threat of the other obtaining powerful, alien technology first.

Under these conditions, it is high time for the U.S. and China to make peace on AI. The benefits of peace are numerous; its only cost is giving up a shot at ruling the world. Human flourishing does not require such domination. Hopefully, American and Chinese leaders will learn from the mistakes of the past, rather than repeat them.


Simon Goldstein is an Associate Professor at the University of Hong Kong. His research focuses on AI safety, epistemology, and philosophy of language. Before moving to Hong Kong University, he worked at the Center for AI Safety, the Dianoia Institute of Philosophy, and at Lingnan University in Hong Kong. He received his BA from Yale, and his PhD from Rutgers, where he wrote a dissertation about dynamic semantics.
Peter N. Salib is an Assistant Professor of Law at the University of Houston Law Center and Affiliated Faculty at the Hobby School of Public Affairs. He thinks and writes about constitutional law, economics, and artificial intelligence. His scholarship has been published in, among others, the University of Chicago Law Review, the Northwestern University Law Review, and the Texas Law Review.
