DeepSeek Points Toward U.S.-China Cooperation, Not a Race

Published by The Lawfare Institute
On Jan. 20, the Chinese company DeepSeek released r1, a new artificial intelligence (AI) model that matches the performance of recent American reasoning models such as OpenAI’s o1. R1 has caused something of a panic in the United States, with many calling for the U.S. government to ensure that the United States prevails in the “AI race” with China. President Trump has described DeepSeek as a “wake-up call.” And in a recent post, Anthropic CEO Dario Amodei cited DeepSeek as a reason why the U.S. government should further restrict Chinese access to advanced computer chips.
The prevailing logic seems to be something like this: Before DeepSeek’s r1 release, the U.S. appeared to be comfortably ahead of China in AI development. There were at least four U.S. companies at the frontier of global AI development: OpenAI, Anthropic, Google DeepMind, and Meta. No Chinese company, it appeared, could build models as capable as the ones steadily coming out of these U.S. labs. But—so the first step of the racing argument goes—DeepSeek changed that. It showed that China was not really behind the U.S. in AI development and that it could, at a minimum, achieve parity with the best U.S. models.
The second step in the argument is the claim that this means the United States must race harder with China for AI dominance. If the U.S. does not race, it cannot win. And if it does not win, the result will be disastrous. Here, advanced AI is understood as a world-historically important economic and military technology. Whichever nation is the first to develop highly capable, generalist, smarter-than-human AI systems will dominate the global order—possibly forever. The losing nation will be dominated at best. At worst, it risks destruction in a one-sided, AI-dominated military conflict. Hence, the argument goes, the United States must win.
This argument is misguided at every step.
First, even after the DeepSeek revelations, it remains unclear whether China’s AI development is now on par with that of the United States. Second, even if China and the U.S. were neck and neck, a race would be exactly the wrong thing to do from the perspective of both countries’ rational geostrategic self-interest.
Just the opposite: In any high-stakes competition to obtain powerful military technology, the closer the game, the more sense it makes to declare a truce and cooperate. Cooperation can help to ensure that both superpowers obtain transformative AI around the same time. This preserves the current balance of power, rather than unsettling it and inviting extreme downside risks for both nations.
Begin with what DeepSeek does or does not show about AI progress in China. In his blog post advocating tighter chip controls, Amodei also argued that DeepSeek’s capabilities are exactly what is expected from catch-up AI growth over time. That is, the r1 model sits on the same progress curve as everyone else. Moreover, DeepSeek’s capabilities only match those of OpenAI’s second-best model. OpenAI’s o3 model, announced a few weeks before r1, remains far ahead of the competition. Thus, in terms of publicly known AI systems, the best Chinese model remains a generation behind the best American model—albeit with impressive gains in cost reduction.
OpenAI has also claimed (without publicly disclosed evidence) that DeepSeek’s model is a “distillation”—a kind of compressed copy—of GPT-4o, one of OpenAI’s models. If true, this calls into question whether leading Chinese labs could, in fact, produce state-of-the-art models from scratch.
But suppose that is all wrong, and that leading Chinese AI labs are now just as good as American ones. Even then, if the U.S. has lost its lead in AI development, this does not rationalize “racing harder” to beat China. Instead, it counsels caution and a collaborative approach to AI development. If the U.S. and China are now neck and neck, then they should race less rather than more.
The Game-Theoretic Case Against Racing
Consider the following analogy: Suppose you find yourself in a race. It is a 10-mile footrace, but with existential stakes. The winner of the footrace gets a very large prize—say, $50 million. The loser gets a bullet in the head.
Suppose you notice that you’ve started the race at the 9.5-mile marker, while your opponent is at mile 0. Here, it may be rational to run. The $50 million prize is a lot of money, but what are the odds that your opponent can cover 10 miles more quickly than you can cover your remaining half-mile? You should, of course, run as quickly as you can. There is no sense in risking the fate of the proverbial hare.
But suppose instead that you do not have the 9.5-mile lead. Suppose you find both yourself and your opponent at the starting line. What should you do? If winning and losing are the only options, you should of course run like mad. But suppose there is a third choice: You and your opponent can jointly agree to cancel the race. Then the racers get neither a special prize nor a special punishment. The status quo is preserved. This seems to us like the obvious best choice.
This point can be put more formally in terms of expected value, uncertainty, and risk aversion. Like our hypothetical footrace, the AI race seems to offer massive payoffs to the winning nation: unilateral global economic and military dominance. If that prize is on offer, with little downside, a self-interested nation would be foolish not to chase it.
But if the U.S. and China are at rough AI parity, the calculus changes. Then both nations must take very seriously the possibility of losing. This makes running the race less valuable in expectation. It also raises the variance of potential outcomes: both riches and ruin become serious possibilities. Thus, if the United States and China are averse to the risk of losing the AI race, and of being permanently dominated by the other, they should be less willing to keep running the race the closer the finish looks.
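The footrace intuition can be made concrete with a toy expected-utility calculation. The sketch below is our own illustration, not a model drawn from any cited source: the payoff numbers and the logarithmic (risk-averse) utility function are assumptions chosen purely to exhibit the structure of the argument.

```python
import math

def utility(outcome):
    """Concave (risk-averse) utility over outcomes, in arbitrary
    'national welfare' units. The log form is an illustrative
    assumption, not a claim about real geopolitical payoffs."""
    return math.log(outcome)

# Illustrative payoffs (assumed): winning the race, losing it,
# and the status quo preserved by a cooperative truce.
WIN, LOSE, TRUCE = 100.0, 1.0, 40.0

def expected_utility_of_racing(p_win):
    """Expected utility of racing, winning with probability p_win."""
    return p_win * utility(WIN) + (1 - p_win) * utility(LOSE)

# With a 9.5-mile head start, winning is near certain and racing wins out.
# At the starting line (p_win = 0.5), the truce dominates.
for p in (0.95, 0.5):
    print(f"p_win={p:.2f}: race EU={expected_utility_of_racing(p):.2f}, "
          f"truce EU={utility(TRUCE):.2f}")
```

With these assumed numbers, racing beats the truce when victory is nearly certain, but the truce beats racing at parity; that reversal, not the particular payoffs, is the point.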
As with the footrace, there is an alternative to the U.S.-China AI race: Both countries could work together to pursue a peaceful, collaborative approach to AI development. This collaborative alternative to racing has some costs to both sides. Neither party can hope to reap the future rewards that come from stable global dominance. But the risk of being among the dominated also falls precipitously.
Moreover, if AI development will be as transformative as both nations seem to believe, the upsides will be immense, even if dominance is off the table. Economic growth will explode. Science will progress. Diseases will be cured. And much more. Racing is not necessary to achieve any of this. All of these destinations lie, at least potentially, along the path of cooperation. Indeed, the cooperative path is by far the most likely to get us there. The alternative route, of breakneck competition and existential conflict, is extraordinarily perilous.
Some readers may object that the strategic dynamics of the geopolitical AI race have important differences from those of our hypothetical footrace. After all, unlike a footrace, the AI race has no fixed end point. Moreover, in a footrace, it does not matter whether you win by an inch or a mile. The prize is the same. But in an AI race, a bigger lead may lead to ever bigger payoffs in terms of military or economic dominance.
Indeed, there may even be increasing marginal returns to extending one’s lead in the AI race. If your AI is 5 percent better than your opponent’s, you may capture roughly an extra 5 percent of global gross domestic product or be 5 percent more likely to prevail in a military conflict. But if it is 10 percent better, you may instead get a 20 percent boost. And so on. This is a standard approach to modeling the payoffs of military superiority. And it is how many proponents of racing with China appear to view the AI competition.
This would be the right way to think about AI competition if AI advances eventually lead to a “takeoff.” At some point, sufficiently capable AI systems may begin to increase the rate of economic or military growth. The rewards of that initial growth could be reinvested in advancing AI further, increasing again the rate of growth, and so on.
The formal way of talking about competitions like this is to say that they have “momentum.” In races with momentum, each marginal step that one side gets ahead of its opponent is more valuable than the previous one. Here, it is rational to race harder—to invest more resources in AI advances—to the extent one is already winning.
But when the players are neck and neck, they are on the flattest part of the utility curve. Each marginal investment in racing ahead gets them the least expected value. You, and your opponent, can invest a great deal in advancing your AI capabilities. But, since you’re both near a tie, the initial investments aren’t worth much. And your opponent can keep up by matching investment, so that your future investments won’t get you much, either.
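The "flat near a tie" point can be seen by writing down a payoff that is convex in the size of one's lead. The quadratic form below is an assumed toy functional form, chosen only to exhibit increasing marginal returns; it is not a claim about actual military or economic payoffs.

```python
def payoff(lead):
    """Payoff to a nation as a function of its AI lead over its rival
    (negative lead = behind). The signed quadratic is an assumed
    illustration of increasing marginal returns to a bigger lead."""
    return lead * abs(lead)

def marginal_gain(lead, step=1.0):
    """Value of pulling one more step ahead, starting from `lead`."""
    return payoff(lead + step) - payoff(lead)

# The same unit of racing effort is worth far more to a nation that is
# already well ahead than to one starting from a tie.
print(marginal_gain(0.0))  # → 1.0 (near a tie)
print(marginal_gain(5.0))  # → 11.0 (already well ahead)
```

Under any payoff with this convex shape, the marginal return to racing is smallest exactly when the competitors are tied, which is the situation DeepSeek is said to have created.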
Geopolitics is complicated, and gut reactions are often wrong. AI racers would do well to think through the underlying causal models driving their reasoning, to check whether their premises support their conclusions. On reflection, it is extremely difficult to view DeepSeek as evidence that either the U.S. or China would benefit from intensifying the AI race—at least while cooperation remains a live possibility.
A Path Forward: Joint Development and Mutual Transparency
So what would a collaborative approach look like?
First, the U.S. and China could create a joint AI lab, something like a cross between DeepSeek and OpenAI. The lab could be co-run by a U.S. and a Chinese CEO. The two countries could split the lab’s profits equally through tax revenue and could share equal governance control over the company.
Such a lab would be a very different approach from trying to restrict China’s access to computer chips. Most importantly, if the lab were at the frontier, it would be a way of simultaneously diffusing the most powerful AI systems to both countries. This would avoid suddenly unsettling the status quo balance of power. Today’s balance is, to be sure, tenuous and fraught. But it seems far better than a situation where the first country to develop a very powerful military AI is incentivized to use it before the other catches up. Worse, the mere anticipation of such a leap ahead could cause either the U.S. or China to instigate a preemptive conflict on the theory that a war now is better than a war later, when one’s opponent has the clear AI advantage.
This balance would make it vital that the joint U.S.-China lab remained at the frontier. Even if both countries nominally agreed to cooperate and avoid racing, each would worry that the other was racing in secret. Transparency is difficult in international relations, but it is not impossible. Precedents exist. Consider, for example, the New START nuclear arms reduction treaty. Under that agreement, which included a variety of transparency and verification mechanisms, the U.S. and Russia successfully cooperated to significantly reduce both countries’ nuclear stockpiles. True, those mechanisms have broken down since Russia’s 2022 invasion of Ukraine. But, if anything, that breakdown demonstrates that it is even more important for the U.S. and China to work together to defuse a wide variety of geopolitical tensions, including over Taiwan. In both contexts, only a reckless nation would raise the risk of annihilation for the sake of a limited strategic goal. In 2022, Russia acted recklessly. The U.S. and China should not follow suit.
AI development is not exactly the same as nuclear proliferation. But there are similarities. In both cases, it seems likely that obtaining the absolute most powerful systems will require large-scale, and thus conspicuous, industrial capacity. Even if DeepSeek could make models as good as OpenAI’s o1 using a substantially smaller compute cluster, this merely suggests that it could make models far better than o1 using a massive cluster. Consider also that, unlike in the case of nuclear weapons, the goal of AI cooperation is not mutual transparency around every American or Chinese AI model. It is just transparency as to the very best models, which will presumably have by far the greatest strategic import. Thus, if the United States and China were able to remain mutually confident that the joint lab was the best in the world, broad transparency and reporting into the activities of other labs would be comparatively less important.
There are many details to be worked out here, and we don’t pretend to have all the answers. What we are confident about, however, is that proponents of an AI race for the sake of global geostrategic dominance should take a step back. There are reasons to think that such a race could be catastrophic for humanity, broadly construed. But just as importantly, as with the Cold War nuclear arms race, there are strong reasons to think that an AI race between the United States and China would be against those countries’ own national self-interest.