Nuclear Deterrence in the Age of AGI
Will AGI break the framework of nuclear deterrence that has averted war between superpowers for the last 70 years?

One of the great successes of the 20th century was that the U.S. and the Soviet Union never fought a total war. Nuclear deterrence was central to this feat: Because each superpower held nuclear weapons, if either attacked the other, the attacker could expect to be destroyed by the defender in a second strike.
Today, the United States and China are engaged in a new kind of conflict: the artificial general intelligence (AGI) race. As AI capabilities advance rapidly, many expect AI to become the most important economic and military technology in the world. Indeed, the AGI race is often portrayed as an existential matter: Whoever wins the race will “rule the world.”
In this article, we argue that the most important element of the AGI race remains nuclear deterrence—not, as most commentators suggest, AGI itself. There are two possibilities. First, if AGI does not disrupt nuclear deterrence, then the winner of the AGI race will not, in fact, “rule the world.” Second, if AGI does challenge nuclear deterrence, then it poses a grave threat to international security, regardless of who wins the AGI race. In that event, countries will engage in high-risk strategies to try to preserve their ability to threaten a nuclear strike. If they fail, nuclear deterrence will no longer underwrite peace. It is therefore crucial to better understand whether and how AGI will affect nuclear deterrence.
AGI and Defense
AGI does not necessarily mean machines that are conscious or sentient; rather, it refers to systems that are extremely capable of solving complex problems and accomplishing goals in the real world. As OpenAI’s charter puts it, AGI “mean[s] highly autonomous systems that outperform humans at most economically valuable work.”
Consequently, many leaders expect AGI to be militarily decisive. As Dario Amodei, the CEO of Anthropic, has said, countries with advanced AGI will be able to bring the equivalent of an entire “country of geniuses” online “in [every] datacenter” they can build. Imagine, in World War II, what the Allies could have accomplished if they had not just one Robert Oppenheimer or Leo Szilard—but tens of millions of them, and tens of millions more for every new data center they built. Think of the technologies of war the Allies could have produced and the military strategies they could have devised in order to deploy them.
If only one country today had AGI, that country could set an entire country of geniuses to perfecting hypersonic missiles, directed energy weapons, or genetically targeted bioweapons. It could deploy millions of combat drones, each autonomously piloted by an AI system with the flying prowess of Erich Hartmann. It could throw 100 million Alan Turings at its adversaries’ cyber defenses.
It is therefore not difficult to imagine that whichever country attains AGI will be instantly positioned to prevail in open military conflict—perhaps against all other countries at once. This, in turn, may incentivize the AGI-possessing country to launch a preemptive war of conquest before its rivals attain AGI themselves.
But what if the nations that lack AGI nonetheless have the ability to destroy the entire planet in a nuclear fireball? At least two countries—the United States and Russia—are capable of this today. According to deterrence theory, those nuclear arsenals effectively deter aggression from other nations—even nations with overwhelming advantages in conventional warfare. The key strategic consideration is the “second strike”: the idea that nuclear superpowers have stockpiled enough weapons to retaliate effectively even after absorbing a nuclear first strike.
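This logic can be made concrete with a toy expected-payoff calculation. The sketch below is purely illustrative: the function name and every number in it are assumptions chosen for exposition, not estimates drawn from the deterrence literature.

```python
# Toy model of second-strike deterrence. Every value here is an
# illustrative assumption, not an estimate from deterrence literature.

def expected_payoff_of_first_strike(p_survive: float,
                                    gain: float,
                                    loss: float) -> float:
    """Attacker's expected payoff from striking first.

    p_survive: probability the defender's arsenal survives well enough
    to retaliate; gain: value of unopposed conquest; loss: cost of
    absorbing a retaliatory strike.
    """
    return (1.0 - p_survive) * gain - p_survive * loss

# With a robust second strike, attacking is ruinous:
print(expected_payoff_of_first_strike(p_survive=0.95, gain=1.0, loss=100.0))
# ~ -94.95

# If AGI could near-certainly disable the defender's arsenal,
# the calculus flips and a first strike "pays":
print(expected_payoff_of_first_strike(p_survive=0.001, gain=1.0, loss=100.0))
# ~ 0.899
```

The point of the toy model is only that deterrence is a threshold phenomenon: it holds while the defender’s arsenal is likely to survive, and collapses once an attacker believes survival is near zero.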
Yet nuclear deterrence is generally absent from discussions of military AGI. That is a mistake. The most important geostrategic question is not who develops AGI; it is whether AGI breaks the framework of nuclear deterrence that has averted war between superpowers for the past 70 years. This would happen if, for example, one nuclear power’s AGI system were to disable another country’s nuclear arsenal, allowing the former to threaten a nuclear strike without fear of reprisal.
Nuclear Deterrence and AGI
Leopold Aschenbrenner is one of the few commentators to have addressed the question explicitly. He claims that “superintelligent” AIs will be able to use sensors to find an adversary’s nuclear weapons, and to use billions of tiny autonomous drones to disable them.
If so, the consequences will be dire. If one nuclear-armed nation were to develop AGI first, it would quickly become, in effect, the world’s only nuclear-armed nation. That nation could then dictate terms to the rest of the world. It might demand tribute from other nations on pain of annihilation. Or it might simply annex large swaths of the globe.
Aschenbrenner could be right about AGI’s potential to nullify deterrence, but we remain unsure. Nuclear arsenals might turn out to be quite resistant to AGI-mediated interference, in part because both the weapons and the systems for deploying them are relatively unsophisticated. Nuclear weapons themselves are, by today’s standards, a primitive technology. The first were created during World War II and contained no computing equipment of any kind. To generate a nuclear explosion, one needs only to compress a subcritical quantity of fissile material with enough force that it becomes supercritical. In an implosion weapon, this means surrounding the fissile core with conventional explosives and detonating them.
Intercontinental ballistic missiles (ICBMs) and other warhead-delivery technologies are more sophisticated, using microprocessors for things like guidance. But they, too, can operate on technology that was familiar to technologists of the 1950s. As Aschenbrenner notes, many U.S. nuclear weapons are deployed on submarines. These, fundamentally, are small metal tubes scattered across a vast ocean and then sunk hundreds of feet into the sea—where they can remain submerged for months at a time.
It is certainly possible that AGI will be able to easily decapitate nuclear arsenals of these kinds. But it is far from obvious. The locations of these weapons are among the United States’ most closely guarded secrets. The arsenal is a triad, split across ICBMs, strategic bombers, and submarines. The weapons are not networked, so they cannot be hacked directly. AGI may improve sensors and detection dramatically. But physics limits the transmission of usable information, like submarine engine noise, through churning seawater, as the standard sonar equation below illustrates.
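A textbook way to see this limit is the passive sonar equation, which determines whether a quiet target is detectable at all. We reproduce the standard form here for illustration; nothing about it is specific to any classified system.

```latex
% Passive sonar equation (standard textbook form): detection is
% possible only when the signal excess SE exceeds zero.
%   SL: source level (noise radiated by the submarine)
%   TL: transmission loss along the path to the sensor
%   NL: ambient noise level, DI: array directivity index
%   DT: detection threshold of the signal processor
SE = SL - TL - (NL - DI) - DT
% Transmission loss grows with range r (spherical spreading plus
% absorption at coefficient \alpha):
TL \approx 20\log_{10} r + \alpha r
```

Better processing can lower DT, and larger arrays can raise DI, but a sufficiently quiet boat pushes SL toward the ambient noise floor, and no amount of intelligence recovers a signal that never reaches the sensor.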
On the other hand, perhaps advanced AI cyberattack capabilities could be used to compromise not the weapons themselves but the command-and-control systems that transmit nuclear launch orders. The most common mistake in AI discourse is to underrate AI’s transformative potential. But the second most common is to posit arbitrarily powerful, even godlike, AIs—certain to be capable of anything, no matter how difficult.
Moreover, if impending AGI seemed poised to neutralize nuclear arsenals as they are currently deployed, nations would work to harden those arsenals further. For example, in the United States, the president has sole nuclear launch authority—at least as far as is publicly known. This creates a single point of potential failure: If the president were assassinated, the United States might have diminished capacity to deploy its nuclear arsenal until the new chain of command was established. If the United States judged this centralized launch structure too risky in a world of AGI-enabled drone assassinations, it could devolve launch authority to additional officials.
Even more extreme measures could also be taken. An increasing share of nuclear-armed countries might deploy “dead hand” systems, rigged to launch their arsenals automatically—perhaps unless their launch sequences were cryptographically aborted at regular intervals.
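To make the mechanism concrete, the sketch below models such a system as a cryptographic dead man’s switch: launch is suppressed only so long as validly signed abort messages keep arriving. This is a minimal sketch under our own assumptions; the class name, the one-hour interval, and the use of Ed25519 signatures from Python’s cryptography library are our illustrative choices, not a description of any real system.

```python
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

ABORT_INTERVAL_SECONDS = 3600  # hypothetical check-in period


class DeadHandSwitch:
    """Arms automatically unless a validly signed abort arrives each interval."""

    def __init__(self, authority_public_key: Ed25519PublicKey) -> None:
        self.authority_public_key = authority_public_key
        self.last_valid_abort = time.monotonic()

    def receive_abort(self, message: bytes, signature: bytes) -> bool:
        """Reset the countdown iff the abort is signed by the launch authority.

        A real design would also bind a timestamp or nonce into the message
        to prevent an adversary from replaying an old abort.
        """
        try:
            self.authority_public_key.verify(signature, message)
        except InvalidSignature:
            return False  # forged or corrupted abort: ignore it
        self.last_valid_abort = time.monotonic()
        return True

    def should_launch(self) -> bool:
        """True once the abort window has elapsed with no valid check-in."""
        return time.monotonic() - self.last_valid_abort > ABORT_INTERVAL_SECONDS


# Demonstration with a freshly generated keypair.
authority_key = Ed25519PrivateKey.generate()
switch = DeadHandSwitch(authority_key.public_key())

message = b"abort launch sequence 2024-01-01T00:00Z"
assert switch.receive_abort(message, authority_key.sign(message))
assert not switch.should_launch()  # countdown was just reset
```

Even this toy version shows why such systems are dangerous: any failure of the check-in channel, whether a severed communications link, a lost key, or a crashed process, looks to the switch exactly like a decapitation strike.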
Detonation of a large enough number of nuclear weapons could trigger nuclear winter, a catastrophic, potentially decade-long disruption of the global climate. This kind of threat does not even require that nuclear weapons be targeted: A weaker state could threaten to detonate a stockpile of nuclear weapons hidden in a remote location, rather than allow itself to be destroyed. Because no missile need ever fly, even AI-enabled missile defense systems would be powerless against such a threat.
These strategies would be extraordinarily dangerous. They would substantially raise the risk of an accidental or rogue nuclear launch. But if the United States or another nuclear-armed nation believed the threat of AGI domination to be sufficiently dire, it might resort to such risky measures.
The Limits of Nuclear Deterrence
A further question concerns the limits of nuclear deterrence. U.S. nuclear deterrence did not stop Russia from invading Ukraine, because the Russian invasion was not an existential threat to the United States. The U.S. could not credibly threaten Russia with a nuclear strike in response to the invasion of Ukraine because, for Americans, the stakes were not high enough to risk a Russian counterstrike. The international system has developed conventions around this intuition: In general, nuclear weapons are neither used nor threatened in proxy wars between nuclear powers. Such conflicts remain conventional, and the side with the more powerful non-nuclear forces can generally expect to win them.
Thus, even if AGI does not enable the country that has it to disable rivals’ nuclear arsenals, an AGI superpower might dramatically improve its ability to win proxy wars. This would have two simultaneous, and perhaps paradoxical, effects. On the one hand, it might dramatically increase warfare globally, as the AGI superpower would pick proxy fights with impunity. On the other hand, it might not substantially raise the risk of all-out nuclear war, if the AGI superpower refrained from—for example—invading other states whose nuclear arsenals remained intact.
In a world like this, the U.S.—if it lost the AGI race—would face significant reductions in its ability to project military force across the globe. Its nuclear arsenal would remain a deterrent, but only against threats for which it could credibly threaten nuclear retaliation. That would still likely include foreign threats to invade sovereign U.S. territory. But it is unlikely, for example, that the U.S. could credibly threaten to launch a nuclear war over Chinese expansion in the South China Sea. Hence, so long as nuclear deterrence holds, losing the AGI race means losing the power to police the globe, but it does not mean losing the ability to defend U.S. territory.
***
The timeline to AGI may be short. The question of nuclear deterrence in the age of AGI is among the most consequential for global geostrategic planning in the coming two to five years. We therefore hope the issue will be taken up jointly by military and AI experts, and that the analyses they produce will be far more sophisticated than those we offer here. The world cannot afford to rely on idle speculation about the balance of power in an AGI world. We need better answers, and we need them now.