
AI Timelines and National Security: The Obstacles to AGI by 2027

Trent Kannegieter
Friday, August 2, 2024, 9:45 AM
Leopold Aschenbrenner’s “Situational Awareness” builds claims of artificial general intelligence’s imminence on assumptions that demand further scrutiny.
Artificial General Intelligence Illustration. Nov. 6, 2022. (David S. Soriano, https://commons.wikimedia.org/wiki/File:Artificial_General_Intelligence_Illustration.png#file, CC BY-SA 4.0)


On June 4, former OpenAI Superalignment team member Leopold Aschenbrenner released a book-sized manuscript titled “Situational Awareness.” In it, he argues that artificial general intelligence (AGI)—AI technology that can complete and automate complex knowledge-work tasks at or above human level—will arrive as soon as 2027, with epoch-shaping consequences.

He emphasizes this as a critical moment, claiming that “the free world’s very survival” is “at stake” and that reaching “superintelligence” first will give the U.S. or China “a decisive economic and military advantage” that determines global hegemony. He is also raising millions of dollars for an investment fund built on this thesis.

But such sweeping calls to action, national security stakes, and financial incentives to inflate AI hype demand intense scrutiny. Policymakers and researchers asked to assess the argument must be clear about the assumptions it makes and their likelihood of coming to pass.

Looking past Aschenbrenner’s rhetoric and into the limits of deep learning—today and as a paradigm—one may find many reasons to think that his claims of “AGI by 2027” are too bullish. 

Aschenbrenner’s argument for AGI and superintelligence emerging this decade rests on several load-bearing leaps of faith. None of the underlying challenges is inherently unsolvable. But solving them will require new tools and more time. Put together, they cast serious doubt on Aschenbrenner’s rapid timelines.

The Harms of AI Hype: Why Timelines Matter

Selling false existential stakes to the national security establishment is dangerous. If leaders think the battle for superintelligent AI is existential and will be determined in a few years, they could rationalize almost anything to win the AI race. Similar situations have occurred before. In the Cold War and beyond, “red scares” destroyed lives and livelihoods. Domino theory’s anxieties led the U.S. to some of its darkest foreign policy chapters, from supporting Latin American coups to the Vietnam War.

Blind faith in deep learning’s inevitability also inhibits AI research and development itself. Key challenges in AI product development and AI safety alike revolve around the problems beneath these leaps of faith. Instead of hand-waving away such concerns, policymakers and computer scientists should center them in research agendas for both private labs and public funding.

Even delaying the date of superintelligent AI’s emergence changes the policy prescription. A decade’s delay from “AGI by 2027” claims would still significantly outpace the average predictions of surveyed AI researchers, who place a 50 percent chance of full labor automation in the 2100s and of automating complex knowledge work like AI research—key to Aschenbrenner’s AGI definition—in 2060. (That’s before considering concerns others have raised about the survey, like non-response bias, that could skew responses toward rapid timelines.)

If scientists and policymakers have more time, acts potentially made in desperation to win an AGI race by 2027 become unnecessary. Aschenbrenner gets this part right: The world isn’t prepared for AGI today. But a decade until AGI means a decade of progress on AI safety. From technical alignment to geopolitical trust-building and the potential for new AI paradigms, these tools can make all the difference for a better AI future. (Consider how different the state-of-the-art was in 2014—when generative adversarial nets were new—and how many more people work on these problems today.)

If AGI is three years away, scientists and policymakers don’t have the time to develop enough new safety solutions. If superintelligence’s deciding day is so soon, national security powers can justify nearly anything to ensure their states get there first. Moves that might be necessary in the face of such an imminent threat—from nationalizing labs to curtailing non-U.S. researcher participation to banning certain large models altogether—could stifle the innovation and collaboration that will solve these hard problems.

These delays don’t mean the world shouldn’t care. Far from it. They just give policymakers and researchers time to prepare more effectively.

Leap of Faith #1: GPT-4 Is as Smart as a Bright High Schooler 

Aschenbrenner starts by comparing the abilities of GPT-4 (OpenAI’s most recent model) to “a smart high schooler.” He writes, “GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years.”

This claim acts as a baseline for his argument about future model capabilities. If scaling took AI from the smarts of a “preschooler” (GPT-2) to a high schooler in a few years, he argues, then adding the same orders of magnitude of compute again will unlock the mind of a top grad student and, beyond that, superintelligence.

But how reasonable is this claim? As others quickly pointed out, competent high schoolers can do many jobs that current models can’t. Models can’t drive, do accurate arithmetic with many-digit numbers, or provide customer service for a car dealership without a viral disaster. If current models were truly as smart as high schoolers, they would already autonomously handle a much larger share of these roles in the “real world.” More damningly, AI hallucinations—when models generate incorrect or fabricated, yet legitimate-looking information in their outputs—render models untrustworthy.

Model skills and weaknesses are very different from human ones. Models perform some tasks far better than high schoolers yet flail where young students would thrive. A typical human high schooler can’t write a grocery list in iambic pentameter, but he or she can almost certainly be trusted not to invent fake cases for a legal brief or suggest thickening pizza sauce with glue.

Comparing the growth of human and model intelligence on the same axis obscures the differences that, even at scale, limit model capabilities.

Leap #2: Models Can Reason

Aschenbrenner claims that “we are building machines that can think and reason” and that “models learn incredibly rich internal representations.”

Any labor-substituting AGI, let alone superintelligence, must be able to reason. The real world is unpredictable, presenting a limitless number of variables arranged in a combinatorial explosion of scenarios. Novelties, challenges beyond what’s cataloged in a training dataset, are inevitable.

Humans survive by reasoning through novel situations. As explained by computer science scholar Yejin Choi, people draw on “rich background knowledge about how the physical and social world works.” From the laws of physics to cultural norms, we learn underlying rules in one context and apply them in others. This ability goes by a host of names: from “abstraction” or “intuitive reasoning” to a more technical “generalizing beyond the training distribution.” In other words, we can make sense of the world beyond what we have been exposed to in the past.

AI research—key to Aschenbrenner’s automation argument—requires reasoning in novel situations, too. Even the tasks of an AI researcher that he calls “fairly straightforward”—reading papers, generating new ideas and questions, designing experiments, analyzing experiment results—require contextualization, common sense, and abstract reasoning.

But large language models (LLMs) can’t reason. Model “reasoning” is tricky to assess because training on human data lets models emulate humans so well. A model’s outputs certainly look like those of a reasoning entity. That’s why, for example, people develop emotional connections to “AI companions” created by companies such as Replika and Character.ai, or why an AI-generated output can seem like the work of a human professional. It’s therefore easy to get pulled in and overcredit models’ internal processes. But to mistake impressive text-based correlational token prediction for an internal understanding of language or reasoning is, as linguist Emily Bender says, to “conflate word form and meaning.”

Deep learning is amazing, but it’s not magical. Its pattern matching of an immense data corpus paired with vast computational power generates incisive predictions. This can do far more than many originally expected. Internet-sized datasets unlocked deep learning’s correlative powers, driving its “emerg[ence] from [the] backwater position” it held in the 1970s.

But outputting impressive correlations doesn’t necessarily demonstrate the internal reasoning, or “consciousness,” that, for the reasons above, is crucial to knowledge work.

In fact, recent studies offer lots of reasons to doubt such consciousness. These problems become particularly apparent through experiments with prompts about fictitious hypotheticals—that is, when the answer cannot be found in training data. Highlights include:

  • Leading models still cannot readily determine that, if “A is B,” then “B is A.” Neither data augmentation nor different model sizes mitigates the problem. Even when trained on “A is B,” prompts of “B is” do not produce “A” any more often than they produce any other random name.
  • Models can do single- or double-digit arithmetic, but performance rapidly declines with longer numbers, which appear less frequently in training data. GPT-4, for example, never surpasses 10 percent accuracy once operands reach five digits. (A sketch of such a probe appears after this list.)
  • The Abstraction and Reasoning Corpus (ARC), a series of abstract puzzles that “intentionally omitted” language, still stymies state-of-the-art models. As of July 2024, the highest model score on ARC is 43 percent. Humans average 84 percent.
  • LLMs can output code snippets but fail when asked to make a “simple adjustment” based on the principles of the same coding language.
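
To make concrete what declining accuracy at longer operands looks like, below is a minimal probe sketch. It assumes the OpenAI Python client (v1.x) and an API key in the environment; the model name, prompt wording, and trial count are illustrative assumptions, not the protocol of the studies cited above.

```python
# Minimal sketch of a multi-digit multiplication probe (illustrative only).
# Assumes the OpenAI Python client (v1.x) with OPENAI_API_KEY set in the
# environment; the model name and prompt wording are assumptions for this
# sketch, not the setup used in the studies cited above.
import random
import re

from openai import OpenAI

client = OpenAI()


def ask_product(a: int, b: int, model: str = "gpt-4") -> int | None:
    """Ask the model for a * b and parse the first integer in its reply."""
    resp = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{
            "role": "user",
            "content": f"What is {a} * {b}? Reply with only the number.",
        }],
    )
    text = resp.choices[0].message.content or ""
    match = re.search(r"-?[\d,]+", text)
    return int(match.group().replace(",", "")) if match else None


def accuracy_by_digits(digits: int, trials: int = 20) -> float:
    """Fraction of random `digits`-digit multiplications answered exactly."""
    correct = 0
    for _ in range(trials):
        lo, hi = 10 ** (digits - 1), 10 ** digits - 1
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        if ask_product(a, b) == a * b:
            correct += 1
    return correct / trials


if __name__ == "__main__":
    for d in range(2, 7):  # 2-digit through 6-digit operands
        print(f"{d}-digit operands: accuracy {accuracy_by_digits(d):.0%}")
```

If the cited results hold, running the loop at increasing operand lengths should surface the same pattern: strong performance at two or three digits, then a steep drop-off.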

These cases and others demonstrate that models haven’t learned underlying “rules” for identity mapping, multiplication, spatial reasoning, or coding. Instead of grasping general concepts, models pattern-match to optimize the feedback of their reward functions.

This is what researchers such as Choi refer to as the “common sense problem.” For many researchers, this problem is well known and top of mind.

Aschenbrenner claims that “[w]e’re literally running out of benchmarks,” or tests of intelligence for new models. But benchmarks such as ARC disprove this claim. Assessments like the above measure what matters for AGI: general reasoning beyond what’s in the training set.

Aschenbrenner says models will develop an “internal monologue” when reading textbooks, allowing them to deeply “understand” the material. “Reasoning via internal states,” he says, will enable newfound efficiencies.

But he never explains why this will happen, even in the face of the counterevidence detailed above. Successful AI researchers, and successful generally intelligent models, need reasoning and common sense.

Aschenbrenner’s argument relies on the leap of faith that researchers can overcome these problems through scale and slog. But their failures to date, even as they have made incredible gains in other domains, provide reason for skepticism.

Even well-known claims of internal “world models” do not hold up under scrutiny. For example, researchers claimed “evidence for … a world model” within LLMs in 2023 by plotting LLM-generated data points on a map. The map featured cities clustered on or close to their “true continent[s],” which the researchers claimed proved models “learn not merely superficial statistics, but literal world models.” But as other researchers flagged at the time, clustering places roughly by geographic proximity is precisely what you would expect from a strictly correlational tool: Nearby places are often mentioned together, and datasets mention Dallas alongside Houston far more frequently than alongside Hamburg. (Moreover, any semantically valid world model wouldn’t place cities hundreds of miles into the ocean, as the original team’s work did.)
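
To illustrate the point, here is a toy sketch showing how bare co-occurrence counts, with no geography anywhere in the code, still group nearby cities together. The mini-corpus and city list are invented for this example.

```python
# Toy illustration: co-occurrence statistics alone cluster nearby cities.
# The corpus below is a hypothetical handful of sentences, not real training
# data; the point is that no "world model" is needed to get this clustering.
from collections import Counter
from itertools import combinations

corpus = [
    "Flights between Dallas and Houston run hourly.",
    "Dallas and Houston both sit in Texas.",
    "A conference moved from Houston to Dallas this year.",
    "Hamburg and Berlin are connected by high-speed rail.",
    "Berlin and Hamburg are both major German cities.",
]

cities = ["Dallas", "Houston", "Hamburg", "Berlin"]

# Count how often each pair of cities appears in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    present = [city for city in cities if city in sentence]
    for a, b in combinations(sorted(present), 2):
        pair_counts[(a, b)] += 1

# Nearby pairs (Dallas/Houston, Berlin/Hamburg) co-occur; distant pairs don't.
for pair, count in pair_counts.most_common():
    print(pair, count)
```

An embedding fit to counts like these would tend to place Dallas near Houston and Berlin near Hamburg, reproducing rough geography from mention patterns alone.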

Leap #3: Data Won’t Bottleneck Model Development

Data quality is a key determinant of model performance. Without large amounts of high-quality data that represent the “real-world” scenarios a model will address, model performance quickly stalls. High-quality datasets are large and “representative”: they contain instances that match the real world (or “ground truth”) they are meant to illuminate. Such data enables models to generate incisive predictions, because the correlations they draw reflect a dataset that matches the real world.

There are reasons to believe that future datasets cannot be assumed to even match current quality levels. And key data for many critical tasks, in AI research and beyond, might never exist.

Analysts suggest that high-quality public training data could run out by a median date of 2028. Researchers might even move backward, losing access to the data sources we currently have through bet-the-company copyright lawsuits working through the courts now. For example, suits from the New York Times and Getty Images allege that foundation models infringe on copyright by training on data without consent or compensation. If model training is not deemed fair use, foundation model labs might lose access to the large corpus of data they need to power their outputs.

More perniciously, hallucinations and the lower quality of generative AI’s own responses might corrupt the digital corpus itself. As hallucinations become an increasingly large share of “ground truth” in future runs, the “model world” separates further from the real world it tries to represent. 

The current corpus of data only helps generate such impressive outputs because it represents the “real world” fairly well. Generative outputs flooding the internet distort that distribution, “causing irreversible defects.” Even if hallucinations can be avoided, such a distribution shift can lead to model collapse.

Further, even if internet datasets survive, the vast majority of the world’s “expertise” isn’t recorded in any legible fashion. And such “intuitive” expertise might be hard to catalog even if we tried. AI researcher Zhengdong Wang and journalist Arjun Ramani say it well:

We are constantly surprised in our day jobs as a journalist and AI researcher by how many questions do not have good answers on the Internet or in books, but where some expert has a solid answer that they had not bothered to record. And in some cases, as with a master chef or LeBron James, they may not even be capable of making legible how they do what they do.

This limitation is particularly relevant to Aschenbrenner, given his argument’s reliance on models that can do AI research to supercharge other AI progress.

Data bottlenecks aren’t just about running out of high-fidelity ground truth. Overcoming them also requires curating never-before-archived work and figuring out how to express it.

Aschenbrenner flags the data bottleneck argument, but he doesn’t sufficiently respond to it. He assures us that insiders are bullish, that labs will devise a new solution, that models might develop an internal view of the world that enables new techniques, and that deep learning will crash through this wall as it has crashed through others.

But these points—beyond the internal world model suggestion dealt with earlier—aren’t solutions. They are just faith that researchers will find a solution.

Though Aschenbrenner mentions lines of research that might one day help us overcome these problems, he glosses over obstacles to these as well. For example, he suggests synthetic (computer-generated) data might solve real-world data bottlenecks. But the hard problem of creating synthetic data distributions that are “representative” of real-world ground truth remains unanswered. (What data should one generate? How can you know without the real world for reference? Do model builders just defer to an engineer’s assumptions?)

Perhaps researchers will solve such problems soon. Synthetic data challenges likely don’t represent a permanent wall preventing AGI. But they at least complicate an acceleration to AGI by 2027. Aschenbrenner’s bullish framing positions these well-documented obstacles too far off stage. Policymakers tasked with setting thresholds for when to act—potentially at high cost—must be aware of these critical roadblocks.

Leap #4: Researchers Will Solve Hallucinations

Using models as independent agents of scientific work such as AI research depends on solving hallucinations—the fake, fabricated, or false-yet-legitimate-looking outputs that plague LLMs.

Even if humans produce less content than models, accuracy and reliability matter far more than quantity alone. They’re necessary conditions for mission-critical work.

But becoming “mostly reliable” isn’t enough. You wouldn’t get in a self-driving car that was only 90 percent accurate, for instance. And an AI research bot that builds fabrications into its research even a small share of the time—and then builds on top of these fabrications—would soon be hallucination-filled and useless.

Hallucinations are why enterprises deploying AI on high-consequence decisions require human oversight. When news leaked that “self-driving cars” such as G.M.’s Cruise still require human intervention every 2.5 to 5 miles, people working in autonomy weren’t surprised.

The hallucination challenge lacks a solution. Models lacking internal world representations can’t discern whether outputs make sense or merely reassemble legible-seeming patterns from the training data. That’s how you get lawyers submitting briefs citing fake cases without even realizing it.

Even Sundar Pichai, the CEO of Google with every incentive to be publicly bullish, admits that Google has no answer to this problem and that hallucinations are “an inherent feature” of LLMs.

Hallucinations present a clear roadblock to a rapid, linear AI growth timeline. Yet Aschenbrenner mentions the word “hallucination” only once in 165 pages.

“Unhobbling” Isn’t Enough

Aschenbrenner suggests that we’ll overcome current model limitations in part through what he calls “unhobbling” gains. “Unlocking latent capabilities” through tactics such as new prompting techniques and “giving [models] tools” like internet access, he says, could “lead[] to step-changes in usefulness.”

But these counterarguments often founder on the same leaps of faith detailed above.

He says that connecting to the internet and other tools might help empower models. But access to those tools won’t make models more discerning about which tool to use, when to use it, how to identify the “right” information, or how to prevent outright fabrication via hallucination. That kind of complex problem-solving requires general intelligence.

He says that giving models the ability to “digest” complex materials like humans—deep work like “read[ing] a couple pages slowly, hav[ing] an internal monologue about the material,” using “a few study-buddies”—will help unlock deeper understanding. But this proposal relies on models having an internal model of the world.

He says that tactics such as chain of thought (CoT) prompting, where models break prompts down into intermediate steps, can unlock “reasoning.” But CoT too relies on identifying the right subquestions and retrieving an accurate response to each one in turn. The same common-sense challenge thus emerges.
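
For a concrete sense of what CoT prompting changes, here is a hypothetical contrast between a direct prompt and a chain-of-thought prompt; the question and its suggested decomposition are invented for this illustration.

```python
# Hypothetical illustration of direct vs. chain-of-thought (CoT) prompting.
# The question and its step-by-step decomposition are invented examples.
DIRECT_PROMPT = (
    "A train departs at 2:40 p.m. and the trip takes 95 minutes. "
    "When does it arrive?"
)

# CoT spells out intermediate steps. It only helps if the model both picks
# sensible subquestions and answers each one accurately; a wrong step
# (e.g., mis-adding the minutes) propagates into the final answer.
COT_PROMPT = (
    "A train departs at 2:40 p.m. and the trip takes 95 minutes. "
    "When does it arrive? Think step by step: convert 95 minutes into "
    "hours and minutes, add that to 2:40 p.m., and state the arrival time."
)
```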

Impressive innovations like reinforcement learning from human feedback have improved model performance. But there’s no reason to believe this will overcome the structural problems of hallucinations and the long tail of exceptions that make the product unusable in safety-critical contexts.

Perhaps most important, Aschenbrenner’s arguments for unhobbling are expressed in the future conditional (“It actually seems possible that…,” “I’m increasingly bullish that…,” “It could”), justified with “an intuition pump” or a quickly dismissable comparison.

AI can still be an incredible tool without cracking these shortcomings. Models far from AGI can help unlock both tremendous solutions—such as augmenting doctors in patient diagnoses—and formidable harms—such as political deepfakes and disinformation. But until these shortcomings are addressed, AI will not be a superintelligent replacement.

Deep Learning Hasn’t Solved Every Problem

Aschenbrenner asks us to put faith in deep learning because of its history of success. “If there’s one lesson we’ve learned from the past decade of AI,” he writes, “[i]t’s that you should never bet against deep learning.” Elsewhere: “Over and over again, year after year, skeptics have claimed ‘Deep learning won’t be able to do X’ and have been quickly proven wrong.”

But his account of deep learning’s track record is misleading at best.

The obstacles detailed above have challenged the paradigm for years. Despite truly impressive gains, deep learning hasn’t crashed through every wall it’s faced. Problems of reasoning, data bottlenecks, and hallucinations—among others—have remained all along. Scale alone is not a solution. It will not catapult progress to AGI in three years without additional innovation.

***

One of the strangest parts of “Situational Awareness” is that Aschenbrenner often admits that the exercise he describes is fundamentally flawed, or faces a huge obstacle, but then continues anyway. He acknowledges that “[c]omparing AI capabilities with human intelligence is difficult and flawed” and then still uses the parallel. He accepts that hitting the data wall would invalidate his argument but omits the reasons why the data wall is coming. Even his cited sources sometimes note his arguments’ shortcomings. For instance, Aschenbrenner suggests that training AlphaGo offers valuable insights to training general intelligence. But he cites a talk where, within minutes, the presenter flags that most LLMs couldn’t be trained like AlphaGo due to the “[l]ack of a reward criterion” in most tasks. (The binary reward criterion in Go—a clear win/loss in every game—and constraints of a board differentiate it from the real world’s complex combinatorial explosion of scenarios and far murkier success metrics.)

He tucks in caveats that “the error bars are large.” But he places such qualifiers behind much more definitive headline claims, like: “Before the decade is out, we will have built superintelligence… Along the way, national security forces not seen in half a century will be unleashed.” He builds a book-length manuscript, concluding that “the free world’s very survival” is at risk, on these premises.

Maybe the answers to these questions are hidden behind nondisclosure agreements or in black boxes deep within AI development companies. Of course, I don’t have access to internal information at these companies. Perhaps foundation model labs are sitting on secret, yet-to-be-released breakthroughs that overcome all the challenges above. (Though this would contradict OpenAI Chief Technology Officer Mira Murati’s recent admission that their models behind closed doors “are not that far ahead [of] what the public has access to for free.”) If so, Aschenbrenner, OpenAI, or others should publish these studies for scrutiny and reproduction. Frontier model labs already publish many studies that advance narratives strategically beneficial to their fundraising. This would be no exception. Until then, these are gaps in the argument, and they should be treated as such.

Aschenbrenner released “Situational Awareness” amid headlines profiling leading figures in AI safety. Much of this attention is well deserved and should be celebrated. Many on OpenAI’s Superalignment team and beyond have risked losing millions of dollars in equity to raise the alarm about unrealized safety promises from executives like Sam Altman and have called for whistleblower protections. In the Economist, former OpenAI board members cited similar failed promises in calls for greater state regulation.

These individuals have done an incredible amount to bring AI safety to the prominence it now enjoys. Regardless of AGI timeline disagreements, they deserve our thanks. The same high stakes that motivate them demand a deeper look at pieces like “Situational Awareness.”

Building aligned AI is incredibly important. To make sound policy and set the right research agendas, we must meet the field where it currently is. The obstacles laid out in this piece, among others, give us reason to believe AGI development is likely to be far slower than Aschenbrenner proposes.


Trent Kannegieter is the chief of staff, Platform Division, at Blue River Technology. Before Blue River, he was chief of staff at SparkAI, a machine learning company acquired by John Deere in 2023. He is an incoming JD candidate at Yale Law School.
