
Afghanistan, Policy Choices, and Claims of Intelligence Failure

David Priess
Thursday, August 26, 2021, 12:40 PM

Intelligence failure can contribute to a policy debacle. But policies often break bad without such a convenient excuse—an important thing to remember as we reflect on President Biden’s Afghanistan decision and its execution.

President Joe Biden walks to the North Portico entrance of the White House, Tuesday, April 27, 2021. (Official White House Photo by Adam Schultz/https://flic.kr/p/2m3CwsF)


An old adage in foreign policy circles—heard most often from policymakers at the White House or the State Department or the Pentagon—is that there are no policy failures, only intelligence failures.

This, of course, is a landfill-sized garbage take. The origins of poor policy decisions go much deeper and much wider than that; intelligence has always been merely one input in a complicated, multifaceted policy process.

Yet it’s fair to admit a serious problem when intelligence is a crucial factor in national security decision-making and is also inaccurate, not objective, untimely, or poorly communicated. In those cases, “intelligence failure” fits the bill.

Without knowing the details of intelligence that had reached President Biden, observers have nevertheless been deploying the phrase regarding the situation in Afghanistan. “This is an intelligence failure,” Rep. Jackie Speier, D-Calif., told NBC News on Aug. 15—just after the network’s chief foreign correspondent, Richard Engel, tweeted about it as “a huge US intelligence failure.” The next day, Bill Roggio, a senior fellow at the Foundation for Defense of Democracies, called it not only “an intelligence failure of the highest order” but also the “biggest intelligence failure” since the missed Tet Offensive in the Vietnam War.

But we don’t yet have what we need to fairly assess the actual applicability of “intelligence failure” here. This post explains why.

How do intelligence failures occur?

Errors can occur at each fundamental stage of the intelligence cycle: collection, analysis, and dissemination. When wrong intelligence data or assessments factor into major national security decisions, you’ve got a no-kidding intelligence failure.

Let’s look at each of these stages in turn.

First, collection errors: Intelligence officers might fail to get information that is within reach and would inform a particular decision, they might collect such information but fail to process it, or they might gather and process incorrect information. Any of these leaves decision-makers with a less fully formed view of the situation at hand than should be expected. Think here of the lack of adequate raw intelligence about what was happening in Iran before the fall of the shah during the administration of Jimmy Carter, or of the inaccurate reporting from the source codenamed “Curveball” in the leadup to the 2003 invasion of Iraq.

Second, analytic errors: Intelligence officers might fail to effectively analyze available information. These missteps come in several forms.

Analysts might try their best but nevertheless present inaccurate judgments, by under-warning (discounting incoming information and thus failing to alert policymakers about its implications) or over-warning (weighing incoming information too heavily and thus issuing false alerts). One of the more striking such failures in U.S. intelligence history remains the failure to warn of the Egyptian attack on Israeli positions across the Suez Canal in 1973.

Analysts might also err by presenting biased assessments, losing their objectivity as they tell policymakers what they think policymakers want to hear—or, alternatively, taunt policymakers with what they think policymakers don’t want to hear. Either one is a corruption of the analytic process. My experiences (and those of other observers) suggest that such politicization both plays out less often than conventional wisdom would lead us to believe and, when it does emerge, comes more frequently from subconscious processes than from deliberate choice.

Even if analysts develop judgments that are correct and objective, they might still fail policymakers by presenting those judgments in an untimely fashion. Some analytic calls are inherently harder to make than others, as I’ll discuss below, but a core truth remains: On many international developments, it’s fair to label as “intelligence failures” predictions of events that arrive after those events have occurred.

Third, dissemination errors: Intelligence officers might make correct, objective, and timely assessments but communicate them ineffectively. The most logical, information-rich and ultimately accurate bottom lines do little good if they make it into documents that aren’t called to the relevant policymaker’s attention—or if they get lost in a multifaceted or incoherent oral briefing. To the extent that an important judgment isn’t presented clearly to the principal, the intelligence process falls short.

It’s a bit trickier when the intelligence officer thinks that the message has been delivered clearly enough, but the policymaker doesn’t see it that way. Henry Kissinger, national security adviser for two presidents, reportedly chided intelligence officers—who, he acknowledged, had briefed him on what proved to be a prescient intelligence warning—by saying, “You warned me, but you didn’t convince me.” In this case, context really matters. If a written intelligence product offers a bottom line but lacks the source material, logic, and argumentation to give the policymaker reason to take it on board, that’s an intelligence problem. If an intelligence briefer states an analytic bottom line but the policymaker was distracted and offered no sign of internalizing it, that’s an intelligence problem. But if the judgment was clear and effectively delivered, that’s a policymaker problem—even if that policymaker thinks the intelligence officer should have pressed the case harder.

At the level of the commander in chief, foreign policy crises tend to prompt queries like, “Was it in the President’s Daily Brief?” Although presidents receive intelligence analysis from many sources—the White House Situation Room, the national security adviser personally, other written intelligence products, oral intelligence briefings, to name only a few—it’s still a fair question. The PDB, for more than a half century, has stood as the tip of the spear when it comes to regular dissemination of intelligence analysis to the commander in chief.

If analysts or managers keep a piece of analysis that a reasonable observer would deem relevant to a presidential decision out of the PDB because they consider that analytic take to be below the threshold for inclusion, that’s an intelligence failure. If that same item makes it into the PDB, but the intelligence briefer neglects to ensure that the president sees it or hears about it, that’s an intelligence failure. If the briefer succeeds in calling attention to it but garbles the bottom line or undermines it, perhaps out of personal disagreement with the view presented in the PDB, that’s an intelligence failure.

How do policies break bad?

Taking a step back, let’s remember to resist the (almost irresistible) temptation to label as “failures” all policy choices that turn out poorly. A national security issue that cannot be resolved during the entire process from working-level interagency sessions through deputies’ committee meetings and principals’ committee meetings goes to the president’s desk. It gets that far precisely because there might be no universally recognized “good” options. Hollywood enjoys the freedom to craft Oval Office scenes with crystal-clear right vs. wrong choices for the president to give eloquent speeches about; real presidents much more often face true political and ethical dilemmas with downsides to every possible path forward.

But, sympathy for tough choices aside, yes—there still are bad policies.

And in some of them, none of the errors from the intelligence side of the relationship appear; the bad policy develops mostly or entirely from different factors. (Policy goes wrong, fundamentally, either if a suboptimal decision is made or if an optimal decision is made but the execution falters. For simplicity in this section, I’ll treat policy formulation and policy implementation as one process.)

Intelligence that has been appropriately collected, analyzed, and communicated might actually inform a decision, but a variety of other inputs in the policy process—ranging from consideration of foreign allies’ concerns or resource constraints to domestic political calculations or downright poor judgment—contribute to a disastrous policy outcome. Unethical officials might scream “intelligence failure,” in part because they know it’s harder (though clearly far from impossible) for intelligence officers to push back in public. That doesn’t make such outbursts valid.

And textbook cases of politicization—in which intelligence officers present clear but controversial judgments but policymakers take umbrage because the judgments clash with what they want to hear—place moral responsibility on the policymakers, not on the intelligence officers who are just doing their jobs.

Beyond these straightforward scenarios, context can limit how well tags of “intelligence failure” apply even to situations with some abnormalities in the intelligence-policy process.

Suboptimal intelligence might appear but not as the most significant factor in the ultimate problematic policy decision or implementation. Reasonable policymakers, in these cases, don’t try to punt responsibility to intelligence officers. George W. Bush, for example, has acknowledged that “false” intelligence about Iraqi weapons of mass destruction contributed to his choice to invade Iraq in 2003—but he continues to own that decision, writing in his memoir that “I decided early on that I would not criticize the hard-working patriots at the CIA for the faulty intelligence on Iraq,” and that he wouldn’t stoop to “finger-pointing.” Instead, Bush emphasized the non-WMD reasons for removing Saddam Hussein, “the man who had gassed the Kurds, mowed down the Shia by helicopter gunship, massacred the Marsh Arabs, and sent thousands to mass graves.”

In other instances, previous policymaker choices might plant the seeds for intelligence problems that later contribute to bad policy. Political leaders, for example, have forbidden gathering certain types of intelligence in particular countries—like Iran in the 1970s, when policy considerations severely limited collection on the Iranian opposition.

Intelligence officers and seasoned policymakers generally agree that one type of foreign event is extraordinarily difficult to predict confidently from afar: a third party’s overthrow of an established foreign government. Assessing foreign coup plotters’ true capabilities and likely support for their potential actions—not to mention their true intentions, which even the plotters themselves may not yet have settled on—is inherently challenging even with exquisite collection and analysis. “To predict a specific end of something—either a coup or that somebody’s going to lose power,” former national security adviser Brent Scowcroft told me, “is awfully hard to do.”

Can we assess whether intelligence failure contributed to the Afghanistan situation?

Although observers are now and will remain split on whether the initial policy choice—to remove remaining U.S. military personnel from Afghanistan—reflected wisdom or folly, they almost universally agree that its implementation, and the subsequent communication about it, fell somewhere on a scale between unfortunate and incompetent. Those discussions are taking place elsewhere.

Here, I’ll close by looking briefly at how intelligence failures might have contributed to Biden’s Afghanistan withdrawal decision and its implementation.

My bottom lines will disappoint almost everyone: We don’t know. And we’re unlikely to know with any confidence, unless a great deal more information becomes available.

What did intelligence community assessments back to the start of the year say about developments in Afghanistan—including the likelihood of Taliban advances, the ability and willingness of Afghan government forces to fight, the Taliban timetable for seizing territory, the credibility of any Taliban assurances to the U.S. about Kabul in particular, and so on? There have been to date only anonymous leaks of selected phrases from supposed intelligence documents, not enough to speak with any certainty on these questions.

Which of these assessments were presented to which policymakers? Were they communicated clearly? Which ones made it into the PDB? Were they orally highlighted, or pointed out for special attention in the written product? Did briefers consciously or subconsciously undermine the bottom line messages? On these questions, we know even less.

What policy processes took place surrounding the decision to withdraw U.S. forces by September, to announce that timetable to the world, and to execute the policy in the fashion we have seen? Did evolving intelligence assessments contribute to those ongoing processes at all levels, up to the consolidation of remaining U.S. forces at Hamid Karzai International Airport in Kabul? If not, why not?

The long trail of declassified intelligence assessments on other topics around the world strongly suggests that analysts predicted a Taliban resurgence, perhaps in general terms, and accelerated their timetable for Taliban encroachment on the capital, perhaps without specific dates for key developments. It wouldn’t surprise me to learn that intelligence officers honestly felt that point predictions would be inappropriately misleading under such rapidly changing circumstances and instead defaulted to what readers often decry as vague, low-confidence warnings. Or maybe I’m wrong, and the assessments were specific and nuanced even as the situation, hour by hour, evolved—and policymakers simply chose not to alter the policy’s execution.

But the reality is we lack reliable answers to many—most, in fact—of these questions. Some of them may only be known, if ever, after 40 more years—when the printed President’s Daily Briefs from this presidential term are declassified (assuming that a practice established in the Obama administration for declassifying PDBs continues). Even then, unless and until Joe Biden opens his mind and soul, we are unlikely to understand if he internalized the core judgments in any intelligence documents or briefings.

If enough political will coalesces, answers might come sooner from a robust investigation, akin to the 9/11 Commission. In that case, commissioners for the first time had access not only to policymakers (up to and including the president, with restrictions) but also to PDBs from two administrations (Clinton and Bush 43). The commission even published the text of two PDB articles, with only minor redactions, in its final report. A similar effort now could help answer many of these questions more fully than scattered leaks.


David Priess is Director of Intelligence at Bedrock Learning, Inc. and a Senior Fellow at the Michael V. Hayden Center for Intelligence, Policy, and International Security. He served during the Clinton and Bush 43 administrations as a CIA officer and has written two books: “The President’s Book of Secrets,” about the top-secret President’s Daily Brief, and “How To Get Rid of a President,” describing the ways American presidents have left office.
