Understanding Footnote 14: NSA Lawyering, Oversight, and Compliance

John DeLong, Susan Hennessey
Friday, October 7, 2016, 7:44 AM

In 2009, the government notified the Foreign Intelligence Surveillance Court (FISC) of a serious issue in the design and description of the National Security Agency’s (NSA) Business Records metadata program. In short, the NSA had implemented a part of that program using an erroneous interpretation of the term “archived data” that appeared in the court’s order. An inadvertent mistake in later reports to the FISC concealed the fact of the misinterpretation, which was incorporated into multiple reports over time.

Readers are likely aware of the incident, which has become a persistent reference point for NSA’s most ardent critics. One such critic recently pointed to a FISC memorandum referencing the episode as evidence that “NSA lawyers routinely lie, even to the secret rubber stamp FISA court”; another cited it in claiming DOJ’s attorneys made “misleading claims about the intent and knowledge NSA had about the phone and Internet dragnets” and that “NSA had basically willfully treated FISA-collected data under the more lenient protection regime of EO 12333.”

These allegations are false. And by insisting that government officials routinely mislead and lie, these critics are missing one of the most important stories in the history of modern intelligence oversight.

As people who served in the NSA during and after the time of this particular incident, we seek to offer a fuller account of this episode.

Why now?

In part, it is because of the renewed engagement on the subject of Edward Snowden. The Oliver Stone biopic, the pardon campaign, the interview with Snowden’s prior supervisor, and the Congressional report each advance their conclusions premised on the same basic oppositional narrative—that there is a good side and a bad side. This divide extends beyond the judgment of any one individual’s conduct or motivations; it defines a lens through which partisans evaluate the now-public record on NSA surveillance activities. This lens dictates whether any given piece of evidence shows patriotic civil servants, meticulously focused on law and policy, fighting to make the world a safer place for people and ideas, or instead confirms that NSA is an unaccountable agency, seeking to advance governmental power (or, at the extreme, achieve total population control) with little regard for laws and constitutional values.

The problem is that examining the record without setting preconceived notions aside, at least temporarily, often leads to distortions—and occasionally results in missing the real story and its lessons entirely.

Before proceeding, here is an obligatory caveat and disclosure: We both formerly served in the National Security Agency. The views expressed here are entirely our own and do not reflect those of the NSA, Department of Defense, any part of the federal government, or those of any of our current affiliations. Consistent with our clearance obligations, this article has undergone pre-publication review.

So we might be biased. But those with committed suspicion of the NSA might be too. And in this case, they are applying their particular lens to what is perhaps the most memorable product of this episode—the excoriating footnote 14 of the October 2011 memorandum opinion from the FISC. Footnote 14 reads, in part:

The Court is troubled that the government’s revelations regarding NSA’s acquisitions of Internet transactions mark the third instance in less than three years in which the government has disclosed a substantial misrepresentation regarding the scope of a major collection program.

In March 2009, the Court concluded that its authorization of NSA’s bulk acquisition of telephone call detail records . . . in the so-called “big business records” matter “ha[d] been premised on a flawed depiction of how the NSA uses [the acquired] metadata,” and that “[t]his misperception by the FISC existed from the inception of its authorized collection in May 2006, buttressed by repeated inaccurate statements made in the government’s submissions, and despite a government-devised and Court-mandated oversight regime.” . . . Contrary to the government’s repeated assurances, NSA had been routinely running queries of the metadata using querying terms that did not meet the required standard for querying. The Court concluded that this requirement had been “so frequently and systemically violated that it can fairly be said that this critical element of the overall…regime has never functioned effectively.” (Redactions in original.)

Specifically, critics conclude that this is proof that representatives of the government lied.

It’s tempting to respond to these accusations by defending the integrity of the individuals involved. After all, we know from firsthand experience that our former colleagues—both within the NSA and across the Department of Justice, the Office of the Director of National Intelligence, and the Department of Defense—serve the public with a high degree of integrity. But we think it is important to move beyond the focus on who is good and who is bad, and instead explore the history behind that footnote and the many lessons learned and incorporated into practice. After all, we are ultimately a “government of laws,” not of people.

On September 10, 2013, the government declassified and released a number of documents relating to the incident, including briefs and docket materials. The factual assertions we make here are drawn from those declassified documents, and readers can judge the submissions for themselves.

Here is our version.

What Really Happened

In 2006, the United States asked for, and the FISC approved, a set of orders under the Business Records provision of the Patriot Act. These orders, separated into a primary order (for the government) and a secondary order (for a corporation), permitted the NSA to receive compelled telephony metadata records – but not the content of any calls – strictly for counter-terrorism purposes under a regime that primarily limited the retention and use of those records. In the parlance, these orders are known as the Business Records order or BR FISA.

The primary order contained a number of specific restrictions on the NSA, and included the following paragraph:

Any search or analysis of the data archive shall occur only after a particular known telephone number has been associated with []. More specifically, access to the archived data shall occur only when NSA has identified a known telephone number for which, based on the factual and practical considerations of everyday life on which reasonable and prudent persons act, there are facts giving rise to a reasonable, articulable suspicion that the telephone number is associated with [] organization; provided, however, that a telephone number believed to be used by a U.S. person shall not be regarded as associated with [] solely on the basis of activities that are protected by the First Amendment to the Constitution. [Order, docket number BR 06-05 at 5.] (Redactions in original.)

This paragraph, imposing requirements on the NSA specifically in its handling of archived data, is the crux of a trifecta of disaster and is the initial spark of footnote 14. The simplest way to understand the episode, which occurred over two and a half years, is through that trifecta: the interpretation, the misrepresentation, and the delay.

To frame the discussion that follows, there were three major parts of the BR FISA program. First, compelled telephone metadata came into the NSA daily as a result of the secondary order. Think of it like a stream of metadata. Second, that incoming stream of metadata was put into a big pool, along with prior metadata, and all of it was retained for a limited period of time. Third, when a person (an analyst) wanted to search that pool, they had to demonstrate that the number they used to search met a specific standard of proof necessary to ensure that the connection to terrorism was more than a hunch, albeit less than a certainty. That standard of proof is known as the reasonable articulable suspicion standard (the RAS standard).

That's a very simple description—but in what follows the details really matter.

The Misinterpretation

It is important to understand how and why the NSA arrived at an interpretation regarding the term “archived data” that differed from the FISC’s.

Acting under the BR order, NSA actually applied an intermediate step to the incoming stream of BR metadata, a step known as the alert process. The alert process notified counterterrorism analysts if and when any record in that incoming stream of metadata matched with known identifiers of interest to counterterrorism analysts. Think of this as a bell that would ring one time if, on that day, a known telephone number associated with terrorism was actually being used to make or receive a call. It wasn't a retrospective look through the metadata, but a single real-time alert.

The alert list—against which incoming BR metadata was compared—contained two partitions: telephone identifiers that had been determined to satisfy the RAS standard and those that had not yet been assessed for RAS. If a RAS-approved identifier triggered an alert, it was automatically used as the “seed” for searching further into the pool of metadata, the BR analytical repository. In the event a non-RAS approved identifier triggered an alert, a CT analyst was notified so that a RAS assessment could be attempted. If the identifier satisfied the RAS standard, then, and only then, could it be used to search further into the pool of metadata. A distinct list, called the station table, held records of every identifier that had been assessed for RAS and the outcome (whether it met the standard or not). The station table was used to ensure only identifiers that had been RAS-approved were used for searches into the pool of metadata.

Sound complicated? It is.
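For readers who think in code, the following is a minimal sketch, in Python, of the two-partition logic just described. Every name here (StationTable, AlertProcess, and so on) is our own illustrative invention based on the declassified descriptions, not an actual NSA system or implementation.

    from dataclasses import dataclass, field

    @dataclass
    class StationTable:
        # Every identifier ever assessed against the RAS standard, and the outcome.
        assessments: dict[str, bool] = field(default_factory=dict)

        def is_ras_approved(self, identifier: str) -> bool:
            return self.assessments.get(identifier, False)

    class AlertProcess:
        def __init__(self, station_table: StationTable) -> None:
            self.station_table = station_table
            # The two partitions of the alert list described above.
            self.ras_approved: set[str] = set()
            self.not_yet_assessed: set[str] = set()

        def on_incoming_identifier(self, identifier: str) -> None:
            # Compare one identifier from the incoming metadata stream to the alert list.
            if identifier in self.ras_approved:
                # A RAS-approved match automatically seeds a search of the repository.
                self.query_repository(seed=identifier)
            elif identifier in self.not_yet_assessed:
                # A non-RAS match only notifies an analyst, who may then attempt a RAS
                # assessment; the repository cannot be searched unless that succeeds.
                self.notify_analyst(identifier)

        def query_repository(self, seed: str) -> None:
            # The undisputed gate: no search of the stored pool of metadata
            # without a RAS-approved seed, checked against the station table.
            if not self.station_table.is_ras_approved(seed):
                raise PermissionError(f"{seed} is not RAS-approved")
            # (contact chaining into the analytical repository would happen here)

        def notify_analyst(self, identifier: str) -> None:
            print(f"alert: {identifier} matched the incoming stream; RAS review may follow")

Note that both partitions feed the comparison against the incoming stream. As will become clear below, the dispute turned on whether that comparison step itself, and not just searches of the stored pool, required RAS approval first.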

Because of the complexity of the alert process and the seriousness of activity conducted pursuant to court orders, NSA operational personnel sought advice about implementing the paragraph of the BR FISA referenced above. Specifically, shortly after the order was issued, operational personnel asked NSA lawyers to review a set of internal BR procedures governing how and when the metadata could be accessed—both as it was coming into the NSA for the first time (the stream) and as it was in the database (the pool). These internal procedures described the existence of the alert list, which had both RAS and non-RAS identifiers.

The internal BR procedures expressly noted that as metadata records first came into NSA, those records would be compared against the “alert list” to see if there were any matches. If there was a match, NSA’s counterterrorism analysts would be alerted and then, depending on the priorities at any given time, the analyst might query the entire database of metadata if the specific “alerted” number had already met the higher RAS standard or, in some cases, seek to establish that higher standard, in which case the entire database of metadata could then be queried.

Given the terms of the primary order, the operative legal question here is what constituted “archived data,” since a search of archived data triggered the obligation that the RAS-standard be satisfied. At NSA “archived data” often referred to analytical repositories and did not include the processing and other steps that occur in order to make incoming collection useful for analysis or analysts. After all, if archived data is the same as all data, then what is added by the term “archived” in front of data? Consistent with the interpretive principle that all words be given meaning, NSA understood the term archived to mean stored.

That is the definition NSA lawyers used when determining whether the procedures were consistent with the order. The Office of General Counsel (OGC) concluded that the RAS-standard requirement was triggered when analysts sought access to the stored (archived) repository of BR FISA data. And so they signed off on the procedures, while specifically noting the importance of making a RAS determination before querying the archived data (in the pool).

But for the purposes of the order, the Court intended archived data to mean all metadata held at NSA, including data in processing (in the stream). This meant that the alert process, which compared non-RAS identifiers against incoming metadata and only then triggered RAS-standard review, rested on an invalid interpretation of the order’s words. The RAS-standard review needed to occur before any list was compared, not after a match was discovered, even as applied just to the incoming stream of metadata. (To reiterate, there was no difference of opinion about the need for a RAS determination before searching the entire database of stored metadata.)

NSA’s skeptics view this as intentional deception on the part of the agency. They allege that NSA adopted a definition it knew was different from the FISC’s for the purpose of exceeding the order. Critics examining the episode see further confirmation that the government uses secret definitions in order to mislead courts and the public. But the idea that “archived” meant stored data—or at least something different from the unmodified term “data”—was at that time unremarkable at NSA. In fact, no one ever flagged this interpretation as problematic—not the operational groups, and not the lawyers reviewing the program. Even the IG’s review did not note any problem—though it did suggest that the FISC be provided with a description of the alert list process.

Initial definitional confusion regarding complex systems is to be expected. The oversight regime tried to account for this by requiring agencies to initially and then periodically report on the implementation of programs. This error should have been detected when DOJ and NSA submitted reports on the BR program implementation to the FISC. The court should have seen an accurate description of the operation of the alert list, and thus known immediately that the implementation was based on a non-compliant interpretation.

But that didn’t happen.

The Misrepresentation

The second critical piece of the story involves understanding how an error in a report to the FISC, compounded by the misinterpretation discussed above, obscured the fact that there was a problem.

That secondary layer of judicial oversight, designed to catch both interpretation and implementation errors, depends on reports being accurate. If those reports are wrong, the oversight process doesn't have the information it needs. And here, one serious definitional breakdown compounded another.

A lawyer at NSA prepared the first draft of the BR FISA report for the FISC—a draft to be reviewed by NSA operational personnel, additional attorneys, and finally DOJ, which submits it to the court. Being a lawyer, and not an engineer, the NSA lawyer mistakenly described the process as follows in that draft:

Each of the foreign telephone numbers that comes to the attention of NSA as possibly related to [] is evaluated to determine whether the information about it provided to the NSA satisfies the reasonable articulable suspicion standard. If so, the foreign telephone number is placed on the alert list; if not, it is not placed on the alert list.

The process set out above applies also to newly discovered domestic telephone numbers considered for addition to the alert list, with the additional requirement that NSA’s Office of General Counsel reviews these numbers and affirms that the telephone number is not the focus of the analysis based solely on activities protected by the First Amendment. (Redactions in original.)

That, of course, is factually incorrect. It represented that only RAS-approved numbers were on the alert list. But in reality, most of the alert list consisted of telephone numbers that were not RAS-approved.

Here is where the lens one brings really matters; it’s where two entirely different versions of the story emerge. Critics might see—and have seen—a vast conspiracy between many parts of NSA and the Department of Justice to lie to the FISC and then further conceal that lie. How convenient that one mistake would disguise another. On the other side, those within and sympathetic to the government might see a shocking but inadvertent disconnect that sparked a storm yet never actually drove searches into the full database of metadata: so, perhaps, no major foul?

But both of those versions miss the important lessons to be learned here. No one lied, yet the structural failures and compounding errors that followed raised serious questions about intelligence lawyering, oversight, and compliance. The real implications were worse than those conveyed by either sound-bite story: an inconsequential mistake can be fixed with a filing update and corrupt government officials can be fired, but a good-faith error going undetected for years means rebuilding a lot of things, some from scratch. And that’s what the government ultimately did (but we’ll get to that part later).

The lawyer circulated the rough draft—which included the error—to others at NSA with a note: “This is NOT ready to go until it is reviewed again … I have done my best to be complete and thorough, but … make sure everything I have [said] is absolutely true.”

No one corrected the mistake. The operational reviewers presumed the lawyer’s description was legally accurate, because the lawyers had prepared the report and previously signed off on the internal BR procedures (which were blessed based on the interpretation of archived data). The draft didn’t include a full description of the partitions—and thus non-RAS identifiers—but the operations personnel assumed the omission was a matter of legal distinction (that is, that only the RAS identifiers were relevant for the report) since no contact chaining was authorized against the BR FISA archived database without meeting the RAS standard. In short, the lawyers were relying on the operational personnel to identify relevant errors, and the operational personnel were relying on the lawyers to know what was legally relevant. DOJ lawyers relied on NSA’s representations about the programs, and the FISC relied on the DOJ-approved reports.

We suspect anyone who has served in government is nodding their head—it’s a classic bureaucratic screw up. Except, here, it was a lot more than that. This was a secret program authorized by a court operating in secret that implicated the rights of people around the world. The intelligence community cannot do anything that is not legally authorized. That misrepresentation meant that the court did not have the information it needed (and in fact had the wrong information) to either know up front or conduct oversight regarding the implementation of its order—the purpose of the entire regime. And, it turned out, a serious difference of interpretation lay at the core.

Naturally, it matters that the conduct in question was a mistake and not a lie. But in the realm of IC oversight, where there are heightened expectations of accuracy and candor, the result was unacceptable.

Still, it is worth pausing here to note what is probably the single most important lesson, and one that extends beyond the intelligence community. Unless the technical personnel in your organization are directly involved in the review of externally-relevant legal documentation, you are kidding yourself if you think you have done enough to ensure that you are acting consistently with the law, policy, and values to which you hold yourself as an organization. Even then, mental models matter, and what your organization may in good faith think is a reasonable legal interpretation might nevertheless be clearly erroneous to a higher authority, which operates with a different mental model. (Another word of advice: if your organization’s externally facing privacy policy hasn’t been reviewed by in-the-weeds members of your information technology department, now is the time to bring them in and be prepared for some difficult discussions—but it will save you significant trouble, particularly going forward.)

The Delay

The final piece underlying footnote 14 is how the misrepresentation—which obscured the misinterpretation—was perpetuated over time.

The original report became the template for future reports to the FISC—that is, the incorrect language describing the alert process was repeated without close examination in multiple reports between May 2006 and December 2008. All the oversight entities had a single-minded focus on ensuring implementation compliance for queries into archived metadata and the contact chaining process. The alert list was not at the top of the priority list.

Because there was no indication that the interpretation was a problem, all oversight and compliance bodies meticulously focused on the implementation, not realizing that the foundation they were standing on was infirm. Representatives from DOJ and NSA met periodically to discuss the program, to ensure that DOJ was making accurate representations to the court, and to ensure that implementation non-compliance incidents were detected and promptly reported.

Then in a January 9, 2009 briefing on metadata collection, DOJ and NSA discussed the alert list in detail. All it took was a single conversation to raise red flags that something had gone seriously wrong. In this meeting, DOJ learned for the first time that the alert list compared non-RAS-approved identifiers to the incoming stream of BR metadata. DOJ’s lawyers were stunned. They left the briefing and immediately sent an email to NSA asking that the agency confirm, in writing, that the alert list process worked as described in the briefing. NSA conducted a detailed internal investigation and formally responded to DOJ on January 15, confirming Justice’s understanding was correct. That same day, a preliminary notice of compliance incident was filed with the FISC.

The following day, January 16, NSA attempted to resolve the problem through a software fix and began the process of constructing a RAS-approved-only alert list. The software fix was unsuccessful, and so on January 24—two weeks after the original meeting—NSA shut down the alert process entirely.

It bears noting that this element of oversight does not get much external attention. Most people focus on the adequacy of the oversight systems themselves or point to the substance of the incident of non-compliance. But if we can offer one insight from the experience of lawyers and compliance officers at the NSA, it is that the response to a non-compliance event is absolutely critical.

When a major incident of non-compliance comes to light, the formal record understates the flurry of activity that follows. Everyone emerged from that January 9 meeting aware that this was a very big, very bad deal. We can’t speak for the precise response process that occurs at DOJ or other agencies, but NSA responds the way it knows best: intel. Relevant facts are assembled, relevant emails are produced, relevant documents are reviewed. It’s all analyzed together and senior leadership is quickly briefed. Just as the Intelligence Community provides United States decision-makers with critical information, at NSA it is considered critically important, when responding to an incident, that senior leadership is aware of what is known, how it is known, and what remains unknown. And that’s what happened here.

This instinct is contrary to what NSA critics might expect. They often suppose that the fundamental tendency is towards secrecy or obfuscation. That charge may have merit in the context of public engagements, but it simply isn’t true in scenarios like this one. With the foundational legal interpretation in jeopardy, General Alexander, the Director of NSA at the time, personally informed then-DNI Dennis Blair and then-USD(I) James Clapper. The DNI General Counsel received ongoing updates. Once DOJ notified the Court of the incident, NSA notified its own Inspector General. And the incident was formally reported to the Assistant to the Secretary of Defense for Intelligence Oversight in the quarterly report, which was also provided to the President’s Intelligence Oversight Board. DOJ itself notified the FBI General Counsel. No one knew the exact resolution of the issue, but it was clear the journey ahead would be extremely turbulent.

The FISC was extremely alarmed, to put it mildly.

The court is often accused of being “a rubber stamp,” legitimizing intelligence activities without providing any real checks or constraints. But the episode here demonstrates the precise opposite. The Court ordered the NSA to provide a sworn affidavit answering a number of specific questions, including the names of individuals who were aware of the function of the alert system, a full description of how the issue came to light, and why none of the entities tasked with oversight had identified the problem earlier.

The Response, Lessons Learned, and the Birth of Modern Compliance

The three elements above comprise the story of the non-compliance. But the real story—and the most important lessons to be learned—is as much the response as the incident itself.

The NSA produced a signed declaration outlining what happened and provided information to answer the Court’s directed questions. The Department of Justice then prepared a cover filing, forwarding the NSA declaration and also using the declaration to address a number of more legally-focused questions from the Court. (This two-step is a common practice since only the Department of Justice can represent the executive branch in front of the Court and only an agency like the NSA can really assert facts about its activities.)

This NSA declaration listed in detail who knew what and when. A table in the NSA report summarized who had been knowledgeable about the functioning of the alert list and whether they had been asked to review the accuracy of the descriptions going to the Court. Essentially, the critical information was known to a set of people, in particular to technical personnel who had not been asked to review the accuracy of those descriptions. While technical personnel were certainly involved in implementation, they were not as involved as the operational personnel in communicating with lawyers, either inside or outside of NSA.

And based on what we know now, it isn’t likely that simply including technical personnel in discussions with lawyers would have magically averted the crisis and surfaced the issue of the legal interpretation, but they might have flagged the description of the alert list as potentially needing more detail. In any case, in retrospect, they should have had the opportunity. What occurred here is more a matter of interpretational non-compliance than implementation non-compliance. But that's a distinction without a difference, especially in light of a seriously concerned Court, a shocked Department of Justice, and an introspective and regretful NSA.

NSA is a deeply analytical place, and no one wanted this to happen again, so the analysis continued in earnest. The NSA declaration asserted that a root cause of the failure was that no one person had a complete legal, operational, and technical understanding. NSA first looked at remedies within the government: more audits, more external oversight, more lawyers? Those all might help, but those solutions seemed more evolutionary than revolutionary. NSA recognized the problem was not a discrete BR FISA issue to be fixed; a solution would need to work for all of NSA’s activities.

Many commentators’ compliance timelines start in 2009, but in reality NSA’s internal compliance office had tripled in size between 2006 and 2009 to nearly 90 people. The first part of the solution lay with that compliance group, embedded within the operational part of NSA, as it had already started defining its value as something different than paralegal support or non-compliance incident management. The growing compliance office had begun in earnest to help NSA build quality in, rather than just drafting documents, counting reported incidents of non-compliance, and bird-dogging fixes. At the time, compliance focused more on people than systems, but the compliance office’s firm existence in NSA proved fortuitous—and spared the agency the monumental task of building from the ground up in 2009. Those personnel who worked in and grew that office over those years are owed a great deal of gratitude.

The second part of the answer, ultimately, came from outside the government. The corporate compliance profession was growing. Its focus was not purely on legal interpretation, and in fact the most progressive compliance officers were not within their organizations’ legal departments at all. The profession did not conduct audits, operations, technology, or independent oversight alone, but instead built processes and programs to integrate those components into something greater than the sum of the parts.

As a profession, corporate compliance officers and programs trace their roots in part to the 1990s, when health care organizations sought better practices and programs to safely handle medical information under increasing rules, internal expectations, and external scrutiny. The core idea is simple, but powerful: get out of the stands and onto the field and be a rule coach. Don’t set the rules—that's the lawyers and privacy officers—and don't ultimately call the balls and strikes—that’s the referees, with the help of the lawyers and oversight. Instead, compliance focuses on training and equipping people and systems to more consistently follow the rules. Help people identify when they need to ask more questions; help system designers understand and apply the rules so that it works in practice and not just on paper.

Accomplishing that effectively requires understanding the resources, incentives, and—most importantly—unique dialects of different parts of the organization. This last part involves more than just translation. You have to help people interlock over time and build effective bridges in order to avoid the biggest pitfall that occurs when legal, technical, and operational personnel communicate: the illusion that the job has been accomplished when it really hasn’t, to borrow from George Bernard Shaw.

That illusion took hold in 2006 in the BR FISA program, internally to NSA and even between the DOJ and NSA attorneys. Once created, it continued for years. Document after document was transferred around with everyone falsely believing that the communications were comprehensive and had accomplished their purpose. Recognizing the problem, however, is not the same as learning the lessons required to fix it.

  • Lesson #1: Interpretational Compliance (i.e. legal compliance)

Broader root cause analysis aside, the BR FISA debacle made clear that the specific matter of shared legal interpretation needed to be addressed. Moving forward, the government agreed that NSA would coordinate all significant legal interpretations with DOJ. That sounds like an easy solution, but making it meaningful in practice is highly complex. Consider this example: a court order might require that “all collected data must be deleted after two years.” NSA engineers must then make a list for the NSA attorneys:

  1. What does deleted mean? Does it mean make inaccessible to analysts or does it mean forensically wipe off the system so data is gone forever? Or does it mean something in between?
  2. What about backup systems used solely for disaster recovery? Does the data need to be removed there, too, within two years, even though it's largely inaccessible and typically there is a planned delay to account for mistakes in the operational system?
  3. When does the timer start?
  4. What’s the legally-relevant unit of measurement for timestamp computation—a day, an hour, a second, a millisecond?
  5. If a piece of data is deleted one second after two years, is that an incident of noncompliance? What about a delay of one day? [Note: the word “material” does not appear in Rule 13(b) of the FISC Rules of Procedure, which addresses reporting requirements for incidents of noncompliance with a court order.]
  6. What about various system logs that simply record the fact that NSA had a data object, but no significant details of the actual object? Do those logs need to be deleted too? If so, how soon?
  7. What about hard copy printouts?

And that is only a tiny sample of the questions that need to be answered for that small sentence fragment. Put yourself in the shoes of an NSA attorney: which of these questions—in particular the answers—require significant interpretations to be coordinated with DOJ and which determinations can be made internally?
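To make the ambiguity concrete, here is a minimal sketch, in Python, of what that single sentence fragment might become once an engineer has to encode it. Every constant and choice below is hypothetical and ours alone; the point is that each one is an interpretive decision someone must be authorized to make.

    from datetime import datetime, timedelta, timezone
    from typing import Optional

    # Hypothetical answers to the questions above; none of them appears in the
    # order's text, which says only "deleted after two years."
    RETENTION_PERIOD = timedelta(days=2 * 365)  # Q3/Q4: timer runs from receipt, measured in days
    DELETION_GRACE = timedelta(days=1)          # Q5: a one-day overrun is tolerated
    # Q1, Q2, Q6, Q7: assume logical deletion only, with backups, logs, and
    # hard copies governed by separate (and here unwritten) rules.

    def is_compliant(received_at: datetime, deleted_at: Optional[datetime], now: datetime) -> bool:
        # One possible reading: the data must be gone within two years (plus grace) of receipt.
        deadline = received_at + RETENTION_PERIOD + DELETION_GRACE
        effective = deleted_at if deleted_at is not None else now
        return effective <= deadline

    # A record received 732 days ago and never deleted misses the 731-day deadline:
    now = datetime.now(timezone.utc)
    print(is_compliant(now - timedelta(days=732), None, now))  # False: a reportable incident?

Change any constant and the compliance verdict for the same record flips, which is exactly why deciding which of these determinations are significant enough to coordinate with DOJ, or to surface to the Court, is so hard.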

Now put yourself in the shoes of a DOJ attorney who receives from an NSA attorney a subset of this list for advice and counsel. Which questions are truly significant from your perspective? Are there any questions here that are so significant they should be presented to the Court, so that the government can be sufficiently confident that the Court understands how the two-year rule is really being interpreted and applied?

We imagine that the individual reader—whether lawyer, engineer, or otherwise—may think the answers to the above are obvious. And for an individual bringing a particular perspective, they might be. But a functioning system of checks and balances cannot be based on a single individual’s decree. There has to be a shared understanding among numerous people—no one person can be asked to know or decide all. Doing so would vest one person with too much unchecked power, so by design the substantive decisions—as well as deciding what needs to be decided and at what level of detail—are distributed.

That's the real challenge here and also the root of the second lesson.

  • Lesson #2: Shared Understanding and Implementational Compliance

As the subsequent court filings show, the NSA launched a full end-to-end review of the BR FISA program. The root cause of the initial error (the interpretation) was more general than specific, and so no stone was left unturned. As each system, data flow, and operational process was evaluated against each sentence in the primary order, more specific points of non-compliance were uncovered. It was like peeling back an onion: each layer revealed new information and prompted more tears.

The process was neither painless nor pretty, but at the end of the review, after correcting all of the uncovered point errors, NSA could say with higher confidence that it was acting consistently with the court order. Confidence at a particular moment in time is one thing; trust requires keeping promises over time. And the NSA—and the FISC and Congress and the executive branch—needed to explain how promises would be kept over time in light of continuing changes in technology and people. This is distinct from accuracy (think legal interpretation) and is instead a question of precision. Precision is the central function of compliance: accuracy, achieved every second of every day.

So NSA did what many other corporations were doing at the time and appointed a Director of Compliance, with direct access to the top leadership and the power and authority to guide implementation of (and sometimes direct) procedures and practices that integrate and synchronize the different elements of the organization for a singular purpose: precision, maintained over time and change, with respect to the application of law and policy.

There were a number of compliance program elements added over the years that followed, but one of the first was the result of NSA, DOJ, and ODNI personnel developing a rigorous process to draft and review documentation going to the Court. “Verification of Accuracy,” as it’s known, aims to have the correct parts of NSA involved in drafting and reviewing documents and provides multiple opportunities for shared understanding to flourish and disconnects to be revealed. No process is infallible or free from needed improvement over time, but the Verification of Accuracy process has paid untold dividends. A second, created to help prevent a repeat of the “delay” disaster, was a “technology compliance” office charged with driving a more continuous process of technology compliance and involvement in legal and oversight matters.

And we haven’t even gotten into all of the issues contained in footnote 14. We’ll leave those for another day, though they occurred in the same period and are closely linked, in root cause, to the errors discussed here.

One could ask—and many did—why those subsequent problems weren’t uncovered sooner. It’s a fair question and we have no easy or magic answer. But it is important to recognize that as compliance programs develop and integrate into an organization, inevitably there is an uptick in uncovered errors. It may at first seem counterintuitive, but an initial uptick in errors is a sign things are working. It’s one reason why raw yearly incident counts, without additional context, can be a misleading measure of oversight or compliance failure. The aim is always zero errors, of course, though realistically the best one can probably hope for is a low error rate and rapid discovery after one occurs. (In fact, a seasoned compliance professional eyes organizations with zero reported errors with a high degree of suspicion.)

The compliance structure and processes added in the wake of that 2009 discovery remain in place today, and Congress made the NSA Director of Compliance a position required by law as part of the Intelligence Authorization Act of 2010. The Business Records program no longer exists in the form contemplated here. But the reality is that some of the most meaningful reforms at NSA—perhaps more significant than the USA FREEDOM Act which eliminated the program itself—were set in motion long before 2013. And they happened because, when confronted with a consequential mistake, everyone told the truth, no matter the short or long term consequences.

A Concluding Thought on Trust and the Spirit of Liberty

We don’t presume to have convinced anyone to set aside his or her beliefs about NSA, or even that person’s own version of this particular story. But we do hope to have added to the discussion. Of course, we have our own thoughts about the events of 2013, about who might be good and bad and both and neither. But if we have learned anything over the past years, it is that people ultimately don’t, and probably can’t, trust institutions. They are asked to trust the people in those institutions.

Professor Geoffrey Stone, a storied civil libertarian who served on the President’s Review Group, captured the same idea in noting his pleasant surprise after diving deeply into the most-classified parts of NSA. Contrary to all the vilification of NSA he had heard, he found the people there to universally share the values of our Nation and our allies. And in the same breath, he asserted that NSA, as an institution, should never be granted universal trust.

We agree. Our Nation’s ethos is built on distrust of institutions, especially governmental ones. This is a good thing. Organizations need broad scrutiny and thoughtful skepticism. But the people who serve in them rarely deserve such dragnet and non-particularized accusations—at least not without a reasonable articulable suspicion, to borrow a phrase. And so we rest our defense.

In the days after June 2013, Learned Hand’s 1944 refrain echoed throughout the halls of NSA: “Liberty lies in the hearts of men and women; when it dies there, no constitution, no law, no court can save it; no constitution, no law, no court can even do much to help it.”

We’ve tried to provide a glimpse into the institutional protections and processes at work, but we cannot pretend to convince anyone of the hearts of the men and women charged with abiding them. We know a great deal about the details of compliance processes and FISC oversight. But when we see that set of glassy reflective NSA buildings, much of our trust is rooted in knowing the people inside; we understand how ominous those same buildings can seem to those who don’t.

Transparency is too weak a word to describe what is needed going forward to help keep liberty in the hearts of people around the world. Yes, the ever-growing set of laws and associated safeguards can provide some intellectual comfort. But whether one views footnote 14 as an example of empowered oversight or a terrifying failure of the same is ultimately a question of the heart. It’s either many bad people lying or it’s civil servants doing their best to correct a mistake. But an open mind is needed to actually understand the processes at play, and to draw the lessons about what meaningful oversight, constraint, and transparency might look like. If the point here is to get to better answers—to right answers, that is, and not the satisfying murmur of a social media echo chamber—then unthinking suspicion is as blinding as unthinking credulity.

We’ll end with more from Learned Hand’s speech that day in 1944, which seems mightily applicable in 2016:

The spirit of liberty is the spirit which is not too sure that it is right; the spirit of liberty is the spirit which seeks to understand the minds of other men and women; the spirit of liberty is the spirit which weighs their interests alongside its own without bias.


John DeLong is a fellow at Harvard University's Berkman Klein Center for Internet & Society. He is focused on cybersecurity and the implications of technology changes—including machine learning and artificial intelligence—on the framework and best practices for compliance, oversight, and ethics programs. He was formerly with the National Security Agency, where he was the Director of Compliance from 2009 to 2014. He received a B.A. in Physics and Mathematics from Harvard and a J.D. from Harvard Law School.
Susan Hennessey was the Executive Editor of Lawfare and General Counsel of the Lawfare Institute. She was a Brookings Fellow in National Security Law. Prior to joining Brookings, Ms. Hennessey was an attorney in the Office of General Counsel of the National Security Agency. She is a graduate of Harvard Law School and the University of California, Los Angeles.
