Speech of NSA General Counsel Glenn Gerstell to 2018 ABA National Security Law Conference

Matthew Kahn
Tuesday, November 6, 2018, 8:16 AM

National Security Agency General Counsel Glenn S. Gerstell made the following keynote address on Nov. 1 at the American Bar Association's Annual Review of the Field of National Security Law Conference. (Footnotes omitted.)

Starting my remarks with a short quotation from a hearing before the U.S. Senate seems fitting given that we’re at a legal conference in Washington:

Computers are changing our lives faster than any other invention in our history. Our society is becoming increasingly dependent on information technologies, which are changing at an amazing rate. Combine this rapid explosion in computing power with the fact that information systems are being connected together around the world without regard to geographic boundaries. This ... offers both opportunities and challenges ... [among them] vulnerabilities which represent severe security flaws and risks to our nation’s security, public safety, and personal privacy.

That quotation sounds like it might have come from a hearing earlier this year. But it was said by Senator Fred Thompson more than twenty years ago, well before the invention of the iPhone or YouTube, and just at the dawn of email.

The hearing, actually the first ever Congressional hearing on cybersecurity, featured some hackers who gave the Senators a clear and simple message: our computers, networks, and software are dangerously insecure. Despite this, it would take decades for our nation to appreciate the cyber threat, during which time we would see a steady accretion of malicious cyber activity.

Inflection points often go unnoticed, and in retrospect, it’s really not that surprising that the hackers’ testimony wasn’t appreciated for the dire warning that it represented. Looking back at the 1990s, we can now realize that, as the Internet was taking off, perhaps we missed an opportunity to chart a different course as to our cybersecurity.

I bring this up today because we stand in an analogous moment in history. If twenty years ago represented a tipping point of sorts for the Internet, then perhaps we are now at, or indeed even past, a comparable tipping point as to the broader digital revolution. The so-called “fourth industrial revolution” is upon us. As commentator Kevin Drum recently put it well in Foreign Affairs, the world sits at the dawn of a new age, and technological advances are set to make traditional forces of change no more than “mere footnotes … when we—or our robot descendants—write the history of [this] digital revolution.”

Maybe it’s no surprise that we are missing this tipping point too. Both the statistics—such as 20 billion connected devices—and the very concepts of profound change that we hear from futurists and technologists are mind-numbing.

Of course, we aren’t doomed to watch this wave of profound change wash over us without some consideration. Are we missing another opportunity here? Challenging though it may be, we can examine and prepare for some aspects of this digital revolution that will have implications for us as fundamental as those the industrial revolution had for 19th-century Western society.

That revolution will have one particular consequence that will impact every one of us in personal and far-reaching ways, and it’s one that has special meaning for us as lawyers—I’m speaking of the effect on our privacy.

Although we continue to forge ahead in the adoption of new technologies, we simply haven’t confronted, as a U.S. society, what privacy means in a digital age. If you look at the advent of other novel technologies—from the automobile to electricity—regulations inevitably lagged, but we didn’t let technology get too far out in advance before our laws and societal norms caught up.

But not so today. Has there ever been a time where technological change has been this rapid, this ubiquitous, and this impactful? It’s no wonder that our societal norms and legal structures, especially in the area of privacy, have failed to keep pace. It’s worth examining those gaps so we can see where additional thinking and action will be required.

Given my vantage point and this occasion, I will focus on how the federal government affects the privacy rights, or at least the expectations, of the public. I’ll start by looking at the approach taken by the judiciary in fashioning the scope of our privacy interests and then turn to examples in the legislative arena. I will move on to implications for the private sector and then conclude by suggesting what our responsibilities as lawyers are in this critical area.

So let’s start our examination with an overview of how our judicial system has constructed our privacy regime, at least relative to the federal government. Privacy in the U.S. is a notion that has traditionally been rooted in the Fourth Amendment. Perhaps that comes as no surprise given how our country was formed, and how one of the enduring debates throughout our history has been the scope of the government’s involvement in our society. In any event, you may recall that the text of the Fourth Amendment makes no mention of the word “privacy,” and nowhere else in the Constitution or the Bill of Rights is a general right to privacy expressed. This is understandable, though, when you consider both the rudimentary state of technology at the time and the fact that the Fourth Amendment grew out of the experiences of the colonists, who resented the British Crown’s use of writs of assistance to force entry into their homes. The Fourth Amendment didn’t mention privacy, then, because protecting one’s physical property from unreasonable searches and seizures was sufficient. This also explains why, if you had reviewed the first hundred years’ worth of the Supreme Court’s many occasions to examine the Fourth Amendment, you would have found cases focusing on physical intrusion and property rights, but not a word about a privacy interest as such. Nor was there a decision, when the requisite technology later developed, that electronic surveillance itself qualified as a search or seizure for purposes of that amendment.

The clearest expression of the need for a change in legal approach appears in the prescient writings of Justice Louis Brandeis, who typically saw where the law ought to, and usually did, go. In his seminal law review article with Samuel Warren entitled “The Right to Privacy” and in his famous and farsighted dissent in the 1928 Supreme Court case of Olmstead v. United States, Brandeis proposed to separate the concept of privacy from other legal principles, and recognize it as something entirely distinct. But you would have had to wait until 1967 for the Supreme Court, in Katz v. United States, to adopt that concept and overturn the almost four-decade-old ruling in Olmstead. Writing for the majority, Justice Potter Stewart held that the Fourth Amendment protects people, not places, and in his concurrence, Justice Harlan fleshed out a test for identifying a “reasonable expectation of privacy.” This test was then further defined throughout the 1970s in United States v. Miller and Smith v. Maryland, where the Court held that there is no reasonable expectation of privacy for information (such as bank records or telephone numbers) that is voluntarily given to others (such as bank employees or the telephone company).

In the years that followed, our Fourth Amendment jurisprudence continued to develop in this manner, with courts largely focusing on the type and location of the surveillance taking place, based upon the facts of each particular case, to determine whether a protected privacy interest was implicated. I might add as an aside that almost nowhere in the case law is the real focus on the substance of the communication, except insofar as you get to consider that by reason of where the communication occurred.

As if we needed any further evidence of this very case-specific approach to the development of our privacy and surveillance legal regime, the Supreme Court just a few months ago gave us what the Court itself branded as a “narrow” decision. I am of course referring to Carpenter v. United States, which addressed whether the Fourth Amendment could be violated by a warrantless search and seizure of historical cell phone records that reveal the location and movement of the user. The Court held that the government’s acquisition of such records—or at least seven days or more of them—constituted a search under the Fourth Amendment, which required a warrant, because it violated a person’s “legitimate expectation of privacy in the record of his physical movements.” In coming to that conclusion, the Court noted that apart from disconnecting a phone from a network entirely, there is almost no way of avoiding leaving behind an electronic trail of location data. To the Court, then, the location information was “an entirely different species” of record than, say, bank records or phone numbers, and in no meaningful sense could it be said that the user voluntarily assumed the risk of turning over a “comprehensive dossier of his physical movements.”

As we stand here today, it’s too early to be able to discern the full ramifications of Carpenter. But one point is clear—the Carpenter case serves to highlight one of the major challenges in applying our Fourth Amendment jurisprudence in this digital age. By the very nature of our judicial system, which does not allow for advisory opinions, our courts are necessarily confined to deciding cases based on the specific facts (or the technologies) with which they are presented. These decisions are therefore inherently backward-looking, which feels like the wrong approach when addressing rapidly developing technology. By contrast, tort law principles can be extended to facts beyond the case at issue because concepts of negligence can be intuitively applied to a wide variety of facts and situations. Not so where the very legal principle is rooted in, and indeed expressed in, terms of the precise technology in the case.

I am not in any way being critical of our judiciary. Rather, I am simply pointing out that the limitations of our “case or controversy” scheme can result in a patchwork quilt of legal precedent that takes into account only the particular technology directly before the court in each case, which in turn leads to decisions that are sometimes hard to reconcile or are distinguishable only by factors that seem of dubious significance. It also yields a set of legal determinations in this area that are, at best, of uneven predictive utility. Indeed, the very fact that the nine justices generated five distinct opinions in Carpenter itself makes clear that even the best legal minds are divided over the right approach. And this was in a relatively straightforward case involving fairly well-established technology, where there was already ample Supreme Court precedent about the government’s access to other types of cell phone information and its use of technology to track a person’s physical movements.

Our experience tells us that if we want to be forward-looking to embrace future technologies and have more predictive legal principles, the legislative branch also has an important role to play, which I’d like to turn to now. While the courts have established the outer bounds of the Fourth Amendment, within those limits, it has been Congress that has enacted relatively strong privacy protection, but only in specific areas.

The fact that Congress has chosen to act in an important but limited way is also no surprise, for all of the obvious reasons. As I have just said, courts have been very active in this area, so Congress has in some respects had the luxury of simply deferring to their lead. These issues can be dauntingly technical in nature, and there are contentious political debates around privacy as well, which Congress, like any institution, would seek to avoid wading into if at all possible. So, as a result, in instances where Congress has chosen to act, it has often been to address only specific problems about which there was widespread consensus. We all know that political accord can be difficult to achieve, and thus in many cases, given the pace of technology, we have been left with either aging laws or no laws at all.

Take, for example, the Electronic Communications Privacy Act, commonly known as “ECPA.” Ironically enough, Congress passed ECPA in 1986 in an effort, in the words of the Committee Report, “to update and clarify Federal privacy protections and standards in light of dramatic changes in new computer and telecommunications technologies.” The state of the law at the time focused, in large part, on privacy protections related to telephone calls, and it was said to be “hopelessly out of date” in terms of addressing new means of communications. Of particular concern to Congress at the time was the Supreme Court’s decision in Miller and the increasing adoption of both email and computerized recordkeeping systems. Because such information had been voluntarily conveyed to a third party, prevailing doctrine suggested that it was entitled to little or no constitutional protection.

To address this, ECPA established a new framework that provided varying requirements for law enforcement to compel disclosure of the content of electronic communications depending, in part, on how long they had been in storage. For those communications that have been in storage for less than 180 days, a search warrant based on probable cause is required; in contrast, for those that have been in storage for more than 180 days, only a court order showing relevance to an investigation is needed. The rationale for this distinction was the state of technology at the time—in 1986, most electronic communications systems (including email services) did not retain electronic records for longer than six months. As a result, Congress concluded that “[t]o the extent that the record is kept beyond that point, it is closer to a regular business record maintained by a third party and, therefore, deserving of a different standard of protection.”
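[Editor's illustration: as a rough sketch only, the 180-day distinction Gerstell describes can be expressed as simple decision logic. The function name, labels, and dates below are hypothetical, and the sketch deliberately ignores the statute's many exceptions, notice provisions, and later case law.]

```python
from datetime import date, timedelta

# Illustrative only: the 180-day line drawn by ECPA's original stored-communications
# framework, as summarized in the speech. Not a statement of how any agency or court
# actually applies the statute today.
ECPA_THRESHOLD = timedelta(days=180)

def required_process(date_stored: date, date_of_request: date) -> str:
    """Return the kind of legal process the original 1986 framework contemplated
    for compelled disclosure of stored communication content."""
    age_in_storage = date_of_request - date_stored
    if age_in_storage < ECPA_THRESHOLD:
        return "search warrant based on probable cause"
    return "court order showing relevance to an investigation"

# Example: content stored for roughly ten months falls on the court-order side.
print(required_process(date(1986, 1, 1), date(1986, 11, 1)))
```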

Regardless of how you feel about where Congress drew this line, there can be no debate that, due to subsequent developments in technology and commerce, the environment in which this framework was adopted differs markedly from today’s. Almost universally, we now conduct most of our affairs online, and we have access to virtually limitless, inexpensive electronic storage. Most of us store our most sensitive information there—from our emails, to our pictures, to our financial records—and thus, as many have pointed out, the fact that we choose to keep key electronic records longer suggests that they are deserving of more protection, not less. It also raises the larger question whether this regime still makes sense given these new realities.

The Department of Justice has addressed some of the issues related to ECPA, at least to some extent, through policy changes in recent years. For its part, Congress has also considered legislative updates to the statute, and it successfully passed the CLOUD Act earlier this year to address a different, pressing ECPA-related issue involving law enforcement access to electronic communications stored abroad. As I mentioned earlier, though, much like other times when Congress has acted in the privacy arena, the CLOUD Act served to resolve only a very specific problem about which there was significant consensus. In my view, no matter how highly we think of Congress’s efforts, one-off, hand-crafted solutions like the CLOUD Act are simply too time- and labor-intensive to meet our needs in this age of rapidly developing technology.

The situation isn’t all that different with respect to privacy in the context of our national security laws, most notably, the Foreign Intelligence Surveillance Act, or “FISA.” As many of you are well aware, FISA was originally enacted in 1978 to provide the Executive Branch with a court-authorized process for conducting electronic surveillance against foreign powers or their agents operating inside the U.S. In creating such a system, Congress sought to carefully balance and protect both our national security and the privacy and civil liberties of all Americans. And, indeed, the statute has done so admirably for four decades now.

Much like ECPA, however, FISA’s structure, which is largely rooted in a four-part definition of “electronic surveillance,” has remained basically unchanged even as technology has zoomed ahead. Again, to give credit where due, Congress did address this to a significant extent through the enactment of Section 702 as part of the FISA Amendments Act of 2008, which is one of our most important foreign intelligence surveillance authorities. But taking a step back, we should recognize that this Section represents only a small part of the larger FISA framework and, again, addresses only a discrete technological problem. The rest of FISA is still based on its original definitions, with the result that we have wound up with a complex, multi-agency statutory scheme that hinges in part on the type of collection and the location of collection, as well as the purpose and use of the collection, and that doesn’t specifically address issues such as ubiquitous encryption, web-based communications applications, the possibility of intelligence information becoming available through new technologies, and the global dispersion of computer servers and data storage.

Again, to be crystal clear, I mention ECPA and FISA and some of their deficiencies today not because I am calling for any particular set of changes or improvements. Rather, I believe that they are emblematic of how technological changes can drive the need to update statutory frameworks, and they demonstrate the shortcomings of how we have attempted to address these issues legislatively in the past.

These shortcomings become even more noticeable when you consider how our privacy laws regulate the private sector. As I noted earlier, the legal restrictions we put in place to ensure our notions of privacy in America are mostly focused on curtailing government. By contrast, we have largely let market forces—which is to say, no regulation—establish whatever individual rights we may have in this area relative to corporations and other businesses. True, the private sector’s collection and use of our personal data are in some areas subject to a complex assortment of federal and state statutes, but many of these statutes apply to only particular sectors or types of data (for example, your financial or health information) about which there is a deep consensus on a heightened need for privacy. The rest provide only broad consumer protections and are really not focused on privacy rights per se. Admittedly, there are benefits to this approach, which allows wide latitude for states to legislate and reduces the risk that there will be the sorts of unintended consequences that often accompany broad, comprehensive legal regimes.

Compare, just for a minute, the U.S. regime to how privacy is regulated in Europe. There, the concept of privacy focuses on the dignity of the person and very much extends to private sector activity of all types. This approach has traditionally resulted in laxer regulation of government surveillance, but much stricter and more comprehensive laws about, for example, data protection, credit reporting, and workplace privacy. The General Data Protection Regulation, or GDPR, which came into effect earlier this year throughout the EU, is a perfect example. GDPR instituted a new set of wide-ranging and significant privacy protections and applies broadly to all EU organizations and companies around the globe holding or processing the personal data of people in the EU.

Europe is far from being alone in passing comprehensive privacy laws. In recent years, Japan, India, Brazil, and many other countries, including some of our largest trading partners, have all enacted new privacy regimes relating to how companies may handle personal information. According to one estimate, more than 100 countries now have some form of privacy laws, and some 40 other countries have pending legislation or initiatives in the works.

That is not to say that there haven’t been attempts here in the U.S. to strengthen and standardize our privacy laws. In part as a result of the federal government’s failure to adopt such proposals as a consumer privacy bill of rights, California recently enacted its own Consumer Privacy Act, which extends a broad range of new consumer privacy rights and data security protections.

While many have cheered California’s approach, there are also many who fear that it will serve only to further complicate the already muddled or excessive regulatory landscape in the U.S.

The various approaches have been the subject of widely-publicized hearings before the U.S. Senate and the Federal Trade Commission in recent months. The National Institute of Standards and Technology has also begun looking at the issue, with the goal of issuing a “privacy framework” in the same vein as its widely-heralded cybersecurity framework.

No matter how you view these efforts—and as you would expect, I’m not taking a position on them—it is clear that many in our society feel that the approach that we have taken to date with respect to regulating the private sector is increasingly problematic. The recent level of public and Congressional attention to the Facebook/Cambridge Analytica issue is illustrative of that feeling. With the international community pushing ever more aggressive laws and the global nature of our digital society, the choice regarding how we go about addressing privacy here in the U.S. might soon be out of our hands. Companies operating internationally are being forced to adapt to regulations implemented in foreign countries. If we want to play a role in shaping those policies to suit our own notions of privacy, we need to get engaged. At the same time, we also need to recognize that the more that states seek to occupy this space, the more likely that we will end up with additional complexity and inconsistencies. In short, we no longer have the option of addressing this issue in an ad hoc fashion.

This will require the public and private sectors to take a holistic approach to addressing privacy concerns associated with our increasing reliance on digital technologies. Perhaps, as in Europe, we need new comprehensive requirements to regulate how our personal information can be used, shared, or disseminated online. Or perhaps we don’t need any additional government regulation, as simply updating our current laws to reflect the state of technology today might be sufficient. Alternatively, voluntary industry-generated approaches might also meet our needs. I’m not here to advocate any of these or other potential approaches, but rather, my point is simply that we must have a societal dialogue about how we want to confront the problem.

Even more broadly, though, we also need to be asking ourselves the more fundamental question of what privacy really means to us here in the U.S., both as it relates to our interactions with the government and with the private sector. Under our current legal framework, the same piece of electronic information may be protected from interception or disclosure to the government, but it could be disseminated, sold, or used by a private company with few, if any, restrictions. Have we genuinely reflected on whether that is actually the best approach when we consider the forthcoming digital revolution? Moreover, the confluence of the Internet of Things and increased monitoring for cybersecurity purposes implies an almost inconceivable level of potential knowledge about an individual. Will we feel comfortable that a machine will see, aggregate and analyze this data, knowing that there’s always the possibility that a human could extract the resulting knowledge? Some advocates have asserted that a violation of privacy occurs when the government’s computers simply scan citizens’ emails looking for a terrorist’s email even though it’s all done without human intervention, but at the same time, my private email provider already reads all my emails looking for spam. How do we reconcile this?

To be sure, a social media company or a data broker can’t put you on trial or in jail, but consider how much information those companies actually know about you—everything from the relatively mundane like your contact information to some of your most personal, intimate, and potentially even unconscious, interests and habits. Isn’t it fascinating that we’ve reached a point where, arguably, the private sector now has even greater impact on our privacy than the government? Have we paused to consider how to appropriately account for that? Or, perhaps, have we reached the point where we have come to accept this status quo because—to quote Ben Wittes of Brookings—“our concept of privacy is so muddled, so situational, and so in flux that we are not quite sure any more what it is or how much of it we really want.”

I would submit that a natural and appropriate place to begin these conversations would be to reexamine the Supreme Court’s 1967 formulation of our privacy interests. In lieu of evaluating the “reasonable expectation of privacy” as a threshold and ultimately dispositive question, maybe we could implement it instead by means of a functional approach. This would place the focus more on the type of the information at issue, its intimacy and sensitivity, and how it is protected (including considering whether one truly and voluntarily shared the information with any third parties), while deemphasizing factors like the type of communication collected, the means by which it was collected, or the location of its collection. It might also result, for example, in stricter controls on information such as medical records, and lesser protection for information such as the time, date, duration, and identities of the parties to a telephone conversation. Please note that I am not specifically advocating for this approach, but I do find it to be a logical alternative worth considering. And just to be clear, I am not seeking any diminution of our privacy to facilitate surveillance powers. Actually, I think a cogent approach to this topic could strengthen our sense of privacy in many respects.

I would also caution that, in having these sorts of discussions, we must avoid the temptation to view things in absolutes and to reflexively label ideas as anti-privacy, anti-security, or even unconstitutional just because we might think that they should be. This will be particularly important when addressing politically- and emotionally-charged topics like encryption, which undoubtedly will continue to be a significant part of the privacy conversation in the years to come. Rather than simply asserting that any potential weakening of privacy protections (legal, technical, or otherwise) is inherently bad and thus off the table for discussion, we need to be intellectually honest about what interests we are trying to protect, what harms genuinely may occur, and how we should balance these against other potential benefits such as increased safety or convenience. Lapsing into jargon and retreating into our traditional corners will serve only to stall this important debate. We should instead be working to find consensus and principles that we agree upon.

It is undeniable that these are extremely complicated issues with no clear or correct answers. Throughout our nation’s history, lawyers have been the leaders in helping our society wrestle with those types of issues and forge a consensus on what is best for our country. So through our very work as lawyers in the national security realm, we are in the vanguard in thinking about privacy in this digital age, and that is why we have a responsibility to use our knowledge and skills to help lead a constructive dialogue about how to better shape our legal framework for the years to come. Let’s not miss this opportunity; let’s not let this inflection point pass us by. I hope that, through my remarks today, I have contributed in a small part to that process, and I thank you for your attention this afternoon.


Matthew Kahn is a third-year law student at Harvard Law School and a contributor at Lawfare. Prior to law school, he worked for two years as an associate editor of Lawfare and as a junior researcher at the Brookings Institution. He graduated from Georgetown University in 2017.
