
Breaking the Encryption Stalemate: New Research on Secure Third-Party Access

Alan Z. Rozenshtein
Thursday, March 29, 2018, 1:00 AM


Last month, the National Academies released their report on potential solutions to the problem of law enforcement access to encrypted data. The reaction was polite but unenthusiastic. The response of Access Now’s Amie Stepanovich was typical: “This report contains a great compendium of information, but doesn’t fundamentally alter the conversation.”

The “conversation” is the longstanding and bitter debate between law enforcement and the information-security community on whether it’s possible to design an encrypted system that is both secure and gives individualized access to third parties (i.e., the government) subject to court orders. But buried in the report is an important development that may end up marking a turning point in that debate: High-level experts in the information-security community itself are trying to build secure third-party-access systems. As the New York Times reported on Saturday, non-government researchers’ sudden willingness to work on the problem has given the FBI and the Justice Department new momentum in their push for legislative mandates for third-party access to encrypted data.
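
To make the engineering question concrete: the textbook form of third-party access is key escrow, in which data is encrypted under a fresh session key and a copy of that session key is itself wrapped under a key held by an escrow agent. Below is a minimal sketch of that generic pattern in Python, using the cryptography library. All names, key sizes, and algorithm choices are illustrative placeholders; the sketch stands in for no actual proposal, past or present.

    # A toy sketch of generic key escrow. Everything here is illustrative:
    # the "escrow agent," the key sizes, and the algorithms are placeholders,
    # not a description of the Clipper Chip or of any current proposal.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # The escrow agent holds the private key; devices ship with the public half.
    escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    escrow_public = escrow_private.public_key()

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def encrypt_with_escrow(plaintext: bytes):
        """Encrypt under a fresh session key and attach an 'access field':
        the session key wrapped under the escrow agent's public key."""
        session_key = AESGCM.generate_key(bit_length=128)
        nonce = os.urandom(12)
        ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
        access_field = escrow_public.encrypt(session_key, OAEP)
        return nonce, ciphertext, access_field

    def third_party_decrypt(nonce, ciphertext, access_field):
        """What a court-authorized third party would do: unwrap the session
        key with the escrow private key, then decrypt the data."""
        session_key = escrow_private.decrypt(access_field, OAEP)
        return AESGCM(session_key).decrypt(nonce, ciphertext, None)

    nonce, ct, field = encrypt_with_escrow(b"message")
    assert third_party_decrypt(nonce, ct, field) == b"message"

The cryptography in the sketch is the easy part. Everything the sketch omits is where the debate lives: who holds the escrow key, how it is protected and audited, how requests are authorized, and what happens to every device in the field if that key ever leaks.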

To understand why this is a big deal, it’s important to appreciate just why the current debate has stalemated. The overwhelming consensus among the academic cryptographers, security researchers, and industry technologists who make up the information-security community is that encrypted systems that allow third-party access are always insecure. But this consensus is vulnerable to two criticisms.

The first is that the consensus argument against secure third-party access depends on a very specific meaning of “secure.” It is undoubtedly the case that a system with third-party access is less secure than the same system without third-party access, no matter how such access is designed. One factor is that third-party access adds complexity, and more complex systems are, all else being equal, less secure than less complex systems. This is for the simple reason that more complex systems are harder to design and manage. Thus the information-security maxim: “Complexity is the enemy of security.”

But in the real world, security is never an all-or-nothing proposition. Security always comes at a cost; for example, it takes more time and money to design more secure systems, and security often requires trading off user features like password or data recovery. The real question is whether a particular system is “secure enough.”

This in turn requires that we answer two sub-questions. First, what is the use case? “Secure enough” for a smartphone that can only be hacked if it’s in an adversary’s physical possession is a less demanding standard than it is for the internet-connected systems running the electricity grid. Second, in addition to security’s individual and social benefits, what are its individual and social costs? This question includes considerations that fall outside the expertise of the information-security community—namely, the costs to public safety in the form of relevant data that is unavailable to law enforcement. Even more importantly, this question implicates policy tradeoffs and value judgments that neither technology companies nor the information-security community has the democratic legitimacy to make on its own. It’s not up to Apple or the Electronic Frontier Foundation (nor, for that matter, the FBI) to decide unilaterally how much information security is worth sacrificing to save a life or stop a crime; that’s a decision for the public, acting through its elected government, to make for itself. (Hence the ultimate need for a legislative solution to settle this debate one way or another.)

This line of argument—that “secure” is not all-or-nothing and cannot be assessed apart from broader social costs—has been the standard critique of the position that “everyone knows that third-party access is impossible.” (I discuss it at length in this article.) But the strength of the critique has been blunted by the fact that those making it (myself included) could never demonstrate that secure (or, more precisely, “secure enough”) third-party-access systems were feasible. The lack of proof of concept (or even proof of feasibility) has made appeals for secure-enough systems easy to dismiss. Whenever anyone raised the prospect of third-party access, the result was a dismissive reply like the tweet a Stanford mathematician posted in response to the Times story.

This reaction is typical. Whenever this debate begins anew, one side says, “It’d be great if Silicon Valley/academic researchers could design secure third-party access.” To which the other side responds, “Nope. #math.”

This is why the National Academies report and the Times story are so notable: They undermine the argument that secure third-party-access systems are so implausible that it’s not even worth trying to develop them.

This brings me to the second criticism of the consensus view: that its research basis was never as firm as advertised. There are, after all, two kinds of consensus. The first is the kind that comes from many independent researchers having tackled a problem and coming to the same conclusion. The second is the kind that comes when the views of a few key players come to be seen as the received, not-to-be-questioned wisdom—in other words, groupthink. If the consensus against secure third-party access is not a true consensus but, rather, groupthink, it becomes much harder to support the argument that we should reject, out of hand, government proposals for secure third-party access.

This is a controversial claim, and I don’t particularly enjoy bomb-throwing, so let me be clear as to what I’m not saying. First, I’m not arguing that critiques of third-party access are made in anything but good faith. Relatedly, I am neither questioning the expertise of these critics nor claiming that my own expertise is an adequate substitute when it comes to evaluating the technical details of any third-party-access proposals. I don’t pretend to be a cryptographer or a computer scientist, but I’ve taught myself enough about the field to get a humbling sense of just how immense the challenges are—both in designing the protocols themselves and in implementing them in the real world.

Second, I am not arguing that secure (or even secure-enough) third-party access is possible. Frankly, I have no idea. For one thing, the failure of the government-designed Clipper Chip as a workable key-escrow solution in the 1990s (a failure that led to the government’s defeat in the First Crypto War) makes clear that designing secure third-party-access systems is hard, and that no government proposal should be taken at face value. And certainly the government hasn’t yet proved its case this time around. This is in part because, stung by the failure of the Clipper Chip, it has refused to put forward concrete proposals and has instead hidden behind irritating platitudes about Silicon Valley’s technological wizardry. Consider how FBI director Christopher Wray framed his argument in January:

I’m confident that with a similar commitment to working together, we can find solutions to the Going Dark problem. After all, America leads the world in innovation. We have the brightest minds doing and creating fantastic things. If we can develop driverless cars that safely give the blind and disabled the independence to transport themselves; if we can establish entire computer-generated virtual worlds to safely take entertainment and education to the next level, surely we should be able to design devices that both provide data security and permit lawful access with a court order.

The problem with urging Silicon Valley to “nerd harder” is that there’s no reason to think that it can. As Matt Blaze nicely put it, Wray’s request is a bit like saying, “If we can put a man on the moon, well, surely we can put a man on the sun.” And if substantial research resources are thrown at the problem and a real, up-to-date, and evidence-based consensus emerges that the problem simply can’t be solved in a way that wouldn’t create even greater information-security and public-safety risks, I’d be more than happy to consider the matter resolved.

What I am saying is that those arguing that we should reject third-party access out of hand haven’t carried their research burden. And this burden is particularly high given the real costs that ubiquitous end-to-end encryption poses to law enforcement and thus public safety—as even the staunchest critics of third-party access have recognized.

There are two reasons why I think there hasn’t been enough research to establish the no-third-party-access position. First, research in this area is “taboo” among security researchers. That’s not my characterization, but rather that of the internationally recognized Belgian cryptographer Bart Preneel, who observed at the Eurocrypt 2016 conference:

It seems to be also a kind of a taboo to work on law-enforcement access, and I think we should actually break this taboo. We should not say it’s impossible. I think we should think about it at least. Write papers: how can we do this better? … I don’t think it should be a forbidden question to think about. Imagine we had perfect channels, perfectly secure devices—there [are] some cases where government may need access. How would we do this? In an auditable way, in a controllable way, in a limited way. We have actually no answers either.
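
Preneel’s “limited … controllable” framing hints at the kind of designs such papers might explore. One toy example (my own illustration, not anything Preneel or the report proposes): split the key that enables third-party decryption between independent parties, say a vendor and a court, so that neither can act unilaterally.

    # A toy 2-of-2 secret split: each share alone is uniformly random, so
    # neither party learns anything about the key without the other.
    # Illustrative only; real proposals would need threshold schemes,
    # hardware protection, and audit trails.
    import os

    def split_key(key: bytes):
        share_a = os.urandom(len(key))
        share_b = bytes(x ^ y for x, y in zip(share_a, key))
        return share_a, share_b

    def combine(share_a: bytes, share_b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(share_a, share_b))

    escrow_key = os.urandom(32)
    vendor_share, court_share = split_key(escrow_key)
    assert combine(vendor_share, court_share) == escrow_key

The point is not that this solves anything; it is that “third-party access” names a design space, and how to make access auditable and limited is a research question, not a settled one.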

I think it’s fair to expect that major policy decisions should have the benefit of research that’s not taboo to even conduct.

Fortunately, this taboo is gradually being broken. This gets to the second reason why I believe more research needs to be done: the fact that prominent non-government experts are publicly willing to try to build secure third-party-access solutions should make the information-security community question the consensus view. (To be clear, government experts are of course capable of doing high-level cryptographic research. The issue is that, because they’re from the government, their findings standing alone won’t be taken seriously by the wider research community, whose buy-in will be essential to any ultimate third-party-access system.)

The National Academies report included high-level descriptions of approaches to designing secure third-party access from three legitimate experts: Ernie Brickell, former chief security architect at Intel and the founding editor in chief of the Journal of Cryptology; Ray Ozzie, former chief software architect at Microsoft (and, strikingly, former director of the Electronic Privacy Information Center); and Stefan Savage, a professor of computer science and engineering at the University of California, San Diego. The Times story notes that these three have also discussed their research at workshops run by Daniel Weitzner, principal research scientist at the MIT Computer Science and Artificial Intelligence Lab. Say what you will, but none of these individuals are amateurs or government hacks. And none would be researching this issue (and bearing the reputational costs within their communities for furthering government surveillance) if they thought the question was settled.

Again, we still don’t know whether secure third-party access is possible. None of these researchers (nor anyone else) have publicly put forward proposals at the level of detail that would be required for a full evaluation. And if they did, it’s almost certain that the research community would find serious, perhaps even fatal, flaws. But that’s how cryptographic research moves forward, and we ought to put our energy into such research rather than spend it on what Herb Lin described in 2015 as a “theological clash of absolutes”: that is, the abstract “it can’t be done/it can be done” debate that has dominated the encryption conversation. (Benjamin Wittes made a similar point around that time.)

At least for scholars and policy analysts, the question should be how to encourage this incipient line of research. It’s understandable that the government would prefer that technology companies come up with the solutions, but Washington now has an important opportunity to drive a pragmatic, research-based process. Perhaps the National Institute of Standards and Technology should lead the way, just as it successfully ran a multi-year, global, and public competition to select what became the widely used Advanced Encryption Standard (AES). But that’s just one option out of many. The larger point is that after years of bitter stalemate, the debate over law-enforcement access to encrypted systems may finally be making real progress. And to track this progress, we should focus less on the war of words between the government and certain parts of the information-security community and more on those security researchers—whether in academia or industry—who are working to discover whether secure third-party access really is a contradiction in terms.

Correction: A previous version of this post misidentified former Intel chief security architect Ernie Brickell as the current holder of that position.


Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, a senior editor at Lawfare, and a term member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
