Making Progress on the Encryption Debate
Published by The Lawfare Institute in Cooperation With
In a recent debate between NSA director Mike Rogers and Yahoo Chief Information Security Officer Alex Stamos, the topic of law-enforcement restricted access to encrypted communications once again came up.
To summarize the debate as it has been expressed to date, one side believes in encryption that only the user can decrypt. Those on this side of the debate (call it Side A) include large technology companies such as Apple, Yahoo, Google, and Microsoft and also privacy advocates such as the ACLU and EFF. Those on the other side of the debate (call it Side B) acknowledge the value of strong encryption to protect confidential information, but want to be able to decrypt the underlying information without the user’s knowledge or consent when they are legally authorized for such access. Those on Side B include the FBI and the Justice Department.
Side B proposes a technological solution to the problem: a mechanism to decrypt the information that can be accessed by law enforcement authorities and no one else. This is often known as a NOBUS solution, where “nobody but us” can gain access. (According to the report of the debate referenced above, the NSA believes in the technological feasibility of a NOBUS solution.)
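The Clipper-era form of such a mechanism was key escrow: a message's session key is wrapped not only for the user but also under a separately held escrow key, so an authorized party holding the escrow key can decrypt without the user's cooperation. The following is a purely illustrative sketch of that structure, not a real cryptosystem: XOR stands in for actual encryption, and all keys are hypothetical.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real encryption: XOR with an equal-length key.
    return bytes(a ^ b for a, b in zip(data, key))

# Two independent keys: the user's key and a separately held escrow
# ("NOBUS") key, assumed to be guarded by the authorized party.
user_key = secrets.token_bytes(16)
escrow_key = secrets.token_bytes(16)

# A fresh session key encrypts the actual message.
session_key = secrets.token_bytes(16)
message = b"attack at dawn!!"  # 16 bytes, to match the toy cipher
ciphertext = xor(message, session_key)

# The session key is wrapped twice: once for the user, once for escrow.
wrapped_for_user = xor(session_key, user_key)
wrapped_for_escrow = xor(session_key, escrow_key)

# Authorized access path: the escrow key alone recovers the session key,
# and hence the plaintext, without the user's key or consent.
recovered_key = xor(wrapped_for_escrow, escrow_key)
assert xor(ciphertext, recovered_key) == message
```

The entire debate is about the `escrow_key` line: Side A argues that any key held for this purpose will eventually leak or be stolen; Side B argues it can be protected well enough, for long enough.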
Side A argues that it’s already hard enough to build secure systems and that introducing a NOBUS solution will inevitably make the system vulnerable to unauthorized access to the decrypting mechanism. That is, they argue that it is technologically impossible to limit access only to law enforcement authorities, and thus that NOBUS is not an appropriate feature (or bug, depending on how you see it) to introduce into any encryption system.
The debate—now 20 years old (think Clipper chip)—is entirely polarized, because there is no common ground in the middle. But when I unpack the claims on both sides, I find a real conceptual mess. No one taking Side A can provide a proof that the NOBUS mechanism will eventually become available to unauthorized parties. No one taking Side B can provide a proof that it is possible to build a tamperproof mechanism for NOBUS access. And so the argument continues to rage, as a clash of absolutes.
But the real technological issue is one of time scale. It is probably true that NOBUS access can’t be kept secure forever. But it doesn’t need to be kept secure forever. If a way to break NOBUS access takes 1000 years to develop, then it doesn’t matter. If NOBUS access lasts for 1 minute, then it’s clearly not workable. So somewhere between 1000 years and 1 minute, there’s a threshold (a fuzzy threshold with big error bars, to be sure) where NOBUS shifts from making sense technically to not making sense.
Viewed in these terms, it seems to be a problem of risk management, namely “For any given NOBUS mechanism, how might one estimate the period that it might plausibly remain secure, and what are the error bars around that estimate?” If we had such numbers, we might be able to make reasonable decisions about whether a given NOBUS mechanism might be technologically feasible or not.
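The risk-management question above can be made concrete as a simple decision rule. The sketch below is purely illustrative: the numbers, the error model (a single multiplicative error factor), and the 20-year policy threshold are all hypothetical, chosen only to show how an estimate with error bars would feed into the decision.

```python
def nobus_plausible(estimate_years: float, error_factor: float,
                    threshold_years: float) -> bool:
    """Judge a NOBUS mechanism by its pessimistic time-to-compromise.

    estimate_years: best estimate of how long the mechanism stays secure.
    error_factor:   multiplicative uncertainty (10.0 = "known to within 10x").
    threshold_years: hypothetical policy threshold the worst case must clear.
    """
    # Pessimistic end of the error bars: estimate divided by its error factor.
    worst_case = estimate_years / error_factor
    return worst_case >= threshold_years

# A mechanism estimated to survive ~100 years, known only to within 10x,
# fails a 20-year threshold (worst case: 10 years); a 1000-year estimate
# with the same uncertainty passes (worst case: 100 years).
print(nobus_plausible(100.0, 10.0, 20.0))   # False
print(nobus_plausible(1000.0, 10.0, 20.0))  # True
```

The point of the sketch is not the arithmetic but the shape of the argument: once both sides state an estimate and its error bars, the disagreement becomes an empirical one about those numbers rather than a clash of absolutes.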
Taking this approach has the virtue of separating the values issue from the technical issue. Some (maybe many) on Side A are opposed in principle to the idea of any access to encrypted communications that is not authorized by the user. Others may favor Side A primarily because they do not believe that NOBUS is a viable technical approach, but would go along with Side B if it were.
[Note added after initial post: I should also point out that NOBUS access does not necessarily presume that the holders of the encrypted data will have the ability to decrypt it. Thus, the tech companies will not themselves be able to decrypt data if and when other countries demand access.]
So I would propose that the debate move forward in the following way. Side B proponents should propose a specific NOBUS mechanism, much as they once did in the 1990s with the Clipper chip. And they should present an analysis that says why the mean time to compromise is closer to 1000 years than to one minute. Once that proposal is in hand for public scrutiny, Side A should show why the mean time to compromise is closer to one minute than to 1000 years. The analyses of Sides A and B can thus be compared. If it turns out that A’s analysis is more plausible than B’s, we can refrain from considering that mechanism further. If it turns out that B’s analysis is more plausible than A’s, then we can have an argument that is centered on the values proposition and not obscured by a technical smokescreen.
From my perspective, this would be progress.
Dr. Herb Lin is senior research scholar for cyber policy and security at the Center for International Security and Cooperation and Hank J. Holland Fellow in Cyber Policy and Security at the Hoover Institution, both at Stanford University. His research interests relate broadly to policy-related dimensions of cybersecurity and cyberspace, and he is particularly interested in and knowledgeable about the use of offensive operations in cyberspace, especially as instruments of national policy. In addition to his positions at Stanford University, he is Chief Scientist, Emeritus for the Computer Science and Telecommunications Board, National Research Council (NRC) of the National Academies, where he served from 1990 through 2014 as study director of major projects on public policy and information technology, and Adjunct Senior Research Scholar and Senior Fellow in Cybersecurity (not in residence) at the Saltzman Institute for War and Peace Studies in the School for International and Public Affairs at Columbia University. Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990), where his portfolio included defense policy and arms control issues. He received his doctorate in physics from MIT.