
The Encryption Debate Enters Phase Two

Daniel J. Weitzner
Wednesday, March 16, 2016, 12:13 PM


With the benefit of historical hindsight a few years from now, we may look back on March 2016 as a turning point in the debate over encryption, surveillance and civil liberties. The new phase of the debate is characterized by a growing acceptance that mandatory, infrastructure-wide back doors are a bad idea. At the same time, political leaders elevate their demands that we in the tech community engage the question: if not back doors, then what?

Last week, Robert Hannigan, Director of GCHQ, gave a talk at MIT—entitled “Front doors and strong locks: encryption, privacy and intelligence gathering in the digital era”—on his views of the evolving issues of encryption and surveillance. His message was clear: it does not make sense to ban or weaken end-to-end encryption, nor does he favor ‘backdoors’ in the infrastructure. But he believes the obstacles posed by encryption are a “moral challenge” that society, broadly speaking, must face.

Hannigan’s emphasis on GCHQ’s information assurance mission makes clear that companies should only be required to offer assistance in a manner that avoids creating security risks:

Much of GCHQ’s work is on cyber security, and given the industrial-scale theft of intellectual property from our companies and universities, I’m acutely aware of the importance of promoting strong protections in general, and strong encryption in particular. The stakes are high and they are not all about counter terrorism.

Adding that he is “accountable to our Prime Minister just as much, if not more, for the state of cyber security in the UK as I am for intelligence collection,” Hannigan rejected mandatory back doors outright:

The solution is not, of course, that encryption should be weakened, let alone banned. But neither is it true that nothing can be done without weakening encryption. I am not in favour of banning encryption just to avoid doubt. Nor am I asking for mandatory backdoors.

And the UK’s move away from infrastructure-wide mandates came as US Secretary of Defense Ash Carter made similar remarks at the RSA Conference:

As we together engineer approaches to overall human security in the information age, I know enough to recognize that there will not be some simple, overall technical solution—a so-called ‘back door’ that does it all…. I’m not a believer in backdoors or a single technical approach. I don’t think that’s realistic.

Speaking with US Secretary of Commerce Penny Pritzker, European Commission Vice President Andrus Ansip repeated his opposition to weakening encryption with mandatory back doors. In this discussion, hosted by the MIT Internet Policy Research Initiative, Ansip argued that people simply will not trust systems that have built-in government controls. Drawing on his experience as the Prime Minister of Estonia who famously digitized much of the government, he observed that over two-thirds of Estonian citizens vote online. “How will they trust the results of the election,” VP Ansip asked, “if they know that the government has a back door into the technology used to collect citizens’ votes?”

Is President Obama an outlier in the trend against mandatory back doors? I don’t believe he is. True, Obama did say at SXSW that he opposed ‘absolutist views’ of the issue and wants to ensure that the government preserves tools to investigate terrorism and serious crime. But at the same conference, his CTO Megan Smith referred to the debate as an ongoing conversation, with many issues still to resolve. Nor did the President contradict Secretary of Defense Carter’s statements on the matter.

These statements from US, UK, and EU government officials demonstrate that the underlying technical analysis against mandatory back doors that my colleagues and I presented in “Keys Under Doormats” has been largely accepted.

But this positive shift will also increase pressure to determine which practical paths are available to assist law enforcement. In fact, Hannigan’s visit to MIT was largely aimed at issuing a call for the technical community writ large, together with civil society, to work with governments to find practical solutions to the challenges posed by what he calls the ‘misuse of encryption.’ The problem, of course, is not really one of cryptography per se. Rather, the challenge emerges from the fact that the Internet—like any technology—is sometimes used for nefarious purposes by bad people. He explains:

At its root, the ethical problem presented by encryption is the problem presented by any powerful, good invention, including the Internet itself, namely that it can be misused. TOR is the most topical example: a brilliant invention that is still invaluable to those who need high degrees of anonymity, notably dissidents, human rights advocates and journalists; but an invention that is these days dominated in volume at least by criminality of one sort or another. The technology of the Internet and the web is morally neutral, but those of us who use it aren’t.

The rational response here is neither to panic nor to assume that encryption itself is intrinsically bad. Instead, the appropriate path is to look for a sensible, pragmatic, and proportionate response to a shared problem: the abuse of encrypted services by a minority of people who want to do harm to others.

In his speech, Hannigan raises at least two specific alternatives to back doors. First, he praises the excellent “Don’t Panic” report from my colleagues at the Berkman Center. This suggests Hannigan’s interest in augmenting law enforcement’s investigatory capability through alternative data sources beyond the content encrypted on hardware devices. Second, in discussing the brightest historical star of the GCHQ firmament, Alan Turing, he notes that what “Turing and his colleagues recognised was that no system is perfect and anything that can be improved can almost inevitably be exploited, for good or ill.” This raises the alternative possibility of the vigorous exploitation of vulnerabilities—somewhat like what the FBI is asking Apple to do in the San Bernardino case.

In the context of the UK Parliament’s deliberations on the Investigatory Powers Bill, this puts significant focus on the meaning of “practicable or technically feasible” in defining the scope of Internet companies’ obligation to decrypt or otherwise provide “assistance.” Does technical feasibility include a consideration of whether the assistance offered would impair the overall security of the information infrastructure? What about consideration of the security and privacy interests of users not subject to a surveillance order? In light of Hannigan’s remarks, one would certainly expect UK law to give weight to these factors. The question remains: how far should companies go in offering assistance that does not create security risks?

Most importantly, Hannigan’s speech recognizes that the questions we face about the scope of technical surveillance assistance speak to fundamental policy choices regarding both user privacy rights and the nature of the surveillance powers we accord our governments in order to protect public safety. A number of technology companies have admirably stepped into this issue as both protectors of their own commercial interests and defenders of their users’ privacy rights. We should commend them for this.

In this second phase of the debate, we must find clear answers to the following questions:

  1. What kinds of assistance, if any, can technology companies provide to law enforcement agencies carrying out lawful surveillance orders without compromising the overall security of the infrastructure and individual user data? The answers will derive from serious technical analysis of different platforms and services, and may well vary across contexts.
  2. What kinds of surveillance powers should we grant to our governments that preserve our democratic values in a globally competitive environment?

It is time to begin a serious public debate about how much surveillance is enough, what kinds of surveillance tools can be used without creating undue risk to overall security, resilience, and innovation, and how we can do all of this without undermining our fundamental liberties.


Daniel J. Weitzner is Director of the MIT Internet Policy Research Initiative and Principal Research Scientist at the MIT Computer Science and Artificial Intelligence Lab. From 2011 to 2012, Weitzner was United States Deputy Chief Technology Officer for Internet Policy in the White House. His computer science research has pioneered the development of Accountable Systems architecture to enable computational treatment of legal rules and automated compliance auditing. He teaches Internet public policy in MIT’s Electrical Engineering and Computer Science Department. Before joining MIT in 1998, Weitzner was founder and Deputy Director of the Center for Democracy and Technology, and Deputy Policy Director of the Electronic Frontier Foundation. He holds a law degree from Buffalo Law School and a B.A. in Philosophy from Swarthmore College.
