
A Roadmap for Exceptional Access Research

Mayank Varia
Wednesday, December 5, 2018, 2:52 PM

This is part of a series of essays from the Crypto 2018 Workshop on Encryption and Surveillance.



Over the course of the recurring “crypto wars,” technologists and government officials have debated the social desirability of encryption systems that provide end-to-end (E2E) protection from sender to receiver, as well as the technological viability of building exceptional access (EA) encryption systems that provide the government with the selective ability to read encrypted contents. This tussle has left E2E encryption in an unstable equilibrium for the past several decades, with many countries mandating recoverability of plaintext (or threatening to do so) via blunt techniques that are much less safe than EA could be.


These crypto wars have been fought over the past several decades as a political battle. Even though all parties have the common goal of improving public safety, they enter the fray with differing personal and professional biases, which cause them to prioritize either improving digital security to prevent crime or providing tools for law enforcement to investigate crimes with a digital component. Predictably, this politicization has led to dire consequences. Views become polarized into an “us vs. them” mentality, with technologists and government officials both claiming the imperative to act as strongly as the law permits and abhorring any “middle ground.” Illogical arguments proliferate, such as EA proponents’ focus on terrorism even though EA is more likely to aid in the investigation of ordinary criminal activity, and EA opponents’ focus on past mistakes that bear no relation to modern proposals. Finally, participants tend to use provocative, alienating rhetoric like “responsible encryption” or “backdoors.”


Rather than continuing to politicize encryption, in this article I propose instead to use the lessons learned from the decades-long crypto policy debate to inform the process of developing EA technology. This article is organized into four parts: (1) reviewing the benefits and risks of an EA encryption system from a policy viewpoint; (2) providing a skeletal definition of the security guarantees that EA encryption should provide in order to mitigate the policy risks; (3) listing several possible capabilities that an EA system might provide in an attempt to identify a minimum viable product together with law enforcement; and (4) constructing policy to revive research into EA’s technology challenges, an area that has been mostly dormant for two decades.


Reviving technological research into EA is essential to improve the state of the policy debate. In the short term, this would provide time and space for technologists to improve upon the state of the art in exceptional access: the reality today is that all existing EA proposals have been insufficiently vetted and, if deployed, would pose systemic risk to the digital ecosystem. More research and engagement between law enforcement and the technical community could change that—but this cannot happen when these groups are pitted against each other in a political debate. In the longer term, developing a variety of modern EA proposals would crystallize and refocus a policy debate currently lacking in technical specifics.


Before continuing, I want to emphasize upfront that many of the policy and technology arguments surrounding encryption are subtle, and the brevity of this article may diminish some of this nuance. For example, I sometimes lump together cryptosystems that protect data in transit with those that protect data at rest, even though EA affects those systems in different ways, and I use the term E2E to denote any cryptosystem that precludes a government or technology intermediary from reading the contents.


1. What are the benefits and risks of exceptional access?


“Finding the right balance between security, privacy and surveillance is a complex problem. The current practice of intelligence agencies adding covert backdoors into systems deployed worldwide is hugely risky and carries with it a significant potential for collateral damage, since criminals or adversarial states can misuse the backdoors. … What seems clear is that there are no perfect solutions. Governments need to be able to enforce their laws and protect their citizens from adversaries, however, this should not be done at any cost. … Unfortunately, currently the surveillance debate is polarised with absolutes being pushed by both sides. It is highly unlikely that either extreme – total surveillance or total privacy – is good for our society. Finding the right balance should be framed as an ethical debate centred around the potential for collateral damage.”


– Professors Matthew Smith and Matthew Green


Governments intermittently clamor for changes to encryption to provide law enforcement access. The current such clamoring comes in the form of Australia’s Assistance and Access Bill, which would “establish frameworks for… industry assistance to law enforcement and intelligence agencies in relation to encryption technologies,” as well as the related Five Eyes Statement of Principles on Access to Evidence and Encryption (though it remains unclear what this statement portends).


In the face of such clamor, it is worth assessing the policy benefits and drawbacks of exceptional access as a concept, independent of the details of any particular EA construction (or in other words, even if there is consensus that the system “works properly”). As with many policy debates, both action and non-action create risks and unintended second-order consequences.


EA systems create various ethical and policy concerns:


  • Public safety and national security: A system that permits exceptional access by governments can be more vulnerable to external hacking, insider threat, or human error than the same system incorporating E2E encryption. A government official or employee of a technology intermediary could purposely or inadvertently (e.g., if their computer is compromised or if identification checks fail) betray personally sensitive information to those who might use it for nefarious purposes. Any breach or misuse of an EA system impacts individuals, businesses, and governments equally because everyone uses similar commodity computing devices over a shared network.
  • Democratic values: Private speech is essential to a democracy. Because encryption is the primary method of achieving privacy for digital speech, any type of surveillance can have a chilling effect on speech and social behavior.
  • International ramifications: As the Communications Assistance for Law Enforcement Act (CALEA) demonstrated, legal and technological requirements within the United States that facilitate government access can be quickly replicated by many other countries around the world, including those that have less concern for their citizens’ public safety and democratic values.
  • Alternatives: Any required change to encryption would have limited effectiveness given that E2E encryption schemes already exist and may be legally sold in other countries; it is technologically possible to construct an E2E messaging system starting from only EA components; and other cybersecurity protections, like network anonymity and remote deletion, may also be deployed to prevent the government from finding the digital data that it wishes to decrypt.

Conversely, the status quo on encryption also presents several policy concerns along similar dimensions.


  • Public safety and national security: Many crimes in the physical world have “a technology and digital component,” according to FBI Director Christopher Wray, and some “occur almost entirely online” (one example is child sexual exploitation). EA could improve deterrence of crimes that primarily occur in the digital world. Additionally, any action taken now to build an EA system may forestall a poorly-thought-out action taken hastily by a government later.
  • Democratic values: Because E2E encryption upends some of the norms and established processes for law enforcement investigations that have evolved over centuries of societal input, a democratic society should consciously decide whether to adopt it rather than simply delegating the decision solely to the tech intermediaries.
  • International ramifications: More than half of the world’s population currently lives in countries with laws that mandate recoverability of encrypted content. The encryption systems currently deployed in these jurisdictions permit more surveillance with less accountability than a properly-designed EA system might offer.
  • Alternatives: The government’s main alternative to EA, lawful hacking, is fraught with its own ethical and policy concerns. It is an imprecise tool, because the vulnerabilities it exploits leave everyone exposed to them; it disincentivizes government to report vulnerabilities to vendors as part of the Vulnerabilities Equities Process, thereby worsening the relationship between government and the tech intermediaries; and it may lack sufficient legal oversight to check its use. To quote Steven Levy, “lawful hacking is techno-capitalism at its shadiest.”

Attempts to balance the relative merits of these competing sets of policy concerns (e.g., within these recent studies) are exceedingly difficult, especially because everyone involved is sincerely trying to improve public safety. The U.S. government is requesting tools that aid law enforcement rather than enable dragnet surveillance, and technology companies promote E2E encryption to protect their customers proactively against crimes like stalking, blackmail and identity theft.


Perhaps the most important action that can be taken to advance the policy debate is to acknowledge the good faith and common interests of everyone involved. As the Encryption Working Group within the House Judiciary and Energy & Commerce Committees astutely observes, it is simultaneously true that:


The widespread adoption of encryption poses a real challenge to the law enforcement community and strong encryption is essential to both individual privacy and national security. A narrative that sets government agencies against private industry, or security interests against individual privacy, does not accurately reflect the complexity of the issue.


Both sides of the Encryption Working Group’s observations can be found even within the U.S. federal government. Within the Justice Department, the last three directors of the FBI and the current deputy attorney general have argued for law enforcement access to encryption systems. Within the intelligence community, several former leaders of intelligence organizations have argued against changes to encryption on national security grounds.


Subsequent sections of this article are written under the assumption that the people broadly decide to explore building an EA encryption system. I follow this approach not to put my thumb on the scale, but instead because there is little that needs to be said or done if society opts for the alternative: E2E encryption, after all, already exists. In the next two sections, I describe how technological choices regarding EA’s security guarantees and access capabilities have nuanced policy implications of their own.


2. What limits on access should EA guarantee?


I suspect that the answer [to exceptional access] is going to come down to how do we create a system where the encryption is as strong as possible, the key is as secure as possible, it is accessible by the smallest number of people possible for a subset of issues that we agree are important.


– President Barack Obama, SXSW 2016


Cryptographers labor intensely over, and rely heavily on, definitions that codify the security requirements and expected capabilities they want from system features like encryption. Any specific implementation believed to be secure today can be broken tomorrow, so cryptographers emphasize definitions: they guide the search for candidate constructions and allow the community to decide how a system should work (that is, independent of any limits on the current ability to construct it).


Defining the security guarantee for E2E encryption is conceptually straightforward: the intended communicators hold a key that allows them to read messages, and everyone else in the world is equally clueless about the message contents. Put simply, possession of the key is equivalent to the capability to read messages. Defining EA encryption is more nuanced because the system must distinguish between different types of unintended recipients: out of the billions of people in the world who don’t have the cryptographic key, only the government is (sometimes) empowered to read a message anyway. Sadly, throughout the decades of debate about the ethical and moral quandaries surrounding EA, not even a rough consensus has emerged for how EA might formally be defined if it did exist.
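
To make this definitional contrast concrete, consider a toy Python sketch (my own illustration, not any deployed protocol) using the authenticated cipher from the widely used `cryptography` package. It shows the E2E guarantee in miniature: holding the key is exactly the capability to read, and no party without the key, government or otherwise, has any special status. An EA definition must instead carve out a third role beyond sender and receiver, and it is precisely that role that remains unformalized.

```python
# Toy illustration of the E2E security definition: reading a message
# is equivalent to holding the key.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                       # shared only by sender and receiver
ciphertext = Fernet(key).encrypt(b"meet at noon")

# The intended recipient holds the key, so decryption succeeds.
assert Fernet(key).decrypt(ciphertext) == b"meet at noon"

# Everyone else (a government, the service provider, an attacker) has no
# special status: without the key, decryption simply fails.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("no key, no plaintext")
```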


At first blush, this may seem like a pedantic concern: why waste precious time and energy on defining EA rather than simply building it? However, from decades of experience, cryptographers have found that definitions provide the only way to understand the true intention behind a construction and thus to test whether a proposed EA system actually meets its goals. They also often outlive any particular construction, and they enable principled policy and moral discussions to occur about the values society wants without getting bogged down by the scientific details of any particular construction.


In this section, I provide a taxonomy of security goals that might be required of an EA system, and I reference existing EA proposals that provide partial progress toward these goals. Several of these security properties (at least somewhat) ameliorate the policy concerns with EA laid out above; however, the international ramifications of exceptional access are irremediable and must be taken into account when deciding whether to adopt an EA system.


  • Government-only authorization: EA must distinguish between government actors and non-governmental actors, only providing access to the former. Additionally, the system may distinguish between different government actors and provide them with different levels of access. Existing EA proposals tend to provide access only to one government, using techniques from public key cryptography to distinguish between actors from the intended government versus anyone else; recently, Charles Wright and I proposed a system that uses economics to distinguish nation-states from non-nation-state actors, but does not attempt to distinguish between governments (though it can be composed with other EA proposals that do).
  • Targeting: Like warrants, the government’s EA requests should be scoped to specific people and specific data contents. All existing EA proposals are strongly scoped so that governments must know precisely which device, file or communication to target; some even restrict EA to devices within the government’s possession. To handle warrants that are scoped instead based on features within the data contents, EA might be combined with techniques that allow for searching on encrypted data.
  • Accountability: To deter opportunities for abuse, people should be able to validate that the government only exercises its EA authority pursuant to lawful processes (e.g., a warrant). This validation may come after a time delay, and it need not require people to see the full warrant or decrypted data. Existing EA proposals show how this validation can be privately accessible by the targets of EA, available to an auditor for oversight purposes or publicly transparent to all.
  • Federation: To mitigate insider threats, EA should require approval from multiple actors, ideally in different institutions, to view encrypted data. In particular, the system must not rely upon a single master key. Existing EA proposals use techniques like secret sharing to split any cryptographic material used for access between different government agencies, between the government and a technology intermediary or even between different governments (thereby requiring international cooperation to recover message contents); a minimal sketch of the secret-sharing idea appears after this list. I would caution, however, that in other contexts tech providers have had difficulty authenticating whether requests originate from valid law enforcement officials and have been duly authorized by court officials.
  • Limits on access: To mitigate the chilling effect of surveillance and opportunities for abuse, EA should limit governments to recovering a small amount of data from a small number of people; furthermore, any effort expended to access content based on one request should provide no benefit toward the next request. Existing EA proposals can limit access using the existing legal system for obtaining warrants as well as a technological limit enforced via a marginal economic cost for each government request, using a proof-of-work approach later popularized by Bitcoin.
  • Crypto transparency: Following Kerckhoffs’s principle, the design and implementation of an EA system should be open for anyone to confirm that it meets the properties listed in this section. Almost all EA proposals are openly (though incompletely) documented; one prominent exception is the U.S. government’s Clipper chip proposal from the 1990s, which relied on a secret design in order to provide compliance.
  • Defense in depth: Rather than relying on any single point of trust for a power as strong as EA, the security properties of such a system should be enforced through a variety of mechanisms involving computers (e.g., high-assurance software and trusted hardware) and people (e.g., the legal framework and feedback from society) so that a compromise in one mechanism need not destroy all the above security properties.
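
To make the federation property concrete, here is a minimal two-of-two secret-sharing sketch in Python (my own toy illustration, not drawn from any particular EA proposal). Each share alone is a uniformly random string that reveals nothing about the underlying key; only the two institutions acting together can reconstruct it. Deployed proposals use richer schemes, such as Shamir's k-of-n sharing, but the security intuition is the same.

```python
# Minimal 2-of-2 secret sharing via XOR: neither shareholder alone learns
# anything about the key; combining both shares reconstructs it exactly.
import secrets

def split(key: bytes) -> tuple[bytes, bytes]:
    share_a = secrets.token_bytes(len(key))               # uniformly random pad
    share_b = bytes(k ^ a for k, a in zip(key, share_a))  # key XOR pad
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

device_key = secrets.token_bytes(32)             # key protecting the content
court_share, agency_share = split(device_key)    # held by separate institutions
assert combine(court_share, agency_share) == device_key
```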

To emphasize the final point, the biggest strength of the EA proposals set forward to date is their variety. While no existing proposal is individually strong enough to use in practice today (and indeed several have known flaws or limitations), collectively they offer a set of options that might be composed to provide a viable EA system with defense-in-depth.


There is currently a lack of consensus as to which security properties are desirable socially; people may reasonably argue that my list provides either too few or too many constraints on government power. I welcome such discussion. Indeed, as with all normative debates, discussion on exceptional access is more fruitful and enduring when debate focuses on the capabilities and limits that EA should have, rather than evaluating the strength of any single proposal.


3. What capabilities should exceptional access provide to law enforcement?


Political debate will not make the user versus law-enforcement conflict vanish. Even though some would prefer to not have any form of [exceptional access], the pragmatic view is that reaching some sort of compromise is necessary. … Thus the technical question is to find a solution as palatable as possible to both sides. It should guarantee enough privacy for the individual that people would go along with it, and yet be acceptable to law enforcement too.


– Professors Mihir Bellare and Shafi Goldwasser, CCS 1997


Perhaps the most challenging issue with constructing EA is not providing strong security but rather determining precisely and concretely the contexts under which access should be provided to the government. Here again, EA encryption poses unique challenges: With E2E, the appropriate functionality of encryption is simply that “people with the key should be able to read the data,” whereas with EA, one must also specify the scenarios in which access is granted to the government.


Due to their professional inclination toward secrecy, combined with the organizational challenge of accumulating statistics across field offices, U.S. federal, state and local governments have thus far been unwilling or unable to categorize the cases in which encryption tends to stymie their investigations. Simply stating the number of investigations affected by encryption (whether that number is 1,000 or 8,000) does not inform the discussion of what types of capabilities are desired. Neither do individual stories of law enforcement being unable to access the devices of deceased victims, a problem that could be addressed instead via key management systems that enable people to delegate data to next of kin.


I list below an attempt at a (likely incomplete) set of considerations that would inform the design of access requirements for EA. Within each dimension, I list possible options from least to most difficult to achieve. I also note the portion of this space that has been explored by existing proposals for exceptional access; unfortunately, this space is largely underexplored to date, likely due to the lack of dialogue between government and the research community.


  • Data location: Does the government seek to unlock data at rest on personal devices or the cloud? Alternatively, does it also seek to intercept data sent in transit on an encrypted messaging system or when browsing the web? Most EA proposals to date focus on encryption of data at rest, although some of the techniques might extend to encrypted messaging services.
  • Physical vs. remote access: Is it sufficient only to recover data stored on a device in government possession, or should exceptional access also be possible remotely? (Note that this question is independent of the above “data at rest vs. in transit” question.) Some EA proposals necessitate physical access by requiring that certain operations be computed directly on the target device; however, this approach has security concerns of its own, because (from the government’s point of view) sensitive operations are performed on an untrusted device.
  • Scope: Is the government’s EA interest limited to encryption provided by default or does it extend to all providers of encryption services, including third-party software? Most existing EA proposals focus on the former.
  • Past vs. future: Is it sufficient to access data produced only after the government registers an interest in a target, as in Ian Levy and Crispin Robinson’s potential solution, or must the system retroactively offer decryption capabilities for data encrypted in the past? Most EA proposals provide retroactive decryption of data encrypted in the past, in large part to distinguish EA from alternatives like lawful hacking, whose effectiveness lies predominantly in observing targets’ future actions.
  • Accessible agents: There are thousands of law enforcement organizations and judges in the United States. Does the government plan to make requests from a central location (which might also perform an oversight role), or must an EA system and/or tech intermediaries directly handle requests from every police officer? While some existing proposals explicitly require the tech provider or a government oversight agency to intermediate requests, the majority of proposals do not address this question.
  • Compliance: Is it important for the government to determine whether encrypted content properly follows the EA scheme, even before any attempt is made to decrypt the data? If so, then compliance can be determined with cryptographic proofs in software that verify compliance with EA or with the aid of trusted hardware devices. (The Clipper chip sought to do this, though researchers later showed how to bypass compliance.)
  • Delay: How much delay is acceptable between when a request is made and when the data is provided? Can the request take multiple days to fulfill, or must the system operate in near-real-time? Most EA proposals assume the need to operate in a “ticking time bomb” scenario in which near-real-time responses are necessary, though a few proposals purposely leverage delays to establish physical control or to limit the rate of access (a toy sketch of one such cost-based rate limit follows this list).
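
As one concrete example of a cost-based rate limit, the toy Python sketch below implements a Bitcoin-style hash-based proof of work (an illustration of the general approach only, not the construction from any particular proposal; the request string is hypothetical). Each access request must carry a nonce whose hash clears a difficulty threshold, so every request costs real computation, and work spent on one request provides no head start on the next.

```python
# Toy hash-based proof of work: a request is valid only with a nonce whose
# SHA-256 digest has at least `difficulty` leading zero bits.
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    return len(digest) * 8 - int.from_bytes(digest, "big").bit_length()

def solve(request: bytes, difficulty: int) -> int:
    for nonce in count():  # brute-force search: ~2**difficulty hashes on average
        digest = hashlib.sha256(request + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce

def verify(request: bytes, nonce: int, difficulty: int) -> bool:
    digest = hashlib.sha256(request + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

request = b"warrant-1234:device-5678"          # hypothetical request identifier
nonce = solve(request, difficulty=16)          # roughly 65,000 hashes of effort
assert verify(request, nonce, difficulty=16)   # verification costs a single hash
```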

I caution proponents of EA to view the above list as a set of tradeoffs, not a wish list. The level of access provided is proportional to the time required to construct, vet and maintain an EA system that provides strong security guarantees such as the ones listed in the previous section. For sufficiently strong access requirements, it might even be possible to prove the impossibility of simultaneously meeting all access and security goals. At the other extreme, if the government only desires access to files of previously-consenting victims, or files stored at rest on a smartphone at the (slow) rate of current software updates, then it may be possible to build a compliant system very soon.


There need not be an immediate “silver bullet” to provide everything the government desires. Instead, researchers can and should design EA systems iteratively, beginning with easier settings. In the final section of this article, I provide policy proposals to spur such EA technological development.


4. What is the path to a future with viable EA systems?


It makes more sense to address any security risks by developing intercept solutions during the design phase, rather than resorting to a patchwork solution when law enforcement comes knocking after the fact.


– Former FBI director James Comey, October 2014 speech at the Brookings Institution


This is the easiest of the four questions to answer, so I provide my response upfront: Congress should prohibit the federal government from requiring that the private sector deploy exceptional access mechanisms. Paradoxically, supporters of EA should endorse such a ban (at least in the short term) as the single best method to resolve the Going Dark problem. Let me explain.


It is currently premature to request either voluntary or mandatory introduction of EA encryption into computers and networks. We simply do not yet have the cryptographic building blocks, the systems security architecture, or the regulatory policy framework necessary to build an EA system today that meets all, or even most, of the definitional requirements above. Ergo, time is needed to fuse crypto design, systems development and policy guidelines into a concrete EA regulatory proposal.


But the current state of affairs is even worse than that. Throughout the “crypto wars,” government actors have intermittently made credible threats to seek legislation to mandate EA, even though the crypto and systems communities have declared that it is premature. These threats stymie research into EA itself, because scientists and engineers fear that initial, half-baked approaches might be precipitously thrust upon society. Ergo, not only is there an absence of sound EA systems today, but the existing research is not currently on a trajectory toward producing them.


The best analogies I can find to describe this issue come from FBI Director Wray. At the Aspen Security Forum earlier this year, he said: “We put a man on the moon [and] we have autonomous vehicles. The number of things that are created every day in this country really defies imagination sometimes, and so the idea that we can't solve this problem as a society, I just don't buy it.”


In the scientific efforts behind the moon landing and autonomous vehicles, the government did not simply rely on private industry to find and deploy solutions. Instead, government engaged with the scientific research community by cultivating an environment of mutual trust and understanding; clearly defining and articulating the desired objective; providing time and funding to research the technological and policy aspects of the problem; and committing to hold off on deployment until reaching consensus from scientists and engineers that the technology was safe.


All these features are absent in the “crypto wars,” and all of them are within the government’s power to address. The policy debate can be advanced by first acknowledging the good faith and common interests of everyone involved. The FBI, along with local and state law enforcement organizations, can provide concrete knowledge about access mechanisms that could address many of the cases currently stymied due to encryption. Congress can also appropriate funds to jump-start scientific research for exceptional access and design a rigorous policy proposal for encryption regulation. Finally and most importantly, legislatures should impose a moratorium on EA so as to restore trust and foster an environment in which scientists can evolve EA prototypes and run small-scale pilot tests without the concern that a half-baked EA system will suddenly be introduced. That moratorium can subsequently be lifted if and only if EA constructions mature to the point of achieving agreed-upon definitions of security and functionality, and societies decide to use those constructions, cognizant of the benefits and risks of this decision.


Mayank Varia is a research associate professor of computer science at Boston University and the co-director of BU's Center for Reliable Information Systems & Cyber Security. He holds a bachelor's degree from Duke University and a PhD from MIT.
