Exceptional Access: The Devil is in the Details

Susan Landau
Wednesday, December 26, 2018, 10:00 AM

Lawfare has run a series of posts concerning exceptional access. Josh Benaloh presents an excellent discussion of the irresponsibility of so-called “responsible encryption,” and Cindy Cohn persuasively argues that the computer security community should be working to improve the security of systems, not the reverse. Mayank Varia contends that the issue is politicized and more technical research might provide a solution, though I would argue that he fails to address the underlying issue of security risks posed by exceptional access.

With the greatest respect to my colleagues, I’d say that while the framing is useful, many of the arguments are those one might expect. But the first article of the series, “Principles for a More Informed Exceptional Access Debate,” is something different. This piece by Ian Levy and Crispin Robinson, two members of the United Kingdom's primary signals intelligence and information assurance agency, Government Communications Headquarters (GCHQ), plows new ground, and it is on this piece I wish to focus. Levy and Robinson begin by setting out principles to govern exceptional access solutions, principles that bring a fresh approach to the often divisive discussion; they then present a particular proposal. Let’s start by looking at their principles.

1) Privacy and security protections are critical to public confidence. Therefore, we will only seek exceptional access to data where there's a legitimate need, that access is the least intrusive way of proceeding and there is appropriate legal authorization.

The first principle puts exceptional access in context, emphasizing that use will be appropriate and follow the rule of law. It’s worth noting that in the U.S. context, this is fundamentally a restatement of the Fourth Amendment—“no warrants shall issue, but upon probable cause”—and the requirements of the Wiretap Act, which permits the use of wiretapping in criminal cases only when other techniques appear unlikely to succeed or are deemed too dangerous.

2) Investigative tradecraft has to evolve with technology.

This is an “of course” statement—“Of course, investigative techniques must keep pace with the times”—yet this does not always happen. Sometimes a disruptive new technology appears—microwave transmissions in the 1970s, IP communications in the 1990s—and signals intelligence agencies, whether GCHQ or the NSA, must scramble to catch up.

U.S. signals intelligence does catch up—doubters should look at the Snowden disclosures—but U.S. law enforcement has been much slower to keep pace with modern developments in technology. In 2011, I testified before the House Judiciary Subcommittee on Crime, Terrorism, Homeland Security, and Investigations alongside the president of the International Association of Chiefs of Police, who was also the chief of police in Smithfield, Va. Phones were largely unlocked devices in those days, and encryption did not lock the phones. But the police chief explained that he was nonetheless impeded from getting data off the devices: “If I seize a cell phone, I don't have the capabilities—as you well understand, I don't have the capabilities to be able to do it except with some off-the-shelf products that are, frankly, obsolete.”

Such lack of technical expertise is not limited to one police department in one town or one state. U.S. law enforcement has been slow to develop the expertise needed to tackle crime in the Digital Age (see, e.g., a recent report by the Center for Strategic and International Studies that points out many easy-to-fix concerns). And even when the problem is recognized, the response is often slow and inadequate. It took the FBI five years to roll out the National Domestic Communications Assistance Center (NDCAC), which offers law-enforcement officers training and limited consultation on the specifics of modern communications systems and how the technologies, devices and services work. But NDCAC does not offer sophisticated technological help, such as recommendations of ways to open locked phones or wiretap in the presence of encryption.

3) Even when we have a legitimate need, we can’t expect 100 percent access 100 percent of the time.

This principle provides a healthy dose of reality, a reality recognized by investigators everywhere but not always acknowledged publicly during debates on surveillance and encryption.

4) Targeted exceptional access capabilities should not give governments unfettered access to user data.

Here is a head nod to the principle of proportionality, a necessity for any nation subject to the rulings of the European Court of Human Rights. Proportionality involves a four-part test:

  1. Is the objective of the measure sufficiently important to justify the limitation of a protected right?
  2. Is the measure rationally connected to the objective?
  3. Could a less intrusive measure have been used without unacceptably compromising the achievement of the objective?
  4. Does the severity of the measure's effects on the rights of the persons to whom it applies outweigh the importance of the objective (to the extent that the measure will contribute to its achievement)?

The U.S. Bill of Rights does not explicitly discuss proportionality, yet the concept is inherent in its application by the courts. Levy and Robinson’s fourth principle underlies the intent of Section 702 of the Foreign Intelligence Surveillance Act, the Wiretap Act and various other U.S. surveillance statutes.

In the U.K., wiretap evidence is not introduced in court, meaning that the methods by which evidence is obtained are also not subject to court examination. Hence the emphasis on transparency in the U.K. principles; in the U.S., some of that transparency regarding the use of wiretaps comes naturally from exposure in court cases.

5) Any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users.

Principle 5 is quite important. It is a principle one would not entirely expect from a signals intelligence agency, for it implies that the government should not impose an exceptional access solution that would cause customers to distrust their service providers. And yet numerous suggestions to enable exceptional access, including the oft-repeated proposal that companies should use software update mechanisms to provide ways to unlock devices, would do exactly that. This is therefore a striking statement to be issued by relatively senior members of GCHQ—and both the principle and its authors should be much lauded.

6) Transparency is essential.

This principle is one held essential by computer security experts. Since Auguste Kerckhoffs articulated the point in 1883, it has been a fundamental tenet of cryptography that any system used by more than a small group of people should be made public. Putting it another way, it's likely that the details of any system will leak out—and thus that the enemy will know the system. Security must therefore reside solely in the secrecy of the key. For this reason, public exposure of encryption algorithms provides an extremely valuable benefit: it enables outside cryptographers and security analysts to vet the algorithm. Such public vetting is important. If a system cannot be vetted, its security cannot be properly evaluated—and so, not surprisingly, cryptographers call proprietary encryption systems whose details are not public "snake oil."
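To make the point concrete, the following is a minimal sketch in Python, using the widely deployed cryptography package, of what it means for security to rest solely on the key: the AES-GCM algorithm is entirely public and has been vetted by outside analysts for years, and the only thing an eavesdropper must not learn is the 256-bit key.

```python
# Illustrative sketch of Kerckhoffs's principle: the algorithm (AES-GCM) is
# public and publicly vetted; the only secret in the system is the key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the sole secret
nonce = os.urandom(12)                     # public, but must never repeat for a given key
cipher = AESGCM(key)

ciphertext = cipher.encrypt(nonce, b"meet at noon", None)

# Anyone may read the AES-GCM specification; without the key the ciphertext
# reveals nothing useful, while the key holder decrypts it trivially.
assert cipher.decrypt(nonce, ciphertext, None) == b"meet at noon"
```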

***

Principles 1, 4, and 5 establish the importance of the rule of law and of transparency, while Principles 1 and 4 establish the importance of protecting citizens' privacy. Principles 1 and 5 seek to ensure a reasonable relationship between communication providers and the government. Principle 2 is particularly interesting in the context of exceptional access; it is less directly about the rights of the citizenry and more about the government's responsibilities in conducting investigations. Meanwhile, Principle 3 is simply an honest take on the situation, courageous to include.

Hammering out these principles and publishing them requires imagination, courage, and integrity. Levy and Robinson and their bosses are to be lauded for this.

These principles are sound, but their value lies in how they deal with the reality on the ground. Or as the British are fond of saying, the proof of the pudding is in the eating. Levy and Robinson present a proposal for conducting exceptional access:

It's relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who's who and which devices are involved—they're usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there's an extra 'end' on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn't give any government power they shouldn't have.

I'm afraid this proposal is rather problematic. What does adding a silent listener mean? Such a step can be taken in one of two ways. Either the provider fails to reveal the list of participants of an encrypted communication, or the system is designed so that it can present a modified—as in, incomplete—list when adding a law-enforcement eavesdropper. That is, the only way the service provider can silently add a participant, law enforcement or otherwise, is either through failing to disclose how the system works or by subverting the disclosed protocol and hiding the additional member.
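To make the two options concrete, here is a deliberately simplified, hypothetical sketch in Python; the names (IdentityDirectory, pk_agency and so on) are invented for illustration and correspond to no real messaging system. In an honest design, clients encrypt to exactly the participants they display; adding a ghost means the provider either hands clients more keys than the roster it shows or lies about the roster itself.

```python
# Hypothetical sketch of a provider-controlled identity directory for a group
# chat. Names and structure are illustrative only, not any real system's code.

class IdentityDirectory:
    def __init__(self):
        self.keys = {}        # member name -> public key (stand-in strings here)
        self.hidden = set()   # members the provider chooses not to display

    def add_member(self, name, public_key, hidden=False):
        self.keys[name] = public_key
        if hidden:
            self.hidden.add(name)

    def roster_shown_to_users(self):
        # What the clients display: any "hidden" participant is omitted.
        return [name for name in self.keys if name not in self.hidden]

    def keys_used_for_encryption(self):
        # What the clients are told to encrypt the message key to.
        return list(self.keys.values())


group = IdentityDirectory()
group.add_member("alice", "pk_alice")
group.add_member("bob", "pk_bob")
group.add_member("ghost", "pk_agency", hidden=True)  # the extra "end"

# The crux: clients either encrypt to a key that matches no one on the
# displayed roster (detectable if the protocol can be vetted), or the
# provider must also subvert the clients so the discrepancy never surfaces.
assert len(group.keys_used_for_encryption()) > len(group.roster_shown_to_users())
```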

To understand the problems posed by this solution, it’s useful to recall what happened two years ago when The Guardian erroneously reported that WhatsApp had a security backdoor allowing interceptors to decrypt messages that had been encrypted end-to-end.

The issue was how WhatsApp handled the situation when a message recipient changed their phone or SIM card, something that happens frequently in some parts of the world; for simplicity, from now on I will simply describe this as changing a SIM card. A change of SIM card means that the recipient's public/private key pair has changed, and thus WhatsApp cannot authenticate the user on their new device.

The details are a bit complicated. In providing an end-to-end encrypted messaging service, WhatsApp acts as an identity service for its users (for more details of what this means, see Matt Green’s detailed explanation). That means that when the sender transmits a message, WhatsApp looks up the recipient and their public key, encrypts the message, and sends it. If the message cannot be delivered because the recipient has changed their SIM card, making it impossible to authenticate the recipient with the old key, then WhatsApp informs the sender of this fact. But such a notification appears only if the sender has actually turned on such notifications.
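The behavior at issue can be sketched roughly as follows. This is a hypothetical, simplified model, with invented function names, meant only to illustrate the deliver-first design choice described above; it is not WhatsApp's actual protocol or code.

```python
# Hypothetical, simplified model of a deliver-first messaging client, meant
# only to illustrate the design choice described above.

def send_message(directory, last_seen_keys, sender_prefs, recipient, plaintext):
    current_key = directory[recipient]                 # key the service lists right now
    ciphertext = encrypt_for(current_key, plaintext)   # stand-in for the real E2E step
    deliver(recipient, ciphertext)                     # delivery is never blocked

    # The key may have changed (new phone or SIM card) since the last message.
    if last_seen_keys.get(recipient) != current_key:
        last_seen_keys[recipient] = current_key
        if sender_prefs.get("security_notifications"):
            show_banner(recipient + "'s security code changed")
        # With notifications off, the sender learns nothing about the change.

def encrypt_for(key, plaintext):     # placeholder for the real encryption step
    return (key, plaintext)

def deliver(recipient, ciphertext):  # placeholder for the real transport
    pass

def show_banner(text):
    print(text)
```

A verify-before-send design would instead refuse to deliver until the sender had approved the new key; the reliability cost of that choice is exactly the tradeoff discussed below.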

There is a risk there, namely the chance that an attacker—an eavesdropper—could have obtained the recipient's old SIM card. In that case, the eavesdropper could authenticate themselves as the legitimate recipient of the message to WhatsApp and decrypt the message. If the sender did not have those notifications turned on, she would not even know that the contact to whom she had sent the message had changed their SIM card and that the message had been picked up by someone else.

This is a security risk, but it is not a particularly serious one. The so-called attack works only if the eavesdropper is able to obtain the recipient's SIM card. That is relatively difficult to do, making this surveillance technique less likely to be used to intercept communications than other, simpler forms of interception.

One can ask why WhatsApp was designed this way; after all, Signal, the best end-to-end encrypted communication system, only sends an encrypted message after first authenticating the recipient to the sender. The answer is reliability. WhatsApp has billions of users around the world, and many do change their phone or SIM cards frequently. For almost all of these users, the reliability of message delivery is far more important than protecting against the relatively minor risk that the send-then-authenticate design choice allows. So the security "vulnerability" was not a security breach at all, but rather a deliberate decision to trade off a slight risk to security for far better usability. And this tradeoff was public, described in the technical documentation of the WhatsApp system.

But once the Guardian’s story appeared, journalists and human rights workers became worried that their communication system was subject to interception; some considered moving to less secure communications systems. The problem was largely alleviated when the computer security community jumped in to quickly correct the misapprehensions in the Guardian article. Moxie Marlinspike, a founder of the team that developed Signal, immediately published a blog post describing the reporting as "false" and detailing its inaccuracies. Brian Acton, one of WhatsApp’s co-founders, did the same. The Electronic Frontier Foundation called the description "irresponsible"; a group of seventy computer science experts wrote to the paper and explained WhatsApp’s design choices, calling the proposed threat a "remote scenario" and describing the risk as a "small and unlikely threat."

The Guardian rapidly retracted much of the story. Key to the immediate corrections were two things: WhatsApp's white paper detailing the technical aspects of the system and, as a result, experts’ knowledge of how the system worked. Technologists had been able to probe WhatsApp and ask such questions as: How is the protocol constructed? What is the key exchange mechanism when a user changes their communication device or SIM card? How are users cryptographically authenticated to ensure that a new member is who she says she is? Is the communication constructed so that the convener is informed when a group member joins or leaves?

With that story in mind, I will return to the Levy-Robinson proposal of a silent listener added by the service provider. Obviously this technique will work only if the participants in a communication are kept in the dark about who else is party to it. Who would trust such a system?

Levy and Robinson write that, "This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions."

I strongly disagree. Yes, alligator clips, as they’re called on this side of the Atlantic, intercept communications, but they do so for communications for which the service provider has made no commitment to provide end-to-end encryption. The difference between alligator clips and the proposed virtual crocodile clips is that in the latter, the service provider is being asked to change its communication system to provide exactly what the end-to-end encryption system was designed to prevent: access by a silent listener to the communication.

Levy and Robinson argue that this change is reasonable, explicitly recommending that the clock be moved back a few decades to a world before we had encrypted services such as WhatsApp, iMessage, and Signal. That request carries a veneer of reasonableness, but it fails to take into account multiple changes to communications infrastructure. Decades ago, circuit-switched telephone networks were relatively robust against interception. The move to richer IP-based communications networks provides great flexibility in types of communications, and communications have moved from fixed landline phones to mobile devices.

This richness and flexibility have come with some tradeoffs. Interception, which used to require physical presence, is now controlled by software, enabling eavesdropping to be accomplished at a distance. That's useful for law enforcement, but such interception is done not just by the government operating under legally obtained warrants, but also by the bad guys. Software vulnerabilities make eavesdropping easier. The situation is made worse by who builds communications systems. The United States and United Kingdom used to rely on communications infrastructure supplied by trusted companies. No longer. Indeed, the U.K. government has expressed serious concern over the security risks of having the nation's communications infrastructure supplied by Huawei. The only protection against such interception is end-to-end encryption.

Levy and Robinson write, "We're not talking about weakening encryption or defeating the end-to-end nature of the service. In a solution like this, we're normally talking about suppressing a notification on a target's device, and only on the device of the target and possibly those they communicate with. That's a very different proposition to discuss and you don't even have to touch the encryption."

That's a bit of a stretch. Levy and Robinson's proposed solution involves changing how the encryption keys are negotiated in order to accommodate the silent listener, and that means creating a much more complex protocol—raising the risk of an error. They may not touch the encryption algorithm itself, but they do touch the communications protocol. So let me rephrase my statement above. The only protection against such interception is end-to-end encryption using a communications protocol that outsiders can vet—and thus that users can trust.
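To see why a vettable protocol matters, consider a hypothetical, simplified sketch of the kind of client-side check that public protocol documentation makes possible; the names are invented for illustration. A client that compares the keys it is asked to encrypt to against the roster and verified keys it displays will flag an unexplained extra key, which is why a silent listener ultimately requires changing what the client itself shows its user.

```python
# Hypothetical client-side consistency check, illustrating what publicly
# documented protocols let independent reviewers (and clients) verify.

def find_unexplained_keys(encryption_keys, displayed_roster, pinned_keys):
    """Return any key we are asked to encrypt to that does not belong to a
    displayed, previously verified participant."""
    expected = {pinned_keys[name] for name in displayed_roster if name in pinned_keys}
    return set(encryption_keys) - expected  # non-empty set: possible silent listener

pinned = {"alice": "pk_alice", "bob": "pk_bob"}         # keys verified out of band
roster = ["alice", "bob"]                               # what the app displays
keys_from_server = ["pk_alice", "pk_bob", "pk_agency"]  # what the server says to use

print(find_unexplained_keys(keys_from_server, roster, pinned))  # {'pk_agency'}
```

Such a check protects users only if clients actually perform it and if the protocol is public enough for outsiders to confirm that they do.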

I think the principles that Levy and Robinson propose are an excellent basis on which to have the conversation about investigations in the Digital Age. As it happens, I have met both Levy and Robinson on several occasions. In our conversations about interception, these British investigators, technologists to their very core, immediately jump to the details of the communications they seek to intercept. They ask: Which device or communications system are you seeking to intercept? For what type of criminal activity? Are you tracking child pornography? An insider threat? Each of these might exhibit interesting characteristics in the transmission that will aid an investigation. Different types of criminal activity may expose a particular set of time, place and other communication characteristics that cannot be easily hidden by encryption. Just as each device or communication system has certain aspects that are easier to push at, each criminal activity has certain aspects that provide critical telltale leads. GCHQ has been successful in cracking child pornography, terrorism and other cases because it has followed these leads.

In the U.S., the conversation has often been stymied by a law enforcement refrain about impediments rather than an embrace of new technological capabilities. Sometimes, indeed, law enforcement says it can't when it turns out that it simply didn't try hard enough.

With this in mind, I have several takeaways from the Levy and Robinson post. The first is that even though some of the principles are already embedded in U.S. jurisprudence, these principles are useful for developing policy around the tradeoffs in conducting investigations in the Digital Age. I’d like to see these principles explicitly adopted by U.S. law enforcement. Second, the concrete proposal about adding a silent listener to end-to-end encrypted conversations fails the security and trust tests that Levy and Robinson themselves recommend. That does not mean that the principles are inadequate, but rather that this particular proposal for exceptional access doesn’t pass muster. And finally, U.S. law enforcement should take a page from these investigators from across the pond and develop a far more technically sophisticated approach to conducting investigations involving digital technologies. Such advice is not new: the FBI has been told this for over two decades. It's well past time such recommendations were acted upon.


Susan Landau is Professor of Cyber Security and Policy in Computer Science at Tufts University. Previously, as Bridge Professor of Cyber Security and Policy at The Fletcher School and the School of Engineering, Department of Computer Science, Landau established an innovative MS degree in Cybersecurity and Public Policy joint between the schools. She has been a senior staff privacy analyst at Google, a distinguished engineer at Sun Microsystems, and a faculty member at Worcester Polytechnic Institute, the University of Massachusetts Amherst, and Wesleyan University. She has served on various boards at the National Academies of Sciences, Engineering, and Medicine and for several government agencies. She is the author or co-author of four books and numerous research papers. She has received the USENIX Lifetime Achievement Award, shared with Steven Bellovin and Matt Blaze, and the American Mathematical Society's Bertrand Russell Prize.
