
An Approach to James Comey's Technical Challenge

Matt Tait
Wednesday, April 27, 2016, 7:00 AM

In 2014, at the very beginning of the “Going Dark debate,” FBI Director James Comey gave a challenge to the technical community. Is it possible to create a “front-door” that law enforcement can use to access encrypted devices that doesn’t put other users at risk?

A great deal of ink and many keystrokes have already been sacrificed to the topic of “going dark”: Is law enforcement actually “going bright”? Is “secure law enforcement access” an oxymoron? But it’s hard to find anyone even trying to tackle the technical question the Director actually asked: Is it really impossible to have secure law enforcement access to encrypted devices? And what are the risks involved?

It’s a shame that nobody’s focusing on this question, because it’s actually an interesting technical question, and it deserves a serious technical answer.

At the risk of over-simplification, full-disk encryption normally works as follows: files on the hard drive are scrambled with a “filesystem key” chosen by the operating system when it is first installed on the computer. That filesystem key is sealed inside a password-protected digital “vault” which is stored on the unencrypted part of the hard drive. When the computer starts up and asks the user to enter his or her password, that password allows the operating system to open the vault and retrieve the filesystem key, and that key then allows the operating system to access the files on the encrypted part of the drive.
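
To make that concrete, here is a minimal sketch of such a password “vault,” written in Python with the widely used cryptography package. The function names, parameter choices, and on-disk layout are purely illustrative; they are not any particular vendor’s implementation.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def make_vault(password: bytes, filesystem_key: bytes) -> dict:
    """Seal the filesystem key inside a password-protected 'vault'."""
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    vault_key = kdf.derive(password)            # key derived from the user's password
    nonce = os.urandom(12)
    sealed = AESGCM(vault_key).encrypt(nonce, filesystem_key, None)
    # The vault lives on the unencrypted part of the drive.
    return {"salt": salt, "nonce": nonce, "sealed_key": sealed}

def open_vault(password: bytes, vault: dict) -> bytes:
    """Recover the filesystem key; raises InvalidTag if the password is wrong."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=vault["salt"], iterations=600_000)
    vault_key = kdf.derive(password)
    return AESGCM(vault_key).decrypt(vault["nonce"], vault["sealed_key"], None)

filesystem_key = AESGCM.generate_key(bit_length=256)   # the key that actually scrambles the files
vault = make_vault(b"correct horse battery staple", filesystem_key)
assert open_vault(b"correct horse battery staple", vault) == filesystem_key
```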

The technical question at the heart of the “Going Dark” debate is whether it is possible to allow law enforcement to recover the filesystem key from the drive without knowing the user’s password. To be secure, the scheme would need to block access by thieves, foreign governments, and hackers, as well as misuse by rogue police officers acting without a warrant or by staff working for the device’s manufacturer.

I think it’s doable. Here’s one possible approach: “cryptographic envelopes.”

Normal envelopes are pretty simple. You write a message on some paper, put it in an envelope, write the addressee’s name on the front, and glue it shut. You can then deliver the sealed and addressed envelope by mail, by FedEx, in person, or by simply leaving it on the addressee’s desk. But however you deliver the message, the intention is always the same: only the person named on the front is allowed to open the envelope and read the message contained within.

Cryptographic envelopes sound complicated, but they are conceptually very similar. In the case of cryptographic envelopes, the addressee’s name takes the form of his or her public key, and strong cryptography, rather than glue, seals the digital message inside. Once it is sealed, only the addressee can open the envelope to read the message using the corresponding private key.
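
In code, a cryptographic envelope might look something like the following sketch. It uses a standard hybrid construction (a fresh symmetric key protects the message body, and that symmetric key is sealed under the addressee’s RSA public key); the helper names seal and open_envelope are mine, chosen for readability rather than taken from any real product.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def seal(message: bytes, addressee_public_key) -> bytes:
    """Seal a message so only the holder of the matching private key can read it."""
    content_key = AESGCM.generate_key(bit_length=256)           # fresh key for this one envelope
    nonce = os.urandom(12)
    body = AESGCM(content_key).encrypt(nonce, message, None)    # the glued-shut contents
    wrapped_key = addressee_public_key.encrypt(content_key, OAEP)  # the "name on the front"
    return wrapped_key + nonce + body

def open_envelope(envelope: bytes, private_key) -> bytes:
    """Open an envelope addressed to the holder of this private key."""
    n = private_key.key_size // 8                               # RSA ciphertext length in bytes
    wrapped_key, nonce, body = envelope[:n], envelope[n:n + 12], envelope[n + 12:]
    content_key = private_key.decrypt(wrapped_key, OAEP)
    return AESGCM(content_key).decrypt(nonce, body, None)

addressee = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sealed = seal(b"meet me at noon", addressee.public_key())
assert open_envelope(sealed, addressee) == b"meet me at noon"
```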

It’s worth stressing that unlike ordinary envelopes, cryptographic envelopes cannot simply be steamed open by would-be snoopers. Mere application of brainpower or the purchase of millions of dollars of computers cannot unseal them. After all, cryptographic envelopes are the foundation of modern cryptography. If they could be steamed open without the private key, all of online security would quickly crumble.

Cryptographic envelopes have existed since the dawn of public-key cryptography itself, and programs that use them will be very familiar to most Lawfare readers. PGP’s central feature, for example, is nothing more than a mechanism by which users can create and manage cryptographic envelopes.

The anonymity program The Onion Router (TOR) uses cryptographic envelopes too, but in a slightly different way. If you browse the web using TOR, your browser’s requests are wrapped inside three layers of stacked cryptographic envelopes, one inside the other, with each envelope addressed to a different computer inside the TOR network.

This cryptographic “onion” of layered envelopes is then passed around inside the TOR network, each node unwrapping the outermost layer of encryption and forwarding the sealed envelope within, much like a pass-the-parcel at a children’s birthday party, until the final node opens the innermost envelope to reveal the prize: the user’s browser request, which it services on the anonymous user’s behalf.
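
As a toy illustration only (reusing the hypothetical seal and open_envelope helpers sketched above, with freshly generated keys standing in for real TOR nodes), the layering and peeling look roughly like this:

```python
# Three stand-in "nodes", each with its own key pair.
from cryptography.hazmat.primitives.asymmetric import rsa

entry, middle, exit_node = [rsa.generate_private_key(public_exponent=65537, key_size=2048)
                            for _ in range(3)]

request = b"GET https://example.org/"
onion = seal(request, exit_node.public_key())   # innermost layer: only the exit node can read it
onion = seal(onion, middle.public_key())
onion = seal(onion, entry.public_key())         # outermost layer: addressed to the entry node

# Each node peels exactly one layer and forwards whatever is left inside.
for node in (entry, middle, exit_node):
    onion = open_envelope(onion, node)
assert onion == request                         # only the last node ever sees the request itself
```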

Although doing so is unconventional, we can use cryptographic envelopes with filesystem encryption too. We already put the filesystem key inside a password-protected vault on the unencrypted part of the drive. We could, if we were so inclined, also put a copy of that filesystem key inside a sealed cryptographic envelope left on the unencrypted part of the drive. That way, decryption of the drive could take place in one of two different ways: either by using the user’s password to open the password vault, or by opening the envelope with the corresponding private key.
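
Continuing the earlier sketches (make_vault, open_vault, seal, and open_envelope are all my illustrative helpers, and the manufacturer key below is just a generated stand-in), the two recovery paths would sit side by side roughly like this:

```python
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

manufacturer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
filesystem_key = AESGCM.generate_key(bit_length=256)

# Both of these are stored on the unencrypted part of the drive:
vault = make_vault(b"user-password", filesystem_key)          # path 1: the user's password
envelope = seal(filesystem_key, manufacturer.public_key())    # path 2: the sealed envelope

assert open_vault(b"user-password", vault) == filesystem_key
assert open_envelope(envelope, manufacturer) == filesystem_key   # no password required
```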

Depending on whose public key we use to seal this envelope, we get to choose who can decrypt the drive without a password. For example, if we put the filesystem key in a cryptographic envelope addressed to the FBI, the FBI can decrypt the drive without a password. If we put the filesystem key inside an envelope addressed to the manufacturer, the FBI has to ask the manufacturer to open the envelope with its private key.

But if we borrow an idea from TOR, and layer the envelopes, things start to get a bit more interesting. Suppose, for example, we put the filesystem key in an envelope sealed with the FBI’s public key, and then put that sealed envelope inside another envelope, this time sealed with the manufacturer’s public key.

To start with, the drive can no longer be decrypted unilaterally by the FBI. Because the FBI doesn’t have the manufacturer’s private key, it can’t open the outer envelope. Nor can the drive be unilaterally decrypted by the manufacturer: although the manufacturer can open the outer envelope, only the FBI can open the inner one to retrieve the filesystem key. Decryption of the drive (at least, without knowledge of the user’s password) now cryptographically requires both organizations to work with each other—all but eliminating the possibility of criminal misuse by insiders, or institutional misuse, such as secretly decrypting devices without a warrant or recovering customer data on devices returned to the manufacturer. A curious police officer or employee of the device manufacturer simply could not decrypt a drive on a whim. Without going through the full formal process involving both organizations, the drive will not yield.
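
Here is roughly how that two-party requirement falls out of the layering, again using the same illustrative helpers and freshly generated stand-in keys (nothing here is anyone’s real key or real escrow format):

```python
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

fbi = rsa.generate_private_key(public_exponent=65537, key_size=2048)
manufacturer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
filesystem_key = AESGCM.generate_key(bit_length=256)

inner = seal(filesystem_key, fbi.public_key())     # only the FBI's private key opens this
escrow = seal(inner, manufacturer.public_key())    # only the manufacturer's private key opens this

# Recovery requires both organizations, in a fixed order:
peeled = open_envelope(escrow, manufacturer)       # the manufacturer removes the outer layer...
recovered = open_envelope(peeled, fbi)             # ...and only then can the FBI open the inner one
assert recovered == filesystem_key

# Neither party can act alone: the FBI cannot open `escrow`,
# and the manufacturer cannot open `inner`.
```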

It’s also noteworthy how configurable the whole scheme is, merely by altering the order and public keys of the various layered envelopes. Want the EFF to validate all device-decryption requests? Simply put an EFF-addressed envelope as one of the layers. Want your German iPhones to be indecipherable to American law enforcement? Then swap out the FBI’s public key for a public key belonging to the Bundespolizei in devices sold in Germany. The FBI would then be cryptographically unable to unilaterally decrypt German phones. It wouldn’t even be able to compel the device manufacturer to open them with an American warrant. The only way for the FBI to unlock a German device would be to dust off the MLAT and ask the Germans and the manufacturer to do so on its behalf.
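
In the same sketch, per-market configuration is nothing more than a different list of layer keys. Continuing the example above (the bundespolizei key is, like everything else here, just a generated placeholder, and seal_layers is an illustrative helper built on the earlier seal function):

```python
from cryptography.hazmat.primitives.asymmetric import rsa

bundespolizei = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def seal_layers(filesystem_key: bytes, layer_public_keys) -> bytes:
    """Seal the key under each public key in turn, innermost layer first."""
    envelope = filesystem_key
    for public_key in layer_public_keys:
        envelope = seal(envelope, public_key)
    return envelope

# US-sold devices: FBI innermost, manufacturer outermost.
escrow_us = seal_layers(filesystem_key, [fbi.public_key(), manufacturer.public_key()])

# German-sold devices: the Bundespolizei's key replaces the FBI's, which never appears.
escrow_de = seal_layers(filesystem_key, [bundespolizei.public_key(), manufacturer.public_key()])
```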

Having a public key owned by the manufacturer in the cryptographic Russian doll of envelopes also mitigates the ethical “Chinese democracy activist” question. If the device manufacturer has ethical qualms about servicing a decryption request, it can simply refuse to decrypt its layer. Device manufacturers already routinely refuse law enforcement requests from repressive regimes; they could do the same for decryption requests that don’t satisfy the manufacturer’s legal or ethical bar.

But of all of the idea’s merits, the one I like the most is its transparency. You could take the whole design, fling open the doors, and open-source it. Auditors could then independently verify precisely who can decrypt the drive—and the multi-organizational process different actors have to go through to achieve it. Manufacturers wouldn’t have to just tell their German customers that the FBI can’t decrypt German-bought phones. Those customers could verify it for themselves, either by reading the code or by reverse-engineering the phone.

Of course, there are some problems the idea doesn’t (and doesn’t attempt to) solve. Technically savvy individuals can, of course, run additional encryption products to layer extra encryption on top of their files. But most criminals aren’t technically savvy, and the few that are still make mistakes. If criminals always used the best operational security and encryption available, prisons would be much emptier than they are today.

It’s also certainly true that private keys can be stolen. But this is not the unassailable risk some commentators would have you believe. For a start, if hackers could simply hack into any company and steal the private keys of any organization they chose, automatic software updates wouldn’t be a security feature; they’d be a game of Russian roulette, a mass-delivery mechanism for ransomware.

Take a moment to compare this regime to contemporary practice. To decrypt a drive encrypted with this design, a hacker would have to steal private keys from the manufacturer, the FBI, and any other holder of a private key corresponding to an envelope used to seal the filesystem key. Each private key is held by a different organization and can be kept offline, on non-Internet-connected computers in the basements of their respective organizations. And even if our hacker is somehow successful and manages to steal all of the private keys, she still only gets to decrypt individual drives she already has physical access to. This worst-case scenario is still no nightmare of Orwellian mass surveillance.

By contrast, with automatic software updates in wide use today, a hacker needs only to steal a single private key from the manufacturer to gain the power to distribute ransomware or spyware en masse to all of that manufacturer’s customers.

Even so, maybe the risk isn’t worth the gain. But let’s call that what it is: a debate around the value of law enforcement access to encrypted devices, rather than a concern about per se unacceptable cybersecurity risk.

I think it’s unlikely we’ll see this design in any technology company’s products in the near future. After all, for a lot of technologists, academic cryptographers, information security experts, and even many customers, locking law enforcement out of devices is a “feature,” not a “bug.” If you take it as axiomatic that “security” requires evicting law enforcement from devices by any means necessary, then clearly any scheme that mediates law enforcement access is “insecure” and undesirable. But that’s not a technical point. It’s a values judgment.

My view is more pragmatic. I don’t think our choice is whether law enforcement will have access to encrypted devices, but rather how they will inevitably get it. And for all the faults or “insecurity” of any cryptographically mediated access, I’m quite confident that the alternative will be less “secure.”

When law enforcement warns about “Going Dark,” officials may be talking about the risk of having less evidence available to investigations, but that’s not the risk I worry about. When I hear that warning, what worries me is that law enforcement’s methods are becoming darker and less accountable.

We may evict law enforcement from encrypted devices, but at what cost? It seems we’re already well down the road of institutionalizing law enforcement use of zero-days and hacking. It’s no longer clear that secret court orders can’t compel tech companies to digitally sign malware. The US, UK, and France all have legislation in progress that would compel tech companies to retain access to customer data.

If law enforcement access is by definition “insecure,” I’m not sure our current alternatives are any more “secure.” They certainly sound less accountable and transparent. And whatever your view on encryption “front-doors,” we should be worried about the direction law enforcement is going.

Because if this is the road “Going Dark” takes us, it’s a dark road indeed.


Matt Tait is the Chief Operating Officer of Corellium. Previously, he was CEO of Capital Alpha Security, a consultancy in the UK; worked at Google Project Zero; was a principal security consultant for iSEC Partners and NGS Secure; and worked as an information security specialist for GCHQ.
