
Apple's Cloud Key Vault and Secure Law Enforcement Access

Matt Tait
Wednesday, September 14, 2016, 10:09 AM

Michael Specter and the “Keys Under Doormats” (KUD) group have an interesting post, entitled “Apple's Cloud Key Vault, Exceptional Access, and False Equivalences” responding to my earlier post on Apple’s Cloud Key Vault.

The conversation so far has meandered somewhat over different sites, so if you’ve not been following along, the conversation thus far is:

Specter and the KUD group’s response is interesting and worth a read, but it responds to my points about Apple’s Cloud Key Vault (CKV) by dismantling a claim that is subtly, but importantly, different from the one I made. Their argument is that CKV is used to help protect uploaded user backup files from law enforcement; consequently, it’s not a “backdoor,” because Apple employees have no access to it.

My argument, rather, is that CKV is a cryptographic backdoor, just one carefully designed so that only the unmodifiable code inside CKV can use it, and only under precisely prescribed scenarios laid out by CKV’s program code ahead of time. Additionally, since most of the technical arguments around law-enforcement decryption also apply to Cloud Key Vault, Apple’s implementation of it exposes many such arguments as invalid or solvable.

Furthermore, my argument is that if law-enforcement use of hacking to circumvent encryption is the alternative, that is an infinitely more dangerous capability, and one that is profoundly more worrying from a transparency, oversight, insider-abuse, and civil-liberties standpoint than carefully developed encryption access systems. In short: the alternative is considerably less secure.

There’s a lot in there to unpack, so let’s go back to the beginning.

The problem Apple wanted to solve is this: You buy an iPhone and a MacBook. It would be really convenient if, when you “save” a website password for, say, Gmail on your MacBook, your iPhone also magically “knows” the password without you having to type it in again. This synchronization is done securely via a process called “iCloud Keychain.”

Unfortunately, one day a user could lose her backpack with both her iPhone and MacBook in it. Apple wants her to be able to re-synchronize a new MacBook with her (now gone) old devices, but it wants to do so in such a way that malicious Apple insiders, hackers, and law enforcement with a court order cannot use the system to synchronize her files onto a new MacBook of their own and gain access without her permission.

The first bit of the solution is simple: password protect the backup before uploading it. The password for this file is called the “iCSC” in Apple-speak, but in practice, this is your device passcode. Strong cryptography means you can’t open the file unless you know the passcode.

The problem is that users choose terrible passcodes. If the passcode-protected file were uploaded directly, law enforcement could subpoena it and simply guess passcodes sequentially (trying “1111,” “1112,” “1113,” and so on) until the correct passcode unlocks the file. Notably, when the right passcode opens the file, it gives law enforcement two pieces of information: the data in the file, and the passcode that opened it, which is the user’s device PIN code.
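
To make the scale of that problem concrete, here is a minimal sketch of the offline guessing attack. It is my own illustration, not Apple's file format: the use of PBKDF2, the salt, the verifier tag, and the function names are all assumptions made purely to show how small a four-digit passcode space is.

```python
import hashlib
import hmac
import os

def derive_key(passcode: str, salt: bytes) -> bytes:
    # PBKDF2 stands in for whatever key-derivation function the real format uses.
    # The iteration count is kept low so this demo runs quickly; real deployments
    # use far more, but hardware makes even that cheap against 10,000 candidates.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 1_000)

def protect(passcode: str) -> dict:
    # The real escrow format encrypts the backup; a verifier tag is enough
    # to demonstrate the guessing attack.
    salt = os.urandom(16)
    key = derive_key(passcode, salt)
    return {"salt": salt, "verifier": hmac.new(key, b"passcode-check", "sha256").digest()}

def brute_force(protected: dict) -> str | None:
    # With offline access, all 10,000 four-digit passcodes can be tried in moments,
    # yielding both the backup contents and the user's device PIN.
    for n in range(10_000):
        guess = f"{n:04d}"
        key = derive_key(guess, protected["salt"])
        tag = hmac.new(key, b"passcode-check", "sha256").digest()
        if hmac.compare_digest(tag, protected["verifier"]):
            return guess
    return None

print(brute_force(protect("4831")))  # -> 4831
```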

The short story of the Apple-vs-FBI case from earlier in the year is that Apple doesn’t want the FBI to be able to sequentially guess PIN codes to open a device. Apple needed to do something to protect these high-value passcode-protected files before uploading them in a form that law enforcement could subpoena. So Apple designed and built an ingenious system: Cloud Key Vault.

CKV’s primary job is to keep a secret private key very secure. The security of that key underpins the security of the entire design. When Specter and the Keys Under Doormats group say “Apple’s Cloud Key Vault does not have any Apple-owned private key,” they are simply mistaken. Apple refers to this key obliquely in its iOS Security Guide as the “HSM cluster private key” on page 46.

This CKV secret private key means Apple doesn’t have to upload dangerous passcode-protected files for backup directly. Instead, your device seals the passcode-protected backup in a cryptographic envelope so only the CKV can open it. It is this sealed CKV-envelope—which Apple calls the “iCloud Escrow Record”—that your device uploads to enable re-synchronization in the event you lose all of your devices.

Law enforcement could subpoena this file, but it won’t do them much good. The whole setup is designed to ensure that the file can only be opened inside the CKV itself. Because the escrow record can only be opened there, the CKV can count passcode guesses against the protected file inside and limit them to just 10 tries. If the user gets the passcode right, the CKV releases the backup file so she can recover her data. If she gets it wrong, the CKV permanently logs the failed attempt, enforcing that 10-guess limit.
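
A toy sketch of that guess-budget logic (my own construction; the class, the symmetric stand-in for the HSM private key, and the record format are all invented for illustration) shows why it matters that the check happens inside the vault: the counter lives next to the only key that can test a guess, so copying the escrow record elsewhere buys an attacker nothing.

```python
import hmac
import os

MAX_TRIES = 10

class CloudKeyVault:
    """Toy model: the vault key never leaves this object, and the failure
    counter sits beside it, so the 10-guess budget cannot be reset outside."""

    def __init__(self):
        self._vault_key = os.urandom(32)   # stand-in for the HSM cluster private key
        self._failures = {}                # record_id -> wrong guesses so far

    def seal(self, record_id: str, passcode: str, backup: bytes) -> bytes:
        # Real devices seal with a public key and the backup is encrypted; a MAC
        # keyed by the vault key is enough to show "only the vault can check a guess".
        tag = hmac.new(self._vault_key,
                       record_id.encode() + passcode.encode() + backup, "sha256").digest()
        return tag + backup

    def try_recover(self, record_id: str, escrow: bytes, guess: str) -> bytes | None:
        if self._failures.get(record_id, 0) >= MAX_TRIES:
            return None                    # permanently locked after 10 wrong guesses
        tag, backup = escrow[:32], escrow[32:]
        expected = hmac.new(self._vault_key,
                            record_id.encode() + guess.encode() + backup, "sha256").digest()
        if hmac.compare_digest(tag, expected):
            return backup                  # correct passcode: release the backup
        self._failures[record_id] = self._failures.get(record_id, 0) + 1
        return None                        # wrong passcode: one of the 10 tries is gone forever

vault = CloudKeyVault()
escrow = vault.seal("alice", passcode="4831", backup=b"keychain secrets")
assert vault.try_recover("alice", escrow, "0000") is None
assert vault.try_recover("alice", escrow, "4831") == b"keychain secrets"
```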

Which gets us to the Keys Under Doormats analysis. Their claim that “Apple’s Cloud Key Vault does not have any Apple-owned private key” is an error. The CKV secret private key is the engine that makes CKV work; its security determines the security of the entire system. If it is lost to an adversary, that adversary can open CKV-envelopes and perform unlimited PIN guesses on uploaded backup files, which is why I referred to it as a “cryptographic backdoor.” The KUD group correctly note that Apple grinds the access cards to the CKV into dust with a blender (yes, really). But this merely stops Apple changing any of CKV’s program code once it is deployed. Destroying the access cards is not equivalent to destroying the CKV secret private key, which will live on until Apple eventually decommissions the whole CKV.

Perhaps KUD’s confusion comes from one of the many oddities of CKV: what it is and what it is for are opposites. Looked at from 50,000 feet, you’ll see a backup system carefully designed to prevent Apple or law enforcement from intercepting uploaded backup files and guessing PIN codes. It’s expressly about keeping law enforcement out, not letting them in.

But zoom in a bit further, and we see that CKV is just a different cryptographic “actor” from Apple; one that’s trusted with a cryptographic backdoor to the whole system. CKV’s trustworthiness comes from the fact that its code can’t be modified after it is switched on; its use of its cryptographic backdoor is precisely prescribed in advance. CKV ensures that it only reveals the contents of CKV-encrypted files if its code decides the user’s inputted passcode is correct. Its code similarly ensures that invalid decryption attempts are recorded, and passcode guesses limited to 10.

CKV isn’t a law enforcement access system, but it’s not hard to see how to modify it to be one. Consider, for example, a minor modification to CKV. Let’s call it “Access Key Vault” or AKV for discussion purposes.

Apple devices store user backup files inside an envelope only the CKV can decrypt and upload this sealed envelope to Apple’s servers. An AKV access system, by contrast, could store the device’s decryption key inside an envelope only the AKV can decrypt, and store this AKV-sealed envelope on the device itself. This way, to get the AKV envelope, someone would need to first seize a device and then forensically recover the AKV envelope from it. Only the AKV would be able to decrypt the sealed envelope with its secret private key, so nobody could get at the device’s decryption key inside without first delivering the envelope to AKV technicians, who would then feed it into the AKV vault for decryption.

Inside the AKV vault, AKV’s unmodifiable code could then open this sealed envelope with its secret AKV private key. For auditability, AKV would irrevocably cryptographically log the request, and then output the content of the envelope (the device’s decryption key) to the technician outside of the vault. Investigators could then enter the device’s decryption key into the seized device via a forensic tool to gain access to the files within. Note that the AKV secret private key would never leave the AKV, which would permanently reside at Apple headquarters.
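
The flow is easier to see as code. Below is a minimal sketch of the hypothetical AKV described above; the class and method names, the use of RSA-OAEP, and the Python cryptography package are my assumptions, not part of any real Apple design.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

class AccessKeyVault:
    def __init__(self):
        # Stand-in for the AKV secret private key; it never leaves the vault.
        self._private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        self.request_log = []              # every unseal is recorded before output

    def enrollment_public_key(self):
        # Shipped to devices so they can seal their decryption key at setup time.
        return self._private_key.public_key()

    def unseal(self, envelope: bytes, order_id: str) -> bytes:
        # Log first, so the request can never be quietly un-happened.
        self.request_log.append(order_id)
        return self._private_key.decrypt(envelope, OAEP)

# On the device: seal the file-encryption key and keep the envelope locally.
akv = AccessKeyVault()
device_key = os.urandom(32)
envelope = akv.enrollment_public_key().encrypt(device_key, OAEP)

# Later, after physical seizure and a court order, a technician feeds the
# envelope into the vault and receives only this one device's key back.
recovered = akv.unseal(envelope, order_id="hypothetical-order-123")
assert recovered == device_key
```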

Ideally, we’d go one step further: splitting keys over multiple AKVs distributed across multiple organizations. Splitting the keys stops AKV technicians from being able to unilaterally decrypt devices; if the key is split, multiple AKV technicians at multiple sites need to cooperate to decrypt any physically seized device. This reduces the ability of insiders to abuse their access, but it also massively increases the cost of hacking the technicians and submitting unauthorized decryption requests, since even a single device decryption now requires multiple organizations to coordinate, each of which logs its part of the decryption request onto some undeletable ledger.
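
A minimal sketch of n-of-n key splitting makes the point concrete. XOR-based shares are my own illustrative choice (a real deployment might prefer threshold cryptography), but they show why a single compromised site yields nothing on its own:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(key: bytes, n: int) -> list[bytes]:
    # n-1 purely random shares, plus one final share chosen so that
    # XOR-ing all n of them together reproduces the original key.
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for share in shares:
        last = xor(last, share)
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    out = bytes(len(shares[0]))
    for share in shares:
        out = xor(out, share)
    return out

master = secrets.token_bytes(32)      # stand-in for the AKV unsealing key
shares = split(master, 3)             # one share held by each organization's vault
assert combine(shares) == master      # all three sites must cooperate
assert combine(shares[:2]) != master  # any strict subset reveals nothing about the key
```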

While split-key access improves security, its real benefit, counter-intuitively, is adding extra layers of legal oversight. Forcing the inclusion of an additional organization into the decryption process gives this other organization the opportunity to check the basis of—and potentially veto—the decryption order.

Specter and the Keys Under Doormats group do raise an important point regarding the security of the AKV technicians who submit encrypted envelopes to the AKV device for decryption. What is the worst case scenario if some or all AKV technicians go rogue, or get hacked?

This isn’t as big a problem as you might think. Firstly, the technicians can’t walk off with the AKV secret private key. The whole premise of the Cloud Key Vault is that the key is secured so well that Apple employees can’t run off with it or use it in a way not precisely prescribed beforehand by Apple’s developers. If CKV can do that, AKV can too. Of course, a malicious technician, or a technician who gets hacked, would be able to submit unauthorized decryption requests to the AKV, but they could not do so without detection. The cryptographic log of all decryption requests would mean nobody could submit a request and later pretend it didn’t happen. And if the key is split across multiple sites, a single insider or a single hacked technician gets the adversary nothing: the adversary would need to submit an unauthorized decryption request to each AKV at each organization, in precisely the correct order, just to decrypt a single device to which it already has local forensic access.
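
That "cannot later pretend it didn't happen" property can be sketched with a hash-chained log. This is my own toy construction, not any deployed system: each entry commits to everything before it, so once the other organizations have witnessed the latest head hash, no earlier request can be removed or rewritten undetected.

```python
import hashlib
import json

class RequestLog:
    """Append-only log: each head hash commits to the entire history so far."""

    def __init__(self):
        self.entries = []                  # (request, head hash at that point)
        self._head = b"\x00" * 32          # genesis value

    def append(self, request: dict) -> str:
        record = json.dumps(request, sort_keys=True).encode()
        self._head = hashlib.sha256(self._head + record).digest()
        self.entries.append((request, self._head.hex()))
        return self._head.hex()            # published to and witnessed by the other AKV sites

    def verify(self) -> bool:
        head = b"\x00" * 32
        for request, digest in self.entries:
            head = hashlib.sha256(head + json.dumps(request, sort_keys=True).encode()).digest()
            if head.hex() != digest:
                return False
        return True

log = RequestLog()
witnessed_head = log.append({"device": "serial-123", "order": "hypothetical-order-456"})
assert log.verify()
# An insider who later deletes the entry cannot also delete the head hash that
# the other organizations already witnessed, so the erased request is still exposed.
```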

This is why the “we can’t build this securely” claims by many technologists are so deeply disingenuous. We can absolutely build systems so secure that compromising them would require a grand conspiracy of technicians going rogue at multiple different organizations, or the hacking of multiple air-gapped computer networks to compromise the technicians directly. We can make the system so secure that the consequences of even this vanishingly unlikely event are not a nightmarish vision of mass surveillance, but are limited to the unauthorized decryption of a small number of individually targeted devices to which the plotters must already have local physical forensic access. Moreover, not only would the adversary need to expend enormous effort for such paltry capability, we can make it so they wouldn’t even get away with it: a cryptographic log at every AKV organization forever exposing the unauthorized decryption request and the targeted device in a way that could never be erased.

Perhaps this scheme does not meet the platonic ideal of “secure,” but describing the risk as “high potential for catastrophic loss” seems to me an abuse of ordinary language.

Importantly, much of the rest of the Keys Under Doormats argument is not so much technical as political. For example, I’m pretty sure Apple doesn’t execute subpoenas submitted from Uzbekistan. It does, however, already process many legal requests from the UK, France, Germany, Italy, Canada, Brazil and so on. Perhaps it processes too many court orders. Perhaps it processes court orders from countries we’d prefer it not to. But these are not technical questions reserved for some elite group of technologists and cryptographers; they are not questions of technical security at all, and are therefore properly put to a much wider set of participants in the debate.


Matt Tait is the Chief Operating Officer of Corellium. Previously he was CEO of Capital Alpha Security, a consultancy in the UK; worked at Google Project Zero; was a principal security consultant for iSEC Partners and NGS Secure; and worked as an information security specialist for GCHQ.
