Apple, CALEA and Law Enforcement

Matthew D. Green
Tuesday, December 19, 2017, 7:00 AM

On Dec. 11, Nick Weaver argued that Apple isn’t doing enough to help law enforcement wiretap iPhone users. That's undoubtedly true, as Apple is building communications systems optimized for privacy and security, not wiretapping. Nick's piece makes some good points but it also makes some assertions that deserve pushback. In particular, Nick wrote:

iMessage and FaceTime have a cryptographic architecture that enables prospective wiretapping, yet there is reason to believe that Apple is not fully complying with lawful court orders to exercise this capability. There is also evidence that, although Apple is supposedly complying with pen register orders, the company is actually providing something substantially less than what the law is able to compel them to provide in response to a pen-register or trap-and-trace (PR/TT) order.

Let's start with iMessage, Apple's default text messaging service. iMessage uses end-to-end encryption, which in the simplest terms means that Alice encrypts her messages directly to Bob, without allowing Apple to read them.

But this simple description leaves out a lot of detail.

Before iMessage can encrypt Alice's message to Bob, the iMessage software must obtain Bob’s public encryption key. Apple assists in this process by operating a dedicated key server, which acts as a sort of “white pages” for the service. When Bob registers a new phone, he sends a key to Apple’s server. When Alice sends a message to Bob, her phone asks Apple for Bob’s key. As long as Apple gives Alice the right key, Alice is ready to communicate securely with Bob. The problem to which Nick alludes is that Apple doesn't necessarily have to give Alice Bob's key; instead, Apple could give her the FBI's public key. Then Alice would be communicating with the FBI, which could forward her message to Bob, fooling Bob into thinking he's hearing from Alice. (This is called a man-in-the-middle attack.)
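
To make this concrete, here is a minimal Python sketch of the attack, using PyNaCl's SealedBox as a stand-in for Apple's actual protocol. The key server, the addresses, and every name here are invented for illustration. The point is that Alice's client runs identical code in both cases; who can read her message is decided entirely by which key the server hands back.

    from nacl.public import PrivateKey, SealedBox

    class KeyServer:
        """Toy 'white pages': maps an address to a public encryption key."""
        def __init__(self):
            self.directory = {}

        def register(self, address, public_key):
            self.directory[address] = public_key

        def lookup(self, address):
            return self.directory[address]

    bob_key, fbi_key = PrivateKey.generate(), PrivateKey.generate()
    server = KeyServer()
    server.register("bob@example.com", bob_key.public_key)

    # Honest case: the server returns Bob's real key, so only Bob can decrypt.
    ct = SealedBox(server.lookup("bob@example.com")).encrypt(b"hi Bob")
    assert SealedBox(bob_key).decrypt(ct) == b"hi Bob"

    # Man-in-the-middle case: the server silently substitutes the FBI's key.
    server.register("bob@example.com", fbi_key.public_key)
    ct = SealedBox(server.lookup("bob@example.com")).encrypt(b"hi Bob")
    msg = SealedBox(fbi_key).decrypt(ct)                  # the FBI reads the message,
    fwd = SealedBox(bob_key.public_key).encrypt(msg)      # re-encrypts it to Bob,
    assert SealedBox(bob_key).decrypt(fwd) == b"hi Bob"   # and Bob suspects nothing.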

Nick contends this is a serious weakness in iMessage; I agree. The problem arises because of the way the iMessage protocol has been set up: neither the sender nor the receiver can tell whether anyone else is in the communications path, because Apple gives Alice and Bob no way to check the keys they are using. Other end-to-end encrypted systems have been designed to prevent this problem. WhatsApp and Signal, for example, both provide users a way to check for a man in the middle (see here and here, respectively). This is a security weakness of the iMessage protocol and should be fixed.
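
What that check looks like is, roughly, the following. This is a simplified sketch (real safety numbers in Signal and WhatsApp are derived from both parties' identity keys and identifiers, not a bare SHA-256), but the principle is the same: each side computes a short fingerprint of the key material it sees, and the two compare fingerprints over a channel the key server doesn't control, such as in person or over a phone call.

    import hashlib
    from nacl.public import PrivateKey

    def fingerprint(public_key_bytes: bytes) -> str:
        """Render key material as a short, human-comparable string."""
        digest = hashlib.sha256(public_key_bytes).hexdigest()
        return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

    bob_key, fbi_key = PrivateKey.generate(), PrivateKey.generate()

    # Bob reads out the fingerprint of his real key; Alice fingerprints the
    # key the server gave her. A substituted key shows up as a mismatch.
    print("Bob reads out: ", fingerprint(bytes(bob_key.public_key)))
    print("Alice computed:", fingerprint(bytes(fbi_key.public_key)))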

There's another way that Alice and Bob’s encrypted communication can be eavesdropped upon. Because iMessage allows multiple devices on a single account, Apple could add the FBI as a second "virtual device" on Bob's account. Then Bob would get Alice's messages—and so would the FBI. Both would be able to decrypt the end-to-end encrypted messages, so again the FBI could listen in. But Apple has made this attack hard to hide. Currently, when someone adds a device to a user's account, Apple triggers a warning to alert the user of the change. So if the FBI were to add an eavesdropping device to Bob's account, Bob would get a warning from Apple that there's a new device on his account—and he'd know someone was listening.
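
Here is a toy model of that multi-device design and of the warning that makes the attack visible. Again, the class and names are invented, not Apple's code; in the real system each device keeps its own private key, while this demo holds them in one place for brevity.

    from nacl.public import PrivateKey, SealedBox

    class Account:
        def __init__(self, owner):
            self.owner = owner
            self.devices = {}  # device name -> key pair (held here only for the demo)

        def add_device(self, name):
            # The crucial safeguard: every existing device is told about the change.
            for existing in self.devices:
                print(f"[{existing}] Warning: new device '{name}' added to this account")
            self.devices[name] = PrivateKey.generate()

    def send(account, message):
        # Fan-out: the sender encrypts one copy per registered device.
        return {name: SealedBox(key.public_key).encrypt(message)
                for name, key in account.devices.items()}

    bob = Account("bob")
    bob.add_device("bobs-iphone")
    bob.add_device("fbi-virtual-device")  # triggers a warning on bobs-iphone
    copies = send(bob, b"hi Bob")
    # Every device on the account, including the new one, can decrypt its copy.
    assert all(SealedBox(bob.devices[n]).decrypt(c) == b"hi Bob"
               for n, c in copies.items())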

Nick thinks that Apple could change its warning system to only let good guys—those whom law enforcement isn’t tapping—get the announcements of a new device on the line. While Nick correctly identifies a technical weakness of iMessage, he's incorrect about Apple’s legal requirements and about how easy it would be for Apple to accommodate the FBI's demands. Nick’s first argument, which is wrong, is that Apple has a legal obligation to design iMessage so that the FBI can "join" Alice and Bob's communication, that is, to wiretap it. Nick alludes to the 1994 Communications Assistance for Law Enforcement Act (CALEA), but the law explicitly rules this out:

This subchapter does not authorize any law enforcement agency or officer—

(A) to require any specific design of equipment, facilities, services, features, or system configurations to be adopted by any provider of a wire or electronic communication service, any manufacturer of telecommunications equipment, or any provider of telecommunications support services; or

(B) to prohibit the adoption of any equipment, facility, service, or feature by any provider of a wire or electronic communication service, any manufacturer of telecommunications equipment, or any provider of telecommunications support services.

On the matter of encryption, CALEA is even more explicit:

A telecommunications carrier shall not be responsible for decrypting, or ensuring the government’s ability to decrypt, any communication encrypted by a subscriber or customer, unless the encryption was provided by the carrier and the carrier possesses the information necessary to decrypt the communication.

Apple does not possess the keys needed to decrypt iMessage traffic, so there is no current legal obligation for Apple to redesign the system to provide law enforcement access. In fact, it's exactly these explicit limitations of CALEA that apparently have been driving law enforcement to push for new legislation (which has in the past been referred to as "CALEA II"). That effort has not yet borne fruit, despite a number of high-profile speeches.

The second issue that Nick has wrong is a technical one that my colleagues and I have discussed multiple times (e.g., in Keys Under Doormats). If Apple were to change its technology to be able to surreptitiously add a device to an account—that's the way the wiretapping would work—it would have to deactivate the warning system. (Otherwise the bad guy would know he's being tapped.)

The real question is not whether Apple can do this. (The answer is yes.) The real question is whether Apple can do it in a way that doesn't disrupt the system for everyone else and destroy their security. That's far from clear. Apple is a company managing a billion active devices across multiple geographies. In order to make the change Nick suggests, Apple would need to carefully re-architect specific portions of its infrastructure so that these warnings would be disabled.

But they would have to be disabled only for specific customers, under a wiretap order, on demand. Performing this sort of modification is no small task. It carries the risk that Apple could seriously harm the operation of its network, either producing costly global outages or opening new security holes for malicious hackers to exploit. And if Apple's mechanism for disabling the warning happened to disable it for users who are not the subject of a wiretap order—an accident waiting to happen—then Apple would have broken not just the end-to-end encryption of iMessage for those users but something much broader: users' assurance that they know which devices have access to their accounts, and with them the data stored in iCloud, iTunes, etc. Such a breach would be a disaster, both for the customers and for Apple. Nick is proposing a "solution" that might work on a system supporting a couple of thousand—or maybe even a few hundred thousand—phones, but one that carries high security and operating risks at the scale of customers and devices Apple actually serves. It is simply not reasonable.
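
To see where the risk concentrates, consider the one change Nick's proposal forces into logic like the toy model above: the warning becomes conditional on a per-customer suppression list. Everything below is hypothetical, but the structure is dictated by the requirement itself.

    SUPPRESSED_ACCOUNTS = set()  # hypothetically populated per court order

    def warn_existing_devices(owner, devices, new_device):
        """Alert a user's devices that a new device joined, unless suppressed."""
        if owner in SUPPRESSED_ACCOUNTS:
            return  # the single branch on which everyone else's security now rests
        for existing in devices:
            print(f"[{existing}] Warning: new device '{new_device}' added")

    warn_existing_devices("bob", ["bobs-iphone"], "bobs-ipad")  # warning fires

    SUPPRESSED_ACCOUNTS.add("bob")  # a wiretap order arrives
    warn_existing_devices("bob", ["bobs-iphone"], "fbi-virtual-device")  # silence

    # Failure mode: anything that mis-populates SUPPRESSED_ACCOUNTS (a bug, a
    # compromised feed into it, an identifier mix-up) turns off warnings for
    # users under no order at all, and nothing visibly breaks, so the failure
    # can go unnoticed indefinitely.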

Apple is consistently making choices to protect users' privacy and security. In the face of the kinds of attacks we've been seeing, from the "hack in a box" that Chinese criminals were selling to the sophisticated hacking of Juniper's VPN devices, the better the security on phones and in communications, the better off we all are. So while Nick is right about the current vulnerability in iMessage, he has it wrong both on Apple's legal obligations under CALEA and on how easy it would be for the company to accommodate law enforcement's demands.


Matthew Green is a professor at the Johns Hopkins University Information Security Institute. His research focus is on cryptographic techniques for maintaining users’ privacy, and technologies that enable the deployment of privacy-preserving protocols. Green writes on topics related to cryptography and surveillance.
