
What’s Involved in Vetting a Security Protocol: Why Ray Ozzie’s Proposal for Exceptional Access Does Not Pass Muster

Susan Landau
Monday, May 14, 2018, 8:00 AM

For almost a decade, the FBI has been stating that law enforcement is “going dark”—meaning it is increasingly unable to listen in to communications and, more recently, to open locked devices in which important evidence lives. The bureau has sought legislative solutions for exceptional access—law enforcement access to encrypted communications and locked devices—while computer security experts have repeatedly warned that such a requirement will make an already-bad cybersecurity situation far worse. Law enforcement’s response has been to insist that the smart engineers of Silicon Valley can build an exceptional-access solution that is secure—they just have to think harder.

Recently, Wired ran an article that seemed to support this reasoning. The magazine profiled Ray Ozzie, the developer of Lotus Notes and former chief software architect at Microsoft, who claims to have a solution to the exceptional-access problem.

Spoiler alert: It ain't so.

At this point, readers have three choices. You can just take my word for it. You can read my argument below, which explains in some detail the reasons behind that statement. Or you can take the middle route and see how Ozzie’s proposal stacks up against the structured framework proposed by the recent National Academies encryption study for evaluating proposals to provide authorized government agencies with access to encrypted content. (Disclosure: I served on the Academies committee.)

As readers may know from reading earlier articles on Ozzie’s work and the Academies study, Ozzie presented his approach to the Academies committee last year. The committee was not constituted to study the security of particular proposals, and so I approached Ozzie and asked if he would be willing to have an ad hoc group of security researchers look at his proposal. Ozzie graciously agreed. Last May, I met with Ozzie along with Steve Bellovin, Matt Blaze, Dan Boneh, Ron Rivest and several industry researchers. My comments below are based on this joint effort and discussion.

Ozzie’s “Clear” proposal claims to provide a secure way to do lawful exceptional access for devices. The idea is that device decryption keys would be wrapped in an encryption key known to the device manufacturer and stored on the device. For a law enforcement agency to open a locked device that is in its possession and that it has appropriate legal authority to unlock, the agency would send the phone’s wrapped key to the manufacturer. The manufacturer would use its unwrapping key and return the unwrapped key, suitably protected, to the law enforcement agency, which could then unlock the device. Once unlocked, the device “bricks” itself—that is, it no longer turns on or works (thus resembling a brick). This keeps the phone’s evidence pristine and also protects the device’s owner, for the bricked state provides the owner with notice that someone has accessed her device’s contents. (Of course, the bricked state also prevents the owner from ever using the device again, but I’ll ignore that here.)
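To make the flow concrete, here is a minimal sketch of that wrap-and-unwrap pattern. It is not Ozzie’s code; the use of Python, the cryptography library, and RSA-OAEP as the wrapping scheme are my assumptions for illustration, since the proposal does not fix a particular algorithm.

```python
# Minimal sketch of a Clear-style key-wrapping flow (illustrative only).
# Assumption: the manufacturer's "unwrapping" key is the private half of an
# RSA key pair, and devices wrap their keys with the public half.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Manufacturer: holds the private unwrapping key; the public key ships on devices.
manufacturer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapping_key = manufacturer_key.public_key()

# Device: generates its own decryption key and stores only the wrapped form.
device_key = os.urandom(32)                               # protects the device's data
wrapped_blob = wrapping_key.encrypt(device_key, oaep)     # stored on the device

# Law enforcement, holding the device under appropriate legal authority, sends
# wrapped_blob to the manufacturer, which unwraps it and returns the device key.
recovered_key = manufacturer_key.decrypt(wrapped_blob, oaep)
assert recovered_key == device_key
# The device would then unlock and permanently "brick" itself.
```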

That’s the entire proposal. It has the virtue of being simple. But security can be subtle, and simple solutions often miss critical aspects. And so it is with Ozzie’s approach.

Ozzie’s proposal is based on the seemingly straightforward premise that if we can trust manufacturers to provide software updates, then we can trust them to provide exceptional access. The rationale is that software updates are trusted because they are signed by keys that manufacturers secure. If the manufacturers can keep update keys secure, Ozzie’s argument goes, they can also secure Ozzie’s unwrapping keys. This argument has the virtue of sounding reasonable. But it is a false analogy.
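For readers who want the mechanics, the trust mechanism being invoked here is ordinary code signing: the manufacturer signs an update image with a closely held private key, and devices refuse to install anything whose signature does not verify. Below is a minimal sketch; Ed25519 is used purely as a stand-in for whatever signing scheme and key-management infrastructure a given vendor actually uses.

```python
# Minimal code-signing sketch (Ed25519 chosen only for brevity; real vendors
# use their own schemes and key-management infrastructure).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

update_image = b"...new operating system build..."

# Manufacturer: sign the update with the closely guarded update key.
update_key = Ed25519PrivateKey.generate()
signature = update_key.sign(update_image)

# Device: install only if the signature verifies against the built-in public key.
try:
    update_key.public_key().verify(signature, update_image)
    print("signature valid: install update")
except InvalidSignature:
    print("signature invalid: reject update")
```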

Software-update keys are of only limited use to an attacker. To use a software-update key to cause harm, an attacker would have to develop a software update that changes the device’s operating system in a way that is unnoticeable by the device owner—that is, so the device still appears to be operating properly—but also accomplishes some task the attacker wants done. This is not easy to do. Operating systems are large and complex pieces of software, and adding code that accomplishes an additional task without disruption is not simple. Doing this successfully would take significant time and could well be undone—or disrupted—by the manufacturer’s next software update. The task is even harder because the Apple operating system is closed source, so reverse engineering is needed, and the Android ecosystem involves multiple differing implementations of the operating system. So even if an attacker gains access to a software-update key, there’s likely to be a substantial delay before she would be able to use it.

A key for exceptional access—like Ozzie’s “unwrapping” key—is different. If the attacker has a phone she wants to open, the key would be of immediate use. And that makes an exceptional access key much more valuable than an update key. The unwrapping keys would be high value and highly desired by attackers, so the storage for such keys would be much more susceptible to attack than the storage of signing keys.

Companies have been able to secure software-update keys, which are tools that are rarely used. FBI director Christopher Wray recently stated that the FBI dealt with slightly fewer than 7,800 locked devices it could not access over the course of 2017; at that rate, exceptional-access keys would be used multiple times a day by federal law enforcement alone, to say nothing of state and local police. The process that secures update keys would not scale; instead, the exceptional-access keys would be at much greater risk. So the fact that the software-update keys have been secure does not imply that the companies have the ability to secure exceptional-access keys. They will undoubtedly be the target of attacks, including by sophisticated nations.

What’s more, Ozzie’s proposed technique for secure exceptional access is actually insecure. As cryptographer Eran Tromer has demonstrated, the protocol would allow an attacker in possession of a device to fool law enforcement into obtaining an unwrapping key from the manufacturer and providing it back to the attacker. Tromer also notes that jailbreaking a device would enable its user to circumvent the Clear protocol. Put another way, Clear weakens the security of locked devices while failing to prevent determined users from circumventing the very exceptional-access system the protocol is intended to provide in the first place.

Ozzie’s solution is also underspecified—as a matter of conscious choice. He believes that a fully detailed design would overconstrain the system, preventing engineers from building an exceptional-access system in the most appropriate way. But Ozzie’s underspecification is a serious omission when proposing a security system.

Clear’s design has some good aspects: The fact that devices would be unlocked individually, and that phones are bricked upon law-enforcement unlocking, makes it difficult to use the protocol for mass surveillance. These positive features are far outweighed, however, by the failure to adequately consider security.

Security software is subtle; even minor modifications can change a system from a secure configuration to an insecure one. This makes precise specifications crucial, but Clear lacks any such specification. Specificity is particularly important when a system has many moving parts, for it is at the interfaces between components that security errors often creep in. Consider, for example, the recent attack on the WPA2 protocol securing Wi-Fi communications. This protocol was well trusted; it had been in use for 14 years, and aspects of it had even been proved correct. But an interaction between two parts of the protocol—the handling of resets during authentication, which led to reuse of an encryption key—enabled attackers to decrypt communications, including login credentials.
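The damage from that kind of key reuse is easy to demonstrate in miniature. The toy sketch below does not use WPA2’s actual cipher suite (ChaCha20 stands in for it), but the underlying failure is the same: encrypting two messages under the same key and nonce lets an eavesdropper cancel the keystream and recover the XOR of the plaintexts.

```python
# Toy demonstration of why reusing a key and nonce is fatal for a stream cipher.
# (ChaCha20 stands in for WPA2's real cipher suite; the failure mode is analogous.)
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

key, nonce = os.urandom(32), os.urandom(16)

def encrypt(plaintext: bytes) -> bytes:
    # Re-creating the cipher with the same key and nonce models the reinstallation bug.
    return Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor().update(plaintext)

ct1 = encrypt(b"user: alice password: hunter2!")
ct2 = encrypt(b"GET /index.html HTTP/1.1      ")

# Same keystream both times, so XOR of ciphertexts equals XOR of plaintexts.
xor_of_plaintexts = bytes(a ^ b for a, b in zip(ct1, ct2))
# Knowing (or guessing) one plaintext now reveals the other.
recovered = bytes(x ^ p for x, p in zip(xor_of_plaintexts, b"GET /index.html HTTP/1.1      "))
print(recovered)   # -> b"user: alice password: hunter2!"
```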

Exceptional access is complicated. Ozzie’s Clear proposal attempts to solve a single aspect of the exceptional-access problem: transmission of an unlock key to law enforcement holding a phone that it has a warrant to open. A working exceptional-access system must do much more: It must successfully navigate interactions across an array of device manufacturers and law-enforcement agencies, securely authenticate requests, and handle frequent device updates, all while running in real time. A number of protocols would need to be designed to make this happen—much less happen securely—but Ozzie’s proposal touches on none of these aspects. In that sense, the proposal is not an exceptional-access solution, but rather an approach to a single, narrow aspect of the problem. (And, as Tromer’s attack shows, even that narrow solution is flawed.)

The way we develop trust that security solutions work correctly—whether it is solutions for authentication protocols, cryptography, proofs of correctness or trust management—is through thorough examination of detailed proposals presented at scientific meetings and workshops. Vetting is done by a wide variety of experts, their differing approaches serving to uncover different types of problems. Ozzie omitted the vetting step, moving from an initial idea and a patent application to a public discussion of his claim that exceptional access can be done securely. But when Bellovin, Blaze, Boneh, Rivest and I applied the criteria for evaluating encryption choices developed in the National Academies study to Clear, we found that the “approach … is insufficiently complete to fully assess its risks.” If you can’t determine how something works, you can’t determine the risks of using it. In that case, you should not base your future systems on the model.

The FBI has raised the specter of going dark since the beginning of the decade, and during that time, it has repeatedly pressed for legislative solutions—even when security experts have argued that such solutions will create insecurity. Ozzie says he skipped over the vetting process to spark widespread discussion of exceptional-access solutions, but the danger is that his shortcut will give legislators and the public the false impression that he has, against the common wisdom, produced an exceptional-access solution. The discussion above shows that he has done nothing of the sort. It is possible that somewhere, somehow, there is an exceptional-access technique that does not seriously injure security. But Clear is not that technique—and it should not be used as a basis for legislation.


Susan Landau is Bridge Professor in The Fletcher School and the Tufts School of Engineering, Department of Computer Science, Tufts University, and is founding director of the Tufts MS program in Cybersecurity and Public Policy. Landau has testified before Congress and briefed U.S. and European policymakers on encryption, surveillance, and cybersecurity issues.
