
Resisting Law Enforcement’s Siren Song: A Call for Cryptographers to Improve Trust and Security

Cindy Cohn
Friday, November 30, 2018, 11:19 AM

Published by The Lawfare Institute in Cooperation With Brookings

This is part of a series of essays from the Crypto 2018 Workshop on Encryption and Surveillance.

The world is waking up to something that digital security experts have known for a very long time: Digital security is hard. Really hard. And the larger and more complex the systems, the more difficult it is to plug all the security holes and make them secure and trustworthy. Yet security is also increasingly important in systems ranging from the smartphones in our hands to our power grids. So why isn’t everyone—especially the governments of the Five Eyes countries—promoting, supporting, and celebrating important security work? In part, it’s because law enforcement in each of these countries wants to take advantage of the same security holes that criminals do—an approach that puts us all at risk. Even worse, many of these governments are now pushing companies—through both law and nonlegal pressure—to ensure that any future technology the public relies on continues to have security holes they (and criminals) can use.

The drumbeat of the daily headlines on cybersecurity incidents shows us that the risks of weak digital security are already here. We’ve heard about phishing attacks by foreign governments aimed at undermining our democracy and ransomware attacks on hospitals. We’ve seen nation-state level attacks on corporations like Sony and on sensitive government employee databases like the one maintained by the federal Office of Personnel Management. Corporate data breaches like the ones suffered by Equifax and Target have affected tens of millions of users. Countless others have suffered from identity theft, malware, and more recently, spouseware—malware used by domestic abusers. Meanwhile, we’ve seen more research proving how easy it is to break into many U.S. voting systems. The attacks and undermining strategies are different in each of these, but the underlying problem is the same: Our digital systems are not secure enough, and our current security techniques are not up to the task.

Creating systems of trust and real security for users should be an all-hands-on-deck effort, from government to the private sector. We need to encrypt the web, secure data at rest and in transit, and ensure that homes, cars and anything else that can be connected to the internet are safe and trustworthy. The options are limited, since security architects often have to bolt security onto systems that were never designed to be secure. But that’s all the more reason to encourage people who understand how computer security works (and how it fails) to help. After all, there are only so many hours in the day, and the more attention we pay to these problems, the faster and better we can address them.

It’s not just individuals and private institutions who should be focusing on improving security for users, of course. Governments should be shouldering their responsibility for public safety by leading, incentivizing and, in places, even legally mandating real digital security for their increasingly vulnerable citizens.

But they are not. While the U.S. government has pushed hard to make sure that companies give it information about security problems—through the Department of Homeland Security’s Information Sharing and Analysis Centers and the Cybersecurity Information Sharing Act passed in 2015, for example—very little information and few tools have come back to protect the public as technology users. This is even as we’re pushed into a world that increasingly relies on the internet for every facet of our daily lives, and as the consequences of losing control of our data grow larger and more dire. Digital networks are increasingly coming into our homes and cars. There are pushes to move to online voting, to the horror of security experts. The vast majority of us carry our phones with us everywhere; with them comes access to a tremendous amount of intimate information about us, our loved ones and our business and personal associations, both stored on those devices and accessible through them.

The government should generate, incentivize and support efforts to build a more secure and trustworthy internet, along with the devices and services that rely on it. Instead, law enforcement agencies in the U.S. and elsewhere too often demonize companies and individuals that offer strong security and pressure them to offer worse tools, not better ones.

There’s a history here. In the 1990s, the U.S. government nonsensically stuck with the Data Encryption Standard (DES) long past the time when it was broken. In the 2000s, the NSA secretly placed security weaknesses in software we all rely on through a program called Bullrun. The government’s goal has been to preserve its ability to access any conversation or data of anyone around the world. But each time the government has tried to publicly mandate access, from the Clipper Chip in the 1990s to the more recent Burr-Feinstein “Compliance with Court Orders Act,” it has failed. That’s because citizens and companies want strong security and recognize the obvious fact: There’s no way to weaken encryption for the government’s purposes without weakening it in ways that let bad guys in too.

Recently, we’ve seen a change in tactics by the governments of the U.S., Australia and the United Kingdom. Rather than repeating the strategy the FBI used in attacking Apple for offering strong encryption on the iPhone, these governments have shifted to promoting efforts to weaken our security in ways other than weakening the encryption itself. One goal seems to be to convince the world’s leading cryptographers and security researchers to “fix” the problem of strong security preventing governmental access by creating or using non-encryption security weaknesses. The end result is that the public is still insecure, just through different mechanisms, so that governments can crow about how they support strong encryption while still not supporting strong security. Sadly, some researchers are taking the bait. Several were featured at a workshop on technical and policy aspects of encryption and surveillance at this year’s Crypto conference in Santa Barbara.

Why are these governments pushing in the direction of claiming to embrace strong encryption? The reason isn’t technical; it’s about a policy fight in Congress, in parliaments and in governmental agencies around the world. As hard as they’ve tried, law enforcement has not been able to overcome the overwhelming consensus of security experts—academics, industry experts, even former government officials like Michael Chertoff and Michael Hayden—about the importance of strong encryption to digital security. Despite several years of pushing technologists to “nerd harder,” not a single serious security expert thinks we can build a system at scale that gives the government a way to break the encryption on networks and devices without leaving all of us at risk from bad actors. Government reports and private reports alike agree on this.

So the governments are trying another tack. The push is to convince a few cryptographers and security experts—and they only need a few—to “fix” this problem by finding ways to access information secured by cryptography without actually breaking the cryptography. To date, these efforts all involve some quite clever ways to say “we support strong cryptography” while undermining the very reason people want it in the first place: security.

During the Crypto conference, the U.K. government described a basic strategy called a “ghost,” although the better term is “stalker.” The idea is to insert a secret participant into a conversation or create a secret “approved device” to connect to a targeted account or device and obtain the sought data. So you as a user think you’re just talking to a loved one, but the government is secretly listening to your conversation. Or you think you’re just syncing data from your phone to your laptop, but your data is being copied to an unseen government device. The cryptography remains as it was, but the trust in the security of your system—that users know who is listening to their conversations or getting access to their data—is broken. The U.K. government has openly acknowledged that this “stalker” strategy is its preferred approach to eavesdropping on encrypted systems like WhatsApp and iMessage.
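
To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python using the PyNaCl library. It is not any messenger’s actual protocol, and the names are invented for illustration; it simply shows why the “ghost” leaves the cryptography untouched: the client encrypts each message to every key on the participant list the service supplies, so a silently added key receives a perfectly readable copy.

```python
# Illustrative sketch only (PyNaCl), not any real messenger's protocol.
from nacl.public import PrivateKey, SealedBox

alice, bob = PrivateKey.generate(), PrivateKey.generate()
ghost = PrivateKey.generate()  # key the service quietly adds at the government's request

# The membership list the server hands Alice's client. Honest version:
participants = [bob.public_key]
# "Ghosted" version -- indistinguishable to Alice unless her client
# displays and verifies the membership list:
participants_ghosted = [bob.public_key, ghost.public_key]

def send(plaintext: bytes, member_keys):
    """Encrypt one copy of the message to each listed member's key."""
    return [SealedBox(k).encrypt(plaintext) for k in member_keys]

ciphertexts = send(b"meet at noon", participants_ghosted)
print(SealedBox(bob).decrypt(ciphertexts[0]))    # Bob reads his copy
print(SealedBox(ghost).decrypt(ciphertexts[1]))  # so does the ghost: only the trust layer changed
```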

In a similar vein, the Crypto conference also included a few high-level technical presentations on “tainted update” systems. These rely on technology companies providing their users with compromised security and other software updates. Most are variations on the old, discredited key-escrow ideas, some adding hardware components alongside the software. The major shift is that companies rather than governments hold the keys.
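
For illustration only, here is a minimal key-escrow sketch in Python using PyNaCl. It is not Ozzie’s proposal or any shipping design, and the names are hypothetical; it shows the core idea these variations share: the content key is wrapped once for the recipient and once for an escrow key the company holds, so whoever controls that escrow key, or steals it, can recover every message.

```python
# Illustrative key-escrow sketch (PyNaCl); not any specific proposal.
import os
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox

recipient = PrivateKey.generate()
company_escrow = PrivateKey.generate()  # the key the company must guard forever

def escrow_encrypt(plaintext: bytes):
    content_key = os.urandom(SecretBox.KEY_SIZE)
    ciphertext = SecretBox(content_key).encrypt(plaintext)
    # The same content key is wrapped twice: once for the recipient,
    # once for the escrow holder.
    return (ciphertext,
            SealedBox(recipient.public_key).encrypt(content_key),
            SealedBox(company_escrow.public_key).encrypt(content_key))

ct, for_recipient, for_escrow = escrow_encrypt(b"quarterly numbers")
# Anyone holding the escrow private key recovers the content key and the message:
key = SealedBox(company_escrow).decrypt(for_escrow)
print(SecretBox(key).decrypt(ct))
```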

None of these ideas—the new “stalker” idea or the old key escrow idea with some new window dressing—has been well-vetted, much less demonstrated on the scale that modern digital markets require. But aside from the technical concerns (and there are many), each of them turns on betraying the trust the public needs to have in the companies that provide us with our tools, including critical security updates. And we can be sure that if one of them is deployed, it will be in secret, with gag orders preventing the companies from informing their users, meaning that the bad guys may find it and begin secretly using it to steal from us without us ever knowing.

So why should security professionals resist the government’s siren song? Why should they spend their time and energy answering the broader public need for trustworthy end-user security rather than responding to the government’s desire for ensured access?

First, a practical reason. There are only so many hours in a day. Time spent undermining security, even if today’s undermining is a little less dangerous than yesterday’s, trades off with time spent making the internet safer. We urgently need to make the internet more secure for end users. While the security field is large and, thankfully, growing, the problems are multiplying and growing more complex. The number of academics and thought leaders in security who tackle these issues broadly is still relatively small. Efforts that make us less secure, or less secure than we would otherwise be, are headed in the wrong direction, even if they aren’t quite as bad as earlier ideas.

Second, regardless of how technically clever they are, these approaches cannot solve one of the key nontechnical problems with law enforcement’s wish for ensured access. Even without compromising the cryptography, there is no way to allow access for only the good guys (for instance, law enforcement with a Title III warrant) and not for the bad guys (hostile governments, commercial spies, thieves, harassers, bad cops and more). The NSA has had several incidents in just the past few years in which it lost control of its bag of tricks, so the old government idea called NOBUS—that “nobody but us” could use these attacks—isn’t grounded in reality. Putting the keys in the hands of technology companies instead of governments just moves the target for hostile actors. And it’s unrealistic to expect companies to both protect the keys and get it right every time in their responses to hundreds of thousands of law enforcement and national security requests per year from local, state, federal and foreign jurisdictions. History has shown that it’s only a matter of time before bad actors figure out how to co-opt the same mechanisms that good guys use—whether corporate or governmental—and become “stalkers” themselves.

Moreover, even assuming a “government only” strategy, if this access is created for the U.S., U.K. or Australian government, that capability will be demanded by China, Egypt, Turkey, Saudi Arabia and other governments with far less due process and other protections. The truth is that governments around the world have very different standards and practices for when they believe they should have access to private data. Yet a company that built this capability would find itself having to decide which of those standards are good ones and which are not, or else face the fact that it is facilitating human rights abuses and repression. Conceptually, before we can have a global technology infrastructure and global trade in technology that ensures access to communications and data, we need to come to a global consensus about the circumstances in which communications privacy should be violated. No amount of technology can resolve this core legal and policy issue.

Third, these “ghost,” “stalker” and “tainted update” approaches all undermine the trust that people need to have if we’re ever going to get out of our current dismal security hole. Under these approaches, users will rightly worry that every change to their systems might be undermining their security, secretly adding another listener to their conversations, secretly adding another device to their approved keychain or secretly siphoning copies of their data away. But security is a process, not an end state. Everyone using digital tools needs to be able to trust that the companies that create them are not working for one government or another (or many). Users especially need to be able to take and apply security updates and other patches without fear. But all of these approaches compromise that trust. They turn our technology providers into secret governmental agents. The Big Brothers of any number of countries around the world won’t just be watching us from afar: They will be in our very pockets.

Finally, this is fundamentally an attempt to shoehorn a policy debate into a technical question. Having failed to demonize cryptography, law enforcement wants to say that expert security professionals differ on the question of whether the government can preserve the ability to eavesdrop on every private, digital conversation without touching the cryptography. An initial proposal from Ray Ozzie along these lines, for instance, was greeted with an enthusiastic quote from law enforcement in Wired celebrating: “[T]he fact that someone with [Ozzie’s] experience and understanding is presenting it.” But the question of how much risk we should create for individuals—not to mention whether we can tolerate undermining the trust necessary for secure systems to function—in order to ensure ubiquitous eavesdropping capability is not fundamentally technical. It’s a matter of policy. That’s obvious from Ozzie’s shrugging response to questions about how companies should handle demands from repressive governments.

People with digital security skills should be working to encrypt the web, secure our data at rest and in transit, and ensure that our homes, cars and anything else that can be connected to the internet are safe and trustworthy. They shouldn’t use those skills to undermine our security, or to find clever ways of undermining it just a little less.

In freeing up encryption from government regulation in the late 1990s, the Ninth Circuit recognized that our privacy was slipping away: “Whether we are surveilled by our government, by criminals, or by our neighbors, it is fair to say that never has our ability to shield our affairs from prying eyes been at such a low ebb.” There’s a strong argument to be made that since then, our ability to protect our privacy has further eroded, taking our basic security along with it.

Security work is more important than ever. I urge our community to spend its time and talents making the digital world safer rather than demolishing what trust and security still remain in our technology, our devices, and our infrastructure.


Cindy Cohn is the executive director of the Electronic Frontier Foundation.
