Five Hard Encryption Questions
Over the past few weeks, I have been up to my neck in encryption.
Usually, when a public policy issue consumes me like this, it’s because I have taken a strong position on it of one sort or another—one with which people disagree—and the result is a debate. In this case, I haven’t taken a strong position. Rather, I have struggled with a variety of ideas, trying to think my own way through to the right answer. Along the way, I have identified some questions in this debate that are, to my mind anyway, really hard. In this post, I'm going to lay out five of them. Three are challenges for government and law enforcement; two others are challenges for industry and civil libertarians. In all five cases, I am formulating these as questions because I genuinely don't know the answers.
Question #1: What About Telegram?
If you talk to industry folks about why they think FBI Director James Comey is out to lunch on this subject, the name “Telegram” comes up pretty fast. Telegram’s “secret chats” involve an end-to-end encrypted messaging service based in Germany and thus not subject to US jurisdiction. The significance of Telegram to the discussion is simple: Even if you imagine that all US internet companies—either voluntarily or not—retained the means to decrypt user communications, bad-guy traffic can simply migrate to Telegram and other services like it. Then, not only will the contents of communications be encrypted, but the metadata will be beyond law enforcement’s reach too.
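For the technically inclined, here is a minimal sketch of the end-to-end model at issue, written against the PyNaCl library. The parties and the relay's-eye view are my own illustration, not Telegram's actual protocol; the point is simply that a service built this way never holds a key that can open the content, while the metadata remains reachable only so long as the service itself is.

```python
# A minimal sketch of end-to-end encryption, using the PyNaCl library.
# The parties and the relay's-eye view are illustrative; this is not
# Telegram's actual protocol.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()  # generated and kept on Alice's device
bob_key = PrivateKey.generate()    # generated and kept on Bob's device

# Alice encrypts to Bob's public key; the service never sees a private key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# Everything the service in the middle can observe or hand over:
relay_view = {
    "sender": "alice",      # metadata: reachable if the service is
    "recipient": "bob",     # metadata: reachable if the service is
    "payload": ciphertext,  # content: opaque without an endpoint's key
}

# Only an endpoint holding one of the private keys can recover the plaintext.
assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"meet at noon"
```

Move that relay beyond US jurisdiction—as with Telegram—and even the metadata column drops out of law enforcement's view.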
So, the industry folks wonder, isn’t the FBI better off dealing with grownups subject to US jurisdiction than with small, ideological startups that are outside it?
It’s a fair question. I can think of a number of possible answers to it, but none is entirely satisfying. One is that defaults matter. If the major internet companies in the United States—which account for a large share of traffic worldwide—retain the capacity to decrypt, some bad-guy traffic will migrate to safer environments. But not all of it will, just as not all bad guys stayed off the phone lines knowing that NSA collected a huge percentage of overseas traffic. Some will be dumb and use the easiest default options available to them, and that group may include important actors. Strong cryptography has been available for a long time. The more of a pain in the neck it is to use—and the biggest pain in the neck is getting others to use it with you—the more people will fail to do so.
The trouble with this argument, in my view, is that the actors we need to fear most seem most likely to be the ones who will actually migrate to more protective environments. My guess is that ISIS will be the first to guide its people to the most secure platforms. The dumb actors, I suspect, will tend to be the child pornographers and others who lack guidance from organizationally sophisticated actors. That could be wrong.
But in any event, the question remains: Assuming Comey gets the system he wants, what is its real security value given the prevalence and easy availability of services like Telegram? And might the proximate result be better visibility into the activities of the stupid and worse visibility (because no metadata) into the activities of the most sophisticated?
Question #2: Is This Really an Effort to Preserve the Status Quo?
As I have said before, I am sympathetic to Comey’s policy objectives here (though I’m not sure they are technically feasible consistent with other cybersecurity objectives). But there’s a key aspect of the way that he and other government officials talk about this issue that seems off to me. That is the way they talk about the Fourth Amendment in relation to the creation of new infrastructure. To hear Comey and Deputy Attorney General Sally Yates talk about this issue, you might think the Fourth Amendment protects the government’s surveillance authorities when it has an appropriate warrant. As the two put it in their recent joint testimony before the Senate Judiciary Committee, "One of the bedrock principles upon which we rely to guide us is the principle of judicial authorization: that if an independent judge finds reason to believe that certain private communications contain evidence of a crime, then the government can conduct a limited search for that evidence."
Before the Judiciary Committee, Yates put it this way: “we're not seeking any new authority that we don't already have. We already have the authority that we need under the wiretap statute and under FISA. What we don't have now is the capability to be able to execute that authority.”
Comey added later on: “I think ordinary Americans when they hear this think so long as it's pursuant to the Fourth Amendment it's OK to live in a world where a judge can make a showing of probable cause and issue a warrant to get access to a safe or to a phone.”
I don’t disagree with these statements, but there’s a logical hop here that Comey and Yates would do well to avoid: the Fourth Amendment is not, in fact, a guarantee of the authority to surveil with a warrant. It is a prohibition (generally speaking) on surveillance without one.
That is, I’m not sure it’s quite accurate for Comey and Yates to talk about "going dark" in terms of preserving the status quo. What they’re asking for is more accurately described as altering the legal status quo in order to preserve the status quo in their level of capability. That’s a rather different thing. And there’s a presumption in that request that does not sound in the Fourth Amendment.
The Fourth Amendment, in fact, makes no assumptions about and imposes no restrictions on what steps individuals may take to frustrate surveillance—probably because it wasn’t always possible to live one’s life in a fashion that is quite so resistant to surveillance as it is now. A safe can be opened. A room, even one with a big lock on the door, can be searched. The Fourth Amendment does not entitle law enforcement to regulate the manufacture of locks.
This idea, however, has crept into our law—in the form of CALEA, the 1994 law that required telephone companies to maintain wiretapping capacity as they migrated to digital systems. Then too, law enforcement talked about it as an effort to preserve the status quo.
So here’s my question: If we could build a safe deposit box so strong that it could not be opened except by the key holder—not by the bank, and not by law enforcement with a warrant—should that be legal to use? And if banks offered SCIF space for meetings between customers, including potentially ISIS folks—spaces that could not be subject to electronic surveillance (content monitoring impossible) but which you could watch people go in and out of (metadata collection still possible)—should that be a legal service? In other words, how far do we want to extend the CALEA principle that companies have an affirmative burden both to facilitate surveillance and to design systems capable of supporting it?
Question #3: What About Authoritarian Governments?
Civil libertarian critics of the administration are a bit too blithe in throwing around what we might call the China Card. The argument goes something like this: If the US can compel the creation of an extraordinary access regime, then so can China and Russia, and you’ve effectively enabled authoritarian governments everywhere.
I say the point is too blithe for a few reasons. One is that these governments can do this anyway. Neither China nor Russia has any particular inhibition about regulating the internet. So the real consequence of US behavior here lies in norm creation and legitimization.
But there is an aspect of this question that we should, I think, take very seriously. That is, if the United States takes the CALEA-like position that companies should build in decryption capacity to be utilized in accord with our laws, it follows that we think it’s legitimate for other countries—including Russia and China—to require the same technical capability to be utilized in accord with their laws. The US might still quarrel with the substance and permissiveness of their surveillance laws, but we couldn’t object per se to the technical mandate that the companies be capable of delivering decrypted signal.
As a practical matter, would this function as a virtual legal requirement that major American internet companies either cooperate with these regimes or pull out of those countries? I’m not sure I mind such a mandate, if that's what it is—facilitating Apple’s access to the Chinese market is not my highest foreign policy priority. But it seems to me important—if that’s what a policy initiative means in practice—to be honest about it.
Let’s turn now to some tough questions for industry and civil libertarians.
Question #4: Is It Really Technically Impossible?
It is an article of faith in the tech community that what Comey is asking for is not technically possible, that retaining the capacity to decrypt inherently builds vulnerabilities into systems. In talking to a bunch of people about this issue over the past few weeks, I have become suspicious that there’s a bit of intellectual jiu jitsu going on here with respect to the definitions of "secure" and "vulnerable."
Right now, after all, the major internet companies all provide—in one form or another—for the ability to decrypt signal. Google, for example, retains the ability to decrypt Gmail and Gchat communications for its own business reasons; that’s how it serves you ads based on the contents of your material. To my knowledge, Google does not take the position that this service is insecure, nor do I know of any particular security issues that have arisen as a result of it. Similarly, Facebook’s Messenger can be decrypted by Facebook. And although Facebook has announced that it’s moving towards end-to-end encryption for this service, it hasn’t—to my knowledge, anyway—acknowledged that its service as currently constituted is insecure. Nor do I know of security problems that have arisen because of its current lack of end-to-end encryption.
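To make that model concrete, here is a hedged sketch of provider-held-key encryption, using the Fernet recipe from the Python cryptography package. It is illustrative only—not how Gmail or Messenger is actually built—but it captures the architectural fact at issue: the provider, not the user, holds the key.

```python
# A sketch of provider-held-key encryption, using the "cryptography" package.
# Illustrative only; not how Gmail, Messenger, or iCloud Backup is built.
from cryptography.fernet import Fernet

provider_key = Fernet.generate_key()  # generated and retained on the provider's servers
vault = Fernet(provider_key)

# User data is encrypted at rest and in transit...
stored = vault.encrypt(b"draft email to my accountant")

# ...but because the provider retains the key, it can decrypt for its own
# business purposes (ad targeting, spam filtering) or in response to a warrant.
assert vault.decrypt(stored) == b"draft email to my accountant"
```

Nothing in this design is end-to-end; whether the data counts as "secure" turns entirely on who can get at the provider's key—which is precisely the definitional fight described above.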
Even Apple, despite its aggressive move towards end-to-end encryption, still allows iCloud backups of messages in a form that can be decrypted. To my knowledge, Apple does not take the view that its services are insecure to the extent users avail themselves of these backup services. Nor do I know of significant cybersecurity problems that have arisen because of the availability of these services.
Unless, that is, you count as a security vulnerability the capacity to cooperate with lawful surveillance—in which case, aren’t you just defining as a cybersecurity problem what is really a philosophical opposition to lawful government surveillance?
So here’s my question to industry and to the technologists who insist what Comey is asking for is impossible: What is the evidence that Gchat, Apple’s iCloud services, and Facebook Messenger are actually insecure against attackers who are not exercising lawful process?
Question #5: Is This Really About Pressure from Consumers?
The companies have all framed their moves toward bulking up encryption in terms of consumer demand and pressure, particularly abroad. Color me at least a little skeptical about this claim. Yes, lots of people are upset by the perception that these companies are in bed with NSA. And the Snowden revelations have certainly created political problems for US companies operating overseas. But a brief look at the user growth figures for these companies in the post-Snowden era suggests some reason to doubt that there’s a particular crisis of confidence in American internet companies.
"Facebook Posts Solid Gains, a Feat Eluding Rivals,” reports the New York Times:
Facebook’s namesake social network has kept drawing in new users and has persuaded them to come back frequently. The company said that 1.49 billion people logged on at least once a month during the quarter, up from 1.44 billion in the first quarter. About 65 percent of Facebook users checked in daily — about the same level as it has been over the past year.
The company’s daily active user figures have also continued to climb.
Ditto Google. TechCrunch reports:
Google’s senior vice president of products Sundar Pichai today announced that Gmail now has a total of 900 million users at the company’s annual I/O developers conference. That’s up from 425 million in 2012, the last time the company updated its official Gmail users stats. Pichai also noted that 75 percent of these Gmail users now access their accounts on mobile devices.
When Google last updated its numbers, Gmail became the most popular email service in the U.S. after passing Hotmail (now Outlook.com), which had long held on to the number one spot. Sadly, most of Google’s competitors haven’t updated their user numbers in quite a while. Chances are, though, that Gmail is still at the top of the pack.
It’s hard to see in these numbers the intense consumer pressure these companies are facing because of their maintenance of the capacity to decrypt signal.
This brings me to my question: Is the pressure the companies are facing really coming from consumers or is it coming from each other?
In the immediate wake of the Snowden revelations, the companies one-upped one another in promising ever-greater security. That trend has continued. Apple CEO Tim Cook recently gave a speech in which he both defended Apple’s use of strong encryption and did so in the context of distinguishing Apple (which is fundamentally a device maker) from companies that traffic in your data. As TechCrunch writer Matthew Panzarino summarized:
Cook lost no time in directing comments at companies (obviously, though not explicitly) like Facebook and Google, which rely on advertising to users based on the data they collect from them for a portion, if not a majority, of their income.
“I’m speaking to you from Silicon Valley, where some of the most prominent and successful companies have built their businesses by lulling their customers into complacency about their personal information,” said Cook. “They’re gobbling up everything they can learn about you and trying to monetize it. We think that’s wrong. And it’s not the kind of company that Apple wants to be.”
Cook went on to state, as he has before when talking about products like Apple Pay, that Apple ‘doesn’t want your data.’
“We don’t think you should ever have to trade it for a service you think is free but actually comes at a very high cost. This is especially true now that we’re storing data about our health, our finances and our homes on our devices,” Cook went on, getting even more explicit when talking about user privacy.
“We believe the customer should be in control of their own information. You might like these so-called free services, but we don’t think they’re worth having your email, your search history and now even your family photos data mined and sold off for god knows what advertising purpose. And we think some day, customers will see this for what it is.”
. . .
Cook then switched gears to talk about encryption — directly addressing the efforts by policy makers to force Apple to offer a ‘master key’ that would allow government agencies access to consumer devices. “There’s another attack on our civil liberties that we see heating up every day — it’s the battle over encryption. Some in Washington are hoping to undermine the ability of ordinary citizens to encrypt their data,” said Cook. “We think this is incredibly dangerous. We’ve been offering encryption tools in our products for years, and we’re going to stay on that path. We think it’s a critical feature for our customers who want to keep their data secure. For years we’ve offered encryption services like iMessage and FaceTime because we believe the contents of your text messages and your video chats is none of our business.”
. . .
It’s a masterful stroke of speechifying. As I’ve mentioned before, by taking this stance (which I do not believe to be disingenuous, their profit centers support it), Apple has put all other cloud companies in the unfortunate position of digging themselves out of a moral communications hole to prove their altruism when it comes to user data.
I’m not saying that Cook is correct in brutalizing the motives of companies like Google or Facebook — but it does craft a strong portrait — because Apple is safer and ‘not interested’ in your data casts a cloud (ahem) of doubt over pretty much every other company in its league.
Apple’s privacy policies sound similar themes, again linking the device maker's use of strong encryption to the fact that its business model is not based on the exploitation of user data:
Your iMessages and FaceTime calls are your business, not ours. Your communications are protected by end-to-end encryption across all your devices when you use iMessage and FaceTime, and with iOS 8 and Watch OS your iMessages are also encrypted on your device in such a way that they can’t be accessed without your passcode. Apple has no way to decrypt iMessage and FaceTime data when it’s in transit between devices. So unlike other companies’ messaging services, Apple doesn’t scan your communications, and we wouldn’t be able to comply with a wiretap order even if we wanted to. While we do back up iMessage and SMS messages for your convenience using iCloud Backup, you can turn it off whenever you want. And we don’t store FaceTime calls on any servers.
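The on-device piece of that claim—messages encrypted “in such a way that they can’t be accessed without your passcode”—reflects a standard design: derive the encryption key from the passcode itself, so that no provider-held copy of the key exists. Here is a rough sketch of that idea, using PBKDF2 from the Python cryptography package; the parameters are my own illustration, not Apple’s actual implementation.

```python
# A rough sketch of passcode-derived device encryption, using PBKDF2 from the
# Python "cryptography" package. Parameters are illustrative, not Apple's.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

passcode = b"483911"   # known only to the user; never sent to the provider
salt = os.urandom(16)  # random per-device value, stored alongside the data

kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                 salt=salt, iterations=600_000)
device_key = kdf.derive(passcode)  # the key exists only where the passcode does

# Data encrypted under device_key cannot be produced by the provider,
# because there is no server-side copy of the key to subpoena.
```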
In other words, to what extent is the companies' posture here really about protecting material from Jim Comey bearing a warrant? Conversely, to what extent is this about companies bashing each other for holding consumer data and exploiting it commercially, and other companies defensively trying to shield themselves from this criticism? Put another way, what is the real evidence that consumers are demanding end-to-end strong encryption in any significant numbers?
I will deal with a final issue—the fascinating question of FBI device hacking—in a separate post.