Encryption Keys and Surveillance

Paul Rosenzweig
Monday, August 5, 2013, 2:00 PM
Published by The Lawfare Institute
in Cooperation With
Brookings

In this post I want to offer some extended thoughts on the question of encryption and its intersection with surveillance on the web. The jumping-off point for this discussion is a recent article by Declan McCullagh for CNet in which he describes increasing pressure being brought to bear by the government on service providers like Microsoft and Google to give the government their master encryption keys. [Also related is this article regarding efforts to install surveillance devices inside certain systems.] The companies are, apparently, resisting the requests as unauthorized. To understand the import of these requests (and also their limitations) I want, first, to describe the architecture of encryption and then delve into the law.

Encryption on the Web

Encryption is closely related to, but distinct from, the issue of wiretapping (or, more broadly, communications interception). Encryption – when and how communications and information can be encoded and decoded so that only the people you want to read the information have access to it – and wiretapping – that is, whether and under what rules someone can intercept messages in transit and divert or copy them for their own purposes – are two sides of the same coin. The linkage between the two seems apparent – wiretapping a message you cannot decrypt still leaves the content of the message concealed, and even unencrypted information is safe if the transmission channels are absolutely secure. Of course, 100% security in either domain is exceedingly difficult to achieve in practice. As a consequence, those engaged in surveillance in cyberspace (whether governments or individuals) want both capabilities – to intercept/divert information and to decode it so that they can read its contents. And their opponents prefer that they have neither.

Let’s focus for a moment on only the first of these two problems – the question of encryption.
Encryption, if done properly, can assure that your information is confidential and can’t be read by anyone else. It also, in contemporary usage, can provide you with a means of confirming that the information has not been tampered with in any way (since, using modern algorithms, any alteration of the data would result in a gibberish product). Encoding information can keep it secret and sealed. Properly used, it can also allow you to share information with a trusted partner while excluding others.

But this expansion of cryptographic capabilities to protect cyber networks comes with an uncertain cost to order and governance. Advances in cryptographic technology have made it increasingly difficult for individuals to “crack” a code. Codebreaking is as old as codemaking, naturally. But, as the run of technology has played out, encryption increasingly has an advantage over decryption, and recent advances have brought us to the point where decryption can, in some cases, be effectively impossible. [I read, recently, that a particular program used to protect a child pornographer’s database had completely defeated FBI efforts to crack it.] This has the positive benefit of allowing legitimate users to protect their lawful secrets – but it has the inevitable effect of distributing a technology that can protect malevolent uses of the Internet. If the United States government can encrypt its data, so can China, or the Russian mob, or a Mexican drug cartel.

And that, in turn, leads to the critical question posed by the McCullagh article. Since strong encryption is, for most purposes, uncrackable, governments have turned to an alternate method of securing access to encrypted communications – forcing those who do the encryption to provide the government with the encryption keys to decrypt the messages directly.

But even here an important distinction needs to be made between end-point encryption and intermediate or service-provider encryption.
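To preview why that distinction matters, here is a deliberately toy sketch of my own (not any provider's actual scheme – the hash-based XOR "cipher" below is insecure and for illustration only). The point is architectural: whoever holds the key can read the data, so a provider-held key protects against outsiders but not against compelled disclosure by the provider itself.

```python
# Toy illustration -- NOT real cryptography.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a key (insecure toy construction)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def crypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice restores the original."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

secret = b"my tax records"

# Intermediate (service-provider) encryption: the provider encrypts
# your data for storage and keeps the key itself.
provider_key = b"key-held-by-the-provider"
stored = crypt(provider_key, secret)
assert crypt(provider_key, stored) == secret   # provider can always decrypt

# End-point encryption: you encrypt locally first, with a key only you hold.
user_key = b"key-held-only-by-me"
uploaded = crypt(user_key, secret)             # all the provider ever sees
stored = crypt(provider_key, uploaded)         # provider re-encrypts for storage
# Compelled disclosure of the provider's key yields only your ciphertext:
assert crypt(provider_key, stored) == uploaded
assert uploaded != secret                      # still gibberish without user_key
```

Real systems use vetted ciphers (AES and the like) rather than this hash-based XOR, but the key-custody point carries over unchanged.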
It’s a common technological distinction and one, I think, that has critical legal ramifications.

Google, Microsoft, Dropbox, and other cloud service providers use forms of intermediate encryption. When, for example, you store data in Dropbox or leave your email in your Gmail folder on the web, that data is encrypted by the service provider. Their encryption techniques are, generically, quite strong – and that makes them relatively well protected against outside attack. But the reality is that for long-term storage the service provider itself retains the encryption key. That’s the functionality that, in the end, allows you to log in to your Microsoft SkyDrive from any one of several different machines. Your user name and password combination enables the decryption of the contents of your cloud storage. This is the type of encryption that almost all conventional users enable – it’s quick, effective, and seamlessly integrated into your applications. But if Microsoft does the encryption for you ... then Microsoft holds the encryption key.

By contrast, with end-point encryption the user holds the encryption key. If, for example, you use a strong encryption program like TrueCrypt locally on your own hard drive and then upload the encrypted file to Dropbox, the fact that Dropbox further encrypts the data is good but, with respect to the government’s demands, irrelevant. Even if Dropbox were compelled by a lawful order to give the government its decryption key (as the news story suggests it might be), all that it could turn over is your encrypted file – which would still be encrypted gibberish to the government.

The problem, such as it is, is that end-point encryption, also sometimes called gateway encryption, is not nearly as easy to do, not as widely available, and not as seamlessly integrated. For example, I have PGP (a privacy encryption program) that I can use for email exchanges.
It acts as end-point encryption, such that even if the NSA were to intercept the message it would be nearly impossible to read. But nobody really uses PGP in modern business. It isn’t seamlessly integrated into my email program, and so it falls by the wayside. In fact, in all the time I’ve had it, I’ve received only one encrypted PGP message. Likewise, because I’m cautious, I encrypt files before I upload them to Dropbox. But then to search them, I have to call them down from Dropbox, decrypt them locally, and then do my search. That’s a multi-step process. I do it ... but many do not. In the end, then, some surveillance is simply enabled by the disutility of existing encryption systems – and some are starting to recognize the business opportunities for applications that fill that gap.

So that’s the architectural issue. Sometimes the user – YOU – holds the encryption keys and sometimes your service provider – Google, etc. – does. Which one does makes a world of legal difference.

Intermediate Encryption

When the encryption key is held by your service provider, it is likely the government can readily get access to the passwords (at least insofar as the government intends to use the password to decrypt messages that have otherwise already been seized).

To begin with, as Orin Kerr has noted, third-party data holders generally cannot assert a constitutional protection on behalf of their customers. No Fifth Amendment claim arises when the government tries to compel passwords from third parties. As the Court held in Fisher v. United States, 425 U.S. 391, 397-98 (1976), the Fifth Amendment does not prevent compelling information from a suspect’s attorney, because compelling the attorney to divulge information does not compel the suspect to do anything. The same, of course, would be true of my password in the hands of my service provider.
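The multi-step process described above – encrypt locally, upload only ciphertext, then download and decrypt before you can search – can be sketched roughly as follows (again a toy XOR stand-in of my own devising for a real tool like TrueCrypt or PGP, insecure and for illustration only):

```python
# Toy sketch of the encrypt-before-upload workflow; NOT real cryptography.
import hashlib

def crypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher (self-inverse): crypt(k, crypt(k, m)) == m."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

key = b"user-held-key"
cloud = {}  # stands in for Dropbox

# Step 1: encrypt locally; only ciphertext ever leaves the machine.
cloud["notes.txt"] = crypt(key, b"meeting with counsel on Tuesday")

# The provider (or the government, with the provider's cooperation)
# holds only gibberish -- a naive search of the stored bytes fails:
assert b"counsel" not in cloud["notes.txt"]

# Steps 2 and 3: to search, the user must download and decrypt locally.
plaintext = crypt(key, cloud["notes.txt"])
assert b"counsel" in plaintext
```

With provider-side (intermediate) encryption, by contrast, the provider's servers can search the plaintext on your behalf – which is precisely the convenience that keeps most users there.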
Of course, we might argue that a password is “content” of a communication, and therefore subject to the Fourth Amendment’s warrant requirement. But that seems highly unlikely. There are any number of cases (like this one from 2006 in the District of Columbia) that suggest that the only “content” in an email communication, for example, is the subject line and the body of the message. The password is, of course, a means of getting at the content – but it is only a means, and I suspect that the way the law will play out is that the subsequent interception and decryption of the content is what requires a warrant – not the collection of the password itself.

Finally, we could imagine the service provider opposing the subpoena on its own account. As one of my more libertarian-minded friends has suggested, the provider could oppose the subpoena on the ground that it gives the government the capability of impersonating the user and thereby listening in on future confidential communications with friends. Or disclosure of the user name and password may trigger a notification responsibility under state data breach laws. After all, many companies pride themselves on never allowing the government any kind of direct access to their servers, and resisting these demands helps them preserve that claim. And, of course, by challenging the subpoena and being subject to an order to compel disclosure, the service providers will, at a minimum, demonstrate that they have made the effort.

But in the long run that won’t work, I think. If we indulge the assumption that the investigation is tied to some grand jury inquiry (likely in most, though not all, scenarios), then resistance to a valid subpoena is quite hard. It is pretty close to black-letter law that the grand jury “can investigate merely on suspicion that the law is being violated, or even just because it wants assurance that it is not.” United States v. Morton Salt Co., 338 U.S. 632, 642-43 (1950). This is pretty much a plenary investigative function.
As a necessary consequence of its investigatory function, “[a] grand jury investigation ‘is not fully carried out until every available clue has been run down and all witnesses examined in every proper way to find if a crime has been committed.’” Branzburg v. Hayes, 408 U.S. 665, 701 (1972) (internal quotation omitted).

In United States v. R. Enterprises, 498 U.S. 292 (1991), the Supreme Court went a long way toward ruling out almost any procedural objection by a business record holder to a valid investigative demand. Beginning from the premise that investigative demands are lawful, the Court concluded that investigative demands from a grand jury must be complied with unless “there is no reasonable possibility that the category of materials the Government seeks will produce information relevant to the general subject of the grand jury’s investigation.” It added that the request must not be vague or overly burdensome, but frankly, I can see little argument here that would successfully protect passwords. The requests will be precise and fairly easy to comply with on a technical level – and the “reasonable possibility of relevance” standard is almost certain to be satisfied. The Court added some other caveats – the investigation can’t be a “fishing expedition” or conducted with “malice” – but here, too, the prospects of a successful opposition are limited.

To be sure, the analogy is not perfect – some of the investigative demands in question may be FISA court orders rather than grand jury subpoenas, for example. But that distinction cuts, if anything, against efforts to quash the demand. FISA orders will already have had some limited judicial scrutiny – something lacking prior to the issuance of a grand jury subpoena – and that will render efforts to challenge them even more difficult.

Finally, and to put it more directly, my friend’s parade of horribles (that the government could impersonate people, for example) is unlikely to persuade.
I would be interested if any reader were aware of an investigative demand that had been quashed on the speculative ground that the government would subsequently misuse the information provided in an unlawful manner. I’m aware of none, and I suspect that the answer of any court would be “we’ll cross that bridge when we come to it.”

And so, my first conclusion is that service providers like Google and Microsoft may well resist the government’s pressure – and they would be wise to do so for business reasons – but in the end they will not succeed.

All of which suggests one possible business development: the providers could issue disposable one-time keys for each individual transmission, which they would not retain. My understanding from my technical friends is that this is currently theoretically possible, but difficult to implement on a large scale. If, however, they want to retain customer confidence, this may become a higher priority for some service providers.

The End-Point User

By contrast, the end-point user who holds his own encryption key has a much stronger argument to make. The courts have yet to definitively determine whether an effort to compel that individual to disclose the decryption key violates his Fifth Amendment privilege, but the trend is in favor of the individual’s protection.

In general, the answer turns on whether disclosing the decryption key is thought of more like the production of a physical object (such as the physical key to a lock box), which may be compelled, or like the production of a person’s mental conceptions (such as the memorized combination to a safe), which may not be. The contrasting formulations were posited as useful analogies in Doe v. United States, 487 U.S. 201 (1988). In Doe, the signing of a blank bank consent form was considered more like the production of a physical object. By contrast, in United States v. Hubbell, 530 U.S.
27 (2000), the documents produced by the defendant in response to a subpoena were organized and selected through his own mental analysis and thus protected from disclosure.

Few court cases have addressed the encryption question directly. Two early cases, United States v. Rogozin, 2010 WL 4628520 (W.D.N.Y. Nov. 16, 2010), and United States v. Kirschner, 2010 WL 1257355 (E.D. Mich. March 30, 2010), thought that the password could not be compelled, while another, In re Boucher, 2009 WL 424718, *1 (D. Vt. 2009), was decided on the technicality that Boucher had already given the government access to his computer once, so he could not object to doing so a second time and disclosing his encryption key. More recently, in United States v. Fricosu, Cr. No. 10-509 (D. Colo. 2012), the court concluded that a password was more like the key to a lock and that a defendant could be compelled to disclose it on pain of contempt for refusing to do so. But the most recent, and perhaps most definitive, case, In re Grand Jury Subpoena Duces Tecum Dated March 25, 2011, No. 11-12268 (11th Cir. 2012) – the first appellate decision on the issue – determined that compelled decryption was unconstitutional. Suffice it to say, the final resolution of this question lies in the future, but, as I said, the trend is in the individual’s favor.

Bottom Line

According to Lawrence Pingree, an analyst with Gartner, the entire market for gateway encryption is currently only $50 to $70 million. Part of that is because the technology isn’t that advanced. But another reason is that gateway encryption conflicts with the business model of many cloud service providers – encrypted data can’t be mined, for example, to assess what ads to push.

Nevertheless, to me the bottom line is clear – the business case for easy gateway encryption is strong for privacy advocates, and the law will tend to support it as a means of maintaining privacy.
So long, however, as gateway encryption remains technologically inelegant in its implementation, people will default to service-provider encryption – which is effective against malicious actors but of limited use in opposing government compulsion.

One final note: As always, I welcome technical correction. There will always be much I do not know about cyber technology, and I’m anxious to learn. If I’ve erred, please tell me.

Paul Rosenzweig is the founder of Red Branch Consulting PLLC, a homeland security consulting company and a Senior Advisor to The Chertoff Group. Mr. Rosenzweig formerly served as Deputy Assistant Secretary for Policy in the Department of Homeland Security. He is a Professorial Lecturer in Law at George Washington University, a Senior Fellow in the Tech, Law & Security program at American University, and a Board Member of the Journal of National Security Law and Policy.
