Reflections on “It’s not a technical issue. It’s a business model question” (Comey to Senate Judiciary Committee, 12/9/15)

Herb Lin
Monday, January 4, 2016, 7:54 AM

On December 9, 2015, FBI Director James Comey testified to the Senate Judiciary Committee at an oversight hearing, and briefly touched on the subject of encryption (beginning at 26:10 on the C-SPAN video).

He led off with a statement that the tech companies and law enforcement “have to figure out whether we can maximize both. . . values -- safety and security on the internet and public safety.” If we take the Director’s statement literally, we know it is not really possible to simultaneously maximize two variables that trade off against one another. If A and B are the variables of interest and they are coupled, then when A is at its maximum possible value, B must sit at some value below its own maximum, and vice versa.
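
To make the trade-off concrete, here is a minimal formalization of the argument (an illustrative sketch of my own; nothing in the testimony specifies such a model). Suppose the two values, A (security on the internet) and B (public safety via lawful access), are coupled by a single binding constraint:

\[
A,\; B \;\ge\; 0, \qquad A + B \;\le\; c .
\]

Each variable alone can reach its maximum of c, but setting A = c forces B = 0, and any increase in B requires an equal decrease in A. Real trade-offs are of course not linear, but any binding coupling between the two values yields the same conclusion: both cannot sit at their maxima at once.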

To be fair, I don’t believe for a minute that the Director really believes we can literally maximize both Internet security and public safety simultaneously. Indeed, the government has always framed this point in terms of balancing competing interests.

But to my knowledge, the government has never said explicitly what balancing actually implies—the degree of security available with an exceptional access requirement for encryption will be less than without such a requirement. It doesn’t matter whether it is the government or the provider of the encryption that has exceptional access—the degree of security will be less. As long as some third party can gain exceptional access, that third party could be compromised, corrupt, or dishonest, or the mechanisms for gaining exceptional access could themselves be compromised. Many non-government commentators have made this point, but the government has not.
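
The same point can be put in simple quantitative terms (again, an illustrative model of my own, not one that the government or the commentators have offered). Let p be the probability that a system without exceptional access is breached anyway, and q > 0 the probability that the exceptional-access mechanism or its holder is compromised. If the two failure modes are independent, the overall probability of compromise is

\[
P \;=\; 1 - (1-p)(1-q) \;=\; p + q\,(1-p) \;>\; p \qquad \text{whenever } q > 0 \text{ and } p < 1 .
\]

Independence is a simplifying assumption, but the qualitative conclusion is robust: an additional access path adds attack surface, so the resulting security can at best equal, and will generically fall below, that of the same system without it.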

How much less is a fair question for debate. And the government can argue on policy grounds that such a decrease in security is worth the benefit to public safety. But again, the government has not made this argument.

Director Comey also said that “there are lots of folks who said over the last year or so we're going to break the internet, we’ll have unacceptable insecurity if we try to get to a place [where] court orders are complied with.” Indeed, much of the technical community believes that the U.S. government is asking them to design a system or an architecture that is less secure than it could otherwise be.

But Director Comey denies that the government is being directive. In his testimony, he says that “the government should not be telling people how to operate [and by implication, how to build] their systems.” However, he goes on to say that the government “hopes to get to a place where if a judge issues an order, the company figures out how to supply that information to the judge and figures out on its own what would be the best way to do that.” This statement amounts to a performance requirement for those systems: systems should provide exceptional access. In short, the government may not care how any given company meets the performance requirement, but it cares a lot that the company does meet it.

He also notes in his testimony that “There are plenty of companies today that provide secure services to their customers and still comply with court orders. There are plenty of folks who make good phones and are able to unlock them in response to a court order. In fact, the makers of phones that can't be unlocked [today], a year ago they could be unlocked.” Based on these two statements, he then makes the widely quoted assertion that “It's actually not a technical issue. It is a business model question.”

What is the “it” to which Director Comey refers? “It” refers to a system design that meets the exceptional access requirement, and he is asserting—correctly—that technology does allow the building of such systems. But he also called these systems “secure,” as noted in the previous paragraph, which is an entirely different claim. A report issued by the Manhattan District Attorney’s office in November 2015 makes a similar point:

Previous Apple and Google operating systems allowed law enforcement to access data on devices pursuant to search warrants. There is no evidence of which we are aware that any security breaches have occurred relating to those operating systems. Apple and Google have never explained why the prior systems lacked security or were vulnerable to hackers, and thus, needed to be changed. Those systems appeared to very well balance privacy and security while still being accessible to law enforcement through a search warrant.

If the argument I made above regarding the impossibility of simultaneous maximization is right, these previous systems (and the other systems of today that do meet the exceptional access requirement) were not as secure as they could possibly be. And so some vendors, such as Apple and Google, are choosing to upgrade the security afforded by their systems.

Why are they doing so? Director Comey makes clear his views on this point—it’s a business decision. He and the Manhattan District Attorney argue, in effect, that systems that provide exceptional access have demonstrated that they are adequately secure (not maximally secure), and that vendors are making a business decision to upgrade the security they provide (“because their customers want it”) rather than responding to a demonstrated security threat.

I am not privy to the internal deliberations of Apple or Google or any other company that is (re-)designing its systems without exceptional access capabilities. But I can offer some informed speculation.

I think Director Comey is correct in suggesting a business dimension to their decision, but he is wrong to imply that the decision has *only* a business dimension. For example, many individuals may think that the systems they use are secure when in fact they are not. Customers in this category are obviously not demanding greater security, but their silence reflects a misperception rather than the absence of a real need for it.

Tech companies that are upgrading security now are, in part, responding to a new threat environment for themselves and their customers, one that government officials do not—and indeed cannot—acknowledge. In particular, many individuals in the tech companies and many of their customers perceive the threat environment very differently pre-Snowden and post-Snowden. For these individuals, the result of the Snowden revelations was to add the U.S. government itself to the list of malevolent threat actors.

More than once, I have heard senior security engineers in major IT companies talk about the need to defend themselves from agencies of the U.S. government in the same way they talk about the need for defense against China. Since the U.S. government’s technical capabilities for surveillance have never been in doubt, they believe that they must upgrade the security of their own systems. Similarly, some privacy-sensitive customers now perceive increased levels of threat originating from the U.S. government, a perception that drives their own desires for increased security, and some tech companies are implementing systems that provide these customers with that option.

The Snowden revelations were overwhelmingly about activities of the National Security Agency, which is not a law enforcement agency. But law enforcement agencies must now cope with the fallout from these revelations—and this fallout is an important driver of the argument playing out today.

I don’t believe that the U.S. government poses a threat to the security of law-abiding citizens. But the government cannot behave as though the security threat today is the same as it was pre-Snowden. The assertion that nothing has changed may be true in some narrowly technical sense, but it does not account for the enormous negative impact of the Snowden revelations on the trust that Silicon Valley and many of its technically savvy customers place in the U.S. government.

In my view, Director Comey deserves credit for not villainizing the tech companies. In his testimony, he said that “Lots of people have designed their systems and their devices so that judges’ orders cannot be complied with for reasons that I understand. I’m not questioning their motivations.” A different FBI director might not have shown such restraint, and as a result of his efforts in this regard (both in this testimony and in remarks in other forums), I believe that the tone of the debate is better than it was in the crypto wars of the 1990s.

But I also believe that the government could further improve the tone of the debate by a forthright acknowledgement of the two fundamental points I’ve made above—that allowing for exceptional access will in fact reduce the security available to the public that could otherwise be afforded by technology, and that the government activities revealed by Snowden have in fact helped to drive many tech companies in the direction of greater security than they previously provided.

All of my security colleagues complain that the approaches I’ve imagined to help find a middle ground (some of which I’ve written about in previous Lawfare blog postings) afford less security than what is maximally possible—and they are right. As serious security technologists, they want the maximum possible security for the systems they design.

The government’s unstated position is that adequate security need not be maximal security, and that the security deficit resulting from the difference between the two is worth the benefit for public safety. But the government cannot argue this position without first acknowledging that a security deficit does exist.


Dr. Herb Lin is senior research scholar for cyber policy and security at the Center for International Security and Cooperation and Hank J. Holland Fellow in Cyber Policy and Security at the Hoover Institution, both at Stanford University. His research interests relate broadly to policy-related dimensions of cybersecurity and cyberspace, and he is particularly interested in and knowledgeable about the use of offensive operations in cyberspace, especially as instruments of national policy. In addition to his positions at Stanford University, he is Chief Scientist, Emeritus for the Computer Science and Telecommunications Board, National Research Council (NRC) of the National Academies, where he served from 1990 through 2014 as study director of major projects on public policy and information technology, and Adjunct Senior Research Scholar and Senior Fellow in Cybersecurity (not in residence) at the Saltzman Institute for War and Peace Studies in the School for International and Public Affairs at Columbia University. Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990), where his portfolio included defense policy and arms control issues. He received his doctorate in physics from MIT.
