
Cyber Paradox: Every Offensive Weapon is a (Potential) Chink in Our Defense -- and Vice Versa

Jack Goldsmith
Saturday, April 12, 2014, 7:37 AM

As Ben notes, the USG denied a Bloomberg News report that the “U.S. National Security Agency knew for at least two years about a flaw in the way that many websites send sensitive information, now dubbed the Heartbleed bug, and regularly used it to gather critical intelligence.”  The NYT story on this denial says:
James A. Lewis, a cybersecurity expert at the Center for Strategic and International Studies in Washington, said that the claim that the N.S.A. knew about the Heartbleed bug and stockpiled it for its own purposes was not in keeping with the agency’s policy. “In this case, it would be weird for the N.S.A. to let this one go if they thought there was such a widespread risk,” he said.
I do not know what the NSA “policy” is on this matter. But there is an important, very hard, and not-much-discussed issue lurking here.
Public reports suggest that the NSA engineers or discovers or purchases, and then stores, zero-day vulnerabilities (i.e., software defects unknown to the vendor and to others). Zero-days assist NSA and Cyber Command in their cyber-exploitations and cyberattacks. (For example, Stuxnet reportedly used four zero-day vulnerabilities.) Zero-days are useful in building offensive exploits only to the extent that they are unknown and unpatched. But if the NSA stockpiles such vulnerabilities, and if the vulnerabilities persist in generally available software, then another party besides the NSA might discover a vulnerability and use it offensively – including against USG, U.S.-firm, and U.S.-person computer systems.
And so the government faces a difficult choice: It can hoard a zero-day for offensive purposes but leave every computer system affected by the zero-day vulnerable to exploitation or attack, or it can disclose the vulnerability and allow it to be patched, enhancing defense at the cost of a potential offensive tool. Former NSA Director Michael Hayden described this as a “perennial” question of signals intelligence: “What do you do with a vulnerability, do you patch it or do you exploit it?” (See embedded video, about 2:20.)
Presumably the policy that James Lewis is referring to is one that explains how the USG decides which zero-days to keep secret and unpatched and which ones to make public and patchable. (Note that former White House cybersecurity advisor and President’s Review Group member Richard Clarke has said that there is no such policy: “There is supposed to be some mechanism for deciding how they use the information, for offense or defense. But there isn’t.”)
There are obviously significant tradeoffs here. How should we think about them? Cory Doctorow recently drew an analogy to public health:
If you discovered that your government was hoarding information about water-borne parasites instead of trying to eradicate them; if you discovered that they were more interested in weaponising typhus than they were in curing it, you would demand that your government treat your water-supply with the gravitas and seriousness that it is due.
Doctorow concludes:
If GCHQ wants to improve the national security of the United Kingdom – if the NSA want to improve the American national security – they should be fixing our technology, not breaking it. The technology of Britons and Americans is under continuous, deadly attack from criminals, from foreign spies, and from creeps. Our security is better served by armoring us against these threats than it is by undermining security so that cops and spies have an easier time attacking “bad guys.”
Bruce Schneier has a similar suggestion:
[T]he NSA needs to be rebalanced so COMSEC (communications security) has priority over SIGINT (signals intelligence). Instead of working to deliberately weaken security for everyone, the NSA should work to improve security for everyone.
Richard Clarke argues in the same vein, but with some caveats: “If the U.S. government knows of a vulnerability that can be exploited, under normal circumstances, its first obligation is to tell U.S. users.” (Emphasis added.)
Doctorow’s and Schneier’s absolutist position might be right, but it is hard to know. Among many other things, we would need to know much more about the value of the vulnerabilities, and of the exploits and attacks they make possible, in the overall security equation. Then we would need to weigh that value against the cost of the network security threats left unaddressed (broadly conceived, including threats to the operation of the network itself), discounted by the probability that others besides the USG could discover and use the vulnerability. I am no expert on this, and I cannot begin to say how to estimate these values. But with that important caveat, Doctorow and Schneier might assign too little weight (no weight, as far as I can tell) to the potential security value of storing vulnerabilities. That is, it is not obvious that the optimal tradeoff is for the government never to store vulnerabilities.
Michael Hayden provides one context in which keeping a vulnerability secret might be appropriate, in explaining the concept of “NOBUS” – i.e., “nobody but us.” Hayden explains (about 2:10 following):
You look at a vulnerability through a different lens if even with the vulnerability it requires substantial computational power or substantial other attributes and you have to make the judgment who else can do this? If there's a vulnerability in here that weakens encryption but you still need four acres of Cray computers in the basement in order to work it, you kind of think “NOBUS” and that's a vulnerability we are not ethically or legally compelled to try to patch -- it's one that ethically and legally we could try to exploit in order to keep Americans safe from others.
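One way to see the structure of the weighing sketched above, and of Hayden’s NOBUS judgment, is as a crude expected-value comparison. The toy calculation below is purely illustrative: the one-line model and every number in it are invented for exposition, and nothing in it describes how the USG actually evaluates vulnerabilities.

# Purely illustrative sketch of the patch-versus-exploit weighing discussed above.
# The model and all numbers are invented for exposition; this is not a description
# of any actual USG process for evaluating vulnerabilities.

def net_value_of_retaining(offensive_value, rediscovery_probability, damage_if_exploited):
    """Expected net value of keeping a vulnerability secret rather than disclosing it.

    Disclosure is treated as the baseline (value 0): the flaw gets patched, so there
    is no offensive gain and no residual exposure. Retention yields the offensive
    value minus the expected cost of someone else finding and using the same flaw.
    """
    expected_defensive_cost = rediscovery_probability * damage_if_exploited
    return offensive_value - expected_defensive_cost

# A NOBUS-style case: exploiting the flaw takes resources almost nobody else has,
# so the chance that another party rediscovers and uses it is judged to be very low.
print(net_value_of_retaining(offensive_value=100.0,
                             rediscovery_probability=0.02,
                             damage_if_exploited=1000.0))   # 80.0: tilts toward retaining

# A Heartbleed-like case: the flaw sits in ubiquitous software and is cheap to
# exploit once found, so the expected defensive cost swamps the offensive value.
print(net_value_of_retaining(offensive_value=100.0,
                             rediscovery_probability=0.6,
                             damage_if_exploited=1000.0))   # -500.0: tilts toward disclosing

On this framing, Hayden’s NOBUS judgment is essentially a claim that the rediscovery probability is near zero for anyone but the United States. The hard part, of course, is that none of these quantities is easy to estimate, and the estimates themselves will be contested.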
I can imagine other circumstances in which it might be appropriate to store and exploit rather than patch a vulnerability. What if the vulnerability is largely or exclusively found in software used by an adversary? What if the vulnerability is for some reason unlikely to be discovered during the relevant time frame by anyone except NSA? (For an elaboration of this last idea, as well as an explanation of the circumstances in which intelligence agencies have powerful incentives to hoard rather than patch vulnerabilities, see this interesting paper by Friedman, Moore, and Procaccia.) What if the offensive value of the vulnerability is enormous, the security threat the vulnerability helps to address is significant, and the security costs of not disclosing the vulnerability are relatively small? And so on. I am not saying that such situations arise often. But neither Doctorow nor Schneier addresses the tradeoffs or the possibility that they might cash out differently in different contexts. They appear to think, without extended argument, that network security is an absolute value that always trumps. Maybe it is or should be, but they need to explain why the tradeoffs should not sometimes be resolved the other way.
This discussion is relevant, obviously, to the question of whether Cyber Command should be closely associated with, or separate from, NSA, or, more generally, whether we want the same entity doing cyber offense and cyber defense. One possible reason to put the same entity in charge of both is to enable that entity to better understand how offense and defense relate to one another, and thus potentially to enable more intelligent tradeoffs. But I can easily imagine a very different argument to the effect that, depending on one’s normative commitments, the tasks should be separated and the tradeoffs managed by an independent offense chief, an independent defense chief, or an independent third party. But further complications arise. If offense and defense are separate, would they tell one another about the vulnerabilities they discover or engineer? What would the sharing rules be? Who would decide, and by what criteria, whether to hoard or patch a vulnerability? It is not obvious that separating offense and defense helps with the underlying problem of how to resolve the tradeoff, although of course it might skew the outcomes in certain ways. These are hard and deep questions of institutional design. (Note that after an extensive review, and contrary to a recommendation by the President’s Review Group, the White House decided to keep the NSA Director in charge of Cyber Command.)
Final point: I assume that the “policy” Lewis alludes to is classified. (If, as Clarke says, there is no such policy, or if there is such a policy and it is unclassified, please let me know.) Should such a policy be classified? Why shouldn’t the USG tell us, at least in general terms, how it manages these hugely consequential tradeoffs between different types of national security interests, so that the public can debate them? Michael Hayden told us something about the “policy” when he described the NOBUS concept, which he noted was “a very important concept to share.” Is there more to say? Why not say it? The USG might argue: Because revealing the policy to the American public would also reveal it to our adversaries, and allow them to understand the nature or type of vulnerabilities the USG uses in its cyber operations.
But this response just pushes the question about tradeoffs to a different level.  Yes, there are costs to disclosing the policy, but there are also benefits in terms of having a more intelligent and informed discussion of the relevant security tradeoffs.  If we have learned anything since last summer, it is that the USG, as it now acknowledges, sometimes unduly favors the interest in preventing our adversaries from knowing over the interest in allowing a more informed domestic debate.  Often, that tradeoff should be resolved in favor of disclosure.  Is this such a case?

Jack Goldsmith is the Learned Hand Professor at Harvard Law School, co-founder of Lawfare, and a Non-Resident Senior Fellow at the American Enterprise Institute. Before coming to Harvard, Professor Goldsmith served as Assistant Attorney General, Office of Legal Counsel from 2003-2004, and Special Counsel to the Department of Defense from 2002-2003.
