
Thoughts on White House Statement on Cyber Vulnerabilities

Jack Goldsmith
Monday, April 28, 2014, 9:58 PM


As Ritika noted, White House Cybersecurity Coordinator Michael Daniel today announced some aspects of the government’s policy on disclosing cyber vulnerabilities. (David Sanger’s NYT story on Daniel's statement has good background and analysis.) I think Daniel's statement is an admirable one that explains what is at stake here and gives the public guidance on how the USG manages this issue. I also think that Daniel implies that USG policy falls short of what the President’s Review Group recommended. I had earlier wondered whether – as Richard Clarke and Peter Swire maintained – the government had accepted the Review Group’s Recommendation 30, which in brief provided:
US policy should generally move to ensure that Zero Days are quickly blocked, so that the underlying vulnerabilities are patched on US Government and other networks. In rare instances, US policy may briefly authorize using a Zero Day for high priority intelligence collection, following senior, interagency review involving all appropriate departments.
There are two basic issues here, a substantive issue and a process issue.  The substantive issue is whether and when the government should store a known vulnerability for later use rather than revealing it and allowing it to be patched.  The process issue is who decides. On the substantive issue, Daniel laid out the tradeoffs as follows:
. . . We rely on the Internet and connected systems for much of our daily lives. Our economy would not function without them. Our ability to project power abroad would be crippled if we could not depend on them. For these reasons, disclosing vulnerabilities usually makes sense. We need these systems to be secure as much as, if not more so, than everyone else. But there are legitimate pros and cons to the decision to disclose, and the trade-offs between prompt disclosure and withholding knowledge of some vulnerabilities for a limited time can have significant consequences. Disclosing a vulnerability can mean that we forego an opportunity to collect crucial intelligence that could thwart a terrorist attack, stop the theft of our nation’s intellectual property, or even discover more dangerous vulnerabilities that are being used by hackers or other adversaries to exploit our networks. Building up a huge stockpile of undisclosed vulnerabilities while leaving the Internet vulnerable and the American people unprotected would not be in our national security interest. But that is not the same as arguing that we should completely forgo this tool as a way to conduct intelligence collection, and better protect our country in the long-run. . . .
Daniel then laid out the criteria to be considered when assessing these tradeoffs:
  • How much is the vulnerable system used in the core internet infrastructure, in other critical infrastructure systems, in the U.S. economy, and/or in national security systems?
  • Does the vulnerability, if left unpatched, impose significant risk?
  • How much harm could an adversary nation or criminal group do with knowledge of this vulnerability?
  • How likely is it that we would know if someone else was exploiting it?
  • How badly do we need the intelligence we think we can get from exploiting the vulnerability?
  • Are there other ways we can get it?
  • Could we utilize the vulnerability for a short period of time before we disclose it?
  • How likely is it that someone else will discover the vulnerability?
  • Can the vulnerability be patched or otherwise mitigated?
There is no way to know for sure, but I think this list implies that the government will store and possibly use vulnerabilities, especially over a short term, in a wider array of circumstances than (as the Review Group said) in “rare instances” and only for “high priority intelligence collection.” I especially think this because Daniel’s carefully worded and vetted statement says that “in the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest” (my emphasis). Disclosing vulnerabilities most of the time is far short of disclosing them in all but rare instances. But again, there is no way to know for sure.

On the process point, the Review Group suggested that a vulnerability should be used for high-priority intelligence collection only after “senior, interagency review involving all appropriate departments.” Daniel talked about a process not for the use of vulnerabilities but for the (related but not identical) issue of whether to disclose a vulnerability. He said that the decision whether to disclose (or, by implication, not to disclose, i.e. to store) a vulnerability should be made in “a disciplined, rigorous and high-level decision-making process,” which he described as an “interagency” process. As I say in the Sanger story, Daniel here implies that store v. disclose is “not an issue that should just be left to the N.S.A. or the F.B.I.” That might be a change in policy. But missing from Daniel's statement is the Review Group recommendation that “senior” officials and “all appropriate departments” will be involved in every decision not to disclose a vulnerability. For example, it is consistent with Daniel’s statement (but not with the Review Group’s recommendation) that the interagency process would take place entirely within the intelligence community, perhaps under the auspices of the DNI. That said, it is also consistent with Daniel's statement that other Departments outside of the Intelligence Community or DOD (such as the Commerce and State Departments) might sometimes have input in the store v. disclose decision. Perhaps the DNI or some official within the Intelligence Community has the presumptive call, pursuant to general high-level guidance, subject to higher-level review in certain high-stakes circumstances. There are many possibilities and nuances consistent with what Daniel said, and while his statement seems to fall short of (the relatively wooden) Recommendation 30, there is no way to know for sure.

While ambiguities remain, the Daniel statement is nonetheless extraordinary for several reasons. First, it implicitly reveals quite a lot about some dimensions of the USG's offensive capabilities, policies, and thinking. Second, it illustrates that – as has been argued on this site a few times – the store v. disclose issue is more complex than the Review Group’s Recommendation 30 suggests. Third, Daniel makes clear that the USG takes defense of the Internet, and disclosure of vulnerabilities, very seriously, and that it has gone to greater lengths than any other nation to make public its policy guidelines on the issue. At some point – I am not sure we have reached it yet – more transparency will affirmatively harm intelligence collection in ways that outweigh the public confidence and related benefits of further disclosure. As Daniel said:
Enabling transparency about the intersection between cybersecurity and intelligence and providing the public with enough information is complicated.  Too little transparency and citizens can lose faith in their government and institutions, while exposing too much can make it impossible to collect the intelligence we need to protect the nation.
This is a very tricky tradeoff to manage. The tradeoff is tricky not just because transparency aids our adversaries. It is also tricky because disclosure invariably begets further disclosure, and because disclosures of the sort Daniel made – which reveal a lot about what the USG is up to – will (contrary to Daniel's stated aim) diminish trust in the USG in many quarters, especially since no other country makes disclosures of this type. Life is somewhat more complex in the intelligence community in the post-Snowden world.

Jack Goldsmith is the Learned Hand Professor at Harvard Law School, co-founder of Lawfare, and a Non-Resident Senior Fellow at the American Enterprise Institute. Before coming to Harvard, Professor Goldsmith served as Assistant Attorney General, Office of Legal Counsel from 2003-2004, and Special Counsel to the Department of Defense from 2002-2003.
