
Vulnerabilities Equities Reform That Makes Everyone (And No One) Happy

Susan Hennessey
Friday, July 8, 2016, 12:27 PM

National Security Council veterans Ari Schwartz and Rob Knake recently released a paper entitled Government’s Role in Vulnerability Disclosure. The paper, published by Harvard’s Belfer Center, catalogues the existing government mechanisms for determining whether to disclose a newly discovered vulnerability in hardware and software products and makes a series of recommendations for reform. The recommendations include both useful steps and a few proposals I argue are ultimately counterproductive. More fundamentally, the paper provokes interesting questions about the ultimate goals of equities balancing reform and the best vehicle to accomplish those aims.

The authors perform a real service by developing the history and fundamentals of the Vulnerability Equities Process (VEP). Their full accounting is worth reading in its entirety, so I won’t recap it here. At present, the VEP is under renewed scrutiny following disclosures of the FBI’s acquisition and use of tools that rely on previously unknown vulnerabilities. Technology industry representatives and privacy advocates have called for greater clarity regarding the largely classified process. But because so little is known, the debate is primarily one of untethered speculation. By articulating as much about the process as is possible under classification obligations, this paper grounds an important discussion in facts.

Before evaluating particular reform measures, it is helpful to articulate the ultimate goal—in this sphere the aim is not always clear. The government reports that it discloses approximately 91% of newly discovered vulnerabilities, while the remaining 9% are either fixed by the vendor before the process concludes or retained for operational use. Broadly speaking, criticisms of the VEP—and responsive reform efforts—take two forms. First, there are critics who agree with the basic premise that, although information security interests trump other security equities in the vast majority of cases, there are some circumstances in which a vulnerability should be retained and exploited for law enforcement or intelligence purposes. Their concern is that there is insufficient public information to evaluate whether the process is fairly designed to reach appropriate outcomes, and a lack of clarity regarding who and what is governed by it. Schwartz and Knake’s recommendations are squarely aimed at this group, and thus seek to generate public legitimacy through transparency and accountability.

There is, however, an alternative animating principle for a significant number of vocal detractors. In essence, these critics object as a matter of first principle to the notion that it is ever appropriate for the government to withhold vulnerabilities, because information security threats are so broadly diffuse and the integrity of information security is central to a great many civil liberties. While there is variation in degree, they basically oppose all genuine balancing processes—preferring instead a process of near-constant disclosure—because balancing necessarily begins from the premise that retaining unknown vulnerabilities can serve a net good. Attempts to appease this latter group will inevitably fall short, and are likely to do more harm than good.

But all critics—and indeed the US government—appear to agree that the reason to disclose a vulnerability is to patch it and eliminate the threat. Disclosure represents some degree of loss to the security interests served by government use—intelligence and law enforcement. Usually, however, that loss is more than offset by the ubiquitous information security gains of patching. Or, to put it conversely, the loss to information security from not disclosing a vulnerability is so high that it can only be exceeded by monumental gains to other security interests. But where a vulnerability is disclosed (a loss) and no patch is deployed (no gain), the result is net harm.

The Good

As stated above, the true value of this paper is the development of history and insight into the existing process. Beyond that, the majority of recommendations are sensible steps toward better transparency and accountability. Efforts to make the criteria and process standardized and transparent will go a long way to increase public trust. And mechanisms to counter bureaucratic tendencies toward secrecy and delay, through periodic review and reporting requirements, serve important purposes as well. The primary challenge with most of these recommendations is in implementation. For example, publishing high-level criteria is a natural starting place for transparency efforts and unobjectionable in theory. But the public only gets genuine insight into the decision-making process where the criteria are sufficiently concrete. It is difficult to articulate the precise degree of specificity—something more substantive than a blog post, but not so narrow as to compromise national security or tie the executive’s hands—needed to be truly responsive in practice. This is not to say the task is impossible, only that threading the needle here seems more fraught than acknowledged.

The Bad

At least two of the recommendations seem to be reactionary calculations that risk inflicting long-term harm. First, a prohibition on federal agencies entering into non-disclosure agreements with vulnerability researchers and resellers is counterproductive. The suggestion seems modest, requiring only that agencies “obtain the ability to disclose any vulnerability they purchase.” But the practical effect will be to make a significant number of tools cost-prohibitive for the government, because purchasing on an exclusive basis is vastly more expensive. There are far more appropriate mechanisms for protecting US interests, including export controls, contracting terms, and aggressive regulation of the legally questionable vulnerabilities marketplace.

This recommendation seems at least partially calculated to respond to the outrage among many critics who believe that the FBI “circumvented” the VEP by contracting for proprietary tools to unlock Syed Farook’s iPhone in San Bernardino. The Bureau declined to submit the tool for equity review because it lacked sufficient information. But enshrining the requirement that an agency must purchase exclusive rights serves little purpose while acting as a significant impediment. There are outstanding constitutional questions regarding what information must be known and disclosed regarding forensic tools used against criminal defendants. In San Bernardino the question was moot because the perpetrator was deceased, but courts are already taking up the issue in a number of different contexts, including the FBI’s reliance on an unknown vulnerability in bringing down a child pornography ring. In this realm, those constitutional interests are the proper limitations on government procurement, not artificial restrictions designed to appease reactionary critics of the FBI.

The other problematic recommendation is to transfer the VEP Executive Secretary function from the NSA to the Department of Homeland Security. The basic premise of the recommendation is that by reorganizing into a combined operational directorate, the NSA is, in reality or perception, no longer capable of discharging its information assurance mission. I’ve written elsewhere about why this is a misconception and won’t belabor the point here. But transferring the function out of NSA for purposes of empty virtue signaling only perpetuates this damaging falsehood and achieves no genuine security or transparency aim. If, alternatively, this proposal is based on the substantive impact of moving the function, then its proponents need to make the case for why DHS is better situated to actually perform the functions, including drawing on relevant subject matter experts and communicating decisions to the public or overseers. In the majority of cases, disclosure is an easy decision, and in some exceedingly rare circumstances retention is likewise the obvious call. But for the “close calls” where the subtle influence of particular roles might actually impact the outcome, the NSA, FBI, and DOD have far more complex equities and a deeper base of expertise. Operationally and historically, DHS is the least well situated to serve the more consequential functions, and the recommendation therefore raises the suspicion that it is part of a larger trend to use DHS as a kind of transparency window dressing. If the move has no impact on outcomes, then false assurances actually thwart meaningful transparency. If it does impact outcomes, then serious concerns regarding suitability, capacity, and stakeholder buy-in persist.

Too Much or Not Enough?

The core idea of formalizing the VEP in an executive order merits the most examination. Considering the reform goals of transparency and accountability, formalizing the equities process makes a lot of sense. The obvious way to create trust in a system that necessarily functions in secret is to make public rules which are binding on the relevant actors. Although there is a limit to formalizing a process where the core is judgment and discretion, there is space for significant improvement here.

An executive order (EO), which has the binding force of law on executive agencies, is an attractive vehicle for achieving this goal. Executive orders have the benefit of constraining the government while retaining more flexibility than statutes. Issuing an executive order governing equities analysis ensures the process is not merely dropped at the turn of a new administration, but preserves the ability of a future executive to chart a different course if needed.

But in some regards an executive order goes too far, and in others it doesn’t go far enough.

There are ways to heighten the formality of the policy that fall short of an EO. And by elevating the obligation beyond policy considerations, an EO reduces the US government’s flexibility in developing tools based on existing vulnerabilities with commercial and foreign government partners. Subject to a binding obligation, the United States could never agree in advance not to unilaterally disclose the products of a joint effort, or even information derived from a tool shared by a partner. True, the US government never enjoys limitless discretion in partner relationships. For example, existing obligations under the Homeland Security Act of 2002 require “all agencies of the Federal Government” to promptly report “all information concerning the vulnerability of the infrastructure of the United States, or other vulnerabilities of the United States, to terrorism” to the Secretary of DHS. But issuing an executive order is not a costless endeavor, and the government should sacrifice flexibility only if real transparency goals cannot be achieved through a lesser policy process.

The rush to formalize the VEP also sidesteps a lurking separation-of-powers question. A formalized equities process—unequivocally designed to favor disclosure—risks infringing on congressional appropriations powers. Congress appropriates funds for programs designed to develop law enforcement and intelligence tools separately from funding for activities related to cybersecurity and information assurance. If an executive order effectively mandated that particular research intended for one activity be handed over in service of another, the result would be to transfer congressional appropriations between programs. A number of considerations that could properly take place at a policy level risk violating constitutional principles when packaged as an executive order requiring government-wide compliance.

It is more likely, however, that an executive order is insufficiently powerful to accomplish the reforms needed to serve the ultimate security goals.

Currently, the emphasis on the equities process is singularly focused on government activity. But amid the frantic accusations that failing to disclose vulnerabilities leaves us all less safe, there is remarkably little discussion of what a company should be required to do when notified of an exploitable flaw. This may be a case where industry groups clamoring for and circulating legislation get more than they bargained for. If we are going to rethink the equities process, it is foolish to treat disclosure as a one-way street with no remedial obligations. And executive orders are weak tools for influencing private industry.

If the ultimate security goal is disclosure of vulnerabilities so that they will be patched—again, without patching the security loss is net negative—then more varied legislative options should be considered. A statutory approach reduces flexibility more dramatically than an executive order, but the challenges are not insurmountable with careful drafting. Importantly, a statute inserts Congress into the decision-making process, eliminating any appropriations objections and building in judicial oversight.

Courts are increasingly being confronted with questions of vulnerabilities equities. In a recent case regarding the FBI’s use of a tool purported to exploit a flaw in the Tor browser in order to identify a large-scale child pornography distribution operation, Mozilla filed a motion to intervene. Tor relies on a modified version of Mozilla’s Firefox browser, and the company therefore asserted it had a right to be heard on whether the government should be obligated to disclose the vulnerability for purposes of trial discovery. A court in the Western District of Washington denied the motion, tersely holding: “It appears Mozilla’s concerns should be addressed to the United States and should not be part of this criminal proceeding.” Building a statute around the process would give courts objective criteria with which to evaluate industry interests or, more probably, provide alternate venues for those interests to be heard, avoiding the need to use legal process in service of PR gamesmanship.

Finally, there is a more cynical but important potential gain in pursuing a statutory solution over an EO. It appears increasingly likely that some significant part of the “Going Dark” solution will involve facilitating alternative government access to data while not regulating encryption products. In addressing the problem head on, as opposed to through piecemeal and prolonged fights, Congress should consider holistic legislation to both permit and control lawful hacking.

The authors touch on the concept by recommending significant funding increases to allow the government to consistently develop new tools while rapidly disclosing exploits it no longer needs. I am less optimistic regarding the broader USG’s agility in discovering relevant vulnerabilities, the availability of funds, or the prospect that new tools will emerge in such a convenient and linear fashion as to support the so-called “virtuous cycle” of discovery, use, and disclosure. I agree it would be nice if it worked out that way. But while the optimists wait, pragmatists may find that the ability to fine-tune legislation by including carefully balanced vulnerability disclosure obligations could go a long way in crafting a broadly acceptable lawful hacking regime.


Susan Hennessey was the Executive Editor of Lawfare and General Counsel of the Lawfare Institute. She was a Brookings Fellow in National Security Law. Prior to joining Brookings, Ms. Hennessey was an attorney in the Office of General Counsel of the National Security Agency. She is a graduate of Harvard Law School and the University of California, Los Angeles.
