Everything You Know About the Vulnerability Equities Process Is Wrong

Dave Aitel, Matt Tait
Thursday, August 18, 2016, 2:46 PM

The vulnerability equities process (VEP) is broken. While it is designed to ensure the satisfaction of many equities, in reality it satisfies none—or at least, none visible to those beyond the participants of the insular process. Instead of meaningfully shaping best outcomes, the VEP provides thin public relations cover when the US government is questioned on its strategy around vulnerabilities.

In other words, the US has confused a public relations strategy with a security strategy, to the detriment of the nation.

Because the VEP is a matter of Administration policy—and not a law or Executive order—it will be up to the next president whether to continue it. Rather than continue down a dysfunctional path—or pivot to even worse processes—the new administration should take the opportunity to devise a strategy that works.

If a new policy is going to actually improve the situation, it must be informed by a far more clear-eyed assessment of the strategic importance of undisclosed vulnerabilities to US national interests, the genuine risks attendant on disclosure and nondisclosure, and the practical implications of various processes in this mostly-covert space. Much of the current discussion about the VEP is untethered from genuine data analysis, from the technical features of vulnerabilities, and from any sense of realistic strategy for the United States government.

The history of the VEP has been covered elsewhere, so we won’t rehash it in detail here. In sum, the Obama administration created, and later “reinvigorated,” the VEP as an internal framework for determining when and whether the US government should publicly disclose newly-discovered software and hardware vulnerabilities—both those independently discovered by federal agencies and those acquired, in some cases, from third-party contractors. The precise applicability of the VEP remains classified, but issues surrounding discovery and disclosure are most salient in the context of intelligence and defense agencies. Broadly speaking, each vulnerability is assessed against a set of criteria to balance the public’s “need to know” against the government’s interest in keeping the information secret for operational use.

From the standpoint of US intelligence, the VEP is a deeply flawed and problematic framework. Considering intelligence equities alone, the best option would typically be to develop, stockpile, and utilize vulnerabilities for as long as possible and to disclose as infrequently as possible. But from an operational standpoint, it takes about two years to fully utilize and integrate a discovered vulnerability. For the intelligence officer charged with managing the offensive security process, the VEP injects uncertainty: it subjects the actions of offensive teams to inexpert interagency oversight and effectively puts certain classes of bugs on time limits toward eventual public exposure—all without any strategic or tactical thought governing the overall process.

Public protestations to the contrary, there should be no confusion: the VEP is inherently harmful to intelligence operators. The IC’s adversaries in Russia, China, Iran, and North Korea are not—nor will they ever be—hamstrung by similar policies. In fact, this week’s release of the “EQGRP” toolkit probably qualifies as the Russian version of the VEP. So no matter how limited the VEP might be, it will always represent a strategic disadvantage against foreign adversaries, a window into the US government’s most sensitive operations.

And the government’s real desire to open up the vulnerabilities process is doubtful. Michael Daniel’s much-cited blog post following Heartbleed noted that “this case has re-ignited debate about whether the federal government should ever withhold knowledge of a computer vulnerability from the public.” But it is important to recognize where the strategic (as opposed to political) elements of the US government genuinely fall in this debate: There are a great many strategic reasons not to release a vulnerability to vendors, and only a very few considerations weighing in favor of disclosure.

These reasons are not limited to vulnerabilities. When the NSA determines an adversary has penetrated a US company (Sony Pictures, for example), it must make equities decisions about what kind of information to share in order to help that company. Whatever the context, disclosing what you know to an adversary always has downsides.

As problematic as the current VEP policy is, plenty of US civil liberties groups and think tanks now, astoundingly, clamor to make things significantly worse. Misunderstanding and discarding strategic interests, they offer policy proposals premised on the unexamined axiom that the US government should disclose essentially all vulnerabilities, and do so at a much faster rate—there even appears to be some underlying uncertainty as to whether the government should be allowed to hold an undisclosed vulnerability in the first place.

Herein lies the basic problem: US cyber operations already face a greater level of scrutiny and limitations than our competitors. But single-minded reformists seek still more restrictions. At the same time, US cyber capabilities grow increasingly critical and central to the basic function of democratic interests worldwide. Without a robust investment in these capabilities, the US will lack the ability to solve the “Going Dark” issue and our intelligence efforts will start to run into quicksand around the world.

How are we to resolve this?

Current State of the VEP

At present the VEP exists as a formal interagency process that seeks to determine whether a particular software (or, more rarely, hardware or cryptographic) vulnerability should be disclosed or reserved for offensive use according to the “best interests of USG missions of cybersecurity, information assurance, intelligence, military operations, and critical infrastructure protection.” The VEP applies to vulnerabilities in open-source applications as well as ordinary commercial software. FOIA documents from 2010 shed a bit more light on the process, at least as it existed prior to the 2014 changes.

Let’s cut through the intricacies of the process and its various criteria. In practice, deciding whether to disclose or use a vulnerability actually requires determining two things: (1) Will the vulnerability be used? (2) Is the vulnerability too dangerous?

If the vulnerability will not be used—perhaps it cannot be reliably exploited or does not provide a capability useful for the US government—then the balance obviously tips towards disclosure. [Note: Even here, the “bias towards disclosure” may be less than strategic: vulnerabilities are often linked together and it is important to have backups.] If the vulnerability is “too dangerous”—for example, a vulnerability in software used often in the United States but rarely by US adversaries—then the balance likewise tips towards disclosure.

The problem arises in cases where it is not clear at the outset whether a vulnerability will be used or whether it poses a clear risk—and for everything in the middle, the questions quickly become rather subjective. How dangerous is too dangerous? How tenuous can a claim that the US “might need” a vulnerability be before it becomes too speculative? How great must the risk that a given adversary finds a vulnerability be before the balance tips to disclosure?

There are a number of factors that cannot be ignored (a toy sketch of this decision framework follows the list):

  1. How much did it cost the USG to find and demonstrate this vulnerability?

  2. What does releasing this vulnerability tell adversaries about the United States’ internal abilities?

  3. Are there things we may not know about this vulnerability that require further study?
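
To make the shape of this decision concrete, here is a minimal sketch in Python of the two questions and three factors described above. Every field name, threshold, and outcome here is our own hypothetical illustration; the actual criteria and their weights remain classified.

```python
# Toy model of the VEP framing sketched above. All fields, thresholds,
# and outcomes are hypothetical illustrations, not the government's
# actual (classified) criteria.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    reliably_exploitable: bool   # can it be engineered into a working exploit?
    operationally_useful: bool   # does it reach targets the USG cares about?
    domestic_exposure: float     # 0..1: prevalence in US systems
    adversary_exposure: float    # 0..1: prevalence in adversary systems
    discovery_cost_usd: float    # factor 1: what finding/demonstrating it cost
    reveals_tradecraft: bool     # factor 2: would disclosure expose capability?
    fully_understood: bool       # factor 3: or does it need further study?

def vep_decision(v: Vulnerability) -> str:
    # Question 1: will the vulnerability be used?
    if not (v.reliably_exploitable and v.operationally_useful):
        # Even an unusable bug may reveal capability (factor 2) or need
        # more study (factor 3) before it can safely be disclosed.
        if v.reveals_tradecraft or not v.fully_understood:
            return "hold for further study"
        return "disclose"
    # Question 2: is the vulnerability too dangerous to retain?
    if v.domestic_exposure > 0.8 and v.adversary_exposure < 0.1:
        return "disclose"
    # Everything in the middle is the subjective zone described above;
    # sunk cost (factor 1) tends to weigh against disclosing for free.
    return "retain"

# Example: heavy US exposure, negligible adversary exposure.
bug = Vulnerability(True, True, 0.9, 0.05, 2_000_000, True, True)
print(vep_decision(bug))  # -> disclose
```

Even in this toy form, the subjectivity is plain: everything interesting falls through to the final branch.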

Though nominally grounded in the fundamental equities at issue, the Vulnerabilities Equities Process is ill-equipped to resolve these questions, which are a miasma of technical and operational dilemmas. It should be a red flag that the Vulnerabilities Equities Process claims to do what is essentially impossible, at least at the scale required. This means that it is, at some level, either empty PR gamesmanship or simply poorly-thought-out guesswork.

Strategic Reasons in Favor of Disclosure

The most common argument for the USG to disclose vulnerabilities is that doing so “makes US citizens and businesses safer.” Broken down, the claim is essentially that the US benefits economically from a more secure Internet and commercial environment, and that US interests are served by thwarting cyber-attacks on US citizens and businesses.

Disclosure of a vulnerability potentially improves software security in two concrete ways: by reducing the total number of vulnerabilities in a piece of software and by encouraging vendors towards overall better code.

It is unclear, however, how well the VEP supports either of these goals. Filing single vulnerabilities, one at a time, has little effect in practice on the ability of hackers to find other bugs. If the NSA were disclosing bundles of hundreds of bugs to vendors at a time, the argument might have more weight—a concept that might be adopted in better future processes. But the concrete benefit of a zero-day disclosed one at a time is extremely limited.

And there is no real reason to believe the VEP drives vendors towards better overall code security. There is no evidence, for example, that any vendor has ever substantially rewritten large volumes of code to be systematically more secure in response to government-reported vulnerabilities. Nor has reporting individual zero-days done much to drive vendors to invest in anti-exploit mitigations: GCHQ’s information assurance division CESG has reported two vulnerabilities in Firefox, and yet Firefox still uses vulnerable code for memory management and does not run the browser in an isolated, “sandboxed” process.

Refocusing vulnerability disclosure might better achieve the ultimate goals. Policies, or even statutorily-imposed obligations, should make clear that vulnerabilities are released not just to fix the single issue, but also to drive changes in the vendor’s software security process. It is worth noting, though, that the biggest voices in this debate—Microsoft, Google, and Apple—all have robust and secure software development lifecycles, and that they built them in spite of, not because of, the VEP.

Additionally, there are three major, non-technical reasons for vulnerability disclosure.

First, disclosure can provide cover when an OPSEC failure leads the government to believe a zero-day has been compromised: if there is a heightened risk of malicious use, disclosure gives the vendor time to patch. Second, disclosing to vendors allows the government to out an enemy’s zero-day vulnerability without revealing how it was found. And third, government disclosure can form the basis for a better relationship with Silicon Valley. While all of these reasons for disclosure are important, they are also exceedingly difficult to measure, and they involve nuance that is far harder to communicate to the public than a simplified concept of “leaning towards disclosure.”

Strategic Reasons Opposing Disclosure

There are lots of ways that hacking is useful. But the United States uses it more than any other country to collect intelligence, to support warfare, and, increasingly, to enforce the law.

Law enforcement use of hacking as an investigative technique is an inevitable consequence of the increased use of end-to-end encryption, device encryption, and anonymity tools such as the Tor browser. Want to open a locked iPhone? Well, you need a zero-day. Want to see what’s being said in that end-to-end encrypted message? Need a zero-day. Want to know where that Tor user is actually browsing from? Zero-day.

For foreign intelligence, hacking is both safer and more effective than earlier forms of intelligence collection. In the old days, performing a wiretap or intercepting local messages sent between targets in a foreign country required putting HUMINT assets at risk to gain access to individual snippets of information. However complex the vulnerability equities calculus might seem, it pales in comparison to weighing when information is worth risking a human life. With hacking, the government can collect more comprehensive intelligence without putting anyone in harm’s way. Limiting the government’s ability to use zero-day vulnerabilities for foreign intelligence collection can therefore push that collection towards less effective, less reliable, and less safe methods, as well as towards forms of mass or undifferentiated collection that threaten the privacy of innocent citizens on a far larger scale.

Temporary Disclosure

In policy circles, an appealingly safe compromise option has emerged: allow the intelligence and law enforcement communities to use vulnerabilities for a short period of time, and then require that they be given to the vendor.

This is a mindbogglingly terrible idea.

Individual exploitable software vulnerabilities are difficult to find in the first place. Engineering a discovered vulnerability into an operationally deployable exploit that can bypass modern anti-exploit defenses is far harder. It is a challenge to get policymakers to appreciate how rare the skills for building operationally reliable exploits are. The skillset exists almost exclusively within the IC and in a small set of commercial vendors (many of whom were originally trained in intelligence). This is not an area where capacity can be increased simply by throwing money at it—meaningful development here requires monumental investment of time and resources in training and cultivating a workforce, as well as crafting mechanisms to identify innate talent.

And even where the intelligence community decides to no longer use an exploit, giving the vulnerability to a vendor is not costless. While the operational security (OPSEC) issues here are sometimes complex, a smart cyber operator assumes that their target has a full packet capture of their network, and acts accordingly. That means that when a signature comes out from Microsoft demonstrating a new vulnerability, the US should assume the Russians will search past network data for evidence of that vulnerability being used, a search that can lead them to implants successfully placed on target computers. Strategically, this is a nightmare: it is difficult to realize which machines you’ve been caught on until far too late, and implant technology requires immense investment. Justifying that kind of risk should require better data demonstrating the practical security improvements of disclosure. A sketch of what such a retrospective hunt might look like follows.
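
To make the retrospective-hunt risk concrete, here is a minimal Python sketch that searches stored capture data for a newly published byte signature. The signature bytes, the function name, and the raw-bytes format are all our own invented simplifications; real intrusion detection signatures (Snort or YARA rules, say) match far richer structure than a single byte pattern.

```python
import io

# Hypothetical sketch: retroactively hunting stored packet capture for a
# byte signature published in a vendor advisory. Signature and format
# are invented simplifications for illustration only.

SIGNATURE = bytes.fromhex("4d5a90e8f0ffffff")  # hypothetical exploit pattern

def scan_capture(capture, chunk_size: int = 1 << 20) -> list[int]:
    """Return byte offsets in a binary file-like object where SIGNATURE appears."""
    hits: list[int] = []
    tail = b""       # carry-over so matches can span chunk boundaries
    consumed = 0     # bytes processed so far
    while chunk := capture.read(chunk_size):
        window = tail + chunk
        start = 0
        while (idx := window.find(SIGNATURE, start)) != -1:
            hits.append(consumed - len(tail) + idx)
            start = idx + 1
        tail = window[-(len(SIGNATURE) - 1):]
        consumed += len(chunk)
    return hits

# Demo on synthetic "traffic": the pattern appears once, ten bytes in.
fake_capture = io.BytesIO(b"\x00" * 10 + SIGNATURE + b"\x00" * 10)
print(scan_capture(fake_capture))  # -> [10]
```

The point is not the code’s sophistication but its cheapness: anyone holding historical capture data can run something like this the day a signature is published.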

Patching vulnerabilities after they have been used leaks other important information as well: which vulnerabilities the US found and exploited, and exploit techniques that global adversaries can capitalize on.

A process designed to “expire” vulnerabilities after use is one designed as an acetylene torch to burn sources and methods.

Draining Our Own Bank

It is often asserted that when the NSA sends a vulnerability to Microsoft, it makes the Internet more secure generally. But this assertion is based not on evidence or data, but on the assumption that there is significant overlap between vulnerabilities discovered or used in the US and those discovered or used in China, Iran, and Russia. This assumption is not only tenuous, it is widely rejected by technical experts in the field. The prevailing expert opinion is that there is no clear evidence that Russian and Chinese operational zero-days overlap with those of the US IC.
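
A simple back-of-envelope model suggests why the overlap should be small. If two actors draw independently and uniformly from the same large pool of exploitable bugs (a strong simplifying assumption, and all numbers below are invented for illustration), the expected number of shared bugs is tiny:

```python
# Toy "bug collision" model. Under a uniform-independence assumption,
# each of our k_us bugs has a k_them / pool_size chance of also being
# in the adversary's stockpile. Pool size and counts are invented.

def expected_overlap(pool_size: int, k_us: int, k_them: int) -> float:
    return k_us * k_them / pool_size

# e.g. a 10,000-bug pool in which each side holds 50 working zero-days:
print(expected_overlap(10_000, 50, 50))  # -> 0.25 expected shared bugs
```

Real discovery is not uniform (both sides gravitate toward the same attack surfaces), so the true overlap is surely higher than this toy number, but the model illustrates why per-bug collision cannot simply be assumed.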

But unquestionably, when the NSA sends a vulnerability to Microsoft, it makes Windows more secure against the specific vulnerability that the NSA either bought—or more accurately, licensed—or invested time and money in discovering. In short, each disclosure drains US financial resources at a massive rate and the entire justification for doing so is based on a false premise.

Many commercial entities embrace some future VEP as a kind of universal government bug bounty program. That’s a nicer way of saying that lots and lots of US tax dollars will be used to subsidize the security of some of the biggest companies in the world. And those same companies only actually develop and deploy patches—the entire point of disclosure—when they feel like it.

Also note that when the USG reports vulnerabilities to, say, Microsoft, the company then sends the information to a team in India for remediation. It is not all that difficult for the governments of India or China to penetrate those teams. It’s what the NSA and GCHQ would do, probably together.

Financial and Long-Term Goals

Of course, in reality, there is not an endless amount of money available for developing new vulnerabilities for our IC, defense, or LE communities. Beyond that, the integration and maintenance of vulnerabilities involve even more effort than finding them in the first place.

And as we’ve seen this week with the EQGRP release, you never know when you might need a whole new batch of tested and working exploits. Sometimes, in order to avoid attribution, it is necessary to use a completely new toolchain on just a single target! Resiliency is a core component of defense-in-depth. And a US government resilient to hacks and leaks needs a strategic vulnerability development process that embraces keeping many vulnerabilities. It’s a kind of “offense-in-depth” feature of a flexible and strong system designed to continue critical mission functions in the face of compromise. Assuming you don’t need a deep bench of vulnerabilities relies on your opponent’s failure to advance defensive technology. That isn’t so much a strategy as a denial of reality. And refusing to face the advancing capacity of adversaries is going to catch up with the US eventually, if it hasn’t already.

A rarely-discussed element of vulnerability equities is that of offensive vulnerability equities: giving one mission team a monopoly on a particular attack path, target, or vulnerability, or finding ways for different offensive branches of the US government (and other governments) to work together. This is an important strategic function not addressed by current public policy. The question of interagency support was raised by a number of commentators wondering why the NSA would not or could not open an iPhone for the FBI in the highly-publicized San Bernardino case. But the importance of balancing the larger offensive-use equities should be understood at a strategic level. And the need for compartmented use further underscores the wisdom of having multiple vulnerabilities to accomplish a given purpose.

The Vulnerability Marketplace

Public information about the vulnerability marketplace is littered with claims that vulnerabilities are sold on a “grey market,” portrayed as a shadowy, quasi-criminal underworld. In reality, like any important supply chain, the vulnerability marketplace is a valuable part of our strategic national mission in cyber, and the vast majority of vendors are closely aligned with that mission. Efforts to restrict the US government’s ability to license or purchase vulnerabilities drive the marketplace towards buyers who can transact under less onerous conditions, or drastically increase prices. Put simply, it is not in the United States’ best interest for vendors to sell exploits that the US government wants to nations other than the US (even allies), nor to make it more expensive for the government to buy those vulnerabilities—after all, it’s our money they’re spending. And, like it or not, bug vendors have a vote on vulnerability disclosure because they make investments in aid of the mission.

Strategic Reasons Opposing the VEP

At its core, the Vulnerabilities Equities Process is a tactical solution to a strategic problem. Properly understanding the operational security and technical details of hundreds of vulnerabilities a year would require an expert staff of thousands. Undertaking the effort without making that investment puts strategic cybersecurity goals on a roulette wheel.

The VEP must align with the United States’ and its allies’ overall strategic goals in cyber. And this requires focusing less on the details of any one vulnerability and more on the high-level equities at stake. A goal-based approach would consider vulnerability disclosure and use in terms like “unlocking iPhones is a priority for the law enforcement community,” “ensure Microsoft fixes the bugs coming out of the Chinese Word-fuzzing team, while presenting them as the product of our own fuzzer effort to obscure IC access,” and “the Department of Defense has first priority on anything customarily used in Iran.” This goal-based method does not exclude private-industry interests, but instead situates the equities in a broader context and rests on facts rather than assumptions.

Policymakers need to recognize what technologists already know: the VEP is not the solution we need defensively. Most breaches in the US, whether against citizens, businesses, or the government, are accomplished without zero-day vulnerabilities. If collectively we decide that the intelligence community should do more to help defend America online, it should be charged with helping companies develop systemic improvements against phishing, or with researching anti-exploit techniques for major software. But insisting the government disclose zero-days one at a time, following some painstaking process, helps no one and hurts us all.


Matt Tait is the Chief Operating Officer of Corellium. Previously he was CEO of Capital Alpha Security, a consultancy in the UK; worked at Google Project Zero; was a principal security consultant for iSEC Partners and NGS Secure; and worked as an information security specialist for GCHQ.
