Adding Data to the VEP Debate: RAND's New Report
When WikiLeaks shed light on the CIA’s stockpile of software vulnerabilities last week, it revived—but hardly clarified—the debate on whether the government hoards too many bugs. In principle, the interagency Vulnerability Equities Process (VEP) ensures that a flaw is disclosed when the interest in patching it exceeds other governmental interests in exploiting it. Privacy advocates have long suspected that, in practice, the deck is stacked against disclosure. Every new compromise of intelligence community hacking tools feeds that suspicion, while public examples of the government handing over a bug remain rare. But a new study published by the RAND Corporation suggests the VEP might be working fine, even if it produces little to no disclosure.
The tradeoff that the VEP embodies is easy to express but hard to calculate. The VEP will, by its nature, never satisfy those who reject the premise that intelligence and law enforcement agencies should do some hacking; but for those who accept it, each previously unknown or zero-day vulnerability that gets patched is a window closing on a legitimate target. Some vulnerabilities, though, place enough American citizens, businesses, or interests at risk that the intelligence advantage isn’t worth the harm. It’s no coincidence that the VEP’s existence was first disclosed in the wake of allegations that NSA had been sitting on knowledge of the especially catastrophic Heartbleed vulnerability.
Balancing those interests, though, requires answering questions on which little public data has been available. If the core concern is that a vulnerability discovered by NSA will be uncovered and exploited by Russia, China, or criminal hackers, how often are bugs in fact rediscovered? How much time does the actor who first identifies a zero-day vulnerability have before that risk of rediscovery becomes intolerable? How does that risk compare to the costs of disclosure, whether measured by the labor invested in developing an exploit or the operational secrets incidentally revealed? Dave Aitel and Matt Tait have argued forcefully in Lawfare that the VEP “is ill-equipped to resolve these questions, which are a miasma of technical and operational dilemmas.” And whatever its institutional capacity, if rediscovery is rare enough and a bug’s life long enough, a VEP operating in good faith would rarely vote to share from NSA stockpiles.
RAND’s report “Zero Days, Thousands of Nights,” authored by Lillian Ablon and Andy Bogart, brings data to bear on those issues. Their research is based on a collection of exploits developed by a private research group—Ablon and Bogart call them BUSBY, a reference to the Hungarian military headgear—whose members are active in the gray and white markets. Their dataset included 207 exploits developed between 2002 and 2016. That resource in hand, the report examines how many of these targeted vulnerabilities later entered the public domain. Its results should mitigate suspicions that the VEP has been rubberstamping intelligence agencies’ instincts to hoard, as well as temper expectations that a reformed or “reinvigorated” VEP would disclose more than it currently does.
Over the lifetime of the BUSBY dataset, 40 percent of the vulnerabilities that the group had targeted became public—but the risk of an independent discovery within one year was just 5.7 percent, and the risk for the first ninety days was less than one percent. In fact, the average exploit in the collection had a “life expectancy” of 6.9 years. If those figures line up with the experience of law enforcement and intelligence agencies, they suggest that the benefits of prompt disclosure are relatively low.
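To make concrete what such figures measure, here is a minimal sketch, in Python, of how headline numbers like “share public within a year” or “average life expectancy” can be computed from records of when an exploit was acquired and when its underlying vulnerability became public. This is not RAND’s code or data: the dates below are invented, and the naive averaging ignores exploits still private at the end of the observation window, which the report’s survival analysis handles more carefully.

    # Illustrative only: synthetic records, not the BUSBY dataset.
    # Each entry: (date the exploit was acquired, date its vulnerability
    # became public, or None if it was still private when observation ended).
    from datetime import date

    exploits = [
        (date(2004, 3, 1), date(2011, 6, 15)),
        (date(2007, 9, 10), None),
        (date(2010, 1, 5), date(2010, 11, 20)),
        (date(2013, 4, 18), None),
        (date(2014, 7, 2), date(2016, 2, 1)),
    ]

    # Exploits whose vulnerabilities eventually went public.
    disclosed = [(a, p) for a, p in exploits if p is not None]

    share_public = len(disclosed) / len(exploits)
    within_year = sum((p - a).days <= 365 for a, p in disclosed) / len(exploits)
    avg_life_years = sum((p - a).days / 365.25 for a, p in disclosed) / len(disclosed)

    print(f"Became public: {share_public:.0%}")
    print(f"Public within one year: {within_year:.0%}")
    print(f"Average life of disclosed exploits: {avg_life_years:.1f} years")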
As Ablon and Bogart acknowledge, the study’s explanatory power has its limits. The dataset is small and potentially idiosyncratic. What’s more, the question that it addresses—how often zero-day vulnerabilities become public—doesn’t quite match the question that most concerns critics of nondisclosure: How often are zero-days rediscovered by adversaries operating, like CIA or NSA, in secret? From one angle, RAND’s proxy measure may overestimate that risk: American intelligence agencies and their foreign rivals have different operational needs, and they may well hunt for bugs in very different spaces. Tait and Aitel have argued that “there is no clear evidence that Russian and Chinese operational zero days overlap with those of the U.S. IC.”
From another angle, the study may understate the risk: it says nothing about the concern that hacking tools will be stolen outright, as they were in the Shadow Brokers and WikiLeaks incidents, rather than reinvented elsewhere. That threat makes stockpiling riskier than rediscovery rates alone would suggest. Of course, a reliable account of either of those phenomena is probably, and perhaps inevitably, unavailable outside the walls of the intelligence community.
Still, these figures will likely anchor the debate over the VEP until more or better data emerge. Since the VEP’s existence was disclosed, a handful of proposals to reform the process have been advanced. Perhaps most prominently, Ari Schwartz and Rob Knake, both former National Security Council officials, authored a 2016 report that recommended formalizing the VEP through an Executive Order, expanding Congressional oversight, and making more information about its operations public. The recommendations met with a mixed response; Susan Hennessey called the proposal “Reform That Makes Everyone (And No One) Happy.” The RAND results make the argument for reform less obvious. If low rates of disclosure are already driven by reasonable cost-benefit judgments, rather than a thumb on the scale in favor of intelligence interests, greater transparency without higher rates of disclosure may only exacerbate frustration with the VEP outside the national security community.
Absent evidence that the VEP isn’t working as intended, it’s difficult to make the case for an urgent and intricate overhaul. Unless more information surfaces, the CIA leaks don’t suggest that the government hoards too many bugs; RAND’s latest research suggests that it may be stockpiling just enough.