A Role for the Vulnerabilities Equities Process in Securing Software Supply Chains

William Loomis, Stewart Scott
Monday, January 11, 2021, 8:01 AM

The Biden administration has an important opportunity to rebuild and sustain trust in the software ecosystem by reforming the government vulnerability disclosure process into a more transparent and frequently used system.

Headquarters of the National Security Agency in Fort Meade, Maryland (National Security Agency Photo).

Published by The Lawfare Institute
in Cooperation With
Brookings

On Jan. 14, 2020, something unusual happened: The National Security Agency (NSA) publicly announced that it had discovered a critical vulnerability (CVE-2020-0601) deep within Windows 10 and reported it to Microsoft for patching. The disclosure was lauded because of the bug’s severity; buried in a cryptographic library, it would have allowed opportunistic attackers to decipher encrypted web traffic and disguise malware as legitimate code from Microsoft or other vendors. The Atlantic Council’s new project on software supply chain security, Breaking Trust, which we co-authored, shows that this kind of vulnerability results in some of the most consequential and sophisticated software supply chain attacks, often perpetrated by state-backed actors.

Government disclosures to industry like this are an important tool to preserve trust in the software ecosystem among users and vendors and to protect against supply chain attacks. The software supply chain presents a significant source of risk for organizations, from critical infrastructure companies to government security agencies—but the state of security in this supply chain doesn’t match up to the dangers it presents. The Biden administration has an important opportunity to rebuild and sustain trust in the software ecosystem by reforming the government vulnerability disclosure process into a more transparent and frequently used system.

In the past, the NSA has elected to disclose to vendors some of the vulnerabilities it discovers and to retain other exploits for the agency’s offensive use, consistently concealing them from the public. For instance, when the NSA first discovered the SMBv1 vulnerability better known as EternalBlue, the agency elected to exploit it rather than alert Microsoft. Unfortunately, a hacker group known as the Shadow Brokers publicly leaked the vulnerability and several of its cousins in April 2017. While the NSA did notify Microsoft once it learned the exploit had been stolen, the company was left with little time to develop and widely deploy a fix. Further, hundreds of global companies failed to patch their systems expeditiously, and thousands more were unaware there was even a problem with their software. The harm of this delay became apparent roughly two months after the release of the patch, when the WannaCry worm tore through more than 200,000 computers across 150 countries and caused billions of dollars in damages. In the wake of that widespread harm, Microsoft’s president, Brad Smith, called for the complete abandonment of the NSA’s practice of stockpiling vulnerabilities for offensive use.

Reducing risk in the software supply chain requires establishing trust between vendor and user. One of the most important ways to do that is for vendors to digitally “sign” their code so that users can verify its integrity and the identity and authenticity of its source. Vendors often use basic cryptographic tools to sign code, most of which can be found in “crypto” libraries—in this case, crypt32.dll. These libraries contain hundreds of basic functions with specialized roles—producing pseudorandom numbers, encrypting, decrypting, hashing and more. In broad strokes, signed code comes with a certificate containing information like the signature, the algorithm used to verify it, verification of the issuer’s identity, and, critically here, parameters for using the algorithm. In this case, the parameters determine the shape and size of an elliptic curve used to generate public-private key pairs, and CVE-2020-0601 was at its root the failure to correctly verify those parameters.
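To make those mechanics concrete, the sketch below shows the general pattern of signing and verifying code with an elliptic-curve key pair on the NIST P-256 curve, using Python’s cryptography package. It is a minimal illustration of the technique, not the Windows CryptoAPI implementation; the payload and printed messages are invented for the example.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The vendor generates a key pair on a well-known, vetted curve (NIST P-256).
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Signing: hash the code and produce an ECDSA signature with the private key.
code = b"bytes of the executable or update package"  # illustrative payload
signature = private_key.sign(code, ec.ECDSA(hashes.SHA256()))

# Verification: the user's machine checks the signature with the public key
# carried in the vendor's certificate. Tampering with the code, or signing
# with a different private key, makes verification fail.
try:
    public_key.verify(signature, code, ec.ECDSA(hashes.SHA256()))
    print("Signature valid: code is authentic and unmodified.")
except InvalidSignature:
    print("Signature invalid: reject the code.")
```

The scheme is only as strong as the verifier’s insistence that the public key and the full set of curve parameters behind it are the expected ones, which is precisely the check that CVE-2020-0601 botched.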

Ideally, common parameters ensure that all users operate on the same “safe” curve (P-256, for example, is the commonly used 256-bit prime curve recommended by the National Institute of Standards and Technology). Because Windows failed to verify those parameters correctly, attackers could supply their own similar-but-different curves that systems would mistakenly trust. The signature in a certificate is computed from a hash of the code-to-be-signed using the signer’s private key, and it is verified with the corresponding public key. By crafting custom curve parameters, attackers could choose a private key of their own that corresponded to the public key of an already-trusted certificate; because Windows matched only the public key and ignored the atypical curve, the forged certificates validated when they should have been flagged.
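A simplified model of the flaw, with invented type and function names, might look like the sketch below: the vulnerable check matches a presented certificate to a cached trusted root by public key alone, so an attacker who reuses the root’s public key but substitutes a curve whose generator is that very key (making the private key 1) slips through, while the patched check also compares the curve parameters. This is an illustration of the logic, not the actual crypt32.dll code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EccCert:
    """Toy model of an ECC certificate; field names are illustrative."""
    public_key: bytes    # the point Q, encoded
    curve_params: tuple  # (prime, a, b, generator, order) defining the curve

def vulnerable_is_trusted(presented: EccCert, cached_root: EccCert) -> bool:
    # Flawed logic in simplified form: trust is granted whenever the public
    # key matches a cached trusted root, ignoring the curve parameters.
    return presented.public_key == cached_root.public_key

def patched_is_trusted(presented: EccCert, cached_root: EccCert) -> bool:
    # Patched behavior in simplified form: the curve parameters must match too.
    return (presented.public_key == cached_root.public_key
            and presented.curve_params == cached_root.curve_params)

# The attacker reuses the trusted root's public key Q but supplies a custom
# curve whose generator is Q itself, so the private key 1 satisfies Q = 1 * G'.
root = EccCert(public_key=b"Q", curve_params=("p", "a", "b", "G", "n"))
forged = EccCert(public_key=b"Q", curve_params=("p", "a", "b", "Q", "n"))

print(vulnerable_is_trusted(forged, root))  # True: forged chain accepted
print(patched_is_trusted(forged, root))     # False: mismatched parameters rejected
```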

If exploited successfully, code-signing vulnerabilities like CVE-2020-0601 let attackers impersonate trusted programs to bypass many modern security tools. For example, in September 2017, the Chinese-linked group APT17 executed an attack on the popular computer clean-up application CCleaner. The attackers gained access to a trusted developer account for the CCleaner software and used the account to push a malicious update for the software out to users, allowing attackers to vacuum up information about each system. The attack’s first phase infected more than 2 million computers across the globe, seeking out systems at 20 targeted technology companies, onto which the attackers downloaded a second, stealthier payload.

Code-signing vulnerabilities are particularly troubling because they create a self-reinforcing cycle: Supply chain attacks sow uncertainty about the authenticity of the very patches and updates used to fix security holes, which hinders security improvements and enables more attacks. Maintaining trust between developers and users is essential to securing software against attack. Code signing is the foundation of many software supply chain practices, offering a guarantee that the code handed off between machines is genuine.

The Breaking Trust report analyzed 115 attacks and disclosures against the software supply chain, like the CCleaner incident mentioned above, to shine a light on the key processes exploited in these incursions. The report’s dataset revealed that, over the past decade, both the volume and the potential impact of attacks that break code-signing processes have increased. These attacks are not only more common than previously understood but also disproportionately used by state actors to great effect. Key systems of trust in the software supply chain need to be protected from exploitation or systemic abuse by states. And for the vulnerabilities with some of the most disastrous potential, government agencies are the most practiced at and best equipped for finding them; in fact, they are already looking for them as potential tools for offensive operations.

These systems of trust do more than serve themselves: Other security tools rely on them, and they are a key mechanism for improving the security of software in the face of novel attacks. The next time a software company issues an update, will users trust that the code they are being handed is authentic, or will they refuse to update, leaving known-but-unpatched vulnerabilities open to exploitation? We recommend tipping the scales toward disclosure to raise the security baseline and better protect the software ecosystem.

The decision whether to disclose or exploit a vulnerability is a difficult one for government agencies. Disclosure to industry leading to a timely patch improves national and personal security, but stockpiling and using vulnerabilities augments a government’s offensive cyber capabilities, which in turn can benefit national security through intelligence gathering and disrupted adversary operations. While ideally the utility and exploitability of vulnerabilities would guide these decisions—patch easily exploited domestic networks, and attack resource-intensive adversary ones—reality is messier. Software is sold globally, agencies can misjudge the difficulty of exploiting a certain bug, and a vulnerability might compromise devices used by adversaries and domestic entities alike. History has demonstrated that imposing barriers to the flow of software based solely on national boundaries is impractical at best and counterproductive at worst.

The Vulnerabilities Equities Process (VEP) guides those decisions, yet some of its key decision criteria remain secret, and the policy as a whole rests on an executive branch charter rather than statute, an important but still mutable form of policymaking. Increasing public and industry faith in the VEP would strengthen the trust between government and industry upon which the disclosure process relies, especially for vulnerabilities in critical cryptographic systems. Previous efforts, including a study from Columbia SIPA and the draft PATCH Act legislation, suggest helpful changes to codify the VEP, enhance its transparency, and provide more rigorous criteria for the challenging-but-necessary evaluation of vulnerabilities, all of which would better balance potential operational value against public harm. These changes include monitoring for adversary abuse of retained vulnerabilities, retaining vulnerabilities only temporarily for use in specific operations or for predetermined periods of time, providing advance warning to companies about impending disclosures, and reviewing retained vulnerabilities periodically as the risk of attacker use changes.

Ultimately, code signing is more than just the technical assurance of author identity and code integrity: It is the manifestation of trust between user and developer. Breaking the trust in that relationship creates a serious risk to users and to national security more broadly. It is essential to develop a more rigorous and transparent relationship between government and industry to sustain and protect this trust. WannaCry and NotPetya are only the most infamous examples of how the security failings of some can impact the whole, and any assault on code signing breaks trust across the software supply chain. The VEP lives at the core of both of these issues. An improved VEP can play a small but important role in reinforcing this trust and better protecting the key processes that underpin the cyber capabilities of the United States and its allies. The Biden administration has an excellent opportunity to reform the VEP to these ends.

William Loomis is an assistant director with the Atlantic Council’s Cyber Statecraft Initiative under the Scowcroft Center for Strategy and Security, focused on the nexus of geopolitics and national security with cyberspace.
Stewart Scott is an associate director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). He works on the Initiative’s systems security portfolio, which focuses on software supply chain risk management and open source software security policy.