Cybersecurity & Tech

Advancing Secure by Design through Security Research

Jen Easterly, Jack Cable
Friday, April 25, 2025, 10:30 AM
It is essential for U.S. policymakers to actively protect and promote the role of security research within an open and transparent ecosystem.
Cybersecurity. (École polytechnique - J.Barande, https://www.flickr.com/people/117994717@N06; CC BY-SA 2.0, https://creativecommons.org/licenses/by-sa/2.0/deed.en)

Published by The Lawfare Institute
in Cooperation With
Brookings

The People’s Republic of China (PRC) is currently preparing for war by targeting America’s critical infrastructure. As you read this, PRC cyber actors are embedded deep within our most sensitive critical infrastructure, preparing to unleash mass disruption in the event of a Chinese invasion of Taiwan. The PRC’s goals are to prevent the United States from defending our allies by deterring our ability to project power into the Pacific, and to weaken America’s resolve by inciting societal chaos through disruptive attacks against “everything, everywhere, all at once”: transportation, telecommunications, power, water, and more. What has been discovered to date is likely just the tip of the iceberg.

Existential threats demand decisive actions. One of the most effective ways to counter the PRC’s strategy is by securing the technology products that underpin U.S. critical infrastructure. The vulnerabilities China is exploiting are not inevitable; with enough investment, the majority can be prevented, thereby raising our national security baseline. This is a national-scale effort that requires collaboration among technology manufacturers, their customers, and government.

While such an effort will require a multi-faceted approach—including, for example, software manufacturers working to eliminate entire vulnerability classes from their products—one crucial piece of the puzzle involves the role of security research in fostering a more secure and transparent ecosystem. For decades, security researchers have driven manufacturers towards better security practices. While the U.S. has made significant progress towards normalizing vulnerability disclosure, outdated laws continue to deter security research. With the PRC now compelling premature vulnerability disclosures to the Chinese government, it is essential for U.S. policymakers to actively protect and promote the role of security research within an open and transparent ecosystem.

Dismantle Barriers to Security Research

In recent years, the U.S. has made significant progress towards normalizing vulnerability disclosure. In particular, the Digital Millennium Copyright Act's (DMCA) exemption for good-faith security research, as defined in the Copyright Office's triennial rulemakings, and the Justice Department's updated charging policy for Computer Fraud and Abuse Act (CFAA) violations both clarify the important role that security research plays.

Nonetheless, significant barriers remain. The CFAA criminalizes accessing protected computers without authorization, which can have a chilling effect on good-faith security research. Furthermore, contractual restrictions, such as overly restrictive End User License Agreements, often contain anti-reverse engineering clauses that prohibit security researchers from understanding how a piece of software behaves.

Security researchers should be able to test any piece of software for vulnerabilities under standard Coordinated Vulnerability Disclosure (CVD) practices. This involves mutual commitments. Manufacturers agree not to bring legal action against security researchers. Security researchers follow CVD norms, including performing non-invasive testing and only exploiting the minimum amount necessary to prove that a vulnerability is present. As part of this, manufacturers shouldn’t place arbitrary barriers to disclosure—after all, the goal of CVD is to improve the security of the product in a transparent manner.

Fortunately, we don’t have to start from scratch. Thousands of private companies now operate Vulnerability Disclosure Policies (VDPs), and many of those contain safe harbor language committing to not take legal action against good-faith security research conforming to their policies. We must build on these strong norms that have been established by making sure that every company that builds software products operates a VDP.

Incentivize Vulnerability Disclosure by Software Vendors

Looking to other industries, the starting point for safety improvement is always data. In aviation, the National Transportation Safety Board tracks every accident and publishes detailed root-cause analysis. In the automotive sector, the National Highway Traffic Safety Administration publishes detailed statistics on car crashes and their causes. These databases not only inform the public but also enable manufacturers to analyze defects and prevent future incidents. When a plane crashes, it's treated as an anomaly, allowing the industry to learn from mistakes and improve.

We are, unfortunately, far from this level of transparency in software. The closest analog—the Common Vulnerabilities and Exposures (CVE) database—is a critical resource but an insufficient one. For one, software manufacturers inconsistently disclose vulnerabilities, and when they do, they often don’t provide enough information to understand the scope of the vulnerability. We must normalize vulnerability disclosure and frame it as a positive signal when a manufacturer is transparent about their security posture.

As part of this effort, we must avoid shaming vendors for being proactive in disclosing vulnerabilities. The fact that a CVE exists is not alone a sign that a manufacturer is doing something wrong. To the contrary, we should applaud manufacturers that work closely with security researchers and are candid about their security flaws.

Software manufacturers should be required to file CVEs for all flaws that could have impacted the security of their users, whether or not they required action by users to fix them. CVEs are not solely notifications that users should patch their software—they play an important role in helping build a global set of issues that everyone can learn from. CVEs should be timely, accurate, and complete—in particular, they should include the Common Weakness Enumeration (CWE) field indicating the root cause of the vulnerability.
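To make this concrete, the sketch below shows what a timely, accurate, and complete disclosure might look like in the spirit of the CVE Program's JSON record format, with the CWE root cause attached. The CVE ID, vendor, product, and weakness here are invented for illustration; a real record carries additional required metadata defined by the CVE Program's schema.

```python
# Simplified, hypothetical sketch of a CVE record in the spirit of the
# CVE JSON 5.x format. The CVE ID, vendor, and product are invented.
record = {
    "dataType": "CVE_RECORD",
    "dataVersion": "5.1",
    "cveMetadata": {"cveId": "CVE-2025-00000", "state": "PUBLISHED"},
    "containers": {
        "cna": {
            "descriptions": [{
                "lang": "en",
                "value": "SQL injection in the ExampleCo Widget admin API "
                         "allows an unauthenticated attacker to read user data.",
            }],
            "affected": [{
                "vendor": "ExampleCo",
                "product": "Widget",
                "versions": [{"version": "0", "lessThan": "2.4.1",
                              "status": "affected", "versionType": "semver"}],
            }],
            # The CWE entry ties the flaw to its root cause, so defenders
            # and researchers can track vulnerability classes over time.
            "problemTypes": [{
                "descriptions": [{"type": "CWE", "lang": "en",
                                  "cweId": "CWE-89",
                                  "description": "CWE-89: SQL Injection"}],
            }],
        }
    },
}

def has_root_cause(rec: dict) -> bool:
    """Return True if the record names at least one CWE root cause."""
    cna = rec.get("containers", {}).get("cna", {})
    return any(
        d.get("type") == "CWE" and d.get("cweId")
        for pt in cna.get("problemTypes", [])
        for d in pt.get("descriptions", [])
    )

print(has_root_cause(record))  # → True
```

A record like this is useful even when no user action is required: the CWE field is what lets analysts aggregate records into the "global set of issues" described above and measure whether whole vulnerability classes are shrinking.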

A Path Forward

To combat PRC threats, strengthen the role of security research, and ensure a more Secure by Design future, Congress should:

Enact changes to anti-hacking laws to exempt good-faith security research.

The security research exemption the Library of Congress has included in its triennial DMCA rulemakings has been effective. Congress should codify this exemption and strengthen its definition of good-faith security research, ensuring in law the right to reverse engineer and test software locally without breaking copyright law.

Additionally, Congress should codify the Justice Department’s CFAA charging policy as an exemption to the CFAA regarding good faith security research. The CFAA is perhaps the biggest barrier to security research today. Countries such as Belgium, France, and Lithuania have reformed their anti-hacking laws such that good-faith security research is not a crime. The United States should follow suit and exempt good-faith security research from the CFAA.

Empower the FTC to require that software vendors of a certain size must have a vulnerability disclosure policy and must file CVEs for vulnerabilities in their products.

While many software manufacturers have published vulnerability disclosure policies and are proactive in filing CVEs, not all follow this same standard of transparency. Congress should empower the FTC to require software manufacturers above a certain size to publish and abide by a vulnerability disclosure policy that meets minimum standards, and consistently file CVEs for vulnerabilities in their products, in line with items 11 and 12 of CISA’s Product Security Bad Practices.

VDPs should, at minimum:

  • Authorize testing by members of the public and commit to not recommending or pursuing legal action for good-faith violations of the policy. This is often referred to as a legal "safe harbor."

  • Allow any member of the public to report a vulnerability in any of the organization's systems. Restrictions on who can report or what systems they can report on discourage disclosure and may lead to vulnerabilities going unreported.

  • Not arbitrarily restrict public disclosure. CVD should be a mutual process, and a culture of transparency is crucial. Organizations can request that researchers give them a reasonable amount of time to fix vulnerabilities, but they should not place blanket non-disclosure requirements on researchers.
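A VDP only works if researchers can find it. One widely adopted convention for making a policy discoverable is RFC 9116's security.txt: a small plain-text file served at /.well-known/security.txt that points researchers to the organization's reporting channel and published policy. The sketch below uses placeholder addresses; the contact and policy URLs are illustrative, not real.

```text
# Illustrative /.well-known/security.txt per RFC 9116 (placeholder values)
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00Z
Policy: https://example.com/vulnerability-disclosure-policy
Preferred-Languages: en
```

The Contact and Expires fields are required by the RFC; the Policy field is where an organization would link the safe-harbor and disclosure terms described above.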

* * *

The time to act is now to stay ahead of PRC threats. We must preserve and strengthen the open, collaborative nature of security research in the United States and globally. By lowering barriers for security researchers and establishing transparent records of software vulnerabilities' root causes, we can eliminate entire classes of vulnerabilities in technology underpinning our critical infrastructure. This, in turn, will bolster our defenses against adversaries and secure the technological foundation we rely on.


Jen Easterly is Former Director of the Cybersecurity and Infrastructure Security Agency. Prior to CISA, she was Head of Firm Resilience at Morgan Stanley. A combat veteran, she served multiple deployments in the Army, where she helped create US Cyber Command and commanded the Army’s first cyber battalion. A West Point graduate and Rhodes Scholar, she is a two-time recipient of the Bronze Star and numerous awards, including the George C. Marshall Award in Ethical Leadership.
Jack Cable previously worked as a Senior Technical Advisor at the Cybersecurity and Infrastructure Security Agency.