Software Liability Is Just a Starting Point

Trey Herr
Thursday, April 23, 2020, 9:47 AM

To meaningfully change the software ecosystem, liability policies must also establish clear security standards, apply them to the whole supply chain and create incentives for organizations to apply patches quickly.

The offices of the National Institute of Standards and Technology, which could play a role in developing a system of liability for insecure software. (Source: Flickr/Mike, https://flic.kr/p/46dCSF; CC BY-NC-SA 2.0)


The Cyberspace Solarium Commission places an admirable focus on the security of the software ecosystem and lays out several policy levers to improve it, including proposing liability for the “final goods assemblers” of software that fail to responsively patch known flaws in their code. Liability is not a new idea; it has recurred in academic journals and policy discussions over the past five decades. Some vendors have opposed liability since the earliest days of boxed software, but it could help revise the incentives to rapidly and effectively fix software flaws. The implicit argument in favor of liability is that, over time, these revised incentives will aggregate to more secure software and make remaining flaws more challenging to exploit.

But liability is just a start. To make meaningful change in the software ecosystem, a liability regime must also:

  • Apply to the whole software industry, including cloud service providers and operational technology firms such as manufacturing and automotive companies. These firms are important links in the software supply chain.
  • Produce a clear standard for the “duty of care” that assemblers must exercise—the security practices and policies that software providers must adopt to produce code with few, and quickly patched, defects.
  • Connect directly to incentives for organizations to apply patches in a timely fashion.

If There Is to Be Liability, Let It Be for the Whole of the Software Industry

The software industry is no longer a handful of firms selling boxed discs. Some of the largest software projects in the world—bigger even than a modern operating system—are managed by firms like General Motors, Boeing, Lockheed Martin and Siemens. And cloud providers like Google, Amazon and Microsoft are responsible for code that impacts billions of users across the planet. The notion of a “final goods assembler” is a useful way to identify chokepoints in the software ecosystem, and the Solarium recommendation rightly includes assemblers of both software and hardware in its analysis. Where liability is applied and a duty of care is enforced, policymakers must ensure it applies to the whole of the software industry, including operational technology and cloud computing, not just the past decade’s information technology giants.

Rebuild Standards to Produce Secure Code and Patch Quickly

Liability is the legal mechanism to hold a goods and services provider accountable for the quality, security and safety of those goods and services. Determining acceptable quality, security and safety requires clear standards. In the jargon of the legal class, this refers to a “duty of care”—the minimum obligation required of a provider whose products might harm their users. The duty of care becomes critically important in defining the standard of behavior expected of final goods assemblers. An effective standard might well create legal obligations to set “end-of-life” dates for software, remove copyright protections that inhibit security research, or block the use of certain software languages that have inherent flaws or make it difficult to produce code with few errors.

But as a 2016 National Institute of Standards and Technology (NIST) report noted, determining which errors are “sloppy and easily avoidable” is not a trivial matter. Even avoiding simple errors is not an affirmation of sterling quality. A handful of efforts, new and old, try to address this problem. Some look at specific high-impact sectors, such as power generation and distribution or medical device manufacturing. Others are more holistic, such as Microsoft’s Security Development Lifecycle, SAFECode’s decade-plus effort behind its Fundamental Practices for Secure Software Development, and the still-new Framework for Secure Software from BSA. NTIA’s Software Bill of Materials (SBOM) effort is complementary, addressing how organizations track the code they use rather than how it is developed or patched.
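
For readers unfamiliar with the format, here is a minimal sketch of what an SBOM fragment can look like. The field names loosely follow one of the machine-readable formats recognized in the NTIA effort (CycloneDX); the component data itself is invented for illustration.

```python
import json

# A minimal, illustrative SBOM fragment. Field names loosely follow the
# CycloneDX JSON format; the component names and versions are made up.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "openssl", "version": "1.1.1g",
         "purl": "pkg:generic/openssl@1.1.1g"},
        {"type": "library", "name": "zlib", "version": "1.2.11",
         "purl": "pkg:generic/zlib@1.2.11"},
    ],
}

# An organization holding SBOMs like this can answer "do we ship
# component X?" the day a vulnerability in X is announced.
affected = [c for c in sbom["components"] if c["name"] == "openssl"]
print(json.dumps(affected, indent=2))
```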

Standards for the speed and quality of patches are a more nebulous area. Perhaps the best-known example is Google Project Zero’s 90-day disclosure requirement. Project Zero, an uncommonly talented team within Google tasked with researching and disclosing vulnerabilities in popular software, has long enforced a (mostly) mandatory 90-day window for subject firms to issue a patch after receiving a disclosure. The team recently formalized this policy for greater consistency in complex cases and to incentivize firms to issue higher-quality patches.

Don’t Forget About Patch Adoption

The speediest patch in the world matters not a whit if it is not applied. An effective liability regime must consider the full lifecycle of software, including incentives for those applying patches. Here are three possible approaches to complement a software liability regime.

First, NIST could develop standards for patch application, with appropriate categories for a vulnerability’s severity and exploitability, the type of software and system being patched, and the patching organization’s security maturity, measured using any of several maturity models. Exploitability in particular is a somewhat subjective determination, but one that security engineering teams and researchers make regularly.
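
To make that concrete, a standard built on such categories could boil down to a lookup that engineering teams automate. The sketch below is illustrative only: the category names, day counts and maturity scaling are assumptions, not drawn from any existing NIST guidance.

```python
# Illustrative only: these categories and day counts are invented for
# the sketch, not taken from any existing NIST standard.

# Target days-to-patch, indexed by (severity, exploitability).
PATCH_WINDOWS = {
    ("critical", "exploited"): 7,
    ("critical", "likely"): 15,
    ("high", "exploited"): 15,
    ("high", "likely"): 30,
    ("medium", "likely"): 90,
    ("low", "unlikely"): 180,
}

def target_window(severity: str, exploitability: str,
                  internet_facing: bool, maturity: int) -> int:
    """Return a target patch window in days.

    `maturity` is a hypothetical 1-5 security-maturity score. More
    mature organizations are held to tighter deadlines, and
    internet-facing systems get half the usual window.
    """
    days = PATCH_WINDOWS.get((severity, exploitability), 90)
    if internet_facing:
        days = max(1, days // 2)
    # Maturity 5 keeps the base window; maturity 1 doubles it.
    multiplier = 2 - (maturity - 1) / 4
    return max(1, round(days * multiplier))

print(target_window("critical", "exploited", internet_facing=True, maturity=4))
```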

Second, the Defense Department and the General Services Administration could include service-level agreements (SLAs) in all federal technology procurement contracts, measuring patch application performance against the NIST standards proposed above. Failure to meet those SLAs would result in a penalty when evaluating a firm’s performance against the contract. This approach would be similar to those proposed by advocates of elevating security as a criterion in federal acquisitions.
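
Evaluating performance against such an SLA could then be largely mechanical. The sketch below, with an invented record structure and numbers, computes the kind of on-time rate a contract penalty clause might reference.

```python
from dataclasses import dataclass

@dataclass
class PatchRecord:
    """One patch event: the contractual deadline and the actual days taken."""
    deadline_days: int
    days_to_apply: int

def sla_compliance(records: list[PatchRecord]) -> float:
    """Fraction of patches applied within their SLA deadline."""
    if not records:
        return 1.0
    on_time = sum(r.days_to_apply <= r.deadline_days for r in records)
    return on_time / len(records)

# Hypothetical contractor performance over one reporting period.
history = [PatchRecord(7, 5), PatchRecord(30, 41), PatchRecord(90, 60)]
print(f"On-time patch rate: {sla_compliance(history):.0%}")  # 67%
```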

Third, insurance firms offering product lines that cover cybersecurity, in whole or in part, could base part of their risk assessment of new customers on self-reported rates of patch adoption—either the NIST set proposed above or some other consensus-based standards. Comparing rates of patch adoption across comparable customers would yield insights on customers’ security maturity as well as indirect measures of the quality of patches from different final goods assemblers.
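
As a sketch of how an underwriter might use such data, the example below compares one customer’s self-reported rate against the median of comparable customers. The screening logic and all the rates shown are purely illustrative.

```python
from statistics import median

def screen_customer(customer_rate: float, peer_rates: list[float]) -> str:
    """Compare a customer's self-reported on-time patch-adoption rate
    (0.0-1.0) against the median of comparable customers."""
    benchmark = median(peer_rates)
    if customer_rate >= benchmark:
        return f"at or above peer median ({benchmark:.0%})"
    return f"below peer median ({benchmark:.0%}): flag for pricing review"

# Illustrative self-reported adoption rates from comparable customers.
peers = [0.92, 0.81, 0.88, 0.75, 0.95]
print(screen_customer(0.70, peers))
```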

Counting patches is not a proxy for measuring the security of code. Understanding which organizations lag their peers in applying patches and incentivizing more timely patches will increase their security. There are prospects for using patch application performance to more widely inform the marketplace—especially between prospective business partners and within supply chains. But developing the standards and implementing them for the federal enterprise is a start.

Conclusion

Liability is a subject that gets people riled up. And many have anticipated its return to the cybersecurity policy conversation for several years now. EU-led efforts to establish a certification regime for the security of software, among other information and communications technology products, will benefit from specific and technically informed discussion of what constitutes good code and how to patch effectively. A NIST effort to produce a Secure Software Development Framework could also receive additional investment and become a focus of effort—something positive and timely for the standards organization.

Small- and medium-sized software developers, open source projects, and even academics will benefit from broadly used (and thus more easily automated) secure development practices and tools. Large players in the software development business will benefit as well. At large technology firms like Facebook or Ford, detailed external standards will help internal efforts to unify and enforce secure development practices.

And at the end of the day, users would benefit as well. Software has eaten the world—making it imperative that we improve the quality and speed of repair for code that permeates our lives from dawn to dusk. As a nasty flaw in home internet routers (with a patch available for years but not yet applied by many) showed earlier this year, the security of even the smallest devices we buy is as much about protecting our community as it is about keeping our own homes safe.

The debate over software liability is just a starting point. What’s still needed is to ensure liability applies throughout the software supply chain (including cloud providers and operational technology firms), that it stems from clear standards and that these standards are paired with incentives to drive the adoption of patches once produced. With appropriate legislative measures, help from the plucky boffins at NIST and effort from the Defense Department, Department of Homeland Security and the sector-specific agencies to drive adoption, the product could be that rarest of outcomes in Washington: something truly useful.


Trey Herr is Assistant Professor of cybersecurity and policy at American University’s School of International Service and director of the Cyber Statecraft Initiative at the Atlantic Council. At the Council his team works on the role of the technology industry in geopolitics, cyber conflict, the security of the internet, cyber safety and growing a more capable cybersecurity policy workforce.
