Incentives for Improving Software Security: Product Liability and Alternatives

Steven B. Lipner
Tuesday, May 14, 2024, 12:34 PM

Tort liability is the wrong approach to improving software security; process transparency and Executive Order 14028 offer a path forward.

Programming code on screen (Nemuel Sereti, https://www.pexels.com/photo/programming-code-on-screen-6424584/; Public Domain)


The 2023 National Cybersecurity Strategy commits the U.S. government “to develop legislation establishing liability for software products and services” and to establish an “adaptable safe harbor” for vendors who securely develop their software. The strategy correctly seeks to shift liability from the end user (who is in a poor position to effectively and efficiently remedy or mitigate the problem posed by software vulnerabilities) to the vendor (who is in a better position)—but the tort model is not the only way to effect that shift in liability.

There is a significantly better way to reallocate this liability and the associated economic costs. As described in more detail below, this approach builds on another U.S. government initiative that is well along in implementation: Executive Order 14028. Expanding the scope of the executive order’s mandates, curing some of its omissions, and requiring vendor transparency about secure development practices will likely result in better software security, better information for customers, and market pressure on vendors to continue to improve their practices.

An Introduction to Software Security

The government has been sponsoring research, implementing development programs, and attempting to incentivize better secure development practices by commercial software vendors since the early 1970s. Despite more than 50 years of effort, software security remains a significant challenge for vendors (companies that create software products and services) and for users (ranging from individual consumers to large organizations such as the government). Security remains imperfect, and as properly functioning computer systems have become more critical to users and systems have become more interconnected, the real-world impact of software security errors has become more serious.

Broadly, the engineering problem of building secure software can be broken up into two subproblems:

  • Designing software that includes sufficient controls to restrict access to information and to audit users’ activities to ensure accountability. Responding to the design subproblem leads developers to introduce passwords and other mechanisms for authenticating users, access control lists and other mechanisms for restricting users’ activities, and audit records to ensure accountability. The scope of design activities also includes protection mechanisms that control access to memory or encrypt data at rest or in transit.
  • Implementing software that is free of errors that could allow a hostile attacker to evade an otherwise secure design and read, modify, or delete data or disrupt system operation.

Principles for secure design date back to the mid-1970s, with the best known being articulated in a 1975 paper by Jerome H. Saltzer and Michael D. Schroeder. In recent years, the Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) have revisited the problem of secure design and emphasized the importance of security by default (one of the Saltzer and Schroeder principles that states that systems should protect information even if their user takes no action to make them secure). These principles drive the definition and selection of the security features listed above such as passwords, access control lists, audit logs, encryption systems, and more.

Through the early 1990s, researchers and policymakers believed that the important aspect of building secure software was secure design—they implicitly assumed that if the design was secure, programmers could and would for the most part implement it correctly and securely. In the 1990s, though, vulnerability researchers began to discover (or rediscover) classes of implementation errors that were both pervasive and dangerous. Memory safety errors, typified by buffer overruns, were the first to be rediscovered and are still pervasive after 30 years. CISA, NSA, and the Office of the National Cyber Director have recently published reports on memory safety—but the fact that such reports are still necessary 30 years after wide awareness of the problem, and 50 years after its discovery, is a testament to the difficulty of solving it. The CISA-NSA report presents a picture of the limitations of current techniques and depicts significant obstacles that would have to be overcome in making the transition to programming languages that are not susceptible to such errors. Other implementation errors, including cross-site scripting and SQL injection, were discovered and proved to be damaging and pervasive, although, as the CISA-NSA report points out, memory safety errors retain the top spot, accounting for around 70 percent of reported vulnerabilities in major products.
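
To make the error class concrete, here is a minimal sketch in C—hypothetical code, not drawn from any vendor’s product—showing the classic stack buffer overrun and a bounded alternative:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical illustration of a classic stack buffer overrun. The buffer
 * holds 16 bytes, but strcpy copies as many bytes as the caller supplies,
 * so a longer name silently overwrites adjacent stack memory. */
void greet(const char *name) {
    char buffer[16];
    strcpy(buffer, name);          /* no bounds check: the vulnerability */
    printf("Hello, %s\n", buffer);
}

/* A bounded copy removes this particular overrun, but a developer must
 * remember to do this, with the right limit, at every call site -- one
 * reason the error class has persisted for decades. */
void greet_safely(const char *name) {
    char buffer[16];
    snprintf(buffer, sizeof(buffer), "%s", name);
    printf("Hello, %s\n", buffer);
}

int main(void) {
    greet("reader");               /* safe only because this input is short */
    greet_safely("a name far longer than sixteen characters");
    return 0;
}
```

Static analysis tools and compiler mitigations can flag or blunt the pattern in the first function, but, as discussed below, they do so imperfectly.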

In response to implementation vulnerabilities, and especially common examples such as memory safety errors, researchers and software developers have created and adopted tools that scan program source code or test finished products and attempt to detect and report vulnerabilities. However, implementation errors (especially memory safety errors) come in a wide variety of forms and—as mentioned above and discussed in the CISA-NSA report—even the best tools are prone to false positives that divert developer time and effort from work that actually contributes to improved security, and to false negatives that allow errors to go undetected. Memory safety errors are a problem only with code written in certain programming languages (primarily C and C++), so researchers have created alternatives—such as Rust—that are immune to those errors; the CISA-NSA report advocates adoption of those languages, and some organizations have started down that path. But as the report recognizes, converting a product or service that includes tens of millions of lines of source code into a new language will, at best, be a multiyear effort.

Design or logic errors are also a rich source of software vulnerabilities. Design vulnerabilities can range from architectural omissions—such as designing a system that does not provide for encryption of remote connections—to omissions of specific controls—such as failing to include a function call that would validate access to a specific type of storage object—to low-level errors in programming logic—such as failing to require that a software component check the expiration date of an encryption certificate. 
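
To illustrate the second kind of omission, the sketch below—hypothetical C code in which the can_access helper and the record layout are invented for this example—shows a handler that compiles and runs correctly but never performs the access check the design intended:

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Hypothetical access-check helper; a real product would consult an
 * access control list or a policy engine here. */
static bool can_access(const char *user, const char *record_path) {
    (void)record_path;
    return user != NULL && strcmp(user, "admin") == 0;  /* toy policy for this sketch */
}

/* Design/logic error: nothing here is "wrong" at the implementation level.
 * The developer simply omitted the access check, so any caller can read
 * any record. */
FILE *open_record(const char *user, const char *record_path) {
    (void)user;                          /* access check omitted */
    return fopen(record_path, "r");
}

/* Intended design: every path to a record goes through the check. */
FILE *open_record_checked(const char *user, const char *record_path) {
    if (!can_access(user, record_path))
        return NULL;
    return fopen(record_path, "r");
}

int main(void) {
    FILE *f = open_record_checked("guest", "/data/records/1234.txt");
    printf(f ? "opened\n" : "access denied or file missing\n");
    if (f)
        fclose(f);
    return 0;
}
```

No scanning tool can reliably flag the vulnerable version, because the tool has no way to know that a check was intended in the first place—which is why design errors are typically found by review or threat modeling rather than by scanning.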

Design vulnerabilities are less prevalent than implementation-level errors but arguably more difficult to avoid. The Saltzer and Schroeder principles are well understood but often difficult to apply in practice. Most system designers incorporate security features, in large part because the kinds of security features appropriate to different classes of products are widely known or mandated by government specifications. But design-related vulnerabilities almost always result from developers’ failures to incorporate security controls over specific classes of data or operations. Although they are less common than implementation errors, design errors do occur and can have devastating effects. The MITRE list of the 25 most dangerous software weaknesses from the Common Weakness Enumeration (CWE) database includes nine design-level vulnerabilities (numbers 6, 10, 11, 13, 18, 20, 22, 24, and 25). To take a specific example, the Stuxnet malware that attacked an Iranian nuclear enrichment plant exploited four previously unknown (“zero-day”) design-level vulnerabilities in the Windows operating system. 

Eliminating design-level vulnerabilities is a much more difficult task than eliminating implementation vulnerabilities. A technique called threat modeling provides developers with a structured way to enumerate a system’s data and operations and to search for unintended operations that can result in a successful attack. While threat modeling can be effective, detailed threat modeling is a tedious task, and success depends heavily on the talents and intuition of the person or team doing the work. A more structured technique called formal verification attempts to construct a mathematical proof of the correctness of a system’s design (and sometimes its implementation). Formal verification can be extremely effective, but formal verification tools have not yet scaled to support full verification of commercial-scale products, and the process requires highly trained specialists.

A key consideration that applies to both design- and implementation-level vulnerabilities is that it has proved impractical, or perhaps impossible, to eliminate all vulnerabilities in a commercial-scale software product, or even to estimate the number of remaining vulnerabilities. Software products or components that are small enough and well enough designed to undergo complete formal verification may be an exception to this limitation, but even for those systems, the result is only a mathematical proof that the system corresponds to a mathematical model of its requirements and a demonstration that the system resisted a competent security testing team for a defined period of time.

Limitations on actual security result in part from the discovery of new attack techniques—new classes of attacks rather than individual vulnerabilities—and in part from the limitations of the tools that can be used to detect vulnerabilities. For example, in 2000, several years after the wide adoption of the web, vulnerability researchers identified the class of vulnerability called cross-site scripting (XSS). Prior to that discovery, web developers routinely wrote code that echoed untrusted input into web pages without sanitizing it, allowing an attacker to inject script that a victim’s browser executes with the privileges of the trusted site—script that can then take unintended actions on the user’s behalf, such as directing a payment. Like memory safety errors, cross-site scripting has proved easy to describe but remains difficult to eliminate. It holds second place on the 2023 MITRE CWE list of the most dangerous software weaknesses. Spectre and Meltdown, two more recent classes of design (rather than implementation) errors discovered in 2017, also changed the security community’s understanding of threats to computer systems. These specific vulnerabilities enabled attackers to exploit features in modern computer hardware to achieve unauthorized information disclosure. Perhaps more important, they revealed a new class of vulnerabilities that appears to be pervasive and that researchers have since explored to find additional specific weaknesses.
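
The cross-site scripting pattern described above can be sketched as a minimal, old-style CGI handler in C (hypothetical code; the page content is invented for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical CGI handler illustrating reflected cross-site scripting.
 * QUERY_STRING comes from the request URL and is attacker-influenced.
 * Writing it into the page unescaped means a crafted link containing
 * <script>...</script> runs in the victim's browser in this site's
 * context. The fix is to HTML-escape untrusted input before output. */
int main(void) {
    const char *query = getenv("QUERY_STRING");

    printf("Content-Type: text/html\r\n\r\n");
    printf("<html><body>\n");
    printf("<p>You searched for: %s</p>\n", query ? query : "");  /* unescaped: the vulnerability */
    printf("</body></html>\n");
    return 0;
}
```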

Many software vendors understand the need to deliver software that can resist hostile attacks and have adopted practices to integrate security into their software development processes. Those practices incorporate developer training, programming standards, use of security analysis tools, and a feedback loop in which newly discovered vulnerabilities are used to guide updates to training, standards, and tools with the aim of preventing similar problems from recurring. There are a variety of guides to secure development processes, including the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF), the SAFECode Fundamental Practices for Secure Software Development, and the BSA Framework for Secure Software. All are similar and, because of the wide variety of security analysis tools, programming languages, software applications, and technology platforms in use, all leave the specifics of secure development (for example, which tool outputs are treated as “must fix” and how exhaustive testing must be) to the individual development organization. The NIST SSDF refers to this decision on specifics as taking a “risk management” approach: The realities of vulnerability trends and technology inform the development organization’s judgments about which techniques are important, which are less important, and which can be ignored. For example, if a product is implemented entirely in a memory-safe language, the practice of “fuzz testing” is almost certainly not needed to detect memory safety errors (although differently focused fuzz testing may still be used to detect other kinds of errors).
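
As a concrete example of such a risk management choice, the sketch below shows what fuzz testing looks like in practice for a hypothetical C parsing routine (parse_record is invented for this example); it uses the libFuzzer entry point supported by clang. For a component written in a memory-safe language, this particular harness would have little to find—which is exactly the kind of judgment the SSDF leaves to the development organization.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical function under test: parses a length-prefixed record.
 * The bounds checks are exactly what a fuzzer exercises; removing one
 * of them would produce a crash under AddressSanitizer. */
static int parse_record(const uint8_t *data, size_t size) {
    char field[32];
    if (size < 1)
        return -1;
    uint8_t declared_len = data[0];
    if (declared_len > size - 1 || declared_len >= sizeof(field))
        return -1;                       /* reject inconsistent or oversized lengths */
    memcpy(field, data + 1, declared_len);
    field[declared_len] = '\0';
    return 0;
}

/* libFuzzer entry point: the fuzzer calls this repeatedly with mutated
 * inputs and reports any crash or sanitizer-detected memory error.
 * Build (assumed toolchain): clang -g -fsanitize=fuzzer,address fuzz_parse.c */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;
}
```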

Vendors create secure development processes by selecting specific tools, training, and practices that implement the intent of each requirement in the guide and are specific to the vendor’s products and processes. Vendors also operate tracking systems to ensure that every tool or process has been applied to every software component and that all detected problems have been analyzed and fixed if necessary. (The prevalence of tool false-positive results makes the analysis necessary.) As mentioned above, vendors analyze reported vulnerabilities and where possible update their secure development processes to address new classes of vulnerabilities. Vendors also seek to make systemic improvements, for example, by replacing older protocols or formats with safer ones and by making the transition to memory-safe languages and to hardware platforms with advanced security features.

Vendors adopt “bug bars” such as a publicly released example from Microsoft that characterize the effects of classes of vulnerabilities and provide guidance on the urgency of fixing particular problems. If the best tool that the vendor can use to detect a class of vulnerability has a greater than 50 percent false-positive rate, the vendor must face the risk that developers will not trust the tool and may become so “trained” to expect false positives that they will overlook true positives. The bug bar may mandate the use of such a tool, but vendors exercise such mandates selectively to avoid training their development teams to mistrust all tool outputs.

Vendors who seek to make their software more secure face a fundamental trade-off: Given the practical impossibility of creating completely secure software, at some point they must decide that software is “secure enough” to release, even though it will inevitably contain some unknown remaining vulnerabilities. The decision is one of risk management and technical feasibility: If software is not sufficiently secure, vendors will face customer and press complaints, as Microsoft did in the early 2000s, and potentially lose business. If they delay releasing software to search for remaining vulnerabilities, they face customer complaints about missed features and uncompetitive products. A new product version that has undergone in-depth security review and analysis is likely to be more resistant to attack than an earlier version, even though it will not be perfectly secure. 

The Liability Framework

In January, James Dempsey released a paper titled “Standards for Software Liability: Focus on the Product for Liability, Focus on the Process for Safe Harbor.” Dempsey’s concept has been the subject of much discussion among a group composed of academic attorneys and a few cybersecurity researchers. His concept also appears to be a leading candidate for consideration as the Biden administration creates the liability regime the National Cybersecurity Strategy calls for, so it’s important to consider it in the light of the technical realities of software security. (This article does not attempt to address Dempsey’s legal analysis.)

To summarize, Dempsey proposes two levels of software security assurance: a floor that would be based on a set of problems (referred to in the paper as “features”) that software should never manifest, and a safe harbor that would be based on best practices for secure development. If software exhibited a vulnerability that was precluded by the floor, its vendor would be liable for harm caused by exploitation of the vulnerability. If software met the safe harbor requirements, it would be protected from liability. Vendors of software that met the requirements of the floor but not the safe harbor would be exposed to liability and required to defend their decisions and products, but they would not automatically be found liable.

In concept, the idea of a floor that would penalize vendors that delivered software that included any of a set of “obvious” errors is appealing. But Dempsey’s proposal—and his reference to security errors as features—overestimates the range of errors that can reliably be avoided. While it is possible to enumerate some that are important—Dempsey refers to unchangeable default passwords, and it’s easy to add software that can’t be updated, use of “home-grown” encryption algorithms that have not undergone wide review, and some others—many important classes of vulnerabilities are not clearly identifiable features but result from wide ranges of coding errors that are all too easy to make and all too hard to avoid. The previous section discussed the different kinds of memory safety errors (buffer overruns and the like) and the limitations on the tools for finding and removing them. To take two additional examples, injection errors such as cross-site scripting and path traversal errors can result from numerous obscure forms of text strings and are also challenging to detect reliably.
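
To illustrate why such errors resist a feature-style prohibition, here is a hypothetical C sketch of a path traversal flaw (the directory layout is invented): the handler joins a client-supplied name to a base directory without confirming that the result stays inside it, and naive filters for the literal string “../” miss encoded and platform-specific variants.

```c
#include <stdio.h>

/* Hypothetical path traversal: a request for "../../etc/passwd" escapes
 * the intended directory. Rejecting the substring "../" still misses
 * variants such as "..\\" or percent-encoded forms, which is one reason
 * the class is hard to rule out as a simple prohibited "feature". */
FILE *open_report(const char *requested_name) {
    char path[256];
    snprintf(path, sizeof(path), "/var/app/reports/%s", requested_name);
    return fopen(path, "r");          /* no check that the path stays under reports/ */
}

int main(void) {
    FILE *f = open_report("../../etc/passwd");   /* escapes the intended directory */
    if (f)
        fclose(f);
    return 0;
}
```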

It might be possible to define a “floor” that could be applied universally and without requiring endless interpretations, but such a floor would only define minimal practices for software security. Software that just met the requirements of the floor could still be woefully vulnerable, so it is not clear that the “floor” provides much practical benefit to software users.

Dempsey proposes focusing on the developer’s process to define a safe harbor, and this concept aligns with current practices in industry and government. However, although Dempsey hints at the complexities of creating a secure development process for a real-world organization, he does not fully acknowledge them or deal with their implications. Dempsey condemns the NIST SSDF as too general and high level to serve as a standard for vendor liability, and the same condemnation would probably apply to the SAFECode and BSA documents cited above.

Dempsey states that instead of being general and high level, process standards for secure development need to be very specific. As an example of a suitable guide from another domain, Dempsey cites the 918-page National Electrical Code, and he refers positively to the 168-page Microsoft Security Development Lifecycle (SDL) version 5.2 (dating to 2011) as an example of the level of specificity needed. In doing so, Dempsey fails to recognize the role of higher-level guides such as the SSDF: SDL v5.2 is detailed, but a price of that detail is that the resulting guide is limited to a specific set of programming languages, tools, and software platforms, and probably to a specific development lifecycle and organizational culture.

To take one example, SDL v5.2 includes a list of 32 “must-fix” memory safety errors that can be reported by the static analysis feature of Visual Studio 2008 for programs written in C or C++. That list was important to the security of C and C++ programs in the late 2000s, but Visual Studio 2008 has long been replaced by a newer tool with its own longer list of “must-fix” errors. And a list of errors for Visual Studio 2008 or Microsoft’s new tool is of little value to vendors that use Fortify, Coverity, Veracode, or any of the other static analysis tools on the market or to vendors who write code in languages other than C or C++. 

The role of guidance such as the SSDF is critical, but following the SSDF only starts a vendor down the path to creating an actionable, detailed standard applicable to its own tools, languages, and platforms. And, as the Microsoft example suggests, such standards are not static. They must be updated to reflect new classes of vulnerabilities and attacks, new languages and platforms, and new or improved tools. The creation and sustainment of such standards requires highly skilled teams with expertise in both security and the specific technologies that a vendor employs. In theory, one might create a universal standard that included a separate set of required practices for every combination of tool version, programming language, and compiler version, but the effort, coordination, and review required to create and maintain such a standard would be enormous. And the cost of that effort, coordination, and review would include loss of the agility that enables vendors to adapt to new classes of vulnerabilities and new improvements in tools and techniques.

If every vendor must create its own secure development standard, it is necessary to confront the question of which vendors’ standards are acceptable. Some other government-mandated security schemes operate by licensing evaluation labs that review vendors’ products or processes and render judgments about their sufficiency. It might be conceivable to create an evaluation scheme that would review vendors’ secure development standards and judge the adequacy of each vendor’s interpretation of the SSDF in the context of its specific tools, languages, and platforms. But the skills and experience necessary to render such judgments are very rare and are learned primarily by creating and sustaining secure development processes. The engineers who have such skills often prefer to work on creating secure development processes rather than reviewing them. Few certification labs would have the required competence (although many would surely claim to have it), and none would be likely to have the knowledge of the vendor’s code base and development practices required to tell an effective and implementable process from one that merely produced documents designed to convince evaluators.

In fact, the government abandoned its 2004 attempt to create a product security evaluation scheme that would consider secure development processes because of the difficulty of the task and skepticism by the U.S. and other governments involved about the availability of talented evaluators. In general, although third-party certification of products’ security provides vendors with a relatively efficient way of knowing that they have “met the rules,” it has proved effective only at identifying correct implementations of very specific and well-defined security features such as encryption.

The paragraphs above have focused on the adequacy or inadequacy of the vendor’s process because that is the more challenging problem in assessing a vendor’s approach to software security. The vendor must also implement the process: run the tools, perform the testing and reviews, find and fix security-related errors. It is simpler to determine whether this requirement has been met than to assess the vendor’s process: The vendor’s bug-tracking system and documentation and source code repositories provide evidence that the vendor has executed the tasks that the process requires. But to deliver secure software, the vendor must do both: create an effective secure development process and implement it rigorously.

Dempsey does not advocate for a certification scheme in his paper. Absent a certification scheme, would vendors’ attestation to best practices be found to be sufficient to claim a safe harbor against liability? Almost certainly not. A much more likely scenario would see a flurry of lawsuits against vendors for purported harms caused by software vulnerabilities whose presence is claimed to “prove” that the vendors did not meet the safe harbor requirements. Given the vulnerabilities that remain in even the best software today, the flurry would probably be more like an avalanche. Even if most such suits were dismissed, they would have the effect of forcing vendors to divert resources to legal defenses and away from building secure software—arguably the phenomenon that establishing a safe harbor is intended to prevent.

In fact, both third-party certification and the need for legal defense of a safe harbor can be interpreted as destructive of software security. The need for legal defense incentivizes vendors to divert resources from tools, training, and process improvement to attorneys, defensive documentation, expert witnesses, and record-keeping in preparation for a “day in court.” Third-party certification incentivizes vendors to spend resources on certification contractors and the documents they consume. In either case, the potential for liability has created a powerful incentive for the vendor to invest in liability defenses. In a world of finite budgets, the software delivered to end users and system integrators may be less secure, not more. 

An Alternative for Incentivizing Secure Software

Dempsey’s proposal for vendor liability is unlikely to be effective in incentivizing the creation of more secure software. Fortunately, the U.S. government has already started down a better path. 

A More Effective Approach to Vendor Incentives and Accountability

In May 2021, President Biden signed Executive Order 14028 on “Improving the Nation’s Cybersecurity.” Section 4 of the executive order mandates that vendors that supply software to the government attest to their adherence to practices that aim to improve the security of that software and to protect the software supply chain (development environment and externally sourced components).

As directed by the executive order, NIST updated the SSDF to reflect the executive order’s direction regarding secure development and supply chain security best practices. The Office of Management and Budget (OMB) published guidance on attestation and OMB and CISA released attestation requirements for vendors.

One might object that self-attestation to adoption of secure development practices is a far cry from legal liability. But attestation to a government requirement that is part of a procurement is not a matter that vendors will take lightly. It seems likely that a misrepresentation on such an attestation could lead to negative technical evaluations, bid protests, or contract termination, in addition to penalties under the False Claims Act or debarment from future federal procurements. Vendors and their trade associations are certainly taking the attestation requirements seriously. The recommendations below for transparency of vendors’ processes, combined with the existence of a robust community of independent vulnerability researchers, significantly raise the risk that a misrepresented attestation will be discovered.

The sheer number of vendors and attestations might raise concerns about the government’s ability to hold vendors accountable for misrepresentations in attestation, but attestation makes the process scalable. If a government agency believes that a pattern of vulnerabilities in a vendor’s software suggests that the vendor is not adhering to the requirements of the SSDF, the government can audit that vendor. While auditing every vendor would be as massive an undertaking as finding competent certification contractors to assess every vendor’s secure development process, auditing any single vendor’s process, including the appropriateness of its risk management decisions, is a much more limited task; the government can almost certainly find highly qualified teams to perform the sample of audits that would be required. It is also likely that the example of a few government audits would motivate vendors to ensure that their secure development processes and implementation practices were well structured and effective.

Further, government audits of vendors’ processes are a better alternative than a liability regime. The government will do a better and more objective job of deciding when an audit is warranted than an unconstrained population of plaintiffs and attorneys, and government auditors and procurement officials will most likely do a better job of deciding when a vendor has attested falsely than the legal system relying on judges, juries, lawyers, and expert witnesses. The result is appropriate accountability that minimizes the pressure on vendors to divert resources from software security to preparations for legal defense.

Improving on Attestation and the Executive Order

The combination of the executive order, the SSDF, and the vendor attestation requirements is likely to have a significant impact on the security of software sold to the U.S. government, and since the government relies overwhelmingly on commercial off-the-shelf software, all business and individual software customers will likely benefit from the improvement. In effect, the government mandate offers a realistic prospect of improving software security without diverting vendors’ resources to preparing for litigation, as Dempsey’s proposal would require, or to a certification industry.

However, if the government chooses this path, there are three kinds of improvements that should be considered:

  • Expanding the scope of the software covered.
  • Improving some shortcomings in the attestation process.
  • Increasing vendors’ incentives to improve software security.

The following sections address these areas in turn.

Expanding the Scope of the Executive Order. As written, the executive order applies to software acquired by government agencies. The categories of software that the U.S. government uses are widely employed by commercial organizations and individuals. However, there are probably important categories of software that are not covered—consumer software, software used in critical infrastructure applications, and software for the Internet of Things (IoT). The government probably has the authority to extend the requirements of the executive order to those classes of software. For example, the Federal Communications Commission has recently taken steps to establish a security labeling program for IoT devices. The government should review the software landscape broadly for other domains or applications where software security is important and where the government has appropriate authority. It seems likely that many such steps can be taken without requiring new legislation. 

Improving the Attestation Process. The attestation requirements released by CISA and OMB fall short of mandating attestation to several best practices that are clearly called out in the SSDF. Two particularly unfortunate omissions are threat modeling—an imperfect process for identifying design-level security errors, but the best that is available today—and the requirement for vendors to conduct root cause analysis of discovered vulnerabilities and update their processes to prevent the recurrence of similar errors. Beyond correcting those two omissions, it would make sense for the attestation requirements to simply insist that vendors attest to meeting the requirements of the SSDF wherever those requirements apply to the vendor’s products and development model. It makes little sense for CISA and OMB to second-guess the well-thought-out requirements in the SSDF or to mandate a “checklist” that seems to ignore the vendor’s responsibility for making effective risk management decisions. NIST should also be required to update the SSDF periodically (perhaps annually) to reflect newly discovered classes of vulnerabilities and newly invented approaches to developing secure software.

Creating Market Incentives for Vendors to Improve. The requirement to attest to meeting the requirements of the SSDF will cause vendors to focus on their secure development practices, and many will see significant improvements in product security as a result. But it would be desirable to go further and create incentives that will cause vendors to view software security as a real competitive issue, as they do performance, usability, and features. A simple way to create such an incentive, first suggested by Jeff Williams, is to mandate transparency. The SSDF or attestation form should be updated to include a requirement that software vendors release their secure development process descriptions to the public in detail. The level of detail in the released version of the Microsoft SDL is a good model.

Public release of detailed process descriptions will have several positive effects:

  • It will enable customers to compare vendors’ security practices and use the results of comparisons to influence their purchasing decisions.
  • It will enable public-interest consumer organizations such as Consumer Reports to compare vendors’ security practices and provide informed advice to individual buyers. 
  • It will enable independent vulnerability researchers to provide feedback on omissions or potential improvements in vendors’ practices, helping to raise the bar on vendors’ practices.
  • It will enable government, customers, and vulnerability researchers to compare reported product vulnerabilities with vendors’ self-described practices and thus help to ensure that vendors are doing what they say they are doing.
  • It will enable vendors to compare their practices to others’ and enable vendors to compete on the basis of better security.

In short, process transparency offers a way to keep vendors honest, to eliminate the “market for lemons” aspect of software security, and to create a virtuous cycle in which vendors must improve their secure development practices to keep up with competitors. Ideally, customers will factor vendors’ secure development processes and the opinions of expert third-party analysts into their buying decisions and create a market for improved product security.

***

Product liability is the wrong approach to improving software security. A liability regime would cause a transfer of wealth to liability lawyers, expert witnesses, and/or certification contractors but not incentivize improved software security. The government already has a mechanism—Executive Order 14028, the NIST Secure Software Development Framework, and vendor attestation—that will incentivize vendors to improve software security. A few improvements to the attestation mechanism and a requirement for vendor transparency about secure development processes will result in still greater improvements in software security.


Steve Lipner is the executive director of SAFECode, an industry nonprofit focused on software security assurance. He was previously partner director of software security at Microsoft, where he was the creator and long-time leader of the Security Development Lifecycle (SDL). He serves as chair of the U.S. government’s Information Security and Privacy Advisory Board and has more than a half century of experience in cybersecurity as a researcher, engineer, and development manager. He is a member of the National Academy of Engineering.
