Three Questions on Software Liability
Software liability is in the air once more. In Europe, the Cyber Resilience Act (CRA) proposes a form of liability for digital goods through attestation to basic security practices. In the process, it has provoked the rightful ire of open-source advocates, who argue that the proposal poorly accounts for the nature of open-source development and the communities that support it. In the United States, the White House’s 2023 National Cybersecurity Strategy (NCS) proposes reforming software liability to grapple with how “markets impose inadequate costs on” insecurity and how today’s incentives “have not been enough to drive broad adoption of best practices in cybersecurity.”
The NCS, the CRA, and broader policy discussions are not opening a new can of worms but uncorking a familiar bottle. This is far from the first era of debate over liability for software, and there is extensive prior work on the goals and mechanisms of legal accountability in cybersecurity and other domains. Liability is an appealing lever for driving better security outcomes, or at least better security practices. The notional goal of any software liability regime is to restore harmed parties, to incentivize improved security practices and outcomes, or both. Depending on its formulation, liability can work by shaping incentives rather than by requiring regulators to crown one tool, method, or product as the cybersecurity silver bullet. Liability can also shift enforcement away from overburdened regulators and allow harmed parties to hold companies directly accountable for falling short, however that is defined.
As useful as liability appears in the policy toolkit, however, key conceptual gaps still bog down discussions of its application to software. Three key questions can bring more specificity to today’s software liability conversations. Addressing them will help U.S. policymakers and advocates of software liability answer opponents and bring their arguments up to date with how software is built today. This article poses these three questions in a spirit of encouragement, aiming to sharpen the proposed policy rather than challenge its validity.
1. How much security is sufficient?
One way to navigate the maze of software liability is to look at the problem it seeks to solve: stubbornly persistent insecurity. Things are worse than we’d like them to be. If the problem lies at the feet of an inefficient market, liability might just be the tool for the job, or at least a significant one in the toolbox. But that line of reasoning skips over the specifics that must guide liability policy. For example, how large do incentives need to be to drive improvement? The market for software is enormous: In 2021, Gartner estimated that, for enterprise applications alone, the market was worth $535 billion. If we are dissatisfied with the current state of security, three explanations stand out:
- Current incentive structures lead to insufficient security investment.
- Security investments aren’t leading to the most efficient security improvements possible.
- The desired set of security outcomes is, itself, suboptimal given the cost of achieving it.
The trouble is that we don’t know—and don’t have a good way of knowing—what combination of these explanations holds right now.
There is surprisingly little consensus on how to answer baseline questions about cybersecurity. Assuming things are worse than we would like, are they at least better than they were last year? Some journalists report that there are more incidents each year. By that measure, things are getting worse. Some scholars argue that there are also more computers, more people, more code, more vulnerabilities, more cyberspace, and more flowing through it each year—controlling for some or all of that might show things are stable or even improving over time, like a digital per capita. Others contend that incident severity is paramount: The distribution of incident losses is so heavy tailed that its variance may be undefined, so the rare but catastrophic event matters most by far, and that class’s ever-growing potential tilts the scale toward worse as digital infrastructure underpins more critical functions every day. Troublingly, data is scarce. Congress has only just created reporting requirements for critical infrastructure, and no robust public incident reporting framework exists for industry writ large (though recent rulemaking by the Securities and Exchange Commission could help). Knowing how many incidents have been thwarted by good security is even more difficult, if not impossible.
If the goal of proposed liability frameworks is to drive better security outcomes, it is paramount to understand how to recognize “better” when we see it and what practices, investments, or incentives caused those improvements. As other cyber experts have highlighted, “objective empirical evidence of what constitutes a reasonable security measure” across contexts is necessary to ensure that a liability regime drives behaviors that actually improve outcomes. Understanding the overall economic utility of software would also help calibrate a liability regime, allowing policymakers to assess the damages associated with the loss of that benefit or to estimate the potential costs of longer development cycles or more circuitous certification and deployment paths.
An outcome-based liability framework will need to define, with reasonable specificity, what a cyber incident is and the full scope of the costs one imposes. This is not a trivial task: Incidents are complex tangles of technical and human failures, and defining the costs associated with harms like privacy violations has been a perennial challenge. Even with clear information about an incident’s causation, scope, and harms, fine-tuning penalties is still deeply challenging. Pegging costs too low could reduce liability to just another cost of doing business, while pegging them too high might obliterate firms that should otherwise be able to adapt their behavior. Knowing which is which is the same challenge described above: How do we know what improvement looks like?
Meanwhile, a process-based framework (a safe-harbor model, for example) will need to decide which processes are worth their costs. One argument against liability is its potential impact on innovation: If risk grows more expensive, producers will take fewer chances and innovate less. A thorough understanding of the security benefits of given processes could be matched with an accounting of their financial costs, yielding a reasonable cost-benefit analysis of security processes that could be titrated to the sensitivities and needs of different software operating contexts. It would be a shame to see the liability conversation bogged down in the trees of legal finesse while missing the forest of risk-prioritized cybersecurity analysis before it.
Because all code will contain some percentage of flaws that might be exploitable, the goal of a liability regime cannot be to produce vulnerability-free software. There will always be compromise, and there will always be a spectrum of security maturity among companies—so, what if the current state of affairs is the optimal point, or closer to it than we think? This question is not to imply that cybersecurity cannot improve, but to emphasize that we really don’t have a way of knowing our current position relative to that point. Take, for example, the “other” goal of liability—making harmed parties whole. What does restitution mean when leaked information remains forever available on the dark web? Ingrained in the discussion of software liability is an overarching assumption that the cost of preventing incidents is comparable to or less than the cost of making right an incident’s fallout—the cost of the former including lost innovation, and the cost of the latter including long-tail, enduring catastrophe.
Measuring either basket of harms with accuracy, going well beyond mere financial losses, is tremendously challenging, and reliably preventing incidents is more challenging still. Accounting for the most extreme consequences and their potentially trivial prevention is definitionally impossible. However, if restitution is, in fact, the less costly option, vendors will take it, and behavior won’t change—nor should it in that instance, provided the cost of restitution fully encompasses the harm done. If the estimates are accurate and restitution really is cheaper, paying up will remain the rational choice, and shaping incentives will remain an ever-frustrating uphill slog against apparently recalcitrant vendors.
Therefore, another question remains: How much should it cost when one is found liable—enough to restore a harmed party or enough to motivate changed behavior? While the question might spark debate about legal mechanisms of enforcement, it also implies a raw economic question: What if those are substantially different numbers? The trouble is that we still have a limited understanding of the return on investment of different security measures and the cost of various cybersecurity harms—and our actual position in the security-efficiency landscape relative to the ideal one is unmeasured.
2. What do we mean when we talk about software liability this time?
Without specificity, any discussion of software liability can fall victim to spiraling questions and definitional debates. Software liability could be as broad as liability for all harms caused by software, including headline-grabbing outcomes of artificial intelligence (AI) systems (see the European Union’s proposed AI product liability directive and recommendation-algorithm product liability claims in the U.S.). In the context of the NCS, software liability really means software cybersecurity liability—liability for harms caused by cybersecurity vulnerabilities. Another natural question, then, is: Whose vulnerability? The creator of the insecure software? Or the entities that operate insecure software in a particular context, such as processing customer data on an improperly configured system? The Biden administration’s proposal would seem to place liability on software creators or vendors—the “best positioned actors” that, according to the NCS, defer too much responsibility for security to their customers.
Even narrowed to cybersecurity, there is still much debate among courts, academics, think tanks, and even industry about what forms liability could take. “Strict liability” would hold manufacturers responsible for any bad cybersecurity outcome caused by their product, while liability under a negligence standard would hold manufacturers liable only when a bad outcome arose from their failure to uphold a “duty of care” while building a product. A safe-harbor model, embraced by the NCS, would provide general immunity from liability for manufacturers that follow a defined set of secure development practices, regardless of bad outcomes, but it loses much of the context about where those practices are put to use.
This context presents another definitional challenge—and a perennial problem with comparing software to physical products such as cars and hair dryers. In most cases, the consequences of insecure software are determined far less by the severity of any vulnerability than by the importance of whatever that software is connected to. The same operating system yields vastly different consequences if compromised when it is connected to a home office printer versus an electronic health records system. A pilot product from a startup with a limited testing user base ought to be recognized as a less critical environment than an active hospital emergency room. How should a potential safe-harbor model account for the different security standards that might be appropriate for different manufacturers? Is it possible, or reasonable, to ask software manufacturers to anticipate the sensitivities of every context in which their software might be deployed?
Adding even more complexity, software often affords users much more choice of—and responsibility for—the security of the products they use. Operators must make decisions about how to configure products and how to manage security tasks like patching and updating that are essential to product security, sometimes without parameters or default configurations from vendors. Through practices like secure-by-design-and-deployment principles, vendors can make it easier for users to make good decisions and avoid bad ones, but they still cannot guarantee secure systems. As one entry in a collection of security aphorisms recently assembled by cyber expert Helen Patton puts it, thou shalt not “propose technical solutions to management problems.” Any approach to software liability will have to draw limits around what a software vendor should be liable for, and that can get complicated quickly.
Some software-specific concerns in liability discussions defy neat analogies to physical products: the unavoidable facts that all software is vulnerable at all times, that all software is made of someone else’s software, that many software components are open source and legally offered “as is,” that much software is classed as a service rather than a product, and more. Imprecision on these points risks creating an ineffectual, counterproductive regime or trapping liability in an endless loop of deferral, kicked around the interagency and Congress without resolution or reprieve for consumers.
3. What has gone wrong with existing incentives, and how do we do better?
Product liability law is the existing jurisprudence upon which software liability would rest. It is often conceived as a solution for markets in “products that have observable utility and hidden risks.” Such products should, theoretically, tend toward market failure because consumers cannot price the risk at the time of purchase, leaving producers little incentive to invest in risk mitigation. Because observable utility and hidden risks describe the challenge of software security to a T, the abstract argument for stronger, clearer liability for software products is compelling—you often don’t know a piece of software is insecure until it has been compromised, and few consumers can sufficiently audit code or production processes themselves. A reasonable scheme would hold a producer liable if it sold negligently assembled software that ultimately harmed one or many users. If consumers could extract costs from vendors that sold insecure software, those costs would incentivize companies to invest more in making secure products or, at least, in building products as securely as reasonably possible.
Why can’t regulators or the market give consumers better cybersecurity information at the outset? That challenge—defining a set of required cybersecurity reporting metrics, or standardized tool sets for evaluating security—runs into the same problem that attempts to define a liability safe harbor are likely to encounter: intractability. There is simply too much variety in security needs, threats, and practices to unify under a single yet digestible reporting format, and it all changes too quickly to boot, not to mention the questionable feasibility of expecting end consumers to maintain the expertise needed to vet all this information. Here, too, limited product choice in many technology verticals likely exacerbates the problem by leaving users locked into a specific software solution or data format, unable to switch in response to anything but an immediate and existential security issue. A liability regime, then, seems a convenient way to create stronger incentives for security while offloading the considerable burden of evaluating risk from consumers to producers.
Here, though, another problem arises: Different degrees of liability for cyber incidents already exist. Why are these incentives insufficient? Courts have levied significant penalties against companies that failed to safeguard consumer data in accordance with regulatory obligations or their own marketing claims. All 50 states have some version of data breach reporting requirements, and some require firms to protect data in line with security standards such as the Center for Internet Security Critical Security Controls or the National Institute of Standards and Technology Cybersecurity Framework. This mirrors some physical liability models, which apply graduated scrutiny based on potential harm; consider, for instance, the difference in regulatory oversight between your local McDonald’s and your local nuclear reactor. Class-action lawsuits, investor lawsuits, and action from the Federal Trade Commission and other cyber regulators all provide additional punitive incentives—yet the state of cybersecurity does not satisfy policymakers.
Take T-Mobile, for example. A 2021 breach affecting 76.6 million T-Mobile customers led to a class-action settlement under which the firm paid $350 million and committed to spend an additional $150 million on improved cybersecurity—punishment, incentive to improve process, and the budget to do so, all bundled together in the second-largest penalty of its kind, behind only the fallout from the Equifax breach. Yet, less than halfway through 2023, T-Mobile had faced two more data breaches, one of which exposed information on 37 million of the firm’s accounts, a scale comparable to the 2021 incident. Crafters of a potential liability regime must address some critical questions here. Was the court-ordered security investment insufficient, or ineffectively allocated? Was the fine—less than half a percent of T-Mobile’s annual revenue—small enough to simply ignore? Or did the company face more deeply rooted, systemic issues, and are those issues unique to T-Mobile or widespread?
A blunter framing might simply ask whether a market failure exists in cybersecurity and, if so, how best to remedy it. The theoretical argument is straightforward: If a company is looking to acquire a software solution from a vendor, the company’s ability to vet that product for security is limited, creating an information asymmetry—a market failure. Intellectual property considerations might reasonably restrict how much of the product’s inner workings is visible to the prospective customer, and expert analysis is expensive. Moreover, the vendor’s visibility into third-party components within its own product is similarly limited. Traditional product markets trickle liability upstream to help manage this: A suit against the final goods provider (FGP) might result in the FGP suing the suppliers it believes responsible, and so on. Importantly for software, though, the question of whether liability in the hands of end consumers can address information asymmetries all the way up the component chain is unresolved. And quantitative evidence of market failure is always difficult to produce, more so here given the aforementioned challenges of quantifying cybersecurity harms and the return on investment in security processes.
These are hard questions, but their answers will be informative. The insufficiency of the current cyber-liability patchwork might stem from a blend of jurisdictional gaps, resource-strapped enforcers, insufficient penalties, inconsistent standards, suboptimal upstream percolation, and stubborn organizational and industry norms. Technology vendors in particular—through clickwrap terms-of-service agreements that often disclaim what little liability they do face—are likely even more insulated from the potential costs of data breaches and related lawsuits, meaning that even a basic liability for security outcomes laid at their feet could be a step change. Nevertheless, a new liability regime must at least examine and understand why existing incentives for consumer-facing companies have fallen short of driving the cybersecurity improvement we want.
Conclusion
Software liability has been discussed and debated in fits and starts for decades. The Biden administration’s embrace of the concept in the NCS suggests this is a significant moment for the Overton window on cybersecurity policy proposals, but its implementation will remain a challenge in the current, fragmented policymaking landscape. Versions of these three questions will appear and reappear throughout the debate. We offer them as a whetstone for liability advocates to enhance the efficacy of any future liability regime. And, hell, we’re curious too.