WannaCry's Warning: Unpatched Operating Systems and Third-Party Harm

Robert Chesney
Monday, May 15, 2017, 5:45 PM

The most important policy question raised by the WannaCry ransomware fiasco is not the most obvious one.

Many people are focusing on the fact that WannaCry takes advantage of an NSA-developed tool for exploiting various Windows OS vulnerabilities—a tool that the Shadow Brokers put into the public domain not long ago (see Brad Smith’s critique here, and Herb Lin’s persuasive response here). That is certainly relevant and understandable. But it seems to me that the WannaCry episode raises a more significant and more central question: are we doing enough to address the widespread use of operating systems (and other software) that are not patched for known vulnerabilities, in light of foreseeable third-party harms?

For those who aren’t following the WannaCry story closely, here’s the key fact: Microsoft had issued the requisite patch to remedy the NSA-discovered vulnerability back in March, for all versions of Windows that are still supported. The only persons and entities vulnerable to WannaCry, as a result, are those who run versions of Windows that are either out-of-date or pirated, plus those who for varying reasons either chose not to adopt the March update or simply neglected to do so.

If the only persons who experienced harm as a result of WannaCry were those users, then it might suffice to say they assumed this risk in one manner or another, that they simply must bear the cost now, and that people and entities going forward can adjust their decisionmaking (Should I buy this pirated OS to save some money?) in light of this painful experience. But of course the harms are not all confined in that way. As the debacle at hospitals in the UK drives home, any number of third parties may be harmed as well—in some cases quite seriously.

From this perspective, the situation is not unlike the problem of increasingly ubiquitous IoT devices with little or no security. Famously, that state of affairs makes such devices easy prey for botnet formation and thus integral to the harm suffered by victims of said botnets—as the massive DDoS attacks on Dyn, OVH, and Krebs made possible by the Mirai botnet illustrated. One might also analogize to the harm that customers are at risk of suffering when a vendor suffers a breach and customer data (health data, credit card data, etc.) is stolen (though the third-party impacts at issue with entities crippled by WannaCry may involve much less speculation). At any rate, the point is that there is a growing category of circumstances in which poor security practices by one person or entity cause real harm to innocent third parties.

What tools are available to a government interested in doing something about this dynamic, and why don’t they already suffice to address this problem?

1. Liability-and-Insurance Regimes

Part of the solution, inevitably, lies with litigation risk and insurance. Potential tort liability and the availability of insurance coverage (or lack thereof) can combine to create powerful economic incentives steering individual and organizational behavior. Insurance policies, for example, might not protect a person or organization who fails to patch within a period of time that is reasonable in the circumstances, and certainly shouldn’t protect those who negligently persist in using an operating system that is no longer supported (let alone pirated systems). The issue at hand isn’t whether users can get insurers to compensate them for their own losses, however, but rather whether users might be liable to third parties for harm caused by the user’s decision to rely on unpatched or unsupported systems. If a hospital opts to take such risks and as a result experiences a disruption of services that causes real harm to patients, for example, the patient might bring a negligence claim. And that’s when things get tricky, for large entities with complex IT environments may delay patching for very good reasons. Most notably, they may delay because they need time to determine whether the updated system will have unintended, harmful consequences for functionality. Such testing may also require the engagement of decision-making authorities that move relatively slowly, particularly where the testing may be expensive.

But of course it does not follow that endless delay and undue caution are acceptable, either. And thus the dilemma: where is the line that defines due care? It seems unlikely that a one-size-fits-all approach ever will or should emerge in this context, or in related contexts such as determinations of reasonable business judgment or commercial reasonableness. Inevitably, this will leave entities in some degree of uncertainty. [Note that I’m speaking now only about those who have currently-supported, properly-licensed systems, not those who choose to roll the dice with unsupported or pirated software; it seems to me that such circumstances virtually guarantee liability, with the possible exception of an entity that has been making good faith efforts to transition from software that until recently was still supported by the vendor.]

2. Regulatory Interventions

Governments do not always leave this question to the marketplace, insurers, and trial lawyers. For some industries, government bodies are in a position either to mandate through regulation that certain users meet standards for patching, or to make such standards compulsory through contract terms or the terms of conditional government spending. I believe this is the case already, for example, for DOD’s relationship with defense contractors.

But note that this does not really avoid the definitional challenge described above: how long is too long to delay a security upgrade for entities in the type of highly-complex IT environments just described? It is no easier for regulators or contracting officers to answer that question than it is for judges.

Takeaway: The question of how to distinguish desirable, good-faith delay in patching from reckless and undesirable delay is a much stickier one, on close inspection, than it seems at first blush. On the other hand, the problem is much less acute with respect to those who are unpatched simply because they persist in using out-of-date systems (let alone pirated software).


Robert (Bobby) Chesney is the Dean of the University of Texas School of Law, where he also holds the James A. Baker III Chair in the Rule of Law and World Affairs. He is known internationally for his scholarship relating both to cybersecurity and national security. He is a co-founder of Lawfare, the nation’s leading online source for analysis of national security legal issues, and he co-hosts the popular show The National Security Law Podcast.