
Counting the Costs in Cybersecurity

Stewart Scott
Wednesday, October 9, 2024, 10:21 AM
It’s time to measure the cybersecurity outcomes that matter most.


Nobody seems to know how to tell whether the National Cybersecurity Strategy is effective. In response to criticism from the Government Accountability Office (GAO) that the strategy did not provide outcome measures for assessing the success of its cybersecurity initiatives, the Office of the National Cyber Director (ONCD) said something striking: “[S]uch measures do not currently exist in the cybersecurity field in general.” In short, ONCD agreed that it, along with the GAO and other interested parties, would love to know whether the National Cybersecurity Strategy is working but claimed that no method for such evaluation exists. 

GAO’s report suggested some metrics that ONCD might consider using to evaluate the strategy’s efficacy—such as the number of Cyber Incident Reporting for Critical Infrastructure Act reports provided to the Cybersecurity and Infrastructure Security Agency and the number of government takedown and disruption campaigns. But these suggestions exemplify the lack of proper evaluative metrics in cybersecurity because they do not measure true outcomes. Moreover, GAO is probably not even the right entity to ask whether the National Cybersecurity Strategy is succeeding in strategically improving cybersecurity.

ONCD’s statement might reasonably send cybersecurity policymakers into a sort of existential crisis: “What’s the point of doing anything if we can never know whether it works?” The claim, however, presents a remarkable chance to reset the long, convoluted discussion around metrics for cybersecurity policy. ONCD’s claim should be taken not as fact but as evidence: Policymakers have been muddling two different conceptions of “outcome”: (a) the security of the cyber ecosystem and (b) the damage that results from existing insecurity. This confusion obstructs empirically driven policymaking in the domain.

Policymakers and practitioners currently lack the capacity to evaluate the cybersecurity ecosystem and assess, using quantitative measures, which policies work and how well. The need for such an understanding is fundamental. The public and private sectors spend tens of billions of dollars on cybersecurity every year yet still suffer massive losses from insecurity that these investments fail to prevent or mitigate. Even with a clear-eyed approach to cybersecurity metrics, good measurement is hard, and sound analysis of measurements is even harder. But continued failure to count the costs of cyber insecurity poses a far greater risk than any of the challenges involved in measurement. 

The most critical measure of the success of cybersecurity policies is the degree to which they reduce the harms caused by cybersecurity incidents. Policymakers should focus on these outcomes. These harms might take the form of financial loss, compromised information, physical damage, or system downtime. Ultimately, policymakers aim to decrease these harms, or even just slow their yearly growth, by making attacks less frequent, less impactful, or some combination of the two.

Two Questions, Two Answers 

The two different versions of “outcome” metrics held in policymakers’ minds are based on two questions: (a) How secure is the cyber ecosystem? and (b) How much damage results from the ecosystem’s current insecurity? 

The first question seeks to predict how much harm could result from the current state of the ecosystem, while the second examines how much harm has been inflicted. The first is a policymaker’s outcome: How has policy shaped the cyber ecosystem? The second is a practitioner’s outcome: What has the state of the ecosystem cost us? 

On the first question (how secure is the cyber ecosystem?), one charitable reading of ONCD’s claim—that cybersecurity outcome metrics don’t exist—is that the young office is struggling with the challenge of statistically measuring an abstract concept across a vast ecosystem. ONCD is trying to attach a number to “cybersecurity.” Many measurable factors contribute to cybersecurity, such as discovered vulnerabilities, the security staff and budget available to an organization, unpatched software instances, and multifactor authentication (MFA) implementation, among others. Policymakers might consider these factors outcomes, but only in the sense that policies might change them. They are not outcomes of the cyber ecosystem but attributes—a description of the state of the ecosystem itself. If the government had measures of all meaningful attributes, one could imagine such insight contributing to a forecast of cybersecurity losses year to year, and the government might even be able to highlight where some security behaviors—attributes—need wider adoption. Together, these attributes could be considered a snapshot of the state of cybersecurity.
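
To make that snapshot idea concrete, the minimal sketch below shows what a structured attribute record could look like. The field names and values are invented for illustration (no such national dataset currently exists); the point is only that every field describes the state of the system, not its consequences.

```python
from dataclasses import dataclass

@dataclass
class AttributeSnapshot:
    """Hypothetical point-in-time description of an ecosystem's security posture.

    Every field here is an *attribute* (a property of the system),
    not an *outcome* (a realized harm).
    """
    year: int
    mfa_adoption_rate: float          # share of accounts with MFA enabled (0-1)
    unpatched_software_rate: float    # share of deployed software missing patches (0-1)
    known_unremediated_cves: int      # disclosed vulnerabilities still unaddressed
    security_staff_per_1k_employees: float

# Illustrative, made-up values -- no agency actually collects this record today.
snapshot_2024 = AttributeSnapshot(
    year=2024,
    mfa_adoption_rate=0.64,
    unpatched_software_rate=0.22,
    known_unremediated_cves=18_500,
    security_staff_per_1k_employees=4.1,
)

print(snapshot_2024)
```

Even a complete record of this kind would only describe the system; it would say nothing directly about how much harm the system produced.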

However, few or none of these metrics are measured broadly, no single one acts alone as a sufficient measure of security, and even if all were measured perfectly, they would only vaguely suggest a probability of compromise at a given point in time. Moreover, the relationships between attribute and harm are far from static, in large part because attackers adapt to and find ways around even the best defenses and constantly discover new vulnerabilities and routes to compromise, all while the underlying ecosystem shifts and grows. Trying to “quantify” cybersecurity before an incident occurs, to highlight systemwide insecurity with point measures, falls somewhere between hard and impossible. It’s like trying to precisely predict the timing and severity of a financial crisis.

The second question, how much harm is happening, is more concrete. It asks not about the cyber ecosystem, but about the magnitude of harm inflicted through it. Answers to this question are outcomes in the purest sense—the quantifiable, realized consequences of the system’s attributes. Outcomes are measurable no matter how well or poorly understood their relationship is to attributes. These are the outcomes that the GAO looks for—some number that, over time, would show cybersecurity getting better or worse by looking directly at its consequences.

So What Are Outcomes?

This difference between true outcomes and attributes is critical—outcomes are distinct from the system that produces them, while attributes describe the arrangement of the system itself. In macroeconomics, inflation is an outcome influenced by attributes such as interest rates, consumer demand, and input costs, and part of the Federal Reserve’s core mandate is to control that outcome. Policymakers should care about attributes only insofar as they shape an outcome of interest, and that relationship must be relatively well tested and well understood to inform policy. Both attributes and outcomes are therefore key data for evidence-based cyber policymaking, but outcomes are what policy seeks to change, and they are the critical information that makes attribute data meaningful.

Harms are the outcomes that policymakers such as ONCD ultimately care about and hope to reduce. Many other fields measure harms well. A quick sketch of these key outcomes for cybersecurity policy might include the following:

  • Financial loss—such as ransom payments, lost revenue, and the costs associated with incident response.
  • Physical harm—including loss of life and physical injury. 
  • Compromised information—including stolen intellectual property, compromised passwords, and emails stolen from government networks.
  • System downtime—such as the time that a water treatment plant is taken offline and the time that a hospital operates at reduced capacity. 

These examples overlap and are imperfect, but they capture the essence of the outcome question: How much bad is happening?
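
To see how these categories could become countable, consider a minimal sketch of an incident-outcome record. The schema and the sample figures below are hypothetical, not an existing reporting standard; the exercise simply shows that realized harms can be tabulated and aggregated in a way that abstract “security” cannot.

```python
from dataclasses import dataclass

@dataclass
class IncidentOutcome:
    """Realized harms from a single incident -- outcomes, not attributes."""
    financial_loss_usd: float   # ransom paid, lost revenue, response costs
    records_compromised: int    # stolen credentials, IP, emails, etc.
    downtime_hours: float       # time systems were degraded or offline
    injuries: int               # physical harm, including loss of life

# Made-up example incidents, for illustration only.
incidents = [
    IncidentOutcome(2_400_000, 150_000, 72.0, 0),  # e.g., a ransomware event
    IncidentOutcome(310_000, 0, 18.5, 0),          # e.g., an operational outage
]

# Counting the costs: aggregate harms across all reported incidents.
total_loss = sum(i.financial_loss_usd for i in incidents)
total_downtime = sum(i.downtime_hours for i in incidents)
print(f"Total financial loss: ${total_loss:,.0f}")
print(f"Total downtime: {total_downtime:.1f} hours")
```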

Understanding the cyber ecosystem is critical, but its complexity and dynamism require careful treatment. We can observe inputs to the ecosystem (policy interventions, security requirements, etc.) and outputs from it (the outcomes), but the relationships in between are murky and ever changing. If policymakers cannot characterize outcomes, they have no hope of determining whether policy has improved them, let alone of improving policy based on empirical data.

An illustrative sign of deficient outcome data is cybersecurity’s reliance on adoption rates of certain security practices (attributes) as an indicator of progress—often criticized as security theater even as they are touted as a measure of success in themselves. Adoption rates embody a philosophy: Even if we can’t prove that something made us safer, so long as it is reasonable to believe it should improve security, wider adoption counts as a step forward. The prevalence of some attributes almost certainly drives down harms—for example, multifactor authentication generally makes compromise more difficult, so more MFA likely means less harm. But without empirical data, the relationship between attributes and outcomes is only a hypothesis. We don’t know what degree of MFA adoption throughout the ecosystem meaningfully reduces harms, or by how much. Effort spent pushing MFA adoption is wasted if the ecosystem still falls below some necessary threshold, and without outcome data, we have no way of identifying that threshold.
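
As a toy illustration of why outcome data is what turns the MFA hypothesis into a testable claim, the sketch below compares compromise rates for hypothetical organizations above and below an assumed adoption threshold. The dataset is synthetic, with an effect deliberately built in, and the 80 percent threshold is invented; with real outcome data, the same comparison could estimate the actual relationship instead of assuming one.

```python
import random

random.seed(0)

# Synthetic dataset: (org's MFA adoption rate, whether it suffered a damaging incident).
# An effect is deliberately built in so the comparison has something to find;
# real analysis would start from collected outcome data instead.
orgs = []
for _ in range(1_000):
    mfa = random.random()
    p_compromise = 0.30 - 0.20 * mfa  # assumed: higher adoption, lower risk
    orgs.append((mfa, random.random() < p_compromise))

THRESHOLD = 0.80  # illustrative adoption threshold to test, not an established figure

def compromise_rate(group):
    return sum(hit for _, hit in group) / len(group)

above = [org for org in orgs if org[0] >= THRESHOLD]
below = [org for org in orgs if org[0] < THRESHOLD]

print(f"Compromise rate, adoption >= {THRESHOLD:.0%}: {compromise_rate(above):.1%}")
print(f"Compromise rate, adoption <  {THRESHOLD:.0%}: {compromise_rate(below):.1%}")
```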

Hard and Necessary

Thinking in terms of inputs, attributes, and outcomes is alluringly simple—a policy input changes an ecosystem attribute (or several), which in theory eventually reduces bad outcomes. In practice, effective measurement in cybersecurity is messier than that. Veteran practitioners have long noted that it is nearly impossible to answer basic questions for which other fields have well-established metrics—for example, “How secure am I?” “Am I better off than this time last year?” or “How do I compare to my peers?”

Measuring both outcomes and attributes is hard. A 2020 Cybersecurity and Infrastructure Security Agency report highlighted the deep methodological challenges in assessing something as seemingly straightforward as the cost of an incident—an outcome. And much of the attribute measurement that does exist deserves healthy skepticism: Take, for example, the 36 million email breach attempts thwarted daily at the Pentagon in 2018, or Microsoft’s much-touted 300 million daily attacks against its customers (many of which are likely automated scans and phishing emails).

But despite the challenge of measurement, empirically and systemically counting the outcomes of cyber insecurity—the physical, financial, and informational losses—is a key first step in sharpening intuition into effective policy and understanding the current landscape. Outcome measures do exist, and it’s long overdue that we start counting them.


Stewart Scott is an associate director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). He works on the Initiative’s systems security portfolio, which focuses on software supply chain risk management and open source software security policy.
