Progress in Cybersecurity: Toward a System of Measurement

Paul Rosenzweig
Friday, May 24, 2019, 10:04 AM

How do we quantify safety and security? That fundamental question underlies almost all modern national security questions (and, naturally, most commercial questions about risk as well). The cost-benefit analysis inherent in measuring safety and security drives decisions on, to cite just a few examples, new car safety devices, airplane maintenance schedules and the deployment of border security systems. In a world where resources are not infinite, some assessment of risk and risk mitigation necessarily attends any decision—whether it is implicit in the consideration or explicit.

What is true generally is equally true in the field of cybersecurity. Governments, commercial actors and private citizens considering new cybersecurity measures balance, explicitly or implicitly, the costs to be incurred—whether monetary or in the form of disruption to enterprise operations and the resulting (temporary) loss of efficiency—against the benefits to be derived from the new steps under consideration.

The problem with this rather straightforward account of enterprise decision-making is that no universally recognized, generally accepted metric exists to measure and describe cybersecurity improvements. Unfortunately, for too many, cybersecurity remains more art than science.

Decision-makers are left to make choices based on qualitative measures rather than quantitative ones. They can (and do) understand that a new intrusion detection system, for example, improves the security of an enterprise, but they cannot say with any confidence by how much. Likewise, enterprise leadership can, and does, recognize that deploying any new system (say, an upgrade to an accounting package) brings the risk that unknown or previously nonexistent vulnerabilities might manifest themselves. Yet, again, leadership cannot say to what degree that is so, nor measure the change with any precision.

This challenge is fundamental to the maturation of an enterprise cybersecurity model. When a corporate board is faced with a security investment decision, it cannot rationally decide how to proceed without some concrete ability to measure the costs and benefits of its actions. Nor can it colorably choose between competing possible investments if their comparative value cannot be measured with confidence. Likewise, when governments choose to invest public resources or regulate private-sector activities, they need to do so with as much information as possible—indeed, prudence demands it.

Because the problem of measuring cybersecurity is at the core of sound policy, law and business judgment, it is critical to get right. The absence of agreed-upon metrics to assess cybersecurity means many companies and agencies lack a comprehensive way to measure concrete improvements in their security. We should strive toward an end state where investment and resource-allocation decisions relating to cybersecurity are guided by reference to one (or more than one) generally accepted, readily applicable method of measuring improvements in cybersecurity.

This method will be, metaphorically, the cybersecurity equivalent of today's generally accepted accounting principles. The key components of the end state will be that the methods for measuring cybersecurity are (a minimal code sketch of what such a metric might look like follows this list):

  • Objective.
  • Capable of being quantified.
  • Commensurate and intercomparable across different cybersecurity approaches and systems.
  • Usable for decision-makers in allocating limited resources.
  • Widely agreed upon and generally accepted within the relevant communities.
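
To make those properties concrete, here is a minimal Python sketch of what a record conforming to them might look like. Everything in it (the field names, the comparison rule) is my own illustrative assumption, not part of any existing standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityMetric:
    """One hypothetical, quantified cybersecurity measurement."""
    system: str   # what was measured (e.g., "new intrusion detection system")
    method: str   # the generally accepted method that produced the score
    score: float  # objective, quantified result on a shared scale

def better_investment(a: SecurityMetric, b: SecurityMetric) -> SecurityMetric:
    """Scores are intercomparable only if produced by the same method."""
    if a.method != b.method:
        raise ValueError("scores from different methods cannot be compared")
    return a if a.score >= b.score else b
```

The point of the `method` check is the third and fifth items on the list: numbers are useful to a board only if everyone computed them the same way.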

That, at least, is the overall objective that seems worthy of consideration. To that end, it seems to me valuable to highlight and constructively critique new efforts to advance the metrics objective. Two recent efforts are deserving of significant praise as solid steps:

BSA Framework for Secure Software: BSA (also known as the Software Alliance) is the trade association for the global software industry. Its senior membership list is a who’s who of companies such as IBM, Microsoft and Adobe. (Full disclosure: A couple of years ago, I gave an invited talk to a conference of BSA executives on the subject of cyber liability reform—it was unpaid, but I got a nice trip to Napa out of it. This new framework is arguably related to the talk I gave, so you might think I’m praising BSA for listening to me.)

Recently, BSA released its new Framework for Secure Software. The framework is advanced as a way of assessing and measuring the security of software products across the entire lifecycle of a piece of software—from conception and design to patching and replacement, and everything in between. Two aspects of this effort are noteworthy and commendable. First, the document is notable simply for its existence. Historically, the software industry (like the hardware industry) has disclaimed any liability for the functioning of its products. At a broader societal scale, there are reasons to think that's the wrong answer, but there is also a good economic reason why the industry might take that view. And so it is really quite remarkable (and politically brave) for the software industry to articulate any standards at all—for in doing so it will, inevitably, invite someone to suggest that those standards are enforceable in law, to the disbenefit of industry members. Self-regulation is a good thing—but it carries with it some risks. Good on BSA for moving ahead anyway.

The second notable aspect of the BSA framework is the depth of its detail and its insistence (not merely willingness) on measurable definitions. The framework starts by recognizing that software security comes in two flavors—organizational processes and product security capabilities. In other words, BSA suggests we need to measure both how well a product works and how rigorously it was developed. To cite but one example, if one aspect of our security objective is to prevent SQL injection attacks, the framework suggests that the software product be assessed for the extent to which it includes documented mitigations for known vulnerabilities, and that it be tested, by validating inputs and outputs, to ensure those mitigations have been properly incorporated. We can actually put numbers to these standards.
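
To show what a testable SQL injection mitigation can look like in practice, here is a minimal sketch using Python's standard sqlite3 module. The framework does not prescribe this code; the table, the query and the test payload are my own assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # Documented mitigation: parameter binding treats attacker-supplied
    # input as data, never as executable SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

# Input/output validation test: a classic injection payload must match
# nothing, rather than dumping the whole table.
assert find_user("' OR '1'='1") == []
assert find_user("alice") == [("alice", "admin")]
```

A test suite full of assertions like the two at the bottom is exactly the kind of thing one can count, which is what makes the standard measurable.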

Or, to cite another example, the BSA framework makes explicit a security test that too many manufacturers have long failed—it suggests that no software should have hard-coded passwords. In most cases we can measure that, since the answer is a binary yes or no.
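
Because the rule is binary, checking it can even be automated. A minimal sketch follows, with a deliberately naive regular expression of my own devising (real scanners, and real secrets, are messier):

```python
import re
from pathlib import Path

# Naive pattern for literals like: password = "hunter2"  or  pwd='x'
HARDCODED = re.compile(
    r"""(password|passwd|pwd)\s*=\s*['"][^'"]+['"]""", re.IGNORECASE
)

def has_hardcoded_password(source_dir: str) -> bool:
    """Yes-or-no answer across an entire source tree."""
    for path in Path(source_dir).rglob("*.py"):
        if HARDCODED.search(path.read_text(errors="ignore")):
            return True
    return False
```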

I confess that I lack the technical expertise to validate the substance of the framework. There may be unnecessary components, or there may be some that are missing or ill defined. But as an answer to the question of how to measure whether a piece of software is secured in a way that mitigates and reduces the risk of malicious failure, it is an excellent start for the discussion.

National Critical Functions Set: The second recent innovation is less about new ways of measuring security than about a new way of thinking about what needs to be measured. The Cybersecurity and Infrastructure Security Agency (CISA) has released a new National Critical Functions Set (NCFS). The goal of the “set” is to describe and categorize “the functions used or supported by government and the private sector that are of such vital importance to the United States that their disruption, corruption, or dysfunction would have a debilitating effect on security, national economic security, national public health or safety, or any combination thereof.” This new focus is fascinating for at least two reasons.

First, it is an explicit move away from the existing methodology of assessing security. Current governmental structures (often mimicked in the private sector) look at the security of particular assets (e.g., is this database secure?) or organizations. For that reason, much of our infrastructure protection apparatus revolves around 16 sector-specific views. We ask, for example, about the security of the agriculture sector, of businesses and organizations within that sector, and of particular facilities within those organizations.

In the newer vision, we shift to a more cross-cutting focus on functionality, so that agricultural security is a result of security in the production of agricultural products and the security of the transportation and delivery of all cargo (not just food goods) by air, sea, rail and roadway. A cross-sector functionality framework recognizes that the security of functions that undergird the delivery of products and results is often more important than the security of the products themselves. This is a welcome change and the first step to a new type of “Risk Register” that can combine risk assessment, consequence modeling and a dependency analysis to identify the highest priority risks in need of mitigation. Some (perhaps many) of those will not be cybersecurity risks, but that’s a good thing too—part of measuring cybersecurity investment is also prioritizing that investment with other infrastructure security vulnerabilities.
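
What might an entry in such a risk register look like? A hedged sketch follows; the priority formula (likelihood times consequence, weighted by how many other functions depend on the one at risk) is my own illustrative assumption, not CISA's methodology:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    function: str      # the critical function at risk
    likelihood: float  # estimated probability, 0 to 1
    consequence: float # modeled impact, in arbitrary units
    dependents: list = field(default_factory=list)  # functions depending on this one

    def priority(self) -> float:
        # Assumed formula: expected impact, amplified by dependency fan-out.
        return self.likelihood * self.consequence * (1 + len(self.dependents))

register = [
    Risk("transport cargo by rail", 0.3, 80.0, ["distribute food", "supply fuel"]),
    Risk("operate one particular rail hub", 0.3, 80.0),
]
register.sort(key=Risk.priority, reverse=True)
# The cross-cutting function now outranks the single asset, as the NCFS view suggests.
```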

The second fundamental benefit of the new NCFS is the transformative effect it may have on how we discuss security altogether. For far too long (especially in the cybersecurity field) we have defined security around concepts of detection and prevention. We have given almost no priority to assessing the resiliency of our systems, and it may well be that in doing so we are missing a more realistic definition of what cybersecurity means. Instead of asking, “What intrusions did we prevent?” we might instead ask, “How quickly did the system come back online?” I am not sure that resiliency is the only measure—and in some cases (say, control of nuclear weapons systems) it may be a particularly poor one. But in many cases the problem is not so much the intrusion itself (say, the recent ransomware attack on Baltimore) as how quickly, or slowly, the city can resume operations.
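
If recovery time is the question, the measurement itself is simple arithmetic. Here is a minimal sketch, with made-up outage timestamps, of computing mean time to recovery:

```python
from datetime import datetime

# Hypothetical outage log: (service went down, service restored)
outages = [
    (datetime(2019, 5, 7, 8, 0), datetime(2019, 5, 9, 14, 0)),
    (datetime(2019, 5, 20, 9, 30), datetime(2019, 5, 20, 11, 0)),
]

recovery_hours = [(up - down).total_seconds() / 3600 for down, up in outages]
mttr = sum(recovery_hours) / len(recovery_hours)
print(f"Mean time to recovery: {mttr:.1f} hours")  # 27.8 hours
```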

By focusing on functions rather than assets, the NCFS makes a strong move toward incorporating resiliency metrics into our risk analysis. After all, functionality is the very essence of operational resiliency. We ask how the transport system is working, not whether a particular transport hub is operational. As I said, that may not be a universally appropriate question to ask, but it is a very good thing that someone is asking it. And, as with the BSA framework, I am not sure that the new NCFS is completely right—but it is surely much more right than wrong, and yet another good step toward grounding (cyber)security in scientific analysis rather than qualitative assessment.

It’s not often I can write a piece about cybersecurity that is generally optimistic. But these two new efforts do make me smile a bit.


Paul Rosenzweig is the founder of Red Branch Consulting PLLC, a homeland security consulting company, and a Senior Advisor to The Chertoff Group. Mr. Rosenzweig formerly served as Deputy Assistant Secretary for Policy in the Department of Homeland Security. He is a Professorial Lecturer in Law at George Washington University, a Senior Fellow in the Tech, Law & Security program at American University, and a Board Member of the Journal of National Security Law and Policy.
