Decreasing Systemic Risk
Recently, Tim Maurer, Ariel Levite, and George Perkovich of the Carnegie Endowment for International Peace released a white paper with a broad new proposal regarding the offensive cyber operations conducted by nation-states, in an attempt to address acknowledged interdependent risks within the global financial system. They also released an attendant Lawfare article that provides some context for the paper.
The paper is worth reading, of course, and fits within the vein of many concurrent efforts to draw “lines” around particular areas in cyber that certain groups of policymakers believe are fraught with risk. The paper imagines a cooperative global dynamic, similar to arms control, which protects those areas of cyber by enforcing a norm against intruding upon another nation’s critical infrastructure, financial networks, CERTs, or other specially designated functions.
In this piece, I first critique elements of their proposal, and then recommend a different direction for cyber policy efforts, one rooted in technical practice, along with a new proposal for limiting our systemic risk in the financial arena based on existing practice among technical practitioners of offensive information security. I do this in part because many in the community, myself included, do not share the optimism their white paper expresses about existing policy efforts in the U.N. Group of Governmental Experts and similar norm-creation thrusts, which we view as unworkable, vague, and going nowhere.
First, a bit of background: for the last fifteen years, since leaving the NSA, I have run a cybersecurity company (Immunity, Inc.), and many of my company’s clients are in the financial space. Immunity started as a tiny company underneath a loft bed in Spanish Harlem, and even today, we do the majority of our work for NYC-based insurance companies, banks, and their various connective tissue.
Of course, the work that we do is “offensive” in nature, be it an assessment of a particular new technology about to be deployed or a wide-ranging penetration test of the entire Active Directory system of a new acquisition. So we come to these matters from an attacker’s perspective at all times. All of these activities are digital hygiene of the sort financial institutions use to help reduce systemic risk, which they practice (and are required to practice by many legal regulations). The Carnegie paper also attempts to reduce systemic risk, but through a norm-setting activity between nations engaged in offensive cyber work. The proposed norm would include the following:
- A State must not conduct or knowingly support any activity that intentionally manipulates the integrity of financial institutions’ data and algorithms, wherever they are stored or when in transit.
- To the extent permitted by law, a State must respond promptly to appropriate requests by another State to mitigate activities manipulating the integrity of financial institutions’ data and algorithms when such activities are passing through or emanating from its territory or perpetrated by its citizens.
The problems with this proposal start with a basic recognition: in the nation-state space, while few want to cause massive systemic risk, nobody can agree on what constitutes systemic risk, or on what trade-offs are worth what level of risk of a calamitous event.
One key feature of the cyber domain is massive uncertainty with regard to cause and effect. Even internally, a financial institution might not know what its systemic risk is, and it certainly has no way to provide clear metrics for it. What’s more, actions we take in cyber have almost total uncertainty with respect to their effects. Often, measures we take to protect our systems (like inserting firewalls that perform complex internal filtering) cause other cascading problems we could not have anticipated (like becoming vulnerable to simple buffer overflows).
This is a corollary to how hard any complex network is to secure: neither the defense nor the offense can predict the effect of any change. Modern IT systems have extremely rigorous “change control” for exactly this reason; changes you think should be perfectly trivial can have far-reaching and negative effects. The financial system is far from unique in having complex emergent behaviors, and it is interconnected with all of our other large networks, which share similar principles. It is easy, then, to read the Carnegie white paper as simply a cry for all government-sponsored hacking to be halted.
In addition, the borders defining where the financial system ends are extremely broad. Insurance companies have such massive structures that sometimes the first gig we do for them is sweep the entire Internet looking for the network ranges they truly control. And by their very nature, the circulatory arteries of banks touch almost everything.
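To make the sweeping idea concrete, here is a minimal sketch of one signal such a sweep might use, written in Python with the pyca/cryptography library. Everything here, from the candidate hosts to the matching logic, is an illustrative assumption rather than our actual tooling: it simply pulls each host’s TLS certificate and checks whether the certificate’s organization field names the client.

```python
# Toy illustration of sweeping for network ranges an institution truly
# controls, using TLS certificate subjects as the ownership signal.
# Illustrative only; real engagements use purpose-built scanners.
import ssl

from cryptography import x509
from cryptography.x509.oid import NameOID

def cert_org(host, port=443):
    """Fetch a host's TLS certificate and return its subject O= field."""
    pem = ssl.get_server_certificate((host, port))  # no trust validation
    cert = x509.load_pem_x509_certificate(pem.encode())
    orgs = cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)
    return orgs[0].value if orgs else None

def sweep(candidates, client_name):
    """Return the candidate hosts whose certificates name the client."""
    owned = []
    for host in candidates:
        try:
            org = cert_org(host)
        except OSError:  # unreachable, no TLS, handshake failure, etc.
            continue
        if org and client_name.lower() in org.lower():
            owned.append(host)
    return owned

# Hypothetical usage; candidates would come from WHOIS, DNS, and M&A lists.
print(sweep(["example.com"], "Internet Corporation"))
```

Real sweeps combine many such signals (WHOIS records, DNS, routing data) and enforce timeouts and rate limits, but the principle is the same: the network’s true borders are discovered empirically, not read off a diagram.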
Finally, the nature of the cyber domain is that attackers have shaky positions on constantly changing networks. So in order to “persist” in any given cyber system, they have to penetrate it quite deeply and in many diverse ways. In a sense, they must create “offense in depth” in order to stay in position, because getting that first foothold is the most expensive and difficult part. The result is that no team worth its salt ever wants to get kicked out because they were too tentative with where they placed their implants. Any norm that sets a wide proscription against attack for an entire class of companies will thus be unworkable.
While non-practitioners often have clear models in their heads for how data is handled, a professional in this space blends data storage and computation into the same thing, much as physicists blend energy and matter.
And also like physics at the lowest level, monitoring is by definition disruptive. There is no way to monitor a complex financial system without doing some of the filtering and processing on the CPUs of your target. And of course, you need to hide that processing, along with all your data exfiltration, as well as the additional data you must exfiltrate to watch the system administrators and anyone else relevant to maintaining your position.
But any time you are changing a system and trying to hide, there is a chance you will destroy that system. Even when you are not trying to hide, there is always that chance: how many times has an anti-virus program blue-screened your Windows box by mistake?
There are also many good reasons to be changing financial data itself. In fact, this is one of the primary ways cyber can affect the physical world: when you steal from someone, you cannot be ignored. Many missions require it. For example, hackers may modify financial information to hide covert flows of money to assets, or to cut off needed resources from a terrorist cell as it plans an attack. It is probably folly to think nation-states are going to sign off on never modifying any financial data, despite the attendant risks to the global system.
One common and painful flaw in much policy writing on cyber is the idea that states are going to control attacks that “emanate from their territory,” a nonsensical mental stance when it comes to cyber in general. More specifically, non-state actors are in many cases both the actors and the territory that state actors use. Does an attack launched from Amazon’s web services farm by a collective of hackers based in Beijing and Rome emanate from the United States? It is no accident that Microsoft is calling for acknowledgement of this as part of a Digital Geneva Convention. Data and computation do not exist in a particular geopolitical place, and norms discussions that assume they do are positing that the world is flat.
The white paper further contends that “States have already demonstrated significant restraint from using cyber means against the integrity of data of financial institutions.”
The authors are clearly guessing that this is true, but it may not be. Just because we have not seen published reports of particular attacks does not mean there is an international norm against conducting them, or even that they have not commonly taken place. A better way to establish that states are restraining a naturally covert activity would be to show that various states have unilaterally said they will abstain from certain actions. Absent that, we cannot reliably say there is any such restraint, let alone a broad history of restraint in this area. In this case, it is highly unlikely that any such self-regulation on covert action exists.
Even if it did, the inability of states to control non-state actors is clearly evident, and effort expended on developing norms of state behavior to protect a fragile international financial system may be better put toward building a more resilient and protected network.
Another aspect of the Carnegie proposal is that “States would also be expected to implement existing due diligence standards and best practices, such as those outlined in the 2016 CPMI-IOSCO Cyber Guidance.”
Standards are rarely specific enough to be prescriptive in any real sense, and that Cyber Guidance is, from any practitioner’s perspective, 32 pages of “please do the basics.” It recommends having penetration testing done, keeping vulnerability management processes in place, and being ready for forensics when those things do not work.
Financial systems are often so different from institution to institution that nothing more specific would make sense to offer, especially when creating international norms that need to apply to institutions of many sizes. Putting a standard into a norms discussion is not productive, but it does generate a false sense of concreteness in the proposals, in my opinion.
That is not to say regulators and secondary insurance markets would ignore any norms process. While banks have a reasonable idea of what constitutes “personally identifiable information” thanks to many regulations, any additional regulation requiring them to treat data “protected from SIGINT modification” as a new class of special information would incur impossibly onerous costs.
It’s also hard to define modification. Is write-protecting information for a moment, so that a transfer cannot go through, considered modification? Is delaying changes to information “modification”? These techniques can be just as risky in many ways as simply overwriting information, and I have heard live attacks that use them to cause massive systemic issues discussed at public conferences.
The technical confusions just go on and on. To name just one: what if I simply change the inputs a broker sees, by overwriting pixels on their screen without changing any stored data, and they then change that data themselves by making false trades? Are we forced to define influence? Do we fall into the trap of creating norms based on intent?
Norm-setting has traditionally been about starting broad and narrowing down. In cyber, it may mean starting with the most technical example and broadening slowly.
What I’m proposing is not just a new way of creating a cyber norm around reducing systemic risk in the financial system, but a new path for creating norms in the cyber domain—one that starts from the most technical and practical standpoint, and evolves into broad principles rather than the other way around.
To that end, I propose a completely different approach to this particular problem. Instead of getting the G20 to sign onto a doomed lofty principle of non-interference, let’s give each participating country 50 cryptographic tokens a year, which they can distribute as they see fit, even to non-participating states. When any offensive team participating in the scheme sees such a token on a machine or network service, it backs off.
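To make the mechanics concrete, here is a minimal sketch of minting and verifying such a token, assuming Python with the PyNaCl library. Every field, name, and convention here is an assumption for illustration, not a spec: an administrator would issue signing keys to participating countries, each country would mint up to its 50 tokens, and an offensive team finding a token on a target would verify the signature before backing off.

```python
# Hypothetical sketch of the token scheme described above. Illustrative
# only: key issuance, quota enforcement, and token placement conventions
# are all assumed here, not specified.
import json
import time

from nacl.signing import SigningKey  # PyNaCl (Ed25519 via libsodium)

# In practice, the scheme's administrator issues and registers these keys.
country_key = SigningKey.generate()
country_pub = country_key.verify_key

def mint_token(signing_key, country, serial, year):
    """Mint one of a country's yearly tokens (serial 1..50)."""
    claim = json.dumps({
        "country": country,
        "serial": serial,              # the 50-per-year quota is enforced
        "year": year,                  # out-of-band by the administrator
        "issued_at": int(time.time()),
    }).encode()
    return bytes(signing_key.sign(claim))  # Ed25519 signature + claim

def verify_token(verify_key, token, year):
    """Check a token found on a target; raises BadSignatureError if forged."""
    claim = json.loads(verify_key.verify(token))
    return claim["year"] == year and 1 <= claim["serial"] <= 50

# A defender places the token on a protected machine or service; an
# offensive team that encounters it verifies before backing off.
token = mint_token(country_key, "XX", serial=7, year=2017)
assert verify_token(country_pub, token, year=2017)
```

A real deployment would also need revocation, a registry of public keys, and conventions for where tokens live (a well-known file, a DNS record), but the core check is a single signature verification.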
While I hesitate to provide a full protocol spec for this proposal in a Lawfare post, my belief is that we have the capability to do this, in terms of both policy and technology. The advantages are numerous. For example, this scheme works at wire speed, and it is much less likely to require complex and ambiguous legal interpretation.
In other words, the way to reduce systemic risk in cyberspace is very similar to what practitioners have done for twenty years. Let’s use the technical facets of capture-the-flag games and penetration tests to form a whole new policy language, rather than futilely trying to port our old language to this new domain.