
Going the Extra Mile: What It Takes to Be a Responsible Cyber Power

Max Smeets
Wednesday, May 11, 2022, 8:01 AM

The United Kingdom is aspiring to become a responsible democratic cyber power, but this is not without cost.

The Palace of Westminster, which houses the Parliament of the United Kingdom. (Jorge Láscar, https://flic.kr/p/PZYa9t; CC BY-SA 2.0, https://creativecommons.org/licenses/by-sa/2.0/)

Published by The Lawfare Institute
in Cooperation With
Brookings

In 2021, the U.K. government declared in its Integrated Review of Security, Defence, Development and Foreign Policy its intention to become a “responsible democratic cyber power.” What exactly the U.K. envisages is unclear. But, at a minimum, the U.K. likely seeks to operate strategically, clearly connecting its cyber “means” to political “ends”; act in accordance with international law, principally by minimizing collateral damage; and carefully coordinate its military endeavors with its intelligence activities. It wants to be more active but, equally, distinguish itself from other active powers in cyberspace—like Russia and North Korea—that are behaving less responsibly.

While this aspiration may seem uncontroversial and obvious, it is not without cost. Being a responsible power means the U.K. will have to go the extra mile in how it targets computer systems or networks, minimizes collateral damage, and tests its capabilities. The result is a longer, costlier and potentially frustrating process, which could ultimately reduce effectiveness and performance.

The Targeting Process

Acting responsibly in cyberspace begins at the targeting stage of a cyber operation. Two types of targeting processes can generally be distinguished. The first is tool-centric and generally follows this pattern: “I have this capability. Against whom can I use it, and what can I use it for?” This often leads to opportunistic activity. A responsible cyber power would more often limit itself to the second type of targeting process, a target-centric mission: “I would like to target this entity. How can I do it?”

The planning of these target-centric operations is more difficult. Once the target is identified, states can rely only in part on the cyber capabilities they have on the shelf to choose from. As James McGhee, the legal adviser for U.S. Special Operations Command North, described: “[O]perational use requires much more than simply loading [cyber capabilities] and sending them on their way. Our operators must first know and understand the target network, node, router, server, and switch before using any cyber capability against them.” As a result, “[i]t is generally impractical to use offensive cyber operations because, contrary to the speed at which they are carried out, planning these operations generally takes more time than planning conventional, kinetic operations.”

The preparatory process of a target-centric operation can be not only tedious but often also futile. As former National Security Agency and CIA Director Michael Hayden reflects in his memoir, “To attack a target, you first have to penetrate it. Access bought with months, if not years of effort can be lost with a casual upgrade of the target system, not even one designed to improve defenses, but merely an administrative upgrade from something 2.0 to something 3.0.”

Compounding these difficulties, a responsible cyber power would also need to put a review process in place involving stakeholders from government, military and intelligence organizations, to make sure all parties have an opportunity to raise concerns about a proposed operation before execution, including negative operational spillovers. This alignment can be a drawn-out process, and by the time it concludes the window of opportunity may have passed.

Minimizing Undesired Collateral Damage

A responsible cyber power would minimize undesired collateral damage when targeting a computer system or network. There are two types of first-order collateral effects. The first form of direct collateral damage happens when computer systems and networks are unintentionally infected. This most commonly happens with self-replicating malware, such as the Morris worm (the world’s first internet worm). The second form of direct collateral damage happens when a payload is unintentionally executed on computer systems and networks, which can, for example, lead to unexpected data exposure or data modification.

Minimizing the first type of collateral damage can be complicated given that systems and networks are often connected in complex ways. Preventing unintentional payload execution, the second form, can also be challenging, as it requires intimate knowledge of the target’s configuration.

This means that the attacker has to put controls in place to limit propagation and restrict behavior, such as ensuring that the malware triggers only on known target configurations (often obtained through detailed reconnaissance).
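The two controls described above can be sketched in code. The following is a minimal, illustrative Python sketch, not any real implementation: the profile values, function names, and the hop-count cap are all invented for the example. It shows the gating logic only, namely a fingerprint check that must pass before any payload behavior is permitted, and a propagation limit.

```python
# Illustrative sketch only; all names and values are hypothetical.
# Idea: a payload "guard" refuses to act unless the host matches a
# fingerprint gathered during reconnaissance, and spreading is capped.

from dataclasses import dataclass


@dataclass(frozen=True)
class TargetProfile:
    """Attributes of the intended target, learned via reconnaissance."""
    hostname: str
    os_release: str
    required_artifact: str  # e.g., a vendor driver only the target has


def fingerprint_matches(profile: TargetProfile,
                        hostname: str,
                        os_release: str,
                        artifacts: set[str]) -> bool:
    """Return True only when every observed attribute matches the profile.

    Any mismatch means the host is not the intended target, so the
    payload must stay dormant (restricting behavior)."""
    return (hostname == profile.hostname
            and os_release == profile.os_release
            and profile.required_artifact in artifacts)


# Limiting propagation: a hard cap on how many hops from the initial
# infection the code may spread (the cap itself is an invented value).
MAX_HOPS = 3


def may_propagate(hops_so_far: int) -> bool:
    """Allow spreading only while under the hop budget."""
    return hops_so_far < MAX_HOPS
```

The design point is that both checks are conservative by default: an unknown host or an exhausted hop budget results in inaction, which is the behavior that minimizes unintended infection and unintended payload execution.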

Testing, Testing and Testing

Given how a responsible cyber power targets and minimizes collateral damage, a great deal of testing of exploits and malware must be carried out before use. To learn whether an exploit might stall important programs, crash a computer, or simply fail to work, a state will need to set up testing infrastructure that emulates the targeted computer systems.

Indeed, this thorough testing was key to the success of the Stuxnet worm, which infected Windows machines worldwide, including systems in the United States, yet caused physical destruction only at the uranium enrichment centrifuges of the Natanz facility in Iran. To avoid detection (especially with the first attack) and to minimize collateral damage, the operation’s architects needed thorough testing to establish whether the malware could do what it was intended to do. The U.S. therefore had to produce its own P-1 centrifuges, perfect replicas of the variant the Iranians used at Natanz. At first, small-scale tests were conducted using centrifuges stored at the Oak Ridge National Laboratory in Tennessee, surrendered by Muammar Gaddafi in late 2003 when he gave up Libya’s nuclear program. The tests grew in size and sophistication, with parts obtained from small factories around the world. As David Sanger reports, at some point the U.S. was “even testing the malware against mock-ups of the next generation of centrifuges the Iranians were expected to deploy, called IR-2s, as well as successor models, including some the Iranians still are struggling to construct.”

Uneven Playing Field

Cyberspace is not a level playing field. An actor that does not mind acting irresponsibly will find it much easier to conduct cyber operations. This is in part what makes countries like Russia and North Korea particularly dangerous in the cyber domain, as WannaCry and NotPetya have shown. For countries that hold themselves to a higher standard, that do care about operating responsibly, the ratio of resource input to effect output is far less favorable: achieving even (intentionally) minor effects demands far more effort. The ideal of responsibility therefore entails a certain handicap, the need to go the extra mile, which could thwart the other component of the U.K.’s ambition: to be a cyber power. It is this tension that makes the U.K.’s ambition so noteworthy, admirable and critical to the future of cyberspace.


Max Smeets is a senior researcher at the Center for Security Studies (CSS) at ETH Zurich, director of the European Cyber Conflict Research Initiative, and author of “No Shortcuts: Why States Struggle to Develop a Military Cyber-Force”, published with Oxford University Press and Hurst in May 2022.
