
The Subversive Trilemma in Cyber Conflict and Beyond

Lennart Maschmeyer
Wednesday, August 3, 2022, 8:01 AM

Cyber operations can be used in many strategic contexts, yet because they rely on secret exploitation, they are invariably subject to the trilemma. Consequently, cyber operations tend to become less effective—and thus less attractive options—the more important and urgent an actor's strategic goals become.


Published by The Lawfare Institute in Cooperation With Brookings

In a Lawfare post, Jason Healey provides a thoughtful critique of my recently published theory on the “subversive trilemma.” The key argument I make (summarized here) is that cyber operations are instruments of subversion that hold great strategic promise but suffer from a set of three interlinked operational constraints: speed, intensity of effects, and control. Because under a given set of conditions actors can typically only improve one of these variables at the cost of losing out on the remaining ones, these constraints pose a trilemma for sponsors of cyber operations and limit their strategic value. 

Healey contends that this trilemma applies to a wider set of activities than I acknowledge, and that secrecy needs to be included as a fourth constraint—expanding the trilemma into a quadrilemma. In support of this wider argument, he makes five individual points that offer helpful interventions to clarify the scope of the theory and its utility. Below I consider each in detail. In doing so, I specify the condition for the trilemma applying beyond cyber operations: reliance on some mechanism of secret manipulation. I also show why the same characteristics that predestine cyber operations as means of surprise also limit the strategic scope of their impact. The trilemma limits the strategic value of cyber operations by rendering them too slow, too weak, or too volatile as means of attaining strategic goals in most circumstances. Consequently, cyber operations tend to become less effective—and thus less attractive options—the more important and urgent an actor's strategic goals become. As the analysis below reveals, rather than challenging the theory, Healey's key points cement its utility.

First, Healey notes the trilemma applies not only to subversive operations but to “almost any malicious cyber activity.” I agree—and argue that, by definition, malicious cyber activity employs mechanisms of subversion. Subversion means secretly exploiting systems to manipulate them toward producing outcomes that are not intended by their designers or users. This mechanism, rather than the strategic objective behind its use or the strategic role of the operation involved, poses the operational challenges that result in a trilemma.

Second, following from the first point, Healey asserts that the trilemma would also apply to military operations. This assertion misses a key point. The constraints of the trilemma are caused by the challenges of secretly exploiting systems (the subversive mechanism). Military operations rely on a different mechanism: projecting force using weapons of various kinds. The choice of targets might (and should) exploit weaknesses in adversary defenses, but weapons have a specific destructive capacity determined by the laws of physics that remains constant regardless of the target and can be calculated precisely, allowing for the prediction of effects on targets of different types. Furthermore, this destructive potential retains its lethality whether or not the victim knows it is coming.

In short, military operations are not inescapably subject to the trilemma in the way that subversion is, because they rely on force projection rather than secret exploitation. Of course, speed, intensity of effects, and control over effects are relevant in military operations, yet they are neither constraints posed by a mechanism of exploiting targets, nor do they interact in the same way. Consider time: Yes, better planning for a military operation may enhance the effectiveness of strikes, yet it does not affect the destructive potential of the weapons used. Neither does it impact one’s control over the effects in the same way. You own the weapon, you control it—and your adversary has to wrest it from your control through force of their own. Even in a clandestine military operation that strives to remain hidden from the enemy, one’s troops retain control over the weapons and their capacity to produce effects on targets when the adversary discovers them. The ability to produce effects does not depend on control over target systems in the way it does in subversion. The logic of force projection is fundamentally different from that of subversion, and so is the logic behind the trade-offs involved.

Healey does point at an important issue, though. While force projection depends on material capabilities with preset destructive potentials, an actor’s total mass of capabilities (and thus its total destructive potential) does not predict the outcome of battle. Rather, the way actors employ their forces at the operational level is a far better predictor of battle outcomes—underlining the importance of strategy in guiding the choice of means in the pursuit of ends. Successful strategy, as Edward Luttwak has shown, often involves paradoxical action: choosing inefficient methods such as attacking in bad weather or at night, or taking a long and difficult mountain pass rather than the direct approach. Such paradoxical action aims to catch the enemy by surprise, and crucially, as Luttwak underlines, achieving surprise requires both secrecy and deception. Here, the linkage to intelligence and subversion becomes clear, and Jon Lindsay and Erik Gartzke accordingly define deception as secret and sly manipulation of the adversary to unwittingly provide benefits to the attacker.

Since they rely on mechanisms of secret manipulation, as subversion does, it is reasonable to expect wider strategies of deception, and the means employed to implement them, to be subject to the trilemma. Hence, cyber operations are one specific instrument of subversion, just as subversion is a specific subset of intelligence operations. Overall, they are means of implementing strategies of deception, which are as relevant in intelligence contests as they are in military clashes. In other words, Healey is right that the trilemma applies far beyond cyber operations, but he misses the condition for its applicability: reliance on secret manipulation.

A Quadrilemma?

Healey’s third criticism focuses on the trilemma itself, arguing for an expansion into a quadrilemma. Secrecy must be included, Healey argues, because it is not a binary defining characteristic but a “threshold adversaries must achieve to be successfully subversive.” I agree that secrecy is a threshold, and a precondition for subversion to work. The trilemma conceives of secrecy exactly in this way—namely, as the maintenance of access to a targeted system until the moment of producing the intended effect through that system, without alerting the victim that subversion is taking place. This form of secrecy corresponds to what is known as a clandestine approach in intelligence operations, meaning hiding the activity itself (as opposed to a covert approach, which means obscuring the identity of the sponsor of an operation). Keeping things clandestine requires efforts to avoid discovery. These efforts, combined with the efforts involved in gaining access to adversary systems and manipulating them, I argue, in turn constrain speed, intensity, and control. That is the essence of the trilemma. Without these efforts, subversion is generally impossible. Subverting a system by attempting to exploit a known vulnerability that has been patched will not work. Neither will phishing emails that do not hide their malicious intent.

The example Healey uses to support his claim underlines this need for secrecy. To emphasize the variation in secrecy, Healey compares the SolarWinds campaign, which “maximized secrecy,” to the widespread intrusion of Microsoft Exchange servers by the hacking group Hafnium, which “cared little for secrecy.” However, there are three problems with that example. First, it is not clear by which measure these operations differed in their level of secrecy. Is Healey considering variation in covertness, or in clandestine-ness? Second, the trilemma theory is concerned with active effects operations, that is, operations that produce an effect on a target. Yet the two examples Healey chose are both espionage operations, involving passive information collection rather than active interference with a target system. These operations are of a different kind and beyond the scope of the trilemma theory. Third, even when expanding the scope of the theory to include espionage, these operations still underline the two key challenges that produce the trilemma: dependence on vulnerabilities in target systems and the corresponding need to avoid the victim’s discovery of the activity before one has produced the intended effect.

The hackers behind the intrusion into the Exchange servers (likely associated with the Chinese government) initially pursued espionage on a small scale using previously unknown vulnerabilities. But cybersecurity firm Volexity detected the group’s efforts and published key details of the campaign, including the vulnerabilities used. Consequently, the hackers expanded their efforts and included additional vulnerabilities—in this case, again valuable zero-day vulnerabilities that the software vendor (Microsoft) had not yet discovered—to target a broader group of victims. Accordingly, there is no variation in secrecy concerning the type of vulnerability exploited. The only change is an expansion in the scale of targeting, with the associated increased risk of being discovered. And indeed, Microsoft discovered this campaign swiftly and issued a patch to remove the vulnerabilities targeted by the hackers. Hence, the trilemma theory captures these dynamics—and it is not clear what adding secrecy as a variable would explain that cannot be otherwise explained.

Large-Scale Attacks and Technological Change

Healey’s fourth point concerns the scale of operations, suggesting a distinct category of “one-to-multitude” attacks that have unprecedented utility. This argument aligns with prevailing expectations that cyber operations expand the scale of intelligence operations to such an extent that they provide strategic value that was previously unattainable, transforming conflict and competition. I agree that cyber operations enable potentially vast scale of effects, but a large scale does not equate to large utility. That is the crux of the trilemma: The larger the scale, the greater the risk of control loss. The resulting unpredictability and volatility limit strategic value—and the greater the scale of effects, the greater the potential negative implications for national security when things do not go as planned.

As such, one-to-multitude attacks fit directly into the trilemma, both illustrating its utility and underlining the strategic limitations of cyber operations. The larger the scale, the less clear the contributions to specific strategic goals become. Consider the scenario of Russia shutting down the internet, as Healey has previously suggested. What is the benefit? What strategic goal, beyond creating mayhem, would it fulfill? That is not clear at all. Healey suggests “it is a very 1914 kind of argument to suppose that adversaries will not burn down the internet to hurt a despised rival, preserve a regime, or defend a core national interest.” This assertion is worth unpacking: How exactly would burning down the internet hurt a despised rival without hurting oneself as much? How would it preserve a regime? And which national interests would it serve, under what circumstances? To assess strategic utility, examining technological possibilities is a useful and necessary starting point, yet it is crucial to consider relevant strategic goals and specify realistic circumstances.

The same applies to the technological changes Healey outlines (the “Internet of Things” and the rise of artificial intelligence) as potential game-changers, yet their consequences are far from clear. I expect the trilemma to continue to apply because it is intrinsic to the mechanism of exploitation that cyber operations depend on, even with these technological changes. It is certainly possible that technological change alters the weighting of the trade-offs involved, but technological change is unlikely to supersede these trade-offs. There have been many predictions that technological change will help society to overcome limitations—yet they rarely play out as expected. A more careful examination of the evidence on practical challenges is needed, rather than speculation about future developments based on opportunities opened by the technology.

Geopolitics and Variation in Strategic Contexts

Fifth, Healey suggests geopolitics could challenge the theory. Specifically, he hypothesizes that states “haven’t been willing to play for higher stakes” and may opt for riskier operations to strive for more intense effects. That is certainly plausible, yet the crucial point is that the constraints of the trilemma apply regardless of strategic context and intent. Subversion is not defined by its goal(s), but by its mechanism of secret exploitation—and cyber operations share both this mechanism and its constraints, regardless of the goals pursued. Consequently, you may want to go to war with cyberattacks, yet mounting attacks that reach the threshold of an armed attack against a specific target, and in ways that contribute to your strategic goals, is exceedingly difficult even if you let go of all restraint. Pursuing more intense effects does not negate the trilemma, but exacerbates it.

Accordingly, the volatility of cyberattacks becomes an even greater problem in the “extreme scenarios” Healey draws on to illustrate the point: the United States’ plan, codenamed Nitro Zeus, to shut down Iran’s air defenses and other infrastructure, and Israel’s Operation Orchard, an air raid against Syria. These are useful examples, but they underline the uncertainty surrounding cyber operations and their effects. Since Nitro Zeus was never implemented, there is no way to assess what its actual effects would have been. The United States intended to shut down Iran’s air defenses, communications, and power grid, but there is no way to know which of these objectives the plan would have achieved in practice.

In contrast, Israel did carry out Operation Orchard and successfully bombed targets in Syria without losing any of its planes. The role of cyber operations in this mission is unclear, however. Some aviation analysts have suggested Israel relied on classic electronic warfare and “just simply and with brute force electronically attacked and shut down the air defence” rather than carrying out a targeted cyber operation to infiltrate and disrupt Syrian systems in preparation for the attack. Others speculate that this operation saw the first use of a shadowy electronic warfare platform called Suter. Indeed, the second part of the article Healey cites also refers to this system—and makes a crucial point: While Suter is speculated to allow “hijacking” of enemy radar systems through some form of exploitation, “whether Suter technicians can actually accomplish the hijacking task without alerting the enemy operators is, not surprisingly, a question that has no definitive answer.” This uncertainty further illustrates the limits of control the trilemma predicts—and its consequences for strategic value. Accordingly, Healey is right to point out that in these (hypothetical) cases, cyber operations would fulfill a support role for the projection of force, following a different strategic logic. Yet they do not escape the trilemma. Rather, they cement its importance. 

The trilemma likely privileges cyber operations for the pursuit of lower stakes goals. Healey is exactly right to point out the importance of surprise in cyber conflict since it is the essence of strategies of deception. Yet the price of achieving surprise is the operational constraints the trilemma imposes on the means of its pursuit. Because these constraints limit strategic value, the attractiveness of cyber operations as means of pursuing strategic goals declines with the perceived importance and urgency of attaining them. As I concluded in the original article, due to their operational constraints, cyber operations are most likely to offer advantages in “long-term, low-stakes competition between adversaries” rather than in urgent crises where all is at stake—limited means are best suited for the pursuit of limited ends. 

Finally, the example of using SolarWinds as a signaling device by rebooting all affected systems just after a Putin speech is another creative hypothetical—but here, too, the strategic benefit is unclear. What would be the strategic goal? Undermining public confidence? Or sending a warning of some kind? It is far from clear under what circumstances such symbolic disruption would be worth more than the intelligence value of the data that could be exfiltrated from the compromised systems (access that would likely be lost with such disruption).

The key point remains: We can speculate about many possible cyberattacks that might succeed in different ways, yet the empirical record of their actual use documents their strategic limitations. Healey is right to argue that fully understanding the strategic implications of the trilemma requires further research, and I examine this question in a forthcoming article in the Journal of Strategic Studies. However, analysis and policy should build on evidence rather than speculation.


Lennart Maschmeyer is a Senior Researcher at the Center for Security Studies at ETH Zurich. He holds a PhD in Political Science from the University of Toronto and an M.Phil in International Relations from the University of Oxford. His current research focuses on the nature of cyber power and the relationship between operational constraints and strategic dynamics in cyber conflict. Lennart is also working on a second project compiling a dataset of threat intelligence reporting to identify potential sources of bias in the data and how these impact prevailing threat perceptions. He is a Fellow at The Citizen Lab.
