Active Cyber Defense and Interpreting the Computer Fraud and Abuse Act
Published by The Lawfare Institute in Cooperation With
In the cybersecurity field, the term “active defense” is often used in a variety of ways, referring to any activity undertaken outside the legitimate span of control of an organization being attacked; any non-cooperative, harmful or damaging activity undertaken outside such scope; or any proactive step taken inside or outside that span of control. As most Lawfare readers know, activities outside the legitimate span of control are quite controversial from a policy standpoint, as they can implicate the Computer Fraud and Abuse Act, or CFAA, which criminalizes both gaining access to computers without authorization and exceeding authorized access.
This logic suggests to many that “hacking back”—which might well be defined as a counter-cyberattack on an attacker’s computer—would violate the CFAA. That is, even if A gains unauthorized access to B’s computer, any action taken by B on A’s computer would violate the CFAA since A would not have given B authorization for access. This article will offer some technical commentary on the implications of interpreting the CFAA that way.
Consider a scenario in which B deploys a honeypot that attracts attackers interested in compromising B’s network and associated devices. When attacker A implants himself in a honeypot system belonging to B, A’s tools are generally sending some data (e.g., reconnaissance data) back to A’s home systems and are using B’s resources to do so.
Under this set of facts, nothing should prevent B from corrupting some or all of the packets that are being sent back to A. The packets sent back to A won’t ever execute anything by themselves. But A will store and ultimately run queries on these corrupted packets—and depending on the methods used for these queries, they may cause some trouble in A’s machines.
In this scenario, A is proactively pulling data from B, and B is manipulating the flow or content of the data using B’s resources. A is the party utilizing B’s resources to create data packets and pull them back to A’s computers, and B never actually proactively executes anything against A.
Some may ask: how does B know it is not sending bad packets to innocent intermediary machines that have been recruited as bots? It doesn’t—but that does not matter as long as the intermediary machines are not querying the data. It is more likely that A pulls the corrupted packets from its bots to its own operational computers, where A then runs analysis.
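The corruption step described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not a real defensive tool: the function name, the decoy fields, and the corruption strategy are all our own assumptions. The point of the sketch is the legal-technical one made in the text: B alters data only inside B’s own environment, and any failure occurs later, on A’s side, when A’s own tooling parses what it chose to exfiltrate.

```python
# Hypothetical sketch of the honeypot packet-corruption idea. B never
# executes anything on A's systems; B only damages decoy data inside
# B's own honeypot before A pulls it out.
import json
import random

def corrupt_record(record: dict, seed: int = 0) -> bytes:
    """Serialize a decoy honeypot record, then damage it so it is no
    longer well-formed JSON (or even valid UTF-8)."""
    rng = random.Random(seed)  # deterministic for repeatability
    raw = bytearray(json.dumps(record).encode("utf-8"))
    for _ in range(max(1, len(raw) // 16)):
        raw[rng.randrange(len(raw))] ^= 0xFF  # flip a few random bytes
    raw[0] = 0x84  # stray UTF-8 continuation byte: guarantees a parse failure
    return bytes(raw)

# Decoy credentials planted in the honeypot for A to "steal."
decoy = {"user": "admin", "password_hash": "5f4dcc3b5aa7..."}
payload = corrupt_record(decoy)

# B does nothing further. If A later runs a naive analysis query over the
# exfiltrated payload, the parse fails on A's machines, not B's:
try:
    json.loads(payload)
except ValueError:
    print("attacker-side parse failed")
```

The design choice mirrors the argument in the text: the corrupted bytes are inert in transit and at rest; only A’s decision to query them produces any effect.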
Work that one of us (Falco) has been conducting at MIT aims to illustrate the technical feasibility of the approach described above. It remains unclear whether the CFAA can properly be interpreted in this manner, though we believe the argument is at least colorable. To the best of our knowledge, this interpretation has not been tested or even advanced in the courts.
Note that the scenario involving manipulation of the packets being exfiltrated to A’s computer is somewhat different from one in which A exfiltrates a poisoned PDF file from B that executes when the file is opened on A’s computers. Although in both cases, B has set a kind of digital booby trap whose damaging potential is activated and released through an action of A, the packet manipulation scenario entails an active, direct, and continuing connection to A’s computer that is initiated by A. By contrast, in the poisoned PDF file scenario, there is not necessarily an active connection between A and B at the time the program in the PDF file executes. The honeypot scenario thus requires only a conservative narrowing of the CFAA’s prohibitions on unauthorized access—indeed, a more conservative step than allowing a poisoned PDF file even for beaconing purposes.
Should such an interpretation be desirable from a policy standpoint, there are two possible routes—a legislative route and a nonlegislative one. The nonlegislative route is based on Justice Department interpretations of existing law. Such interpretations are not legally binding, but they establish the envelope within which prosecutorial activity generally takes place. Further, they do not affect civil liability directly but might well have influence in civil cases.
In the postulated honeypot scenario, no unauthorized access of A by B occurs. The scenario may nonetheless implicate CFAA Section 1030(a)(5)(A), which criminalizes “knowingly caus[ing] the transmission of a program, information, code, or command, and as a result of such conduct, intentionally caus[ing] damage without authorization, to a protected computer.” We argue that the key phrase of interest here—“knowingly causes transmission”—can reasonably be interpreted to exclude B’s modification of B’s own computing environment in this kind of scenario.
A second route is legislative. The main legislative attempt to revise the Computer Fraud and Abuse Act to allow certain limited defensive measures undertaken outside the borders of one’s own network appears to have come from Rep. Tom Graves (R-Ga.) in May 2017, when his office released a discussion draft for what his office termed the Active Cyber Defense Certainty Act. However, no further action has been taken on that bill to date. A legislative approach has many advantages, including greater legal certainty for users of information technology. On the other hand, a legislative approach is often politically more difficult when it changes existing policy—a clarification along the lines described above that explicitly allows the interpretation we advance would be a highly minimalist change to existing policy, and thus, in principle, may be easier to pass.
We believe that the technical work described above raises important policy questions: What rights do I have within my own computing environment? Does the concept of “enter at your own risk” apply to cyberspace? How does this work challenge norms for acceptable defensive behavior? At the same time, the work does not speak to the broader question of the wisdom of hack-back strategies. We hope this piece prompts a broader conversation about the law and ethics of similar automated defensive mechanisms.