Cybersecurity & Tech

Still More on Loud Cyber Weapons

Herb Lin
Wednesday, October 19, 2016, 8:48 PM

In my first post on this subject, I quoted a news story in fedscoop saying that

The development of “loud” offensive cyber tools, [that could be definitively traced to the United States and thus] able to possibly deter future intrusions, represent a “different paradigm shift” from what the agency has used to in the past.

Published by The Lawfare Institute
in Cooperation With
Brookings


I then asked why such tools were needed, when one could accomplish the same thing by a phone call to the government of the target that described something that only the true attacker would know.

I now understand better the need for such tools, and my understanding has very little to do with the reason offered in the article.

In general, a tool used for offensive cyber operations involves a penetration mechanism and a payload. It’s often been observed that tools for exploitation (intelligence collection) and for attack (destruction or degradation) make use of the same techniques for penetration and differ primarily in payload (that is, in what is done to the target after penetration has been achieved).

As the fedscoop article points out, intelligence operations need to be low, slow, and most of all, quiet—a successful intelligence operation is one that the adversary never knows has happened. This invisibility is particularly important if a given intelligence operation is to be persistent and thus able to yield useful information for a long time.

By contrast, a military operation (such as one conducted by U.S. Cyber Command) is supposed to be noticed by the adversary—if it has no effect, it’s been unsuccessful. A cyber operation for attack has to presume that the adversary takes notice that something bad has happened, and thus will be alerted and motivated to find the cause of the bad thing that just happened to it.

Thus, the adversary may well find remnants of the attack tool—and knowledge of the details of this tool may provide it with signatures that it can use to identify other similar tools. If the attack tool shares important similarities with intelligence-gathering tools, that knowledge puts those latter tools at risk of being discovered. For this reason, it is desirable for attack tools to be as different as possible from intelligence-gathering tools. Their payloads will be different, by definition. But significant differences in penetration mechanisms (and tactics) as well will inhibit discovery of intelligence-gathering tools.

So it’s not the “loudness” of the attack tool that is important—it’s the fact that it is used with the intention of causing damage, and the adversary is likely to notice that damage. If the adversary never noticed, that would be just fine, but the attacker can’t make that presumption. The important characteristic is that attack tools be as different as possible from intelligence-gathering tools so as not to jeopardize the latter’s covertness.

I don't know if this is the complete story (for example, there may be institutional issues that divide U.S. Cyber Command and the National Security Agency, or legal issues such as the Title 10/Title 50 separation, so the development of separate toolkits may be desirable from those points of view), but I still haven't been able to find a good reason that is related to attribution.

I welcome further comments on this subject.


Dr. Herb Lin is senior research scholar for cyber policy and security at the Center for International Security and Cooperation and Hank J. Holland Fellow in Cyber Policy and Security at the Hoover Institution, both at Stanford University. His research interests relate broadly to policy-related dimensions of cybersecurity and cyberspace, and he is particularly interested in and knowledgeable about the use of offensive operations in cyberspace, especially as instruments of national policy. In addition to his positions at Stanford University, he is Chief Scientist, Emeritus for the Computer Science and Telecommunications Board, National Research Council (NRC) of the National Academies, where he served from 1990 through 2014 as study director of major projects on public policy and information technology, and Adjunct Senior Research Scholar and Senior Fellow in Cybersecurity (not in residence) at the Saltzman Institute for War and Peace Studies in the School for International and Public Affairs at Columbia University. Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990), where his portfolio included defense policy and arms control issues. He received his doctorate in physics from MIT.
