Further Reflections on NOBUS (and an Approach for Balancing the Twin Needs for Offensive Capability and Better Defensive Security in Deployed Systems)
In a previous post, I commented on the Nobody-But-Us (NOBUS) view of the world. My original post says that the real technical question raised by NOBUS is how long nobody-but-us access can be kept for a given proposed system.
Since then, I’ve received comments from a number of people who have cited one example or another that NOBUS is fundamentally flawed as a technical concept. No, it isn’t. All the examples prove is that particular instantiations of NOBUS have been flawed. They don’t prove the general case in any way.
Still, quite apart from their refusal to engage with the substance of the technical question regarding NOBUS, these comments raise other issues, and this note responds to them.
To me, the technical argument against NOBUS has three distinct parts.
Part 1 is that if NSA finds a vulnerability and uses it, others can find it and exploit it as well. There is some evidence to support this part. Emergency patches issued by vendors (e.g., http://www.theregister.co.uk/2010/08/02/emergency_microsoft_update/), of which there have been many over the years, at least suggest that a found vulnerability can be exploited on a short time scale. We don’t know whether NSA was previously aware of the vulnerability in any given case, but it may have been. Further, a strong logical case for Part 1 can be made as well—using a vulnerability potentially (perhaps usually) reveals it, and once its existence is revealed, others can exploit it.
Part 2 is that if NSA finds a vulnerability and stockpiles it (and does not use it—that’s what stockpiling means), others can still find it and exploit it as well. That’s true in principle, but I have never seen evidence to support it. Of course, what is actually in the NSA stockpile is secret, so it’s unlikely that we would know one way or another if such evidence exists. In any case, if Part 2 is operative, NSA is acting as a no-op with respect to the security of deployed systems—the outcome of Part 2 is the same as if NSA did nothing at all, or indeed if the vulnerability finders at NSA did not exist.
There’s one caveat to my Part 2 analysis that I’ve occasionally heard discussed but haven’t seen in writing. The vulnerability stockpile will be kept secret—but it too has some likelihood of being compromised, and in the long run (i.e., given infinite time), every vulnerability in the stockpile will be revealed. This point suggests the need for another risk analysis: how long does it take for a given vulnerability, kept secret in the stockpile, to be revealed? Knowing that number would help NSA decide when to reveal a vulnerability to the vendor. Keeping a vulnerability in the stockpile indefinitely makes no sense, but keeping it for a time that is short compared to the expected time of compromise, yet long enough to be potentially useful for offensive operations, might be a reasonable operating rule that balances the need to maintain offensive capabilities against the need to improve the security of deployed systems.
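To make that timing rule concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that the time until a stockpiled vulnerability is independently discovered or leaked is exponentially distributed with an assumed mean; both the distribution and the numbers are hypothetical stand-ins, not empirical estimates.

```python
import math

# Minimal sketch of the timing rule discussed above. All numbers are
# hypothetical assumptions for illustration, not empirical estimates.

MEAN_TIME_TO_COMPROMISE = 60.0  # assumed mean months until a stockpiled
                                # vulnerability is independently found or leaked

def leak_probability_before(t_months: float,
                            mean: float = MEAN_TIME_TO_COMPROMISE) -> float:
    """P(compromise occurs before time t), assuming exponentially
    distributed compromise times -- a modeling simplification."""
    return 1.0 - math.exp(-t_months / mean)

# The rule sketched above: hold the vulnerability for a window that is
# short relative to the expected time of compromise but long enough to be
# operationally useful, then reveal it to the vendor.
for hold_months in (6, 12, 24, 60):
    p = leak_probability_before(hold_months)
    print(f"hold {hold_months:2d} months -> P(compromised first) = {p:.2f}")
```

Under these assumed numbers, a six-month hold carries roughly a 10 percent chance that the vulnerability is compromised before it is revealed to the vendor; the actual distribution and its mean are precisely the empirical unknowns such a risk analysis would have to estimate.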
Part 3 is that if NSA implants a vulnerability—i.e., puts in a vulnerability that is not already present—others can find it and exploit it. Part 3 has a more serious implication than Part 2, because here NSA is affirmatively weakening the security of a system to some degree. The degree of weakening is a function of the risk analysis I proposed in my original note, which argued that if the expected time for disclosure of the secret knowledge required to gain access is long, that weakening of security may be acceptable. But there is no question that inserting a vulnerability does weaken security.
On the other hand, I would argue that a vulnerability NSA deliberately implants to grant NOBUS access would be specifically designed with features to protect its NOBUS character, and thus would be harder for others to exploit than the usual run of vulnerabilities, which are generally introduced by accident and hence are not deliberately made hard to exploit. Of course, this is an empirical question, but in any case no one has submitted evidence that a deliberately implanted NOBUS vulnerability has been exploited by others.
As in my original post, this post does not take a position on the policy desirability of NOBUS access—it merely tries to separate the relevant technical dimensions of NOBUS from the value judgments about its desirability.
Dr. Herb Lin is senior research scholar for cyber policy and security at the Center for International Security and Cooperation and Hank J. Holland Fellow in Cyber Policy and Security at the Hoover Institution, both at Stanford University. His research interests relate broadly to policy-related dimensions of cybersecurity and cyberspace, and he is particularly interested in and knowledgeable about the use of offensive operations in cyberspace, especially as instruments of national policy. In addition to his positions at Stanford University, he is Chief Scientist, Emeritus for the Computer Science and Telecommunications Board, National Research Council (NRC) of the National Academies, where he served from 1990 through 2014 as study director of major projects on public policy and information technology, and Adjunct Senior Research Scholar and Senior Fellow in Cybersecurity (not in residence) at the Saltzman Institute for War and Peace Studies in the School for International and Public Affairs at Columbia University. Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990), where his portfolio included defense policy and arms control issues. He received his doctorate in physics from MIT.