Can Technologists Talk Lawfare?
I'm a computer scientist. My focus is on network security and network measurement. Cyber-crime, high-speed attacks, network intrusion detection, and detecting censorship are all part of my research. Research highlights from the past couple of years include work on Bitcoin, mapping China's "Great Cannon" attack tool, understanding how the Chinese detect Tor, and evaluating the extent of the Heartbleed vulnerability. I've never written a law review article or a legal brief, although I have put my name to a couple of declarations and have a hobby of following computer-security cases on PACER.
I'm very far from the sort of person who typically reads Lawfare (although I encourage others in my field to read it), and, with a few exceptions, I'm pretty far in background and training from the people who write it.
But it seems to me that a serious dialogue between Lawfare types and people like me is critically important. FBI Director Jim Comey has said we need a serious national conversation on encryption, and though I largely disagree with his policy ambitions, I completely agree with that call, and not just on encryption but on a lot of other issues too.
Policy matters. We technologists can diagnose problems, but solutions are often social, and it is policy, not technology, that can solve social problems. By explaining the technical issues that can affect policy decisions, I hope to make a policy difference, particularly in the areas of surveillance and encryption. By engaging with policy folks with very different sensibilities from my own, I hope both to educate them about the technical costs and opportunities of different policy options and to learn about the non-technical constraints they face.
My particular focus on surveillance arises from a simple observation: that the technologies deployed for surveillance by the NSA, the tools deployed by the Chinese to censor the Internet, and the intrusion detection systems (IDS) used by thousands of companies all share a common heritage and function. They may not be morally similar, but they are technically similar: They all seek to examine the network traffic, reconstruct the communication, extract the content, and analyze the resulting semantics.
These systems may differ in some technical details, with slightly different priorities and different operating policies, but under the hood they not only perform the same tasks but often even use the same hardware. It's simply a matter of what is best termed the physics of computation: there may be only one or a few conceptual ways to solve a particular problem. Operators of these systems may even think the same way: Lawrence Berkeley Laboratory's IDS deployment records three months of raw traffic. So an understanding of censorship and IDS systems transfers directly to an understanding of surveillance systems.
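To make that shared pipeline concrete, here is a minimal, hypothetical sketch in Python using the scapy library: it captures TCP traffic, naively stitches payloads together per flow, and pulls out HTTP Host headers. The flow key, the single-keyword "analysis," and every name here are illustrative assumptions, not a description of any real deployment; actual IDS, censorship, and surveillance systems do proper TCP stream reassembly at far higher speeds and with far richer analysis.

```python
# Illustrative sketch of the common pipeline:
# capture -> flow reassembly -> content extraction -> analysis.
from collections import defaultdict
from scapy.all import sniff, IP, TCP, Raw

# (src, sport, dst, dport) -> naively concatenated payload bytes
flows = defaultdict(bytes)

def handle(pkt):
    # 1. Examine the network traffic: keep only TCP segments that carry payload.
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        return
    key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
    # 2. Reconstruct the communication (naively: no reordering or retransmit handling).
    flows[key] += bytes(pkt[Raw].load)
    # 3. Extract the content and 4. analyze its semantics (here, just the HTTP Host header).
    if b"Host:" in flows[key]:
        host = flows[key].split(b"Host:")[1].split(b"\r\n")[0].strip()
        print(f"{key[0]}:{key[1]} -> {key[2]}:{key[3]} requested {host.decode(errors='replace')}")

# Live capture requires root/administrator privileges and libpcap.
sniff(filter="tcp port 80", prn=handle, store=False)
```

Whether this skeleton is labeled an intrusion detection system, a censorship tool, or a surveillance platform depends only on what goes in the analysis step and what is done with the result.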
This is also why I believe going dark is an essential defense: we have more to lose otherwise. The NSA has no monopoly on the tools of surveillance, only some privileged vantage points. The NSA may have been the most aggressive about collecting certain data, but it is certainly far less aggressive about using it. I actually don't worry about the NSA. I worry very much that our surveillance adversaries (China, Russia, France, Israel, and others) will adopt the NSA's techniques without adopting the NSA's relative restraint.
One reason I look forward to this conversation is that I tend to exercise a lot of strategic empathy. Whenever there is a competition or a conflict, I try to understand the motivations of all sides. This is why I admit that, although I find the NSA's Internet surveillance systems offensive, if I had the agency's mandate I'd build the same thing. It's also how I predicted the NSA's use of packet injection before its public disclosure: if I were in their position, I would deploy it as well.
My own community not only sometimes fails to exercise strategic empathy but also has other intellectual weaknesses in confronting policy questions. People think of computers as magic. For far too long, my field has enabled and encouraged this belief, because it makes us the magicians, even if that is completely divorced from reality. This is a mistake.
This belief drives much of the encryption debate. Among computer scientists, there is no debate: we have a hard enough time building secure systems without deliberately adding weaknesses. The other side seems to think that if we are wizards we should simply magic harder and solve the problem. We have helped cultivate the mythology against which we now struggle politically. Only by demystifying the magic can we help people understand the constraints under which we operate.
Finally, this conversation will work best if it really is a conversation. So I'd like to urge Lawfare readers to be in touch, to ask me questions, to critique and question the ideas I advance, and to suggest topics for future posts. You can email me at nweaver@icsi.berkeley.edu. Similarly, I'm eager to meet with government institutions interested in my areas of expertise (Bitcoin, surveillance, Chinese censorship and attack tools), and I'm happy to arrange talks or informal meetings whenever I'm traveling on the East Coast.