What David Cameron Doesn't Get
Last week British Prime Minister David Cameron gave an extraordinary speech in which he urged the banning of private communications, that is, communications that the government could not listen in on even when legally authorized to do so. Cameron is not the first government official to make such a call; GCHQ Director Robert Hannigan urged the same last fall, as did FBI Director James Comey in October.
On the surface, such arguments make sense. Seeing armed men storm the editorial offices of Charlie Hebdo and kill the cartoonists who offended them gives rise to terror. That these acts occurred in the center of Paris creates the overwhelming sense that nowhere is safe---and in some sense, that is an accurate assessment. It is the price that open societies pay for their openness. Free societies will always have soft targets, and as long as guns are available and bombs can be made from easily purchased equipment, an open society will be susceptible. It is hard to shake the sense of unease and dread after the news from Paris, but as Jeremy Shapiro has pointed out here, "societies often overreact to such outrages." Prime Minister Cameron is doing exactly that and, in the process, he is ignoring what is really important.
It is tempting to believe that if only we understood the terrorists' communications better, if only we had all their communications and could read them perfectly, we could prevent such horrors in the future. It is tempting to think about banning the private use of encryption and requiring that government always be able to access private communications under legal authorization. But if communications are the issue, the real national-security threat comes not from two armed gunmen but from the dangers we face through unsecured communications and computer systems.
In December, Germany's Federal Office for Information Security reported that attackers had gained access to a steel mill's control system, compromising components until a blast furnace could not be shut down. Massive damage ensued. We know of other such events, including, of course, Stuxnet, the remote attack on an Iranian nuclear facility. The US, which relies heavily on computer control systems to manage its physical infrastructure---dams, the power grid, oil and gas pipelines, etc.---is particularly vulnerable to such attacks. Nor are the vulnerabilities limited to large infrastructure. With the coming Internet of Things, whether automobiles (self-driving or simply networked), sensors controlling the flow of traffic, or medical devices, the continued ability to hack such devices means a growing ability to remotely cause physical harm to individuals or small groups.
Security risks are not limited to physical harm. The loss of industrial information and intellectual property---"the greatest transfer of wealth in history," according to General Keith Alexander---poses long-term, serious national-security dangers. Deputy Defense Secretary William Lynn III has written, "Although the threat to intellectual property is less dramatic than the threat to critical national infrastructure, it may be the most significant cyberthreat that the United States will face over the long term," while President Obama has called cybersecurity "one of the most serious economic and national security challenges" confronting the US. Unsecured infrastructure and systems are where our real security risks lie, not in the threats posed by two balaclava-clad men carrying AK-47s (horrifying though they are). There are many aspects to the security solution, including, of course, the ability to listen to the plans of our enemies, whether they be other nation states or non-state actors. But the more crucial one is securing our systems and making ourselves less vulnerable. That's where communications security comes in.
Security is hard, and people slip up all the time. Even those with secure communication devices sometimes use unsecured personal devices when they should not. The communications security problem would be simple if we could draw a bright line between the government systems that need securing and the private-sector ones that government, under legal authorization, could monitor. But that's an inaccurate model of the US economy. So much of the nation's economy---and the same is true of the UK, France, Germany, etc.---is in the private sector that securing the nation means giving the private sector the means to secure its communications.
The way to do this is to secure communications end to end using encryption (indeed, a recently leaked US government document shows that encryption's use is crucial). That's why in 2000 the US government loosened export controls on cryptography in order to permit its wider use, both abroad and at home. Although communications privacy certainly figured in this decision, the underlying concern was security.
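To make concrete what securing communications end to end means, here is a minimal sketch in Python (a hypothetical illustration using the third-party cryptography package; the names, key-exchange details, and message are my own assumptions, not drawn from any actual product or proposal): each endpoint generates its own key pair, the two sides derive a shared key from public information alone, and everything that crosses the network is ciphertext.

```python
# Minimal sketch of end-to-end encryption (illustrative only).
# Requires the third-party "cryptography" package: pip install cryptography
import base64

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each endpoint generates its own key pair; private keys never leave the device.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

def derive_key(my_private, their_public) -> bytes:
    """Derive a shared symmetric key from a Diffie-Hellman exchange."""
    shared_secret = my_private.exchange(their_public)
    raw = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"e2e-demo").derive(shared_secret)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a base64-encoded key

# Only public keys cross the network; both sides derive the same key.
alice_key = derive_key(alice_private, bob_private.public_key())
bob_key = derive_key(bob_private, alice_private.public_key())
assert alice_key == bob_key

ciphertext = Fernet(alice_key).encrypt(b"meet at noon")        # what the network sees
assert Fernet(bob_key).decrypt(ciphertext) == b"meet at noon"  # only endpoints can read
```

The point of the sketch is what is absent: no carrier, platform, or mandated access point holds anything capable of decrypting the message.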
That was not the only reason for the change in export control regulations. Another user of private-sector secure communications technology is the government itself. Over the last two decades, the US government, and specifically the Department of Defense, has increasingly relied on commercial off-the-shelf (COTS) equipment for its information technology needs. There are any number of reasons for this, including legal requirements (the Clinger-Cohen Act), the speed of commercial innovation, and military coalitions that lack the longevity of NATO (which make the US government reluctant to share its secure communications technologies with partners). The result is that the military uses commercial communication products. Sometimes these products were deployed straight from Radio Shack shelves to the Iraq War; sometimes they were incorporated into secure DoD communications equipment. Either way, the crucial point is that because these tools were to be used in secure communications by the US military, the government supported the development of secure civilian communications products (for more details, see my article in the Journal of National Security Law and Policy).
One idea that periodically resurfaces is that secure communications tools could be built with access for government under proper legal authority. Such an idea, Clipper, was tried in the 1990s. Clipper was sharply criticized on privacy grounds, but it had even more problems from a security vantage point. The issue, as a well-known report observed, is that "All key-recovery systems require the existence of a highly sensitive and highly-available secret key or collection of keys that must be maintained in a secure manner over an extended time period." That's a security nightmare. Centralizing key storage creates a rich target, and rich targets are particularly valuable to attackers. One example is the Chinese hackers who gained access to Google's database of US surveillance targets, thereby discovering which of their agents had been unmasked by US intelligence.
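To see why, consider a deliberately oversimplified sketch of key escrow (a toy model of my own construction for illustration; it does not reproduce Clipper's actual design): every session key is copied into one central store, so an attacker who breaches that single store, without breaking any cryptography or touching any endpoint, can read every message.

```python
# Toy model of key escrow (illustrative only; not Clipper's actual mechanism).
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

escrow_db: dict[str, bytes] = {}  # the centralized, highly available key store

def send_message(sender: str, plaintext: bytes) -> bytes:
    """Encrypt a message, depositing a copy of the key for lawful access."""
    key = Fernet.generate_key()
    escrow_db[sender] = key        # mandated escrow copy
    return Fernet(key).encrypt(plaintext)

ct_alice = send_message("alice", b"quarterly earnings draft")
ct_bob = send_message("bob", b"grid-control passwords")

# An attacker who breaches only the escrow database unlocks everything at once.
stolen = dict(escrow_db)
assert Fernet(stolen["alice"]).decrypt(ct_alice) == b"quarterly earnings draft"
assert Fernet(stolen["bob"]).decrypt(ct_bob) == b"grid-control passwords"
```

The Google incident fits the same pattern: the surveillance-target database was valuable to attackers precisely because it aggregated so many secrets in one place.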
The difficulty with requiring that communication systems be built with intercept capabilities is that doing so necessarily makes them more complex, and complexity is the bane of security. There is also a well-known example of failure here: the Athens Affair, in which parties unknown used a switch feature built to accommodate legally authorized surveillance to listen, for ten months, to the private communications of one hundred senior members of the Greek government, including the Prime Minister and the head of the Ministry of Defense.
Prime Minister Cameron's theory is that the government should be able to access communications under legal authorization, while communications otherwise remain secure. But complexity and experience show that in practice it doesn't work out that way. Given the risks posed to infrastructure and industry---both crucial aspects of any nation's security---the security argument favors permitting secure communications broadly. This doesn't mean that law enforcement won't be able to wiretap terrorists and criminals, but it does mean they will have to work harder to do so.
Prime Minister Cameron said, "The first duty of any government is to keep our country and our people safe." That is absolutely correct. Secure communication systems make it harder for intelligence services to listen in on terrorists, but they also make it harder for criminals, non-state actors, and other nation states to listen in on citizens. Trading away the security of tomorrow's communications systems for the government's ability to listen or hack in today ultimately undermines security. Our focus should be on securing private-sector communications---not the reverse.
Susan Landau is a Professor of Cybersecurity Policy at Worcester Polytechnic Institute. Previously she was a Senior Staff Privacy Analyst at Google and a Distinguished Engineer at Sun Microsystems, and has taught at the University of Massachusetts at Amherst and at Wesleyan University. She is the author of Surveillance or Security? The Risks Posed by New Wiretapping Technologies (MIT Press, 2011) and co-author, with Whitfield Diffie, of Privacy on the Line: The Politics of Wiretapping and Encryption (MIT Press, rev. ed. 2007).