The Limits of the Panopticon
The modern world is a virtual panopticon, a surveillance state that records huge quantities of data about our every action and provides an easily retrievable metadata record of our movements, our contacts, our purchases, and our activities. This panopticon has two potential functions: prospective and retrospective. The prospective function is to stop the bad guys before they do the bad thing. The retrospective function allows intelligence and law enforcement authorities to comb through data after an event and piece together what happened. The emotional resonance of the prospective function is highest following an event like the Paris terrorist attacks: it allows proponents of increased surveillance authorities to say “If only…”
Details are still emerging, but we are now starting to get a picture of how the Paris attackers operated and how, despite being known to authorities, they evaded detection. These attacks demonstrate that, whatever the investigative benefits of the retrospective panopticon, the prospective panopticon is inherently limited. Put simply, the prospective panopticon failed in Paris (as it fails before every successful terrorist attack), and the reasons why probably can’t be fixed.
To begin, consider the retrospective panopticon. This panopticon is so powerful that even the CIA cannot reliably evade it; at least two of its operations have been attributed through cellphone metadata records. Once a suspect is identified, it is relatively straightforward to reconstruct movements, associates, patterns of activity, and many communications. This explains the remarkable success of French and Belgian authorities in the aftermath of the attacks at quickly identifying suspects and known associates and mapping their movements.
But the success of this retrospective panopticon only highlights the failure of the prospective panopticon. If the individuals in Paris were known to authorities, why weren’t they stopped?
First, the sheer volume of “known radicals” (at least 5,000) makes prospective monitoring impossible. How does one effectively monitor 5,000 individuals and identify who among them will pose an actual threat? After all, most never will. It didn’t matter that Salah Abdeslam used his own name and credit card when booking his hotel room; Abdeslam was simply one of thousands flagged as maybe or maybe not posing a threat.
Even reducing the volume of targets may be insufficient. Assuming authorities were able to focus on 500 or 50 individuals instead of 5,000, the communication patterns of a terrorist cell are remarkably similar to those of any family or group. Unless authorities know that an individual is actively (rather than potentially) dangerous, or can intercept the contents of a communication that makes a threat explicit, electronic monitoring provides little prospective benefit.
But the communications of even a minimally proficient terrorist provide little value, because human codes are often employed. We now know that final coordination took place over unencrypted SMS, but unless one has already identified the terrorist cell and at least some basic details of the plot, intercepting an SMS that says "On est parti on commence" (roughly, “Let’s go, we’re starting”) provides little actionable intelligence.
Which brings us to cryptography, or rather the lack thereof. The Paris attackers were individuals who knew one another and lived in the same neighborhoods. Meeting in person was not only feasible but unsuspicious. You don’t need PGP when you have IRL (In Real Life). After all, low-tech methods (face-to-face meetings and couriers) provide the best protection against electronic surveillance. And even when physically followed, it is not unusual for someone to meet a brother, friend, or cousin in a loud public space.
This is all to say that providing more electronic surveillance powers to the security services won’t fix the actual problem. Sure, it ups the power of the panopticon’s retrospective analysis, but that is not what failed here. To the contrary, the retrospective panopticon appears to be working quite well under current authorities. Likewise, data retention rules such as those proposed in the United Kingdom will not help prospective investigations; you still need to know who your suspect is to know which data to examine.
It would be a mistake to allow proponents of increased surveillance authority to use a failure of the prospective panopticon to argue for more retrospective power. The limits of the prospective panopticon cannot be wished away by cryptographic backdoors and the like; they are inherent to the system. Rather than using Paris as an argument for increasing surveillance authority, security services should look to where they can overcome the panopticon’s failings. For example, authorities should redouble their focus on human intelligence. Here, the relatively large cell size (perhaps 20 or more, based on the number of arrests) made the conspiracy more vulnerable to infiltration, and the security services already have all the legal authorities necessary to carry out this mission.
The electronic panopticon is attractive because it is often effective and inexpensive by design. But Paris presents an opportunity to acknowledge its limitations and recognize that making it “stronger” will not actually make it more effective.