
Shane Harris on Total Information Awareness and Colorado

Benjamin Wittes
Thursday, July 26, 2012, 11:26 PM
As soon as I saw this Wall Street Journal op-ed yesterday from Holman W. Jenkins, Jr., I thought to myself--and said to Ritika--we need to hear from Shane Harris on this. Now we have. Jenkins had suggested that the Total Information Awareness program might have flagged the Aurora shooter before he struck. He writes:
Would Total Information Awareness have stopped James Eagan Holmes? You perhaps remember the fuss. That program by the Defense Department was curtailed when the Senate voted to revoke funding amid a privacy furor in 2003. The project had been aimed partly at automatically collecting vast amounts of data and looking for patterns detectable only by computers. It was originated by Adm. John Poindexter—yes, the same one prosecuted in the Reagan-era Iran-Contra scandal—who said the key to stopping terrorism was "transaction" data. For terrorists to carry out attacks, he explained in a 2002 speech, "their people must engage in transactions and they will leave signatures in this information space."
The Colorado shooter Mr. Holmes dropped out of school via email. He tried to join a shooting range with phone calls and emails going back and forth. He bought weapons and bomb-making equipment. He placed orders at various websites for a large quantity of ammunition. Aside from privacy considerations, is there anything in principle to stop government computers, assuming they have access to the data, from algorithmically detecting the patterns of a mass shooting in the planning stages? . . .

Sadly, the political blowback of 10 years ago, and the hush-hush with which data mining has proceeded since then, mean we have not been treated to a public exploration of the critical question: whether one sort of data mining originally envisioned—the kind not keyed on a specific individual—can yield actionable warnings. Psychiatric evaluations of dangerousness, we're often told, are unhelpful because too many fit the pattern who never engage in violence. Can monitoring masses of transaction data help find the real risks? Or would this also lead (as some experts surmise) to unmanageable numbers of false positives? After the Aurora theater massacre, it might be fair to ask what kinds of things the NSA has programmed its algorithms to look for. Did it, or could it have, picked up on Mr. Holmes's activities? And if not, what exactly are we getting for the money we spend on data mining?
In response, Harris (author of The Watchers, a history of data mining and the TIA program, and a features writer at Washingtonian magazine) has written a blog post that leaves little doubt that TIA would not, in fact, have stopped Holmes. The simple problem, Harris writes, is that the "imagined algorithm doesn’t exist":
It is beyond the capacity of today’s technology to peer into an ocean of anonymous electronic information—phone calls, e-mails, credit card receipts—and then detect a pattern of activity that reliably leads to one specific person planning a crime. Even transactions that in hindsight appear to be meaningfully connected to an individual will look to a computer like a mash of data, and any pattern it finds will not be unique. It will show up connected to many other people.

Jenkins enumerates the various steps Holmes is believed to have taken in the days before his crime, all of which left an electronic trace, and wonders why computers couldn’t figure out what the alleged killer was up to. “Holmes dropped out of school via e-mail,” Jenkins notes. Well, I doubt he was the only student to drop out. And doing so over e-mail isn’t novel or especially alarming. “He tried to join a shooting range with phone calls and e-mails going back and forth.” Presumably, so have a lot of people. Aren’t those two standard ways to inquire about joining a shooting range, or any other club? “He bought weapons and bomb-making equipment.” Holmes purchased his firearms legally, like a lot of other people, so that doesn’t raise suspicion. As for the bomb-making equipment, this points to an issue that pattern analysts have struggled with for years. How do you know if someone buying, say, large quantities of fertilizer is trying to build a bomb or start a gardening business? “He placed orders at various websites for a large quantity of ammunition.” I haven’t purchased rounds over the Web, but I understand it’s quite easy and, I have to presume, rather common.

None of these events, on their own or collectively, could have led investigators to preempt the killings in Aurora. They’re not specific enough, and absent an initial connection to a suspect, they’re practically useless. Buying guns and chemicals that could be used to make a bomb, as scary as that sounds, is not proof of terrorism or mass murder. Put Holmes’s data points together and you’d also have a plausible picture of a man who dropped out of school and decided to take up sport shooting, or a guy buying chemicals to start a cleaning service.
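Harris's observation that any such pattern "will show up connected to many other people" is, at bottom, a base-rate problem. A rough back-of-the-envelope sketch makes it concrete; every figure in it is an illustrative assumption, and none of the numbers comes from Harris, Jenkins, or any actual program:

```python
# Illustrative base-rate arithmetic for a hypothetical transaction-pattern detector.
# Every number below is an assumption chosen for illustration, not a real estimate.

population = 250_000_000       # people whose transaction records get scanned
true_plotters = 10             # hypothetical number of actual plotters among them
sensitivity = 0.99             # the detector flags 99% of true plotters
false_positive_rate = 0.001    # the detector wrongly flags 0.1% of everyone else

true_alarms = true_plotters * sensitivity
false_alarms = (population - true_plotters) * false_positive_rate
precision = true_alarms / (true_alarms + false_alarms)

print(f"People flagged: {true_alarms + false_alarms:,.0f}")
print(f"Chance a flagged person is a real plotter: {precision:.4%}")
# Roughly 250,000 people get flagged, and the odds that any one of them is an
# actual plotter are about 0.004% -- the "unmanageable numbers of false
# positives" Jenkins's own op-ed raises.
```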
 The problems with Jenkins's hypothesis go further, Harris argues:
A computer cannot distinguish between innocuous behavior and sinister plotting just by looking at a list of receipts. And, Jenkins might be surprised to know, that is not what Total Information Awareness proposed to do, either. Jenkins is correct that TIA was going to sift through “vast streams of data looking for red flags,” but he is oversimplifying this process and leaving out a crucial step. TIA was going to sift through data looking for transactions based on previously designed templates of behaviors one might see leading up to an act of terrorism. The TIA researchers convened a panel of terrorism and security experts to devise various plots. In one—flying a plane into a nuclear reactor—they listed all the steps a terrorist cell would have to take that might leave an electronic record, including traveling to the reactor site to conduct reconnaissance, researching the facility, transferring money for the plot, finding housing for the attackers, communicating with their handlers, and so on.

Now, if these terrorist plotters were to leave a clear, predictable signature of electronic transactions—and that’s a big if, one TIA never fully tested in the real world—that would only be the first step in preventing their attack. After the initial detection, human analysts would enter the picture, and they’d have to look for connections to other known or suspected terrorists. And in that laborious work, they’d have a greater chance of success than an investigator trying to find the next movie theater gunman. That’s because the terrorists would probably have come into the United States from a foreign country, which would mean their names would already be in the federal government’s immigration databases. At that point, intelligence analysts across the government would have access to their names, and they could try to connect the suspects to a terrorist group. As far as we know, Holmes never appeared on any government list, other than for a reported traffic incident.
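For readers curious what "previously designed templates of behaviors" might mean in the most stripped-down terms, here is a minimal, purely illustrative sketch. The event categories, the template, and the matching rule are all invented for this example; nothing here is drawn from actual TIA documents:

```python
from dataclasses import dataclass

# A purely hypothetical "plot template": an ordered list of transaction
# categories that an expert panel might expect to precede an attack.
# These categories are invented for illustration, not taken from TIA.
TEMPLATE = [
    "travel_to_site",
    "facility_research",
    "money_transfer",
    "housing_rental",
    "overseas_communication",
]

@dataclass
class Transaction:
    person_id: str
    category: str

def matches_template(transactions: list[Transaction], template: list[str]) -> bool:
    """True if the transactions contain the template's categories in order
    (other, unrelated transactions may be interleaved between them)."""
    remaining = iter(template)
    target = next(remaining, None)
    for tx in transactions:
        if target is None:
            break
        if tx.category == target:
            target = next(remaining, None)
    return target is None

# The detector can only say "these records fit the template." As Harris notes,
# that is just the first step: human analysts would still have to connect any
# match to known suspects, watchlists, or immigration records -- and perfectly
# innocuous lives can generate template-matching records too.
records = [Transaction("person_A", category) for category in TEMPLATE]
print(matches_template(records, TEMPLATE))  # True
```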
Oh, well. It was a nice idea.

Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.
