
Ghosts in the Machine: Targeting Homegrown Violent Extremists with Artificial Intelligence-enabled Investigations

Walter Haydock
Tuesday, November 22, 2016, 7:30 AM

In June, a gunman slaughtered 49 people in Orlando, Florida, pledging allegiance to the leader of the Islamic State of Iraq and the Levant (ISIL) in the process. The following month, a supporter of the New Black Panther Party mercilessly killed five police officers while they were protecting protesters. The year prior, an avowed white supremacist gunned down nine parishioners in a church in Charleston, South Carolina. Just in September, a man inspired by a variety of jihadist groups detonated bombs in New York and New Jersey.

All of these tragedies had a common thread: their perpetrators radicalized gradually, and at least in part via the Internet. The last case—that of Ahmad Khan Rahami—is the most instructive. His bombing spree followed a series of trips to Afghanistan and Pakistan, time reportedly spent studying at a Taliban-affiliated madrassa, a violent assault against a family member, and even an allegation by his father that he was regularly viewing extremist propaganda online. These behaviors warranted continued, although perhaps not intense, scrutiny by law enforcement authorities. Whatever Rahami’s father actually told authorities, it was enough for the Federal Bureau of Investigation (FBI) to open an “assessment,” the lowest level of investigative inquiry, on him in August 2014. Finding nothing to substantiate any connection to terrorism, the FBI closed the assessment the next month. Without any apparent intervention, Rahami continued on this “slow-burn” path to radicalization and eventually conducted a terrorist attack.

Over a two-year period earlier this decade, the FBI opened 82,325 assessments of all types. This figure does not even include the more intensive—although less numerous—preliminary and full investigations. Although the current number of terrorism-related assessments is not publicly known, one can infer that it constitutes a substantial, and possibly growing, fraction of the total: in 2015 the Bureau shifted hundreds of agents from traditional criminal investigations to ISIL-related cases, which spiked significantly during that time. It is therefore likely that the FBI opens thousands of assessments on possible extremists every year. As a result, some observers have asserted that there is a “surveillance gap” between the number of potential terrorists and the Bureau’s ability to monitor them all.

In addition to traditional Human Intelligence (HUMINT) operations using undercover agents and informants, the FBI also conducts a variety of clandestine collection activities online. By posing as extremists themselves, law enforcement officers can identify those on the path to radicalization before they strike. These techniques allow the Bureau to identify potential threats quickly, sorting out paper tigers from those with the intent and capability to do actual harm. Although useful, these methods require significant manpower and expertise on the part of agents. Such labor-intensive tasks cry out for automation.

I have a suggestion to close the surveillance gap: we can use computers running artificial intelligence algorithms to do some of this surveillance for us. Computer programs already handle mundane customer service requests, control household appliances, and execute other increasingly complex processes. In the field of cybersecurity, artificial intelligence scripts are already autonomously plugging security gaps in their own networks and identifying new vulnerabilities in other systems. A next logical step is combining modern online HUMINT techniques with emerging artificial intelligence technology.

By partially automating these operations, the FBI could periodically “check in” on subjects of investigative interest to determine if they are headed down the road to violence. In this proposed model, computer programs would replace some of the government employees whose job it is to pose as terrorist sympathizers and facilitators online. Employing these tools—Artificial Intelligence Targeting Personas (AITPs)—against potential terrorists could significantly reduce a major burden on the FBI. If this sounds Orwellian, give me a chance. I want to try to convince you that, in the long run, doing so might be a win not merely for effective counterterrorism but also for civil liberties.

Technological Issues and the State of Artificial Intelligence

A key test of whether an AITP is ready to deploy against human targets is whether it can pass the “Turing Test,” the eponymous method for evaluating artificial intelligence that British mathematician Alan Turing proposed in 1950. Without delving into the details, a computer program “passes” if it can fool a human judge into believing that it is itself human after a given period of conversation. The current state of artificial intelligence is the primary technological barrier to implementing the proposed AITP system. In a 2014 competition, the winning program fooled only about a third of the human judges after a five-minute conversation. Skeptical observers have noted the programs’ unsophisticated responses as well as occasional complete breakdowns.
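
To make the pass criterion concrete, here is a minimal sketch of how such an evaluation might be scored; the panel of verdicts is invented, chosen to land near the 2014 result.

    def fool_rate(verdicts):
        """Fraction of judges who mistook the program for a human."""
        fooled = sum(1 for v in verdicts if v == "human")
        return fooled / len(verdicts)

    # Hypothetical panel of ten judges, each recording a verdict
    # after a five-minute conversation with the program.
    verdicts = ["human", "machine", "machine", "human", "machine",
                "machine", "machine", "machine", "machine", "human"]
    print(f"Fooled {fool_rate(verdicts):.0%} of judges")  # Fooled 30% of judges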

AITPs would certainly require human help in overcoming some obstacles. Even an AITP sophisticated enough to be effective against terrorist targets could not register for e-mail and messaging service accounts on its own, thanks to ubiquitous Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs). AITPs would almost certainly suffer attrition from the automated tools that social media companies use to fight spam on their services. Finally, both automated and human de-confliction would be necessary to ensure that AITPs do not accidentally target each other, because a convincing AITP would itself have to exhibit some characteristics of its targets, such as viewing extremist propaganda and communicating with known terrorists.
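
The de-confliction piece, at least, lends itself to simple automation. A minimal sketch, assuming a shared registry of persona account identifiers (every identifier and function name here is hypothetical):

    # Registry of accounts operated as personas, shared across deploying teams.
    KNOWN_PERSONA_ACCOUNTS = {
        "telegram:@hypothetical_persona_1",
        "twitter:@hypothetical_persona_2",
    }

    def is_friendly(account_id):
        """True if the prospective target is itself a deployed persona."""
        return account_id in KNOWN_PERSONA_ACCOUNTS

    def clear_to_engage(account_id):
        # Automated check first; ambiguous hits would still go to a human analyst.
        return not is_friendly(account_id)

    print(clear_to_engage("telegram:@hypothetical_persona_1"))  # False: stand down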

Acknowledging these challenges, however, is not to concede that AITPs would be doomed to fail. Especially in the field of “narrow” artificial intelligence, which is most relevant to AITP operations, advances are likely to continue at a steady pace. Additionally, any human participating in a true Turing Test knows that he is potentially talking to a machine. AITPs would not face such a handicap, as their targets would be (at least initially) unaware of their non-human status. Finally, such AITP-enabled investigative techniques would help circumvent, and perhaps even ameliorate, the “going dark” phenomenon. There would be no need to intercept and decipher the encrypted communications of terrorist suspects. Being the intended recipients of transmissions from AITP targets and thus in possession of the necessary decryption keys, law enforcement personnel controlling these personas would be able to read communications from their targets in plaintext.
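
As a minimal illustration of that last point, consider a sketch using the PyNaCl cryptography library (the message and key handling are purely illustrative): because the persona’s controllers are the intended recipients, they already hold the private key the target’s client encrypts to, and nothing in transit ever needs to be intercepted or broken.

    from nacl.public import PrivateKey, Box

    persona_key = PrivateKey.generate()   # held by the AITP's controllers
    target_key = PrivateKey.generate()    # held by the target's messaging client

    # The target encrypts a message to the persona's public key.
    ciphertext = Box(target_key, persona_key.public_key).encrypt(
        b"can you help me get overseas?")

    # The controllers, as the intended recipients, simply decrypt it.
    plaintext = Box(persona_key, target_key.public_key).decrypt(ciphertext)
    print(plaintext.decode())  # read in plaintext; no "going dark" problem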

As for identifying targets, the FBI and other law enforcement organizations could determine how to deploy AITPs in a variety of ways. Although humans should always vet initial reporting from concerned citizens, tip lines, and “walk-in” sources, AITPs could conduct vital follow-up work on leads that initially seem innocuous. In addition to collecting names, dates of birth, and phone numbers when conducting “manual” investigations, FBI agents could also collect the social media and messaging application identifiers of potential AITP targets. Observation of extremist chat rooms and social media feeds could reveal additional subjects for future automated monitoring.
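
In data terms, this amounts to extending the standard lead record with online identifiers. A minimal sketch, with all field names and values invented for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class Lead:
        name: str
        date_of_birth: str
        phone: str
        # The additional identifiers that would make a lead targetable by an AITP.
        social_media_ids: list = field(default_factory=list)   # e.g. "twitter:@handle"
        messaging_app_ids: list = field(default_factory=list)  # e.g. "telegram:@handle"

    lead = Lead("John Doe", "1990-01-01", "555-0100",
                social_media_ids=["twitter:@hypothetical_handle"])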

Upon identifying a suitable target, the FBI would open an assessment and deploy one or more AITPs to commence communication. The personas would initially use non-alerting approaches, discussing things of obvious interest to the target based on the circumstances of the initial contact. Even if a target identifies the ruse, he might still be deterred from further outreach to actual extremists; the mere existence of such personas could thus help create early “off-ramps” for potential terrorists just beginning to radicalize. Upon establishing a baseline of rapport, an AITP could move the conversation towards more relevant topics—anything indicative of a predilection towards violence—using algorithms to imitate actual human behavior in terms of response frequency, spelling errors, and other characteristics.
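
A minimal sketch of that last element, assuming the underlying reply text comes from some dialogue model not shown here; the delay and typo parameters are invented purely for illustration:

    import random
    import time

    def humanize(reply, typo_rate=0.02):
        """Randomly swap adjacent letters to mimic occasional typing errors."""
        chars = list(reply)
        for i in range(len(chars) - 1):
            if chars[i].isalpha() and chars[i + 1].isalpha() and random.random() < typo_rate:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)

    def send_humanized(reply, send_fn):
        # Wait a plausible "reading and typing" interval rather than replying instantly.
        time.sleep(random.uniform(2.0, 20.0) + 0.2 * len(reply))
        send_fn(humanize(reply))

    send_humanized("that makes sense, tell me more", print)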

If the target swears off violence or exhibits no signs of operational planning for an extended period of time, then AITP operations would cease and the FBI would follow existing policy for continuing or terminating the assessment. Obviously, if the target never engages the AITP or says anything alerting, the FBI would have no grounds to continue investigating. But this is no different from traditional HUMINT operations. To be clear, I am not proposing relaxing any rules with regard to initiating assessments (although the bar is already admittedly low) or elevating them to full investigations. I am merely proposing to automate the initial stages of such efforts, saving manpower and allowing more frequent check-ins with subjects of potential interest. If a target expresses any potential intent to harm others, the AITP could flag him for further attention—exactly as a human would. FBI agents could then follow up with additional investigative measures, such as traditional electronic surveillance, “in-person” approaches using recruited human sources, formal interviews, or search warrants.
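
As a minimal sketch of the flagging step: a production system would presumably rely on a trained classifier, but a keyword screen can stand in for illustration (the indicator list and function names here are invented). The essential point is that a flag only routes the conversation to a human agent; no automated action follows from it.

    INDICATORS = {"attack", "weapon", "explosives", "martyr"}

    def flag_for_review(message):
        """True if the message warrants escalation to a human agent."""
        words = {w.strip(".,!?").lower() for w in message.split()}
        return bool(words & INDICATORS)

    if flag_for_review("I want to attack the parade"):
        print("Escalating to human review")  # a human decides what happens next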

Privacy and Civil Liberty Concerns

In evaluating any proposed new investigative regime, a responsible national security professional must ensure that it complies with constitutional, statutory, and policy requirements. Through the below analysis, I seek to demonstrate that, although potentially controversial, the FBI’s use of AITPs in counterterrorism investigations faces only minor hurdles with regard to current restrictions on surveillance. Whether employing these techniques is politically supportable and sustainable, however, is a separate matter; this article represents an effort to catalyze the necessary debate.

Constitutional Issues

With regard to the Constitution, the use of AITPs would most likely draw scrutiny on First and Fourth Amendment grounds. Analysis of other surveillance programs is instructive. The U.S. government’s previous bulk collection of domestic telephone metadata under the auspices of section 215 of the USA PATRIOT Act “raise[d] concerns” from the Privacy and Civil Liberties Oversight Board (PCLOB) due to its potential “chilling effect” on the free speech rights of Americans. Although AITPs would not be nearly so sweeping in their collection as the aforementioned incarnation of the section 215 program, they would be similarly automated and able to scale rapidly. One could plausibly argue that public fear of expansive AITP-enabled investigations would chill constitutionally protected speech in a similar way. Current policy, however, already prohibits the FBI from “investigating or collecting or maintaining information on United States persons solely for the purpose of monitoring activities protected by” the Constitution or federal law, and I recommend no change to these requirements. To further ameliorate any First Amendment-related concerns this proposed regime might engender, I suggest that the FBI permanently delete all logs of communications derived from AITP-enabled assessments upon closing them. The only exception would be for content that a human agent or analyst affirmatively identifies as evidence of investigative or national security value.
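
A minimal sketch of that deletion rule, with a hypothetical record layout: closing an assessment purges every log entry except those a human has affirmatively marked as valuable.

    def close_assessment(logs):
        """On closure, keep only human-flagged records; everything else is purged."""
        return [rec for rec in logs if rec.get("flagged_by_human")]

    logs = [
        {"msg": "political venting, protected speech", "flagged_by_human": False},
        {"msg": "discussion of acquiring explosives", "flagged_by_human": True},
    ]
    print(close_assessment(logs))  # only the flagged record survives closure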

With regard to the Fourth Amendment, one legal scholar writes that the “Supreme Court has interpreted the Constitution to impose few, if any, restraints on the government’s authority to plant or send covert informants and spies into our lives.” Although this conclusion is potentially hyperbolic, it is true that in the 1952 case On Lee v. United States, the Supreme Court found that federal agents using a human informant to record the conversations of the defendant electronically did not conduct a “search” in the context of the Fourth Amendment. It is conceivable that the Court might judge that the government’s deployment of a massive, semi-autonomous network of “informants” against potential terrorists would, in fact, trigger the Fourth Amendment’s requirement for a warrant, but that would require a departure from prevailing constitutional jurisprudence. Federal authorities do not require probable cause to communicate with targets in the conduct of investigative activities, and thus there is no obvious reason why their computer programs should either.

Also of note is the fact that in order to authorize additional, follow-on searches or electronic surveillance based on information from informants, the Supreme Court has found that the government must establish that these sources have previously produced reliable information. This requirement would presumably apply to AITPs as well. The ability to satisfy this demand would be entirely dependent on information derived from the target himself. Assuming he has “taken the bait” and has not determined that he is communicating with a government-directed computer program, the target could be expected to provide accurate information.

In short, if it is constitutional for a government agent to take an investigative step with respect to an individual—such as initiating communication with him while pretending to be a terrorist—it is presumably constitutional for said agent to deploy an AITP to do the same thing.

Relevant Statutes

The use of AITPs in counterterrorism assessments would not appear to run afoul of any current federal laws. The Electronic Communications Privacy Act (ECPA) restricts the government’s interception of communications in transit, not its ability to engage in such communications itself. Similarly, the Foreign Intelligence Surveillance Act (FISA) would probably not present any obstacles. Its major impact would be to allow for and regulate more aggressive investigative actions that are permissible when the government’s follow-on surveillance is primarily for foreign intelligence purposes. The Wiretap Act (often referred to as Title III), which pertains to the government’s ability to monitor communications in traditional criminal cases, would also not seem to impede the FBI’s deployment of AITPs. Although Congress could certainly assert its authority and further restrict these types of operations, no current federal law appears to prohibit the use of automated investigative personas.

Proposed Policy Changes

The FBI would have to slightly modify its investigative policies in order to deploy AITPs as I have described. Under its current Domestic Investigations and Operations Guide (DIOG), assessments can only remain open for a maximum of 90 days. In order to target “slow-burn” radicalization as discussed previously, the FBI would have to amend these guidelines, perhaps creating a new category of “automated assessments” specifically for AITP operations, lasting a year or longer. AITP operations also might, in some cases, meet the definition of “undisclosed participation” as defined in the DIOG, potentially triggering other procedural or legal requirements. Although the FBI’s exact procedures for authorizing such undisclosed participation are not publicly available in their entirety, there is nothing to indicate they would have to change in order to allow AITPs to target potential terrorists. Finally, using “data mining”—targeting certain characteristics or patterns of behavior—to deploy AITPs more precisely and effectively would require additional approvals under current policy. The FBI’s Sensitive Operations Review Committee (SORC) would have to authorize these efforts and notify Congress whenever they take place.

Computers as Honest Brokers

Despite the fact that the use of AITPs would almost certainly be lawful and constitutional, privacy advocates might still raise objections to the tactics I propose. The main foreseeable points of contention would be the scalability of automated HUMINT operations and the longer allowable horizon for FBI assessments; both are, unavoidably, integral to my proposal. Civil libertarians might nevertheless see other benefits from the program. Automating surveillance against certain suspects could help protect Americans from unlawful discrimination. Computer programs have no inherent racial or religious biases, although it is true that they can reflect the biases of their designers, operators, or society at large. Instead of relying on the instincts—noble or otherwise—of investigators, the FBI could conduct its assessments in a strictly objective manner, judging AITP behavioral models by their success rates in leading to full investigations, arrests, and prosecutions. With such hard data, it would be very difficult for agents to justify the monitoring of individuals for any reason other than their disposition towards criminal behavior. Additionally, for those concerned about voyeuristic government employees eliciting personal secrets from unwitting targets on the Internet, it might mollify some to know that AITPs are soulless machines with no interest in salacious details.
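
A minimal sketch of that evaluation logic, with invented outcome records: each behavioral model is scored purely on how often its assessments progress to concrete investigative milestones.

    def conversion_rate(outcomes, stage):
        """Fraction of a model's assessments that reached the named milestone."""
        return sum(1 for o in outcomes if o[stage]) / len(outcomes)

    model_a = [
        {"full_investigation": True,  "arrest": False},
        {"full_investigation": False, "arrest": False},
        {"full_investigation": True,  "arrest": True},
    ]
    print(f"full investigations: {conversion_rate(model_a, 'full_investigation'):.0%}")
    print(f"arrests: {conversion_rate(model_a, 'arrest'):.0%}")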

Conclusion: Pushing the Limits of Policy and Technology

As former National Security Agency and Central Intelligence Agency Director Michael Hayden advocates in his recent book, national security professionals must “play to the edge.” Doing so requires using all means available to protect America while never exceeding legal, constitutional, or ethical limits. Deploying artificially intelligent informants against people whom there is not necessarily probable cause to suspect of wrongdoing—and who will most likely be American citizens—is sure to garner controversy. The problem of homegrown violent extremism, however, requires innovative tactical, technological, and policy solutions. Americans, as always, must strike an acceptable balance between privacy and security. Using rapidly advancing technologies such as artificial intelligence to monitor those susceptible to radicalization is one way to do so.

The views expressed in this article are those of the author and do not reflect the official policy or position of the United States Government.


Walter Haydock is a product manager at PTC, where he leads cybersecurity strategy for the ThingWorx Industrial IoT Solutions Platform. Previously, he served as a professional staff member for the House Committee on Homeland Security and as a Marine Corps Intelligence and Reconnaissance Officer.
