
How Content Removal Might Help Terrorists

Joe Whittaker
Sunday, June 30, 2019, 10:00 AM


In recent years, counterterrorism policy has focused on making social media platforms hostile environments for terrorists and their sympathizers. From the German NetzDG law to the U.K.’s Online Harms White Paper, governments are making it clear that such content will not be tolerated. Platforms—and maybe even specific individuals—will be held accountable using a variety of carrot-and-stick approaches. Most social media platforms are complying, even if they are sometimes criticized for not being proactive enough. On its face, removal of terrorist content is an obvious policy goal—there is no place for videos of the Christchurch attack or those depicting beheadings. However, stopping online terrorist content is not the same as stopping terrorism. In fact, the two goals may be at odds.

Earlier this month, Bennett Clifford and Helen Christy Powell raised this point on Lawfare, drawing on data gathered from the (partially) end-to-end encrypted platform Telegram. They suggested that de-platforming terrorist supporters online may have undesired and unintended consequences. Their argument focused on the “supply side” of terrorism online (i.e., analyses of the online environment), but the point holds for the “demand side” as well—that is, for how identified terrorist actors have used the internet in the course of their eventual activities. The findings below are part of my forthcoming doctoral thesis, which analyzes the online behaviors of the more than 200 Islamic State actors who have been charged with a crime, successfully traveled to Iraq or Syria, or successfully conducted an attack in the United States.

One may be inclined to think that terrorists who can use the internet pose an unprecedented threat. After all, at the height of the Islamic State’s physical and virtual powers, it was easy to gain access to sophisticated propaganda and instructional material, such as bomb-making guides, which were found to be directly related to plots. When the Islamic State was still in control of Raqqa, a team of “virtual entrepreneurs,” including the British hacker Junaid Hussain, gave operational assistance to a number of attackers in the United States and around the world. It seems logical that removing terrorist users and the content they propagate from social media platforms would reduce the spread of their ideas and their access to potential recruits.

However, this may not be the case. In my sample, the success of an attempted terrorist event—defined as conducting an attack (regardless of fatalities), traveling to the caliphate, or materially supporting other actors by providing funds or otherwise assisting their events—is negatively correlated with a range of internet behaviors, including interacting with co-ideologues and planning the eventual activity online. Furthermore, those who used the internet were also significantly more likely to be known to the security services prior to their event or arrest. There is support for this within the literature; researchers at START found that U.S.-based extremists who were active on social media had lower chances of success than those who were not. Similarly, research on U.K.-based lone actors by Paul Gill and Emily Corner found that individuals who used the internet to plan their actions were significantly less likely to kill or injure a target. Despite the operational affordances that the internet can offer, terrorist actors often inadvertently telegraph their intentions to law enforcement. Take Heather Coffman, whose Facebook profile picture, an image of armed men with the text “VIRTUES OF THE MUJIHADEEN,” alerted the FBI; the bureau deployed an undercover agent and eventually arrested her. Coffman is no outlier; many in this cohort recklessly displayed their ideology at the expense of operational security.

The hostile environment of suspensions and content removal has caused the Islamic State’s cyber caliphate to relocate to more secure parts of the internet, particularly Telegram. Such platforms offer far greater operational security, both through end-to-end encryption and by not cooperating with law enforcement subpoenas and takedown requests. This, as Clifford and Powell note, comes at the cost of recruitment potential; the group now has a fraction of the reach it enjoyed in its heyday on Twitter. That reduction in reach is no mean feat and has sizable benefits. Virtual entrepreneurs often made first contact with their targets on Twitter before moving their communications to safer platforms. Furthermore, Islamic State supporters online have repeatedly stated that large platforms like Facebook and Twitter are central to their propaganda strategy.

Within the Islamic State cohort in the United States, end-to-end encryption stands apart from other online behaviors: There is no significant relationship between its use and event success. Terrorists who used it were just as likely to be successful as those who did not. Given that the other online behaviors examined were negatively correlated with success, this finding suggests that encrypted communication offers some operational security compared with, for example, disseminating propaganda on a mainstream platform or selecting a target online. It does not mean that encryption predicts success, simply that it may be a safer way of acting online.

Herein lies the problem: Many online behaviors may be helping law enforcement detect would-be terrorists. Over half of the actors in this sample shared their ideology on an open or semi-open platform—for example, on Twitter, where they have little control over who sees their posts. Oftentimes, nonideological followers report this behavior to law enforcement. It is important to disaggregate different types of terrorist offenders. While it is true that many in the Islamic State’s current online supporter base may be technologically adept and security conscious, those who plot offline events are by and large—to be blunt—not the sharpest tools in the shed. Forcing them to think about operational security by making mainstream platforms uninhabitable for them may make the job of security services more difficult.

This is not an easy problem to resolve. Despite these findings, there are clear and tangible benefits to driving groups from mainstream platforms and impairing terrorists’ ability to spread propaganda, network, and access instructional material. Audrey Alexander and William Braniff suggest a policy of “marginalization,” rather than widespread removal of content, under which a wide range of stakeholders drown out and counter extremist content online. Take, for example, the content featuring Anwar al-Awlaki on YouTube. Some of this content clearly incited violence and was rightfully removed, but much of it was nonviolent. Awlaki’s status as a designated terrorist, though, prompted calls for every video to be removed. Google ultimately decided to remove all of the Awlaki videos, but another option would have been to keep the nonviolent videos up and put them on “limited features,” which removes content from recommendations, demonetizes it, and disables comments. A range of strategic communications could also be targeted at those who watch the videos—although there is not a strong knowledge base on the effectiveness of such campaigns. Finally, if law enforcement is looking to build a case file on a potential suspect, YouTube is more compliant with subpoenas than many other platforms.

It is not necessarily a mistake to make the internet a hostile environment for terrorist content; the issue is too complicated, and there are too many variables, to fully account for the trade-offs involved. Even if the balance were clear, policy is moving in the opposite direction regardless. However, we ought to acknowledge the costs of pursuing this strategy. Policymakers often make it seem as if the removal of terrorist content from the internet were synonymous with stopping terrorism, but this may not be the case.


Joe Whittaker is a doctoral candidate at the Cyber Threats Research Centre at Swansea University and the Institute of Security and Global Affairs at Leiden University. He is also a research fellow at the International Centre for Counter-Terrorism. He specializes in terrorists’ use of the internet and the role of social media technologies.
