
Whose Responsibility Is It to Confront Terrorism Online?

Seamus Hughes
Friday, April 27, 2018, 7:00 AM

This week, Facebook and YouTube announced new data on removal of terrorist content on their platforms. Facebook also released its internal document clarifying what content stays online and what is deleted. YouTube, under Google’s broader efforts, also stated that it is getting “faster” at takedowns, with an increased number of human reviewers vetting questionable content. Some in counterterrorism policy circles and on Capitol Hill are fervent advocates of technology behemoths stepping up enforcement of their terms of service, and they will likely praise these new releases. Ultimately, however, the responses detailed by Facebook and YouTube are another iteration of a decade-long strategy, as the government continues to delegate online counterterrorism responsibilities to private industry.

As a former congressional staffer, a member of the intelligence policy community, and now an academic researcher, I have traced the growth of terrorist use of the internet throughout my career. During my time as a staffer, I wrote letters on behalf of the chairman of the Senate Committee on Homeland Security and Governmental Affairs calling for technology companies to remove terrorist videos from their servers, and I examined how terrorist groups like al-Qaeda and al-Shabaab have adeptly used social media to radicalize and recruit Americans to their cause. As a researcher, I’ve studied the online environment as well—but I’ve become concerned that this singular focus on the internet ignores the importance of peer-to-peer terrorist recruitment.

Over the past decade, foreign terrorists’ command of and capabilities in the digital sphere have evolved drastically. But our responses have not kept pace.

The government’s ability to adapt to this changing environment is sharply limited by its outsourcing of the responsibility for preventing and confronting terrorists’ use of the internet to private companies. To figure out how best to respond to evolving threats, it is useful to reflect on how the relationship between governments and private tech companies developed and how this process of delegation came to be.

Historical Overview

In 2007, a bipartisan group of senators and members of the House became increasingly concerned with terrorist use of online platforms. At the time, the most striking example was a series of YouTube videos depicting the so-called “Baghdad Sniper.” These videos, usually set to music and spliced into quick clips, showed an Iraqi insurgent attacking U.S. soldiers in Iraq. The videos raised questions for congressional staffers—chiefly, “Is an American company comfortable with grotesque videos of U.S. soldiers being killed on its platform?” The answer, after an influx of public letters, was a resounding “no.” As a result, YouTube announced updates to its community standards to address videos with violent imagery. Shortly thereafter, YouTube also implemented a “terrorist flag” feature, which allowed users to identify extremist content for review and, ultimately, removal. This was one of the first instances in which private companies took the initiative to police their own sites after being subjected to public pressure.

At the same time, debates surrounding the efficacy of content removal continued within counterterrorism communities. One side posited that companies should remove terrorist propaganda from mainstream websites due to the material’s perceived ability to radicalize its viewers. The other side argued that the risk of radicalization from accessing content was low and that allowing it to remain online would aid law enforcement and intelligence agencies. To a large extent, these arguments remain defining characteristics of contemporary debates within the government regarding the nexus of technology and terrorism.

In the midst of this schism, one side of the debate tried to tip the scales in its favor. A surreal and entirely off-the-books meeting between a high-ranking intelligence official and a congressional colleague of mine took place in a D.C. park. The colleague, who had been advocating for content removal, was facing pressure from his executive branch counterparts on the operational side. Sitting on a park bench, the intelligence official made clear that parts of the law enforcement and intelligence communities preferred such material to stay online. In their view, it acted as a useful honey trap to track terrorists—and taking down online content could endanger both operations and lives.

While the meeting may have caused a temporary pause, it ultimately did not guide Capitol Hill’s view on confronting terrorist content online. As terrorism receded into the background noise of larger news stories and public pressure ebbed, efforts by both Congress and technology companies came almost to a standstill. Government approaches were essentially limited to awareness training, which focused on demonstrating to primarily Muslim-American communities around the country how groups like al-Qaeda use the internet to target young people, so that parents could protect their children. The awareness trainings, while important, were too sporadic to be a silver bullet.

With the rise of the Islamic State, the issue of terrorists’ use of the internet quickly forced its way back onto the political agenda. For policymakers, advocating content removal is a relatively easy ask with little blowback; it allows them to look tough on terrorism while requiring little concrete follow-through on the government’s part. In January 2016, high-ranking intelligence officials traveled to Silicon Valley to encourage the major technology companies to do more to police problematic content on their platforms. The White House later brought together advertising marketers from Madison Avenue, technology experts from Silicon Valley, and producers from Hollywood to tackle the question of how to respond to the onslaught of Islamic State propaganda. Like many things developed within the cocoon of the National Security Council at that time, there was little interagency buy-in, and the initiative eventually failed to produce tangible results for lack of sustained coordination.

In the U.S. and abroad, pressure on the technology companies mounted as the Islamic State remained a critical threat in the eyes of many Western countries and their constituents. Out of a mixture of a sense of responsibility and public pressure, likely combined with a desire to preempt hasty regulation by Capitol Hill, some technology companies responded with a show of force. The companies colloquially known as the “Big Four”—Microsoft, YouTube, Twitter, and Facebook—announced the formation of the Global Internet Forum to Counter Terrorism. In partnership, these companies developed a “hashing” database to share information used to flag and moderate extremist materials. YouTube and Facebook greatly increased the number of human reviewers for terrorist content, while Twitter, the platform of choice among jihadists in 2015, also took concerted steps to make its platform less hospitable to violent extremists. Following the precedent set by Twitter’s biannual transparency reports, major tech providers, including Facebook and YouTube, now publish regular updates on their progress and methods in combating violent extremism.
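To make the hash-sharing idea concrete, here is a minimal sketch of how such a database can work in principle. It is illustrative only: the function names are invented, and it uses an exact cryptographic hash (SHA-256) as a stand-in, whereas industry systems of this kind rely on perceptual hashes so that re-encoded or lightly edited copies of a flagged image or video still match.

    import hashlib

    # A shared pool of digests of media that member platforms have flagged.
    # Hypothetical sketch: SHA-256 matches only byte-identical files, while
    # real consortium databases use perceptual hashes to catch near-copies.
    shared_hashes = set()

    def media_hash(data: bytes) -> str:
        """Return a digest identifying this exact file."""
        return hashlib.sha256(data).hexdigest()

    def share_hash(data: bytes) -> None:
        """One platform contributes a flagged file's hash to the pool."""
        shared_hashes.add(media_hash(data))

    def is_known_extremist_content(data: bytes) -> bool:
        """Another platform checks an upload against the shared pool."""
        return media_hash(data) in shared_hashes

    # Example: platform A flags a video; platform B can now catch re-uploads.
    share_hash(b"example video bytes")
    print(is_known_extremist_content(b"example video bytes"))  # True

The design point worth noting is that the platforms exchange only fingerprints, not the underlying files, which lets them cooperate on takedowns without redistributing the material itself.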

Going Forward

Responses driven by industry, largely at the behest of the U.S. government, have certainly demonstrated some significant successes. Major social media providers continue to remove terrorist content from their websites at a much faster rate than before. However, this approach may not be nimble enough to respond to current developments in how terrorists utilize the internet.

While the Islamic State still uses mainstream sites like Twitter to push its propaganda, it has largely shifted to niche platforms in response to content removal. Many of these platforms, such as justpaste.it, lack the personnel and budget to remove extremist content. Some have no interest whatsoever in addressing terrorist use of their platforms: Because many of these companies are based outside of the United States, the threat of regulation does not hang over their heads like the sword of Damocles.

Public pressure also has less power when free expression at all costs is embedded in a company’s culture. Currently, the most active platform for distributing Islamic State propaganda is the messaging application Telegram, whose CEO, Pavel Durov, has been ardent in defending his company’s founding principles: “I think that privacy, ultimately, and our right for privacy is more important than our fear of bad things happening, like terrorism.” As demonstrated by the Russian government’s recent, botched attempt to ban the use of Telegram, the company’s resistance to calls for censorship only builds its reputation among its users.

There are persistent questions that come with allowing industry to set its own standards. For years, Americans have faced the question of whether they are comfortable with the standards that large technology companies and social media providers use for content removal. But with the online terrorism environment rapidly changing, the question is now whether those standards can transfer to newer and smaller companies, or to companies with different political interests and outlooks.

What’s more, newer companies will likely not have the same level of sophistication in understanding the threat as larger companies such as Facebook, which employ hundreds of counterterrorism analysts. How will these companies handle content that walks the fine line between advocating violence and providing the mood music of extremism? At the moment, a Salafist imam from Michigan, Ahmad Musa Jibril, is the second-most-cited radical preacher by Islamic State followers online. He trails only Anwar al-Awlaki in prominence—but Jibril’s online sermons never cross the line into calling for overt violence. Should his material stay online? Should that assessment change if he is discovered to have influenced, even indirectly, attacks in the West—and if it does, will that mean that our content standards are being set by the news of the day?

These are critically important questions to answer if the U.S. government continues to address the future online dynamics of terrorism through ad hoc delegation of counterterrorism responsibilities to tech giants. As a result of this approach, companies like Twitter, Facebook, and Google have ample leeway to determine their standards for content removal. But the U.S. government should not assume that other social media providers with different interests will earnestly adopt these standards, especially if they lack the wherewithal or interest to do so. As terrorist groups adapt to the changing landscape of the online space, it is worth asking whether U.S. government policies that depend on overwhelming initiative from the tech sector will be able to adequately respond—and whether the government’s ceding responsibility to the private sector is the right way forward.


Seamus Hughes is a senior research faculty member and policy associate at the National Counterterrorism Innovation, Technology, and Education Center (NCITE), based at the University of Nebraska at Omaha.
