
Marginalizing Violent Extremism Online

Audrey Alexander, William Braniff
Sunday, January 21, 2018, 10:00 AM

Editor’s Note: The call to take down terrorist-linked content on the Internet is sensible but of limited effectiveness. Terrorists use many different aspects of the Internet for many different purposes, and taking down propaganda and hostile accounts is not enough to blunt their strategies. Audrey Alexander and Bill Braniff, of GWU and Maryland respectively, call for a different approach: going after more portions of the terrorists' online ecosystem, expanding the campaign, and thinking more broadly about the problem.

***

Despite years of increasing attention from a vast array of stakeholders, the fight against extremism in the digital sphere lacks dexterity. Confronted by political pressures, particularly in the wake of attacks, Western leaders call for social media providers to ramp up content removal and account deletion efforts against those who espouse extremism. This approach fails to appreciate, and therefore mitigate, the manifold ways that extremists use technology. It also fails to leverage the equally diverse ways that technology can be used to degrade extremism online, and to mobilize the spectrum of actors beyond social media companies who can take a proactive role in that effort. The emphasis on “takedowns” is reactive, viable only for the most well-resourced technology companies, and amounts to a one-dimensional response to a multifaceted challenge.

As an alternative, those wishing to minimize extremist exploitation of technology can make the “marginalization strategy” the organizing principle of their efforts, giving a range of actors methods that complement the tactic of content removal. Virtual insurgents and violent movements represent a complex, enduring challenge that requires an agile, persistent, and pragmatic response. As seductive as deleting terrorists’ online content may seem, digital “whack-a-mole” will not suffice. By calibrating existing measures to challenge all aspects of the online extremist ecosystem over time, states and their non-governmental partners can more sustainably cope with changing conditions in the digital domain. In the marginalization paradigm, a swathe of empowered actors helps to diminish the influence of violent extremists by progressively undermining, drowning out, and sidelining radical perspectives.

Today, the Islamic State remains opportunistic and agile in the digital sphere, despite some companies’ more aggressive enforcement of their terms of service. Even if the primary social media service providers could remove all the Islamic State’s propaganda from their sites, this would not necessarily keep other extremist groups at bay, nor prevent the group’s supporters from overrunning the parade of start-ups that lack the resources to police their platforms.

Extremist actors like the Islamic State use technological services, including but not limited to social media, for digital proselytizing (propaganda production and dissemination), recruitment, networking, and facilitating terrorist plots. Different actors are best situated to marginalize the effectiveness of each of these phenomena, and an over-reliance on takedowns can inadvertently undermine those efforts.

Marginalize Tools and Processes that Enable Radicalization and Mobilization

Social media platforms that aid in the wide dissemination of extremist propaganda gain the lion’s share of attention from governments and the traditional media. Even measures that target a broad range of social media platforms, like Germany’s new hate speech law (NetzDG), will likely face critical challenges in implementation and enforcement. Starting in January 2018, that law will levy fines of up to €50 million against companies that fail to remove hate speech within a set time frame, yet it may still struggle to marginalize the vast array of tools and processes that enable radicalization, like file-sharing platforms and messaging apps.

Governmental efforts like this map poorly onto the dynamic communications apparatus used by violent extremists and their respective organizations. The Islamic State and its sympathizers, for example, adeptly navigate across a “vast ecosystem of platforms, file sharing services, websites and social media.” The political focus on the compliance of major tech companies, like Google and Facebook, eclipses other critical aspects of the virtual threat, like encrypted messengers, web archives, mobile security applications, protected email providers, VPNs, and proxy services.

In this climate, smaller organizations with fewer resources, like the file-sharing platform justpaste.it, cannot design, pilot, or employ productive methods to moderate extremist content without more substantive support from governments, academics, civil society organizations, and other stakeholders in the private sector. It is also unclear whether file-sharing companies and many other tech platforms qualify as social media, meaning they may be unaffected by narrowly drawn legislation despite their salience in online extremism. By remaining vigilant about the mediums that matter and pushing for programming that empowers more players, policymakers and practitioners can marginalize extremists online by supporting industry-led self-regulation across the spectrum of tools that help facilitate terrorist recruitment.

The Global Internet Forum to Counter Terrorism (GIFCT), a partnership initially forged by Facebook, Microsoft, YouTube, and Twitter, shows great promise in developing technological solutions, supporting research, and promoting knowledge-sharing across the ecosystem of technologies exploited by extremists. At present, Tech Against Terrorism, a UN-mandated project that supports the GIFCT, strives to help tech companies develop useful terms of service and share information to prevent terrorists’ exploitation of the tools that providers offer.

While governments cannot abdicate their role in marginalizing the salience of online extremism, political leaders might laud the participation of corporations in this type of industry-led forum, which encourages adaptive, productive, sustainable, and transparent initiatives. A standing body like the GIFCT also provides governments with a partner for coordinating efforts to frustrate extremists online. Such coordination would allow law enforcement and intelligence professionals to monitor behaviors that can lead to arrests and prosecutions, with less risk that a poorly timed takedown costs them visibility into extremist communications that do not rely on encryption, like propaganda amplification and the identification of potential recruits.

Due to the diverse nature of platforms, and the opportunities afforded by each tool, the challenges confronting tech companies are just as idiosyncratic as the extremists who use them. A reliable infrastructure that encourages businesses to help each other and themselves could foster an environment more conducive to such nuances. In a collaborative industry framework, companies concerned with recruiters’ use of messaging features may explore different methods than social media providers seeking to suppress propaganda amplifiers, or file-sharing services trying to stem the introduction of new content into the online ecosystem.

Marginalize Exposure to Extremists and Extremist Propaganda with Alternative Methods

A data-driven and less emotive appraisal of power dynamics would inspire a range of methods that could help marginalize communities’ exposure to violent extremists and their propaganda online. By definition, extremist actors represent the ideological periphery of society. Even though the Islamic State is notorious for staking its claims on Twitter, the vast majority of accounts are not extremist in nature. In the span of nearly two years, Twitter suspended 935,897 accounts for the promotion of terrorism, the defining offense under the company’s counter-extremism policy. If all of these accounts were simultaneously active and belonged to unique individuals (instead of repeat offenders and bots), they would represent only about 0.29 percent of Twitter’s 326 million monthly active users. More realistic calculations (based on data from “The ISIS Twitter Census,” a Brookings report by J.M. Berger and Jonathon Morgan) suggest pro-Islamic State accounts comprised less than 0.02 percent of Twitter users in early 2015, a peak period for the group’s online activity.
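The unit conversion here is easy to slip on, so the back-of-the-envelope arithmetic is worth making explicit. In the sketch below, the suspension and monthly-active-user figures come from the paragraph above; the 46,000-account and 288-million-user inputs are not in the text and are assumptions drawn from the Berger-Morgan census and Twitter’s contemporaneous user reporting:

```python
# Back-of-the-envelope proportions; note the fraction-to-percent conversion.
suspended = 935_897            # accounts Twitter suspended for promoting terrorism
monthly_active = 326_000_000   # Twitter monthly active users, cited above

fraction = suspended / monthly_active
print(f"fraction = {fraction:.4f}, i.e. {fraction * 100:.2f} percent")
# -> fraction = 0.0029, i.e. 0.29 percent

# Assumed inputs for the early-2015 estimate (see note above).
census_fraction = 46_000 / 288_000_000
print(f"{census_fraction * 100:.3f} percent")   # -> 0.016 percent
```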

The connectivity and engagement of extremist accounts are more concerning than their sheer prevalence. While proportionally small, vocal Islamic State sympathizers on Twitter fight to be heard, injecting radical rhetoric into mainstream conversations with “hijacked hashtags” and “Baqiya family” shout-outs that help users rebound after suspensions. While some rally on the site in the face of takedowns, touting their strengthened resolve on new accounts, other adherents migrate to more hospitable platforms like Telegram, a messaging app that offers encryption technology. Both outcomes are troubling. Individuals who remain online despite numerous takedowns become influencers and have often used their status to mobilize foreign fighters or facilitate actual plots. The movement from open to encrypted platforms can frustrate counterterrorism efforts. And because extremists dwell on expulsions to validate polarizing claims about the persecution of Muslims, takedowns can reinforce the conspiratorial victimhood narratives that are crucial to radicalization to violence.

Under the marginalization directive, those tasked with countering violent extremists in the digital sphere should take steps to mitigate the effect that perceptions of victimhood might have on potential recruits. Strategies that avoid individual takedowns but reduce connectivity between extremist networks and the general public—for example, by thwarting hashtag-hijacking campaigns or blocking known extremists from entering new networks—may be more effective over time at further marginalizing an already minuscule percentage of social media users. Once isolated webs are distanced from the general population, simultaneous takedowns of Baqiya families could undercut their online resilience by depriving each end-user of the shout-outs of others in that family.
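The isolate-first, remove-second sequencing can be made concrete. Below is a minimal sketch, under assumed inputs (a hypothetical follower graph and a set of already-flagged accounts), of one way an analyst might score how cut off a flagged cluster is before opting for a simultaneous takedown; networkx is used purely for illustration:

```python
# Minimal sketch: given a follower graph and a set of flagged accounts,
# measure how tightly the flagged cluster is tied to the general public.
# A low boundary-to-internal edge ratio suggests the cluster is already
# isolated and a candidate for a simultaneous, family-wide takedown; a
# high ratio suggests connectivity-reduction measures should come first.
import networkx as nx

def isolation_ratio(graph: nx.Graph, flagged: set) -> float:
    internal = graph.subgraph(flagged).number_of_edges()
    boundary = sum(1 for _ in nx.edge_boundary(graph, flagged))
    return boundary / max(internal, 1)

# Toy example: four flagged accounts with two ties out to the wider network.
g = nx.Graph()
g.add_edges_from([("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"),  # flagged core
                  ("b", "user1"), ("d", "user2"),                  # outward ties
                  ("user1", "user2"), ("user2", "user3")])         # general public
print(isolation_ratio(g, {"a", "b", "c", "d"}))  # -> 0.5
```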

Even if media providers could more strategically remove content and suspend accounts, these methods yield an array of unintended consequences. Given the recurring nature of online extremism, exacerbated by ideological “echo chambers,” social media providers could benefit from procedural and algorithmic safeguards that dampen this recurrence. Twitter’s revocation of the blue verification checkmark on prominent white nationalist accounts could serve as one model for delegitimizing radical voices.

Tech companies could also combat the normalization of violent extremism among users by de-prioritizing extremist rhetoric relative to content free of those narratives. Further, NGOs can develop and deploy bots, use multi-platform distributors like Hootsuite or Po.st, and leverage social-media plug-ins to identify extremist rhetoric and enter the conversation with automated or targeted responses. They can draw from libraries of counter- and alternative narratives, which can be evaluated over time, such as the content produced by the Peer to Peer: Facebook Global Digital Challenge program. In sum, echo chambers can be identified, marginalized, and disrupted. These steps could reduce the number of online answer-seekers who find solutions in social networks that accept terrorist violence as a legitimate mechanism for change, and redirect those individuals to counter- and alternative-narrative content offering access to a different path towards empowerment.
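In miniature, that plug-in logic might look like the following sketch. Everything here is hypothetical: the keyword library, the URLs, and the delivery stub stand in for whatever matching rules and platform tooling an NGO actually uses:

```python
# Minimal sketch of a counter-narrative plug-in: match incoming posts
# against a small library of extremist talking points and queue a counter-
# or alternative-narrative reply. Platform delivery is stubbed out.
from typing import Optional

COUNTER_LIBRARY = {
    "baqiya": "https://example.org/former-member-testimonies",  # hypothetical URL
    "hijrah": "https://example.org/alternative-paths",          # hypothetical URL
}

def match_counter_narrative(post_text: str) -> Optional[str]:
    text = post_text.lower()
    for keyword, url in COUNTER_LIBRARY.items():
        if keyword in text:
            return f"Another perspective worth hearing: {url}"
    return None  # no extremist rhetoric detected; stay silent

def handle_post(post_text: str, send_reply) -> None:
    reply = match_counter_narrative(post_text)
    if reply:
        send_reply(reply)  # delivery layer supplied by the NGO's platform tooling

# Toy usage with a print-based delivery stub.
handle_post("join the baqiya family", print)
```

Real deployments would replace the keyword matcher with vetted classifiers and route replies through each platform’s own API, but the pipeline shape is the same: detect, select a narrative from an evaluated library, respond.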

By integrating methods that complement takedowns, states and their partners might implement alternative counter-extremism measures that connect with the intended audience in new and innovative ways. Strategic advertising techniques could reach vulnerable populations and their friends and family to help people disengage by making resources like support groups, job postings, mental health counseling, and intervention programming accessible. The window of opportunity for disengagement widens when individuals are less exposed to persuasive extremists and their propaganda online and have greater access to empowering alternatives and a supportive community. To its detriment, the West’s emphasis on takedowns undercuts simultaneous efforts to target potential extremists with counter-speech, off-ramps, and supportive programming.

As a second-order benefit, individuals who remain sincere in their pursuit of extremist material and accounts, despite mechanisms designed to divert them elsewhere, will stand out from the pack. This factor may aid law enforcement in the allocation of limited resources with respect to terrorism investigations.

Tech Against Terrorism’s recently released Knowledge Sharing Platform (KSP), referenced above, could support a wide array of tech companies in developing suitable and sustainable methods to marginalize virtual communities’ exposure to extremism across various platforms. (START has proposed developing a similar platform for NGOs involved in counter-narrative work.) In short, the KSP aggregates information and recommendations for companies eager to prevent or address extremists’ use of their platforms. In 2018, the KSP will “include a Threat Alert Service and an Image Hashing database, that will assist companies with Content Regulation.” In addition to utilizing such features, social media providers exploring and piloting measures that complement takedowns may eventually contribute experience-based recommendations to the KSP to help guide other companies’ approaches. Similar resources, like a URL database, might also have tremendous utility for tech companies striving to moderate extremist content on their platforms.
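To illustrate what hash-based matching involves, here is a minimal sketch. It assumes a shared database of 64-bit perceptual hashes and uses a simple average hash; production systems rely on more robust proprietary algorithms, and the Pillow-based code, helper names, and file names below are illustrative assumptions, not the KSP’s actual interface:

```python
# Minimal sketch of hash-based matching against a shared database, in the
# spirit of the Image Hashing database described above. Uses a simple 64-bit
# average hash; industrial systems use sturdier perceptual hashes.
from PIL import Image  # Pillow

def average_hash(path: str) -> int:
    img = Image.open(path).convert("L").resize((8, 8))  # grayscale thumbnail
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:                      # one bit per pixel: above/below mean
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def is_known_extremist_image(path: str, shared_hashes: set,
                             max_distance: int = 5) -> bool:
    h = average_hash(path)
    # Hamming distance tolerates re-encoding, resizing, and minor edits.
    return any(bin(h ^ known).count("1") <= max_distance
               for known in shared_hashes)

# Hypothetical usage, with hashes contributed by consortium members:
#   shared = load_shared_hashes()                       # hypothetical helper
#   if is_known_extremist_image("upload.jpg", shared):  # hypothetical file
#       flag_for_review("upload.jpg")                   # hypothetical helper
```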

Ultimately, an orientation towards marginalization could drastically transform the capabilities that the public and private sector leverage to stymie communities’ exposure to extremist actors and content.

Marginalize the Effects of Extremist Violence and Polarization

While currently underutilized, the news and entertainment industries can also play a dynamic role in marginalizing the effects of extremist violence and polarization in the digital sphere. The U.S. government’s “Madison Valleywood Project” served as an initial example, as it implored tech and entertainment companies to assist in the fight against terrorism with counter-narratives and enforcement of their respective terms of service.

On a technical level, inextricable links exist between mass media and social networking sites. Social media providers’ optimization of news distribution allows violent extremists to leverage mass media when coverage supports their claims for resistance. One study found that news stories represent a sizable portion of the URLs shared by English-language sympathizers on Twitter. A brief review of the study’s dataset suggests that Islamic State supporters disseminate articles that validate narratives regarding the persecution of Muslims. Violent extremists’ sense of victimhood and their efforts to polarize political discourse online could be marginalized using some of the methods described above, particularly the promotion of voices that celebrate moderate perspectives and engagement.

In the aftermath of violent activity, terrorism-related or otherwise, tempered reactions by news producers, opinion-leaders, and elected officials are critical in reducing the polarizing effects of strategic violence on communities, both online and offline. START research has found that the portrayal of Muslims in the media and political discourse affects how viewers think about domestic and foreign policy, often exacerbating the polarization that extremists so desperately desire. Instead of aggrandizing and legitimizing violent extremists with sensational, episodic coverage, a commitment to resilience in the face of adversity may be more beneficial. In the wake of the mass shooting in Las Vegas, speculation regarding the shooter’s possible links to terrorism cycled between news outlets and social media feeds. By contrast, one source focused on the efficient response of a local trauma center after the attack. While less attention-grabbing, similar thematic coverage could reduce the polarizing effects of extremist violence in the future without sacrificing journalists’ ethical commitment to cover newsworthy events.

Conclusion

As terrorists fight for their cause in the digital sphere, governments and their allies should embrace a broader and more comprehensive approach that marginalizes the effects of extremism online. The political and tactical reliance on content removal inadvertently aggrandizes terrorists, validates extremists’ narratives of abuse, and polarizes vulnerable communities. To develop more appropriate and sustainable responses to terrorists’ use of digital communications technologies, entities tasked with countering violent extremism must explore additional methods that complement takedowns. By marginalizing the tools that enable radicalization and mobilization, reducing exposure to problematic actors and content, and mitigating the polarizing nature of terrorist violence, policymakers and practitioners can weaken the hold that extremists have online.


Audrey Alexander is a Research Fellow at the George Washington University's Program on Extremism. She focuses on the role of digital communications technologies in terrorism and studies the radicalization of women.
William Braniff is the Executive Director of the National Consortium for the Study of Terrorism and Responses to Terrorism (START) and a Professor of the Practice at the University of Maryland. He previously served in the U.S. Army, the National Nuclear Security Administration, and at the Combating Terrorism Center at West Point.
