
Gray Media Under the Black and White Banner

Audrey Alexander, Helen Christy Powell
Sunday, May 6, 2018, 10:00 AM

Editor’s Note: Although Internet companies have made great strides in trying to combat extremist content online, they have a long way to go. In particular, much of what jihadists use for propaganda is non-violent or otherwise does not fit neatly into the category of material that can easily be identified as terrorist-related and removed. Audrey Alexander and Helen Powell of George Washington University's Program on Extremism call for a more holistic approach that goes beyond image takedowns.

***


Despite stringent efforts to confront extremists’ exploitation of social media, digital communications technologies continue to serve as a critical means by which Islamic State (IS) sympathizers promote the organization and its vision. In particular, image-based file-sharing remains an integral part of IS’ online communications. From battlefield pictures and press releases to screenshots and memes, images afford innumerable opportunities to jihadists eager to design, package, or disseminate content advancing IS’ cause. These file types illustrate the complexity of extremist networks and communications, and upon analysis, can also inform more holistic approaches to countering IS online.

Western counter-extremism policymakers and their counterparts in the private sector are keenly aware of IS’ prowess in the digital sphere. However well-intentioned, countries’ reactive efforts often struggle to sustainably marginalize IS and its elusive network of global supporters. Although the identification and removal of extremist content is the common denominator among existing initiatives, the parameters of such efforts are not always clear. Understandably, deciding what is permissible and what is not in the context of takedowns is a formidable task when IS’ propaganda strategically blurs black and white.

For some content curators, the scope of terrorist material may be narrow, referring only to strategic products produced by IS. In other instances, relevant actors might opt for a broader definition, including unofficial media produced and disseminated by virtual supporters. Spanning the spectrum of official and unofficial content, the proliferation of violent imagery and rhetoric varies, further complicating efforts to parse out items deemed worthy of deletion. Ultimately, nuanced consideration of these dynamics shows that the current emphasis on content removal is not a sufficient method to combat the organization online in its entirety.

A mixture of government pressure and industry-led regulation has made takedowns the primary method of reducing the flow of extremist communications. Twitter and Facebook examine content, including images and other file formats, to discern what materials violate their respective terms of service and standards. Alongside other commendable actions, the Global Internet Forum to Counter Terrorism (GIFCT), originally a partnership of Facebook, Microsoft, YouTube, and Twitter, collaborates on a hashing database to help flag problematic content for removal. Member companies identify and share the digital fingerprints of “violent terrorist imagery or recruitment videos” so that users can be prevented from uploading the same files to other platforms.

The nitty-gritty logistics of this process are not public, though each company independently contributes to the database and strives to enforce its respective terms of service. Regardless, this image-oriented, information-sharing model is charting a path for the entire industry’s response to terrorists’ online activity. Tech Against Terrorism, a UN-mandated project that supports the GIFCT, also embraces this approach as part of its effort to help other companies address extremists’ exploitation of their tools. In addition to supporting the design and implementation of productive terms of service regarding the promotion of terrorism, the organization’s key resource for tech companies, the Knowledge Sharing Platform, plans to make its own “image hashing database” available to members of the Tech Against Terrorism project.
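Although the production systems behind these databases are undisclosed, the basic mechanics of hash-matching are well understood. The sketch below is a hypothetical illustration in Python, using the open-source imagehash library rather than any GIFCT member's actual technology; the database entry and matching threshold are invented for demonstration.

```python
# Minimal sketch of hash-based image matching, using the open-source
# imagehash library; GIFCT's actual system and thresholds are not public.
from PIL import Image
import imagehash

# Stand-in for the shared industry database: perceptual hashes of images
# that member companies have already flagged and removed (entry is invented).
shared_hash_db = {
    imagehash.hex_to_hash("d1c4e09a3b5f7a80"),
}

MATCH_THRESHOLD = 8  # max Hamming distance treated as a match (illustrative)

def is_known_extremist_image(path: str) -> bool:
    """Fingerprint an upload and compare it against the shared database.

    Perceptual hashes, unlike cryptographic ones, change only slightly when
    an image is resized, recompressed, or lightly edited, so near-duplicates
    of a flagged file can still be caught.
    """
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - known <= MATCH_THRESHOLD for known in shared_hash_db)

if __name__ == "__main__":
    if is_known_extremist_image("upload.jpg"):
        print("Block upload and queue for human review")
    else:
        print("No match in shared database")
```

Because only the fingerprints circulate between companies, members can catch re-uploads of flagged files, including lightly edited near-duplicates, without exchanging the underlying images.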

Concurrently, some countries continue to push social media companies to institute more exacting and expedient regulations against extremist content. In February 2018, after repeated calls for tech companies to progress further and faster in the removal of promotional materials, the United Kingdom’s Home Office unveiled technology that detects terrorist propaganda on any platform. According to the Home Office, “the technology uses advanced machine learning to analyse the audio and visuals of a video to determine whether [content] could be [IS] propaganda.” While the tool shows potential in its application to any digital platform utilized by IS—certainly a step in the right direction—limitations arise from the tool’s inherent emphasis on formally identified, strategic propaganda products. Though developers trained the model on over 1,000 propaganda videos, this sample represents only the most obvious segment of IS’ output as an organization. Most recently, Europol announced a coordinated takedown operation that targeted “major IS-branded media outlets like Amaq, but also al-Bayan radio, Halumu and Nashir news.” The effort has promise because it compromises the ability of IS’ central media apparatus to broadcast content and provides law enforcement with new leads, but the initiative leaves critical parts of the IS communications apparatus untouched.
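The Home Office has not published the tool’s architecture, but the general pattern it describes, training a supervised model on labeled propaganda videos and scoring new uploads, can be illustrated in miniature. The following sketch is a deliberately crude, hypothetical stand-in: it samples frames with OpenCV, reduces each video to an averaged color-histogram feature vector, and fits a scikit-learn classifier. A production system would use far richer audio and visual features and vastly more training data.

```python
# Toy illustration of video classification; the Home Office tool's actual
# features and model are not public. Requires opencv-python and scikit-learn.
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression

def video_features(path: str, max_frames: int = 30) -> np.ndarray:
    """Sample frames and average their color histograms into one feature vector."""
    cap = cv2.VideoCapture(path)
    histograms = []
    while cap.isOpened() and len(histograms) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        # 8 bins per BGR channel -> 512-dimensional histogram per frame
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        histograms.append(cv2.normalize(hist, hist).flatten())
    cap.release()
    if not histograms:  # unreadable file; return a neutral vector
        return np.zeros(512)
    return np.mean(histograms, axis=0)

# Hypothetical labeled training set: 1 = known propaganda, 0 = benign.
train_paths = ["propaganda_01.mp4", "propaganda_02.mp4", "benign_01.mp4"]
train_labels = [1, 1, 0]

X = np.stack([video_features(p) for p in train_paths])
model = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Score a new upload; a platform would route high scores to review or blocking.
score = model.predict_proba([video_features("new_upload.mp4")])[0, 1]
print(f"Estimated probability of propaganda: {score:.3f}")
```

The limitation noted above applies directly to any such model: trained only on formally identified propaganda, it will by construction miss the unofficial and borderline material discussed below.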

Ultimately, an enduring challenge confronting governments and tech companies alike is the sheer volume of images shared among extremists that are not conventional terrorist propaganda. Archived social media accounts of Safya Yassin, a Missouri woman who eventually pleaded guilty to communicating IS-related threats on Twitter, show that much of the content she posted prior to her arrest was not official material or overtly violent. Under varying iterations of the ‘Muslimah’ Twitter handle, she regularly posted screen grabs of Western media coverage, motivational quotes, and political memes. In one tweet, the account mocked photos of weeping American soldiers with the caption, “Bunch of babies.” Like Yassin’s posts, the items posted within pro-IS social networks do not always violate companies’ terms of service, much less federal laws. Without overtly endorsing the organization or violent tactics, such media items may validate or distill the narratives promoted by IS, affecting sympathizers in the jihadisphere. Yassin, for example, was fairly established in pro-IS Twitter networks before her arrest; even when she shared more subtle propaganda, her peers regularly ‘liked’ and ‘retweeted’ the material. These trends demonstrate how crucial it is to understand both the flow of extremist communications and the specific nature of the content itself.

Examining the Flow and Nature of Extremist Content

IS and its web of supporters leverage an array of technologies in both the digital and physical arenas. While some platforms are highly regulated and hostile to extremist communications, other tools are more conducive to information-sharing among IS sympathizers. Users with operational security concerns may also weigh the benefits of security features such as self-destruct or encryption technology. Extremist individuals and groups appear to make logistical considerations in their selection of various tools; IS, for example, traditionally makes decisions based on factors like the ease of access, reach of the platform, and reliability of security features.

Within this matrix, the popularity of the Telegram messaging app among jihadists is logical, especially in light of the tool’s file-sharing affordances and reasonably good security options. Aided by the company’s inconsistent regulation, Telegram serves as an optimal environment to study the flow of pro-IS images among sympathetic digital networks. In short, this pinhole view allows researchers to understand the nature of information disseminated among IS, its affiliates, and its supporters worldwide.

As part of an initiative investigating IS’ online activity, the George Washington University’s Program on Extremism tracks pro-IS channels on Telegram. Pictures are an integral part of the flow of communications on these channels, and qualitative analysis of these image files demonstrates the diversity of information exchanged between sympathizers. Immediately apparent is the informal dissemination and proliferation of content that falls outside the scope of official propaganda.

In an evolving structure of communications that defies black and white categorization, the use of pictures warrants closer investigation. Anecdotally, official propaganda, which is traditionally produced by entities within IS’ organizational structure, represents a significant portion of images shared by supporters on Telegram. Examples include materials created by Al Hayat Media and Al Furqan, which users frequently share and re-post regardless of their original release date. Whereas some official IS media are left in their original form, other posts take on new meaning when sympathizers add translations, commentary, or editing. Ultimately, however, formal content is often supplemented by a range of materials on pro-IS channels.

IS’ elusive apparatus in the virtual theater broadly reflects its dynamic organizational structure in the real world. While it is relatively easy to identify materials released by IS’ established bureaus, media published by peripheral affiliates complicate the formal structure.

Take, for example, the morphing ties of al-Battar Media Foundation, allegedly the media wing of the Libyan-linked al-Battar Brigade. On an organizational level, the Brigade is also affiliated with IS and reportedly linked to some prominent attacks in Europe, yet it seems to operate with a degree of autonomy. These murky connections manifest in al-Battar Media Foundation, which produces a variety of high- and low-definition products generally regarded as “pro-IS,” but not formally branded as official IS releases.

In a noisy media space, the plethora of actors creating and sharing jihadist content makes it difficult for stakeholders to discern what is “legitimate.” Whereas IS’ strategic communicators and the scholars that study them are typically attuned to the intricacies of IS’ media networks, passive consumers may not make the same distinctions. Ultimately, the transience of official and unofficial content affects sympathizers and, by extension, counterterrorism policymakers and tech companies attempting to oppose IS in the digital sphere.

Building on this nuanced media structure, sympathizers produce and distribute a diverse but substantive array of images in an effort to mimic the official products of IS and its affiliates. The execution and sophistication of such photos vary, as some are overtly fabricated whereas others are difficult to distinguish from IS’ formalized, strategic messaging. Press releases from Amaq News, an auxiliary media agency for IS, serve as a useful illustration of the interplay between official and unofficial media outlets. Even under the banner of ‘Amaq News,’ image-based releases differ in quality and style, hinting at varying degrees of cohesion and authenticity. Additionally, IS opportunistically embraces Amaq’s publication of claims about attacks, namely when evidence verifies the assertion. Regardless, Amaq products may affect online sympathizers engaged in the broader dialogue who, as analyst Aymenn Jawad Al-Tamimi has written, are “eager to seize on whatever they can find to support their own preconceptions regarding an attack.”

Compounding and complicating the swathe of official and unofficial propaganda, individual users repurpose formal material with editing and commentary. Exhibiting a range of skills, the lower end of self-styled products includes screenshot collages of propaganda, as well as text and graphic overlays on existing pictures. More savvy individuals also produce edited content, photoshopping official propaganda to produce semi-original, unique material. In some instances, editors brand these items with a logo. The works of “Asawirti Media” and “Al Hifawi” illustrate this trend of commandeering IS media for campaigns in support of IS objectives.

At the grassroots level, a range of users create entirely new images inspired by official IS propaganda or extremist ideology writ large. This organic content includes edited pictures displaying the IS flag, images of weapons with jihadi nomenclature, and political memes. The production value and execution of these image files vary from polished to laughable, flooding the online sphere with what has been called “the peanut gallery of the Islamic State jihad.” Ultimately, it is crucial to recognize that within the decentralized jihadisphere, not all shared content explicitly promotes IS’ strategic messaging objectives.

Aside from propaganda, supporters frequently post organizationally unaffiliated, sometimes innocuous images that reinforce broader IS narratives. On pro-IS Telegram channels, mainstream media photos and screenshots of news articles are commonplace. Anecdotally, supporters use these images to vilify adversaries and redeem their peers. During a series of wildfires in the United States in 2017, for example, sympathizers shared photos and commentary framing the fires as religious retribution against non-believers. Other frequently shared non-propaganda images include news articles about terrorist attacks in the West, political cartoons, and current events. Photos of destroyed buildings and injured children are especially prevalent and often fall in this puzzling space. These images defy categorization but reflect narratives identified in IS-propaganda: the persecution of Muslims, military engagement, and shaming the West.

On a logistical level, it is also crucial to recognize that pro-IS accounts do not necessarily share image files in isolation. From maps and captioned memes to operational instructions and screenshotted texts, photos alone can communicate a spectrum of information. While some users disseminate pictures as files within the platform, others post images with links to outside domains, such as file-sharing platforms. For example, IS sympathizers on Telegram often post promotional photos of al-Bayan Radio, an official IS station, in tandem with internal or external links to audio broadcasts. This phenomenon undoubtedly occurs on other mediums as well, including Twitter.
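This packaging pattern suggests that moderation signals can come from how content is combined, not just from the image bytes themselves. The snippet below is a purely hypothetical sketch of that idea: it scans a post’s text for outbound URLs and flags image posts that link to a watched file-sharing domain. The domain list and message format are invented for illustration.

```python
# Hypothetical sketch: flag posts that pair an image with an outbound link
# to a watched file-sharing domain. Domains and format are illustrative only.
import re
from urllib.parse import urlparse

WATCHED_DOMAINS = {"file-share.example", "paste.example"}  # placeholder list

URL_PATTERN = re.compile(r"https?://\S+")

def flag_post(text: str, has_image: bool) -> bool:
    """Return True if an image post links out to a watched file-sharing domain."""
    if not has_image:
        return False
    for url in URL_PATTERN.findall(text):
        host = urlparse(url).netloc.lower().removeprefix("www.")
        if host in WATCHED_DOMAINS:
            return True
    return False

# Example: a promotional image posted alongside an external audio link.
print(flag_post("New broadcast: https://file-share.example/audio123", True))  # True
```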

Recommendations

Although some governments and their respective partners in the tech sector are improving their ability to flag extremist content online, it is difficult to discern the effect these measures will have on the range of information disseminated by IS sympathizers. Across relatively uninhibited Telegram channels, strategic media products are the most obvious segment of IS’ digital repositories. More often than not, users intersperse these materials with other, less explicitly extremist content, which nonetheless constitutes an integral part of communications in the virtual jihadisphere. While this content may seem less threatening, the proliferation of such material within pro-IS networks is consequential, especially in light of content removal efforts that triage official strategic media products and graphic content. Meanwhile, homemade jihadi memes, edited screen grabs, and captioned mass media pictures contribute to the normalization of pro-IS rhetoric in the virtual arena. To push back on such materials, stakeholders concerned with extremists’ exploitation of digital platforms should acknowledge how this breadth of content might make extreme violence more accessible and palatable to sympathetic actors online. As IS supporters adapt to mounting pressures and shift from broad-based social media to encrypted messaging apps and file-sharing platforms, it is easy to see how organic, bottom-up propaganda reemerges.

In this changing landscape, content removal is not a sufficient mechanism to moderate items that validate IS’ overarching narratives without explicitly breaking the law or a company’s terms of service. This gray zone between problematic terrorist content and material that is suitable for the public highlights the need for tactics that complement content removal. Notably, information that is neither rule-breaking nor innocuous applies to more than select file types; the phenomenon arises in image, video, text, audio, and PDF files, as well as URLs that lead to sites hosting materials that indirectly promote violent extremism.

A more productive and sustainable path forward requires stakeholders to adopt the marginalization paradigm, which argues that extremism is best confronted by the depreciation of extremist perspectives. The private sector should further explore practices that strive to reduce online communities’ exposure to extremist content and entities. Although official and explicitly violent propaganda is one aspect of the problem, a more encompassing approach could proportionally respond to materials and actors that fall somewhere in between. For example, companies might decline to place ads on items that are on the cusp of removal, or implement safeguards so such materials cannot go viral. Those tasked with countering IS in the digital sphere can make progress by sidelining radical material in all forms, preventing the rhetorical normalization of violent extremism. At the political level, policymakers must foster the design and implementation of more dexterous tactics, rather than pushing tech companies to go further and faster in content-removal; these responses might include redirection, counter-messaging, and even online intervention.
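To make proportional response concrete, the sketch below, entirely hypothetical and not modeled on any platform’s actual policy, maps a risk score for gray-zone content to graduated interventions short of removal: demonetization, distribution limits, or redirection toward counter-messaging.

```python
# Hypothetical sketch of a graduated moderation policy for gray-zone content.
# Thresholds and actions are illustrative, not any platform's actual rules.
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    DEMONETIZE = "remove ads from the item"
    LIMIT_REACH = "exclude from recommendations and trending"
    REDIRECT = "surface counter-messaging alongside the item"
    REMOVE = "take down and hash for industry sharing"

def moderation_action(risk_score: float, violates_terms: bool) -> Action:
    """Map a risk score (0-1) to a proportional response.

    Clear violations are removed; borderline material is marginalized
    rather than deleted, so it cannot be monetized or go viral.
    """
    if violates_terms:
        return Action.REMOVE
    if risk_score >= 0.8:
        return Action.REDIRECT
    if risk_score >= 0.5:
        return Action.LIMIT_REACH
    if risk_score >= 0.3:
        return Action.DEMONETIZE
    return Action.ALLOW

print(moderation_action(0.6, violates_terms=False))  # Action.LIMIT_REACH
```

The design choice worth noting is the middle tiers: material that violates no rule is never deleted, only denied the amplification and revenue that would help normalize it.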

Logistically, technology providers should continue to participate in industry-led forums like Tech Against Terrorism, and specifically support the development of new methods and standards regarding all forms of extremist content. Tech companies should work together to identify platform-specific moderation techniques, support entities that lack the resources to confront violent extremism online, and critically examine the potential of automation beyond takedowns. By exploring methods that complement content removal, tech companies can improve their ability to cope with items in the middle ground, where sympathizers violate neither terms of service nor laws in their support of the movement. In both design and implementation, stakeholders must take steps to promote transparency and reduce the risk of aggrandizing users within the jihadisphere.

While the fight against IS in the digital sphere is far from over, progress is tangible as policymakers and practitioners reflect on IS’ adaptation to mounting pressures in the virtual arena. Despite increasing regulations, IS’ online channels of communication remain populated by a frenzy of materials, image-based and beyond. Sympathizers’ resilience demonstrates the enduring need for a more comprehensive approach to confronting the group online. By comprehending the full scope of IS’ dynamic apparatus of communications and the content exchanged between the top-down and bottom-up parts of the movement, entities responsible for countering terrorist activity online can more effectively minimize the virtual threat in its entirety. Holistic approaches provide opportunities to diminish IS' effect online, expanding from the most obvious facets—official propaganda—to fundamentally addressing IS' ability to leverage digital communications technology writ large.


Audrey Alexander is a Research Fellow at the George Washington University's Program on Extremism. She focuses on the role of digital communications technologies in terrorism and studies the radicalization of women.
Helen Christy Powell is a Presidential Fellow at the George Washington University’s Program on Extremism. She studies Islamic State social media and U.S. counterterrorism policy.
