
Combating Terrorism Online: Possible Actors and Their Roles

Zann Isacson
Sunday, September 2, 2018, 10:00 AM

Editor’s Note: Fighting terrorism online is one of those ideas that everyone agrees with in principle but disagrees on in practice. What should be regulated and who should do so are among the many areas of disagreement. Zann Isacson of Georgetown examines the different actors that might play a role, including Congress, technology companies and of course the executive branch, and assesses what each might bring to the table.

Daniel Byman

***

For many Americans, the Islamic State gained notoriety through widely circulated beheading videos, beginning with the execution of American journalist James Foley. These grisly videos demonstrated the monstrous tactics of the group, which filmed and disseminated gruesome murders for global consumption and disgust. Of course, the Islamic State did not invent beheading for publicity; in recent memory, American journalist Daniel Pearl was killed in Pakistan in 2002. Yet, as the Washington Post noted in 2014 after Foley’s death, extremists filmed Pearl’s murder on a camcorder and distributed the video by tape, which limited the scope of circulation and the potential audience. Even when terrorists shifted to posting videos online, as with the decapitations of Nick Berg, Kenneth Bigley and others, the infrastructure to share videos instantly did not yet exist. The advent of social media now allows any video, photo, or audio clip to reach audiences across continents. The hashtags #A_Message_To_America and #NewMessageFromISIStoUS associated with the video of Foley amassed more than 2,000 tweets in the first three hours of the video’s release, according to SITE Intelligence Group, and polling shows that more than 80 percent of adults in Great Britain knew about the video of Foley’s murder within days of it being posted online.

The White House and Congress condemned the violence by the Islamic State, especially against U.S. citizens. However, they omitted any reference to the role of technology companies—such as web-hosting platforms, communication applications, and social media—in sharing the content. More than three years later, the United States still lacks a policy to confront terrorist propaganda on social media.

Terrorism relies on publicity. Defeating the Islamic State and its successors will in part require removing their access to these technology platforms, which includes assigning responsibility for regulating this content. The preferred approach might lie with Congress deeming certain content inappropriate and mandating its removal, tech companies acting to self-regulate, or even inaction. This article surveys the costs and benefits of three very different approaches: government-led policy, private sector-led policy and a passive approach.

Government Responsibility

A government-led policy would involve Congress and the president assuming some role in limiting pro-terrorism content. The U.S. public expects their government to ensure their safety and security. Legislation could draw the boundaries for technology companies, require the removal of specific content, or mandate data sharing between the private and public sectors. Legislation of this sort would likely entail some aspect of censorship.

Without conjuring images of “Fahrenheit 451,” it is fair to say that Americans have accepted some censorship as a society. The Federal Communications Commission (FCC) provides the standard example: U.S. courts have determined that the First Amendment does not protect obscene material, and broadcast television and radio channels may not air indecent material when children are likely to have access to the programming. As such, the FCC restricts the content of private corporations for mass audiences, and Americans have accepted this censorship in support of appropriate material for audiences at large. Although not under the purview of the FCC, much of terrorist groups’ propaganda arguably could fall into this category, though some may be subject to First Amendment protections for free speech. The State Department’s Foreign Terrorist Organization (FTO) list functionally serves to identify foreign enemies, and an argument can be made that any propaganda originating from one of these groups constitutes an effort to undermine the United States. Congress could start by restricting content emanating from these groups.

The U.S. government has an interest in limiting the dissemination of terrorist content from both a national security and a public safety perspective. The Islamic State’s videos encouraged “lone wolf” attacks in the United States. Al-Qaeda’s online literature influenced one of the Boston Marathon bombers, and then-FBI Director James Comey confirmed that the Orlando nightclub shooter had in part been radicalized online. More recently, FBI agents found 90 Islamic State videos on a cell phone belonging to Sayfullo Saipov, the perpetrator of the 2017 truck-ramming attack in New York City. Even given the contentious state of congressional politics, legislation aimed at combating terrorist propaganda should gain support from both sides of the aisle.

The U.S. government also possesses the expertise to determine which types of accounts and content cause the most harm. Across government agencies, an array of counterterrorism, counter-radicalization, counter-extremism and law-enforcement experts focus on threats to the United States and how best to oppose them. These experts could decide which groups or content to block. Terrorism is a muddy concept that is difficult to define: The FTO list, for example, identifies groups abroad, but no domestic equivalent exists. Would white supremacist groups be included in regulation? What about an American’s account that posts pro-Islamic State articles? The U.S. government could make these determinations, rather than a tech company that likely lacks the knowledge to prioritize threats. Defining what constitutes terrorist propaganda and determining the scope of egregious content should inform any potential legislation.

Furthermore, government regulation would ensure more uniform implementation across tech companies in the United States. Currently, tech companies self-police, which leads to responses that vary in severity. In response to the Islamic State’s beheadings, some companies removed all related content, while others took more targeted action depending on the situation. Twitter deleted all images of Foley’s murder and suspended accounts that uploaded or shared the image. YouTube took down the video as a violation of its terms of service, which stipulate that terrorist organizations may not upload videos to the platform. Facebook acts on a case-by-case basis. Legislation could ensure that all companies respond consistently, which would help prevent terrorist organizations from exploiting particular companies with lax policies: Regardless of the company, the offending content would be removed.

Even if tech companies proactively support government efforts, the U.S. government could provide cover to help these companies save face with consumers and stakeholders. The public was split on the removal of the Foley beheading videos: Some lauded this act of self-censorship, while others balked at these organizations determining the bounds of acceptable media. Government regulation that blocks certain content would allow tech companies to take the same action as before but attribute it to legislation, shifting blame while still complying.

Technology Companies

Tech companies represent the other side of the coin. As private companies, they can limit the content hosted on their sites, within reason. Their terms of service generally outline what may or may not be hosted. Holding tech companies responsible would require public pressure to push these organizations to remove offending material and to encourage them to monitor terrorist organizations on their platforms more effectively.

Tech companies have responded to public pressure faster than government regulation could. It is in the best interest of these companies to prevent their platforms from becoming conduits for radicalization or tools to promote violence; inaction would diminish their brands, discourage investors, and invite regulation or oversight. As such, many tech companies have been proactive: Twitter suspended more than 300,000 accounts linked to terrorism in the first half of 2017 alone, while YouTube blocked videos by Anwar al-Awlaki. In addition, Facebook, Twitter, Microsoft and Google formed the Global Internet Forum to Counter Terrorism in June 2017 to “share information and best practices about how to counter the threat of terrorist content online.” These actions suggest an industry-wide effort to prevent terrorists and their organizations from exploiting these platforms.

Although Silicon Valley’s preliminary actions may appear insufficient relative to the U.S. government’s threat assessment, the paradigm must shift to view tech companies as partners in counterterrorism. As terrorist communication and recruitment have moved online, with MP3s and YouTube videos replacing audio and video cassettes, the role of tech companies has become more consequential. As Philip Mudd, a senior fellow at New America, told the House Committee on Homeland Security, “The adversary is recruiting not personally but digitally… We don’t own the data. PayPal does, Google does, Verizon does.” Tech companies know their products and how to combat unwelcome users and content. The U.S. government should work with these companies; Mudd suggests the government ask Silicon Valley “what should we do?” and enable those in this space to determine solutions.

Turning to tech companies for solutions rather than regulating content also supports innovation. Government is notoriously slow to embrace change, and any regulation to come out of Congress would likely be vague, focus on existing companies rather than future technologies, and require removing content. A censorship-based approach would push terrorists to find new technologies or develop measures to evade identification. Rather than playing this game of cat and mouse, an approach that lets technology companies lead would leave space for them to develop innovative ways to target terrorists online.

It is also unclear what regulation would accomplish or how the government would enforce rules and penalize violations. To return to the FCC example, the FCC fines violators. Would fines be the appropriate penalty for hosting terrorists or terrorism-promoting content? Considering that large social-media platforms host millions of users, it is unrealistic to expect that all unsavory content could be removed immediately. Would enforcement target only the most egregious accounts? Or would it focus on the length of an account’s operation? Zoe Bedell and Benjamin Wittes previously discussed some of these issues in a three-part series, “Tweeting Terrorists.” The difficulty of determining the parameters of regulation suggests that the U.S. government must rely on tech companies to act.

An additional benefit of tech companies setting the rules is that those rules can be implemented uniformly, rather than on a country-by-country basis. If a tech company determines content is unacceptable, the content will be removed from the platform entirely; this is the only sure way to block its dissemination. If, by contrast, governments make these determinations, countries with stricter laws on creating or disseminating terrorist content will remove or block all applicable content, while countries that prioritize privacy protections will allow the same content to stay live. Geoblocking, or restricting content based on location, may restrict access for some audiences while allowing the content to remain on the platform. But this is not a complete solution: The content remains available to most audiences, and even in countries where it should be inaccessible, VPNs, Tor browsers and other mechanisms allow users to obscure their location and access the blocked material. When a tech company deems content unacceptable, that content is removed from the platform rather than merely blocked by location, which more effectively prevents users anywhere from accessing it on that platform.

Relying on private companies to limit the content hosted on their platforms would also assuage concerns about government censorship and support internet freedom. Censorship makes many American politicians cringe, no matter how appalling the target. Legislation that limits speech faces political hurdles and opposition due to the “slippery slope” argument: Today, we limit the Islamic State; tomorrow, we burn books. Free speech advocates share this concern and criticize potential government regulation on the grounds that it creates a dangerous precedent and may stretch to cover dissent or other political speech. A private corporation limiting the use of its product, by contrast, simply alters the purpose of the platform without infringing on First Amendment protections.

Finally, tech companies promote transparency. Many of them, including Apple and Twitter, publish transparency reports detailing government requests for information; these reports may include warrant canaries that hint at the receipt of National Security Letters. Watchdogs and privacy-protection organizations use this information to understand the extent of government oversight and to monitor government influence online. The government would likely classify the equivalent information it maintains and restrict what tech companies may disclose, preventing its dissemination to the public. Indeed, tech companies have taken legal action to push for the release of more information. Relying on tech companies would give the public more information on government involvement with their platforms.

Inaction

Rather than looking to tech companies or the government to act, perhaps the solution lies in inaction. Both companies and the government must dedicate an astronomical amount of resources, in personnel and programs, to prevent the proliferation of terrorist accounts and content, and even then terrorists will not be kept off these platforms entirely. In addition, as technology progresses and more applications are created, terrorists will move to newer platforms. Regulators will be spread thin policing outdated platforms while terrorists exploit new apps on the digital frontier. The Islamic State, for example, now prefers the application Telegram over Twitter because, among other reasons, it allows for end-to-end encryption. Instead of focusing resources on current, popular technologies, a more efficient allocation could focus on research and development for nascent systems.

Censorship will only develop into a prolonged game of whack-a-mole. Preventing the dissemination of terrorist content will not stop terrorist organizations from creating propaganda; rather, terrorists will search for new outlets or means to spread the content as the U.S. government tries to remove it. Removing content alone will not destroy the cause. For instance, even though Germany bans Nazi symbols, domestic far-right movements and violence have risen drastically over the past few years. Censorship alone is insufficient to prevent supporters from accessing content.

Some analysts also propose keeping these social media accounts active as a tool to gain actionable intelligence. As a previous article I co-authored with Daniel Byman, Sarah Tate Chambers, and Chris Meserole notes, “Terrorists’ tweets, Facebook posts and communications can be monitored, and often, security services discover unknown threats or find compelling evidence that can be used to disrupt terrorist attacks.” In at least some cases, the intelligence community may prefer to monitor these accounts and accept the consequences of greater publicity rather than lose real-time data by shutting them down.

Censoring terrorist propaganda may also backfire. Blocking the dissemination of content may give credence to terrorists’ claims of victimization. Censorship may also pique curiosity and encourage individuals to seek out the media. Individuals who would otherwise have dismissed the propaganda as false and obscene may instead seek it out to determine for themselves why the government labeled it egregious.

The most troubling and significant consequence would be the precedent such censorship sets. Illiberal governments disparage their political opponents as terrorists to delegitimize them. If the U.S. government or American companies block social media under a counterterrorism policy, the United States risks losing the high ground when China or Syria’s Bashar al-Assad obstructs opposition speech under a similar guise. Social media has become an important organizing and messaging tool for pro-democracy forces, including during the 2011 revolutions in the Middle East. As the traditional supporter of democratic movements around the world, the United States should be wary of efforts that may constrain legitimate protest movements in the future.

The debate over how to respond to terrorist content online involves competing priorities, including First Amendment concerns, national security, business interests, international norms, and state sovereignty. The United States has yet to determine how to proceed, and this survey of potential policies can only highlight the benefits and costs of each way of delegating responsibility. Before settling on a course of action (or none), lawmakers and companies must determine what to expect of technology, what legislation would accomplish, and the consequences of government involvement, and whether those costs outweigh the potential national security gains.


Zann Isacson holds a B.A. in International Relations from the College of William and Mary and an M.A. in Security Studies from Georgetown University, concentrating in terrorism and substate violence. She previously worked on Capitol Hill and at the Department of Defense and the Department of State.
