
Regulating Internet Content: Challenges and Opportunities

Daniel Byman, Sarah Tate Chambers, Zann Isacson, Chris Meserole
Thursday, November 16, 2017, 11:30 AM



Terrorist groups, like everyone else today, rely on the internet. Al-Qaeda in Iraq made its name disseminating hostage beheading videos. Omar Hammami became a Twitter star for the Somali jihadist group al-Shabaab. The Islamic State put all this on steroids, producing and disseminating thousands of videos in Arabic, English, French, Russian and other languages to reach Muslims around the world.

Much of this content is propaganda, meant to inspire new recruits, demonstrate to funders that their money is being well spent, and shore up morale among existing followers. Social media is also a way to involve part-time supporters, who can modify Islamic State images or generate their own content and disseminate it. Such propaganda is also used by face-to-face recruiters, who reinforce their in-person sessions with videos or teachings distributed through the internet.

Some of the content is clever and compelling. The Islamic State, for example, put up numerous images of how good life is under its rule, with pictures of swimming pools, happy children, and Nutella. Many jihadists posted pictures of themselves with cats (the adorable feline mujahids often held guns). Many of their most famous images, however, involved beheadings of hostages, including several Americans, or the burning alive of a Jordanian pilot in a cage—videos that are visceral and stomach-churning.

Less publicly, but often more importantly, terrorists use the internet for operations. Over the years, they’ve used an array of technologies to plan attacks or help move money or people from place to place. New recruits are often directed by email or via supposedly secure applications. The Islamic State posts instructions on “How to Survive in the West” for all to see. And they have strategy debates, with key works such as the “Management of Savagery” readily available for anyone who wants to download them. We’ve also seen individual terrorists post their words and intended deeds on Facebook or otherwise advertise their commitment to violence to their “friends” or “followers.”

Preventing terrorists from recruiting and planning operations via the internet is difficult for several reasons. One is that the internet encompasses many things, from email and websites to apps and social media, and terrorists use them all. Whether it’s Gmail and YouTube or Telegram and WhatsApp, stopping specific content from flowing across all internet platforms can be technically difficult or impossible. Indeed, monitoring new content and communications is highly resource-intensive at best, requiring thousands of human hours to comb through the nooks and crannies of different virtual sites or sort what various algorithms have generated.

Legal concerns often loom large. The line between embracing violence and engaging in political debate can be tricky. The infamous “Nuremberg Files” website involved an anti-abortion group that posted the names and addresses of abortion doctors and had links to other sites that endorsed violence. Because the website itself did not openly call for violence, a fiercely contested court case pitting Planned Parenthood against the American Coalition of Life Activists ensued. In the end, the host of the website simply denied the group its services—solving this particular case but not the issue in general. Similarly, earlier this year Cloudflare dumped the Daily Stormer because of its objectionable content and fears that Cloudflare’s brand would be tainted. With this decision went Cloudflare’s claims to “remain neutral” in free speech disputes. Large legal questions remain: How does incitement work in this context? Should these actions be covered under the material support statutes? To what extent is a particular instance political speech? And those are just a few.

It might be tempting to simply have a high bar and not fret about all these possibilities—terrorists are terrorists, after all. But a high bar can easily interfere with legitimate users in the name of counterterrorism. Facebook, for example, took down the iconic and disturbing image of a naked Vietnamese girl fleeing after a napalm attack on the grounds that images of naked girls are, well, child pornography. (After an outcry, Facebook restored the image.) Likewise, a high bar on encryption might prevent terrorists from communicating securely with one another, but it could also prevent human rights dissidents from doing the same.

Making this more complex, different countries have vastly different rules. The United States is a free-speech outlier—most European states are far more restrictive, let alone dictatorships like China or Saudi Arabia. Germany, for example, prohibits pro-Nazi propaganda. However, without the United States entering the arena with a feasible alternative, these more restrictive states proceed unchallenged. Earlier this month, Britain’s home secretary announced a proposed expansion to the statute that criminalizes possessing or collecting information for terrorist purposes. Previously, the law covered only information that was downloaded or printed; the proposal would include information that was repeatedly streamed or viewed online. In an increasingly restrictive environment, the United States may find value in opening the debate and proposing alternative solutions.

Although terrorists’ exploitation of the internet is a net negative, intelligence agencies may actually want to preserve some terrorist use of the internet. Terrorists’ tweets, Facebook posts and communications can be monitored, and security services often discover unknown threats or find compelling evidence that can be used to disrupt terrorist attacks. When an Islamic State cell in Australia plotted a major bombing this summer, the attack was thwarted only because an intelligence agency intercepted communications between the cell and its handler inside Syria.

Not surprisingly, technology companies are being thrust into the limelight. Part of this is because terrorists are using their platforms to foster their violent agendas. But part of it is also because the policy world has not engaged this issue directly. Different legal regimes in different countries, regulations written for an older technological era, and tricky issues balancing free speech against the risk of violence all make it easier to demand that internet companies act on their own than for governments to take responsibility.

To solve the problem, we need action—ideally, coordinated action—at multiple levels. Technology companies are already experimenting with artificial intelligence and other tools to identify hateful content and at times block it. Many have dumped Islamic State, neo-Nazi, or other accounts, using their own terms of service as justification even when the law does not require them to do so. To date, these takedowns have been effective: Islamic State supporters created fewer new Twitter accounts after the platform began suspending them en masse, while white nationalist supporters on Reddit made far fewer extremist comments (or left the site altogether) after the company shut down its most hateful subreddits.

Yet self-regulation by the technology sector, however welcome, is not in itself a viable long-term solution. Both Twitter and Reddit, for example, cracked down on extremist content only after a massive public outcry—and in the case of the Islamic State, only after the group had already used the platform to attract thousands of new supporters, many of whom either traveled to Syria or carried out attacks in their own countries.

Any effort should begin at home and in Europe, but it shouldn’t end there. The United States will also need to negotiate with China, Russia and other less friendly powers. These negotiations will require a delicate balancing act: They will need to prevent our enemies from fomenting extremist speech without so normalizing free speech restrictions that autocrats across the globe feel more empowered to suppress pro-democracy voices at home. If the United States demanded that terrorist-related content be banned, many countries would quickly brand any domestic opponents as “terrorists.” These governments would label not only the Islamic State and al-Qaeda as terrorists, but also Uighurs in China, the Muslim Brotherhood in Egypt and whoever is criticizing Vladimir Putin on a given day. Finally, if negotiations fail or simply are not possible—as is often the case in states experiencing civil war—direct action may be necessary in severe cases. This may require arresting or targeting the propagandist if the propaganda poses an immediate threat to national security, or knocking their sites offline even in their home countries.

Fortunately, Lawfare has long debated the intersection of technology and extremism, such as when Seamus Hughes questioned how frequently radicalization happened exclusively online. Most of the commentary to date has focused on narrow legal issues. For example, Matthew Weybrecht cited Humanitarian Law Project and Brandenburg to argue that self-radicalization may lead to another carve-out in free speech jurisprudence, while Elinor Fry summarized the famous Dutch Context prosecution, noting how the Netherlands’ incitement requirement differs from Brandenburg. Meanwhile, Paul Rosenzweig wondered whether an ISIS-related copyright act modeled after the Digital Millennium Copyright Act could compel providers to take down some ISIS-supporting content.

The most sustained commentary on the issue, however, comes from Zoe Bedell and Benjamin Wittes. In a three-part series, the two considered whether Twitter was violating the material support statutes by allowing designated Foreign Terrorist Organizations to have accounts. They note that what perhaps most protects Twitter is the prudential judgment of the FBI and Department of Justice, which pushes prosecutors to steer as far away from the First Amendment line as possible when bringing material support cases. In another set of articles, Bedell and Wittes examined whether Twitter could face civil liability under the material support statutes; they found a civil claim would be difficult because establishing both mens rea and proximate causation would be a stretch. In a follow-up piece, Bedell and Wittes looked at Section 230 of the Communications Decency Act and determined that the section would not protect Twitter or a similar service provider if the plaintiff did not rely on the contents of the tweets as evidence; in practice, however, avoiding such reliance would be nearly impossible.

Over the coming months, we will be building on Lawfare’s longstanding debates over technology and terrorism. In particular, we will be looking at policy proposals and technical ideas that might reduce the internet space available to terrorists. Our focus will be on assessing the impact and practicality of many proposals, and we’d welcome input on this as well as on the legal ramifications of what we propose.

We suspect some steps will be straightforward and others near-impossible. But a concerted effort on all fronts can restrict the virtual space in which terrorists operate while ensuring that civil liberties and business interests also are protected.


Daniel Byman is a professor at Georgetown University, Lawfare's Foreign Policy Essay editor, and a senior fellow at the Center for Strategic & International Studies.
Sarah Tate Chambers is a second-year law student at the University of South Carolina. She currently works for the Department of Justice's Publications Unit. She graduated from the University of Minnesota with a B.A. in Religious Studies.
Zann Isacson holds a B.A. in International Relations from the College of William and Mary and an M.A. in Security Studies from Georgetown University, concentrating in terrorism and substate violence. She previously worked on Capitol Hill and at the Department of Defense and the Department of State.
Chris Meserole researches emerging technology, international security, and violent extremism. He is a fellow in the Center for Middle East Policy at the Brookings Institution.
