Addressing Challenges in Content Moderation: A Series

Daniel Byman, Chris Meserole
Sunday, October 27, 2019, 2:00 PM

Facebook F8 Developer's Conference 2017 (Flickr/Anthony Quintano, CC BY 2.0)

“My problem isn’t terrorists, it’s the KKK,” a senior social media company executive told us. We’d asked him about the challenges of countering terrorist groups like the Islamic State, only to receive an education about the difficulties of countering nonviolent hate groups. He had a point. Although terrorist use of the internet is extensive and well documented, major social media companies have made impressive strides in blocking and purging content and in denying well-known jihadist groups the use of their platforms. A much trickier issue, however, is nonviolent groups that serve as potential gateways to terrorism. These groups spew hate on the internet, help isolated radicals form networks, and ratchet up tension, but the smarter ones are careful to stay on the legal side of the line between extreme politics and support for violence. And these groups aren’t the only offenders. Governments, too, abuse social media platforms, using them to discredit their enemies and even to promote genocidal policies.

The challenge posed by nonviolent hate groups seemed too daunting to flesh out on our own. So we followed the time-honored tradition of professors everywhere when confronted with a vexing new policy question: We taught a class on it, hoping that smart students would make us smarter. Part of Georgetown’s “Centennial Lab” series, the course encouraged students to do independent work on solutions to pressing policy problems. We wanted to look at a range of groups, recognizing that social media companies need to keep their policies consistent regardless of the ideology behind the violence. Neo-Nazis, the jihadist-oriented Al Muhajiroun and its spinoffs in Europe, and the “Incel” movement all peddle hate, albeit with vastly different justifications. Other groups, such as the followers of Insane Clown Posse (Juggalos and Juggalettes), are considered gangs by some law enforcement entities but are also part of a broader cultural movement, posing challenges of their own.

If anything, the policy challenges posed by such groups are greater for social media companies than for governments. The companies are truly global, and their policies need a degree of consistency across jurisdictions, making it hard to reconcile the strong U.S. support for free speech with the more cautious European views on acceptable discourse and with the patently illiberal approaches of China and other dictatorships. How should social media companies balance privacy and security, public opinion and government pressure, transparency and accountability, all in a globally consistent way? What rights should nonviolent groups have on their platforms? How do internet companies determine whom to ban and whom to permit to use their services? Are there alternative approaches to banning content or users that technology companies should consider? What are the human rights and legal implications of these choices? These are some of the themes and questions this class addressed.

This course differed from those we’ve taught in the past in one important way: It focused on a set of concrete problems, and the students were expected to identify real solutions to them. The solutions could not be “balance privacy and security concerns” or other anodyne and thus useless recommendations. Rather, they had to rest on focused diagnoses of the problems facing social media companies and offer ideas that might actually be implemented.

Two of the class papers are being published on Lawfare today. Given the breadth of the class subject matter, they address quite different topics. The first looks at how Facebook is being used, and misused, by the government of the Philippines. The second critiques Facebook’s new privacy proposals and offers alternative approaches.


Daniel Byman is a professor at Georgetown University, Lawfare's Foreign Policy Essay editor, and a senior fellow at the Center for Strategic & International Studies.
Chris Meserole researches emerging technology, international security, and violent extremism. He is a fellow in the Center for Middle East Policy at the Brookings Institution.
