
Value Pluralism and Human Rights in Content Moderation

Molly K. Land, Laurence Helfer
Thursday, October 27, 2022, 8:16 AM

The new EU social media law opens the door to renewed conflict with the United States over freedom of expression. Ensuring national legislation meets human rights standards will mitigate these risks.

Social media icons on an iPhone. (Mike MacKenzie, https://flic.kr/p/WVdYrA; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)


On Oct. 4, the Council of the European Union (EU) approved the new Digital Services Act (DSA). The EU’s new law regulates social media companies’ liability for user posts containing content prohibited under the laws of EU member states, such as hate speech, incitement to terrorism, and child sexual abuse material. The DSA has been lauded as a “gold standard” for regulating social media platforms and protecting users. Despite this praise, significant concerns remain about over-censorship and enforcement overreach.

Less attention has been paid, however, to the ways in which the DSA opens the door to renewed conflicts over global standards on speech. Approval of the new EU legislation came within a week of a U.S. federal appeals court decision upholding a Texas law that prevents social media giants from removing user posts based on viewpoint, teeing up a circuit split almost certain to be reviewed by the U.S. Supreme Court. The DSA’s passage also came less than a day after the Supreme Court agreed to hear a case about Section 230 of the Communications Decency Act, the federal law that broadly shields social media companies from liability for content on their platforms. Regardless of how these cases turn out, they are an important reminder of just how different the United States is from Europe (and the rest of the world) when it comes to freedom of expression. The EU’s and United States’ approaches represent sharply divergent views of the benefits and risks of speech regulation—one highly skeptical of government intervention, the other acutely aware of the concrete harms of inciting and discriminatory speech.

Is there a coming conflict between the DSA and U.S. law, including the First Amendment? The largest social media platforms have navigated the potentially conflicting demands of national laws since the landmark dispute in the early 2000s between Yahoo and France over Nazi memorabilia. However, the recently approved DSA imposes expansive content moderation rules on all but very small social media platforms in Europe, and many of those platforms may not be able to sustain the cost of complying with different legal regimes. Meeting the requirements of the DSA (and its national implementing legislation) may also compel changes in the content moderation policies of U.S. internet intermediaries. Nor is the risk merely one of “censorship creep.” If the U.S. Supreme Court mandates some version of content or viewpoint neutrality, even the largest platforms will likely find it difficult to comply with that mandate while also removing hate speech and misinformation accessible to users in the EU.

International law can help to minimize these potential conflicts. As we explain in a recent paper about Meta’s Oversight Board, human rights treaties that protect freedom of expression “facilitate challenges to abusive and arbitrary speech restrictions while accommodating a modicum of political and cultural variation and the preferences of different polities.” More specifically, human rights law requires that state restrictions on speech meet the requirements of legality, legitimate aim, and proportionality, while also recognizing that there are different ways to balance freedom of expression with values such as autonomy, dignity, privacy, and equality. 

These principles suggest that European regulators must be cautious about uncritically applying existing laws and legal doctrines to online speech. Quite apart from the debate over whether and how to regulate “lawful but awful” content, even the ostensibly more straightforward demand to remove illegal content raises significant human rights concerns. For example, extending German law’s prohibitions on insults and Nazi symbols to social media may contravene free speech principles in both international and regional human rights law, especially when those prohibitions are applied in an environment capable of nearly perfect identification and enforcement. Content moderation algorithms can identify and act on far more speech than is possible in the offline world. There is thus a significant risk that enforcing speech laws such as Germany’s in this setting will have a disproportionate impact on free expression.

These concerns are exacerbated by how social media companies moderate content. Because they often give limited consideration to the context of online speech when deciding whether or not to remove users’ posts, platforms cannot ensure that their application of national laws satisfies international freedom of expression standards. Yet as the Meta Oversight Board has emphasized repeatedly, context is essential to understanding the proportionality of restrictions on speech.

National governments can also better respect human rights and the divergent policy choices of other jurisdictions by encouraging platforms to adopt mechanisms that allow users to choose which speech norms apply to them. There will, of course, be mandatory rules that governments establish for certain types of content, pursuant to either international law or domestic regulation. Beyond those rules, however, especially where national jurisdictions might reasonably reach different choices and where enhanced enforcement risks disproportionately burdening speech, users should be allowed to choose for themselves.

For example, instead of Germany trying to prosecute nearly every insult on the internet and enlisting platforms in that endeavor, users might be offered the ability to block insulting speech from their content feeds. Many have proposed user control as normatively preferable to either state-defined or platform-defined norms (here, here, and here, among others). Our point is that such control also helps to ameliorate tensions between national jurisdictions seeking to protect users from harms in the digital sphere. In addition, greater user control may diminish some of the problems that national law seeks to address: Germany’s interest in removing insults that only a few people choose to view, for instance, would be marginal compared to the chilling effects of overbroad enforcement.

In sum, rather than an “all or nothing” struggle for regulatory dominance among states, and between states and platforms, offering users the opportunity to choose what content they do and don’t see on their personal feeds better accommodates the wide diversity of national speech regulations. And it does so subject to the overarching guidance and constraints of international human rights standards. In the longer term, efforts to foster deliberative choices about online rules will hopefully encourage users to take more responsibility for the speech they consume and share.


Molly K. Land is the Catherine Roraback Professor of Law and Human Rights at the University of Connecticut School of Law. Her scholarship focuses on the intersection of human rights and science and technology studies (STS). She has published widely on the relationship between human rights and technology and is the lead editor of the open access volume, “New Technologies for Human Rights Law and Practice” (Cambridge 2018).
Laurence R. Helfer is the Harry R. Chadwick, Sr. Professor of Law and co-director of the Center for International and Comparative Law at Duke University. He is also a Permanent Visiting Professor at iCourts: The Center of Excellence for International Courts at the University of Copenhagen. His research interests include human rights, international adjudication, and the interdisciplinary analysis of international laws and institutions. He has authored more than seventy publications on these and other topics.
