
Regulating Offensive Online Speech—Are the Times Finally A-Changin’?

Yuval Shany
Friday, June 5, 2020, 8:00 AM

Political pressure is mounting against broad liability protections for online platforms. What’s a better way forward?



The fast-moving escalation in the conflict between President Trump and Twitter in the last week of May may prove in hindsight to be a watershed development for the legal architecture of online social media.

The flurry of activity included Twitter’s flagging of two of Trump’s tweets for containing disputed or misleading information about mail-in ballots by appending links to mainstream media coverage of the claims; the platform’s partial blocking of a third Trump tweet, on the events in Minneapolis, for glorifying violence; the president’s executive order calling for a new interpretation of Section 230 of the Communications Decency Act by executive departments and agencies; and Trump’s call for revoking Section 230 altogether.

The dramatic events in the U.S. follow another notable legal development for online social media. On May 13, the French Parliament adopted the Avia Law, which requires social media companies to quickly remove certain “manifestly” unlawful content after a user flags the content in question. Under the law, platforms have 24 hours to remove flagged hate speech and one hour to remove flagged terrorist content or child pornography. Like the 2018 German law that inspired it—the NetzDG—the Avia Law threatens social media companies and search engines with hefty fines for failing to meet the new legal requirements. These developments coincide with yet another milestone in May 2020: the announcement of Facebook’s new Oversight Board, which will review appeals of the platform’s content removal decisions.

What do all of these significant developments have in common? Through various means, they appear to mark the end of the “hands-off” phase in the history of regulation of online content disseminated through social media.

The IDI/Yad Vashem Recommendations for Reducing Online Hate Speech

Calls for a change in the regulatory paradigm for online social media have been growing in recent years. One such proposal comes from the 2019 IDI/Yad Vashem Recommendations for Reducing Online Hate Speech, a list of measures formulated by an international team of researchers working under the guidance of senior human rights experts (including serving U.N. rapporteurs and past and present U.N. treaty body members). I coordinated the process of formulating the recommendations on behalf of the Israel Democracy Institute (IDI). The recommendations may serve as a helpful starting place for discussing specific reforms in content moderation practices.

The main thrust of the IDI/Yad Vashem recommendations is as follows: Recommendation 1 stresses that social media companies have a legal and ethical responsibility to reduce online hate speech. In other words, the recommendations reject the notion that social media companies should have blanket immunity from liability for harm they could and should have prevented, and they call on the companies to “offer accessible remedies for violations of the applicable norms.” Recommendation 2 directs social media companies to international human rights law as a principal legal framework for reexamining their content moderation policies. This approach follows the one taken by David Kaye, the U.N. special rapporteur for freedom of opinion and expression. Kaye—who also advised on the contents of the IDI/Yad Vashem recommendations—has long argued that internet companies should embrace a human rights framework. In Kaye’s view, international human rights law could offer legitimate substantive standards for balancing freedom of expression with other rights and interests, and provide remedial avenues that guarantee legal accountability, along the lines provided in the 2011 U.N. Guiding Principles on Business and Human Rights.

Recommendations 5 and 6 encourage the development of nuanced and human rights-compatible methods to prevent and address online hate speech that go beyond simple content removal. These alternative tools include flagging content, prompting users to moderate their own posts, and raising awareness of community rules among users. This would push social media companies to exercise more editorial discretion vis-à-vis potentially offensive content. Recommendations 7-10 lay out the contours of complaint mechanisms that would allow for effective remedies for those adversely affected by content moderation decisions. These include an independent mechanism for reviewing “hard cases” (Facebook’s new Oversight Board, depending on the jurisdiction it’s ultimately given, may qualify as such a mechanism). Finally, Recommendations 12-16 discuss how content moderation policies should be reviewed and updated, and how social media companies can promote transparency about their policies and ensure their successful application in practice.

Although not a panacea for the current internet governance crisis, the IDI/Yad Vashem recommendations might, if adopted, mitigate some of the legal and political pressure on social media companies to assume more responsibility and to conduct themselves with greater transparency in reducing hate speech. They also provide a useful benchmark for outside observers to evaluate the content moderation policies of social media companies and to hold such companies accountable for some of the real-life consequences of their policies. On top of that, the recommendations provide standards against which one may assess the legitimacy of governmental responses to the operation of social media companies.

A New Regulatory Equilibrium?

The IDI/Yad Vashem recommendations offer a new regulatory equilibrium, one that differs from the traditional “hands-off” approach to the regulation of online content disseminated through social media. The current regulatory model, which has held sway for the past 25 years, has been premised on a combination of factors: Section 230, which provides the legal basis for treating online social media companies—unlike traditional media outlets—as largely immune from liability for third-party speech content they host; the First Amendment to the U.S. Constitution, which limits the power of the U.S. government to regulate speech, including online speech; and the reluctance of social media companies themselves to assume significant content moderation responsibilities. This last factor is aptly encapsulated by Mark Zuckerberg’s oft-repeated claim that Facebook won’t serve as an arbiter of truth.

The structures supporting a “hands-off” approach to the regulation of online content have come under growing pressure inside the U.S. in recent years. In 2018, Congress passed legislation removing Section 230 immunity when platforms knowingly facilitate or support content connected to sex trafficking. Furthermore, there have been recent (though consistently unsuccessful) attempts to sue major internet companies in U.S. federal court for breaching antitrust laws by allegedly coordinating the suppression of conservative speech, as well as several initiatives by Republican lawmakers to condition the preservation of Section 230 immunity on “platform neutrality” in political matters. Though the new executive order has been widely criticized, and it is unclear how much legal effect it will have, it is part of a broader movement toward narrowing existing immunity.

What makes the regulatory environment in which social media companies operate even less stable is the rising trend in Europe to penalize online platforms for harmful content they propagate, and the concomitant public pressure on such companies to engage in more aggressive content moderation. Indeed, events such as the 2019 shooting massacre in Christchurch, New Zealand, have resulted in calls on governments to regulate extremely harmful online content and on social media companies to engage in more robust self-regulation.

Although Section 230 has allowed internet companies to remove objectionable content without incurring legal liability, provided they exercise discretion in “good faith,” it is unclear whether and for how long the current legal immunity regime will hold. The ever-increasing centrality of social media as a public space for exercising basic rights is likely to prompt more and more demands that platforms depart from their traditional “hands-off” approach and adopt new human rights-based content moderation policies. Furthermore, once online platforms begin to engage in extensive content moderation, the public may expect them to incur responsibility for harm caused by offensive content that they could and should have blocked. Put differently, once social media companies have become in practice “arbiters of speech,” including in difficult cases that raise sensitive questions about freedom of expression, there are good reasons to subject their power to moderate content to legal checks and balances. Indeed, the IDI/Yad Vashem recommendations reflect a normative position according to which social media companies should be held legally and ethically accountable if they unjustifiably fail to address concerns about online hate speech.

One can understand Zuckerberg’s attempt to distance Facebook from Twitter’s approach to tackling online misinformation as reflecting an understanding that the exercise of broad content moderation discretion is putting stress on public support for the basic legal architecture of online platforms under U.S. law (which is, incidentally, also the basic architecture of Facebook’s business model). Zuckerberg may worry that undertaking a broad responsibility to police controversial speech would sooner or later generate significant political momentum to erode Section 230 immunity. Trump’s attack on Section 230 may be intended, in fact, to restore the status quo ante—to goad social media companies to return to the old equilibrium, which was based on their taking a relatively minimalist approach to content moderation.

The IDI/Yad Vashem recommendations seem to stand for the proposition that some erosion of Section 230 and comparable immunity arrangements is not necessarily a negative development. Rather, such a change may help to remedy the corporate responsibility deficit that characterizes the content moderation operations of many internet companies. Still, the French and German approaches to content moderation might be even more problematic than the status quo, since they apply a heavy hammer—the threat of high monetary fines—to the sensitive, at times surgical, task of balancing freedom of expression against competing societal interests. Such legislation might have a significant chilling effect on freedom of expression, subject social media companies to incompatible national legal standards, and push controversial expression from relatively responsible large social media companies to completely unregulated, and more dangerous, online outlets.

Arguably, the most promising way forward involves the development of a new equilibrium for the regulation of online social media, built around sensible content moderation policies and effective accountability mechanisms developed by social media companies (through self-regulation or co-regulation), with states maintaining a residual rights-protecting role. According to this approach—reflected inter alia in the IDI/Yad Vashem recommendations—social media companies should engage in content moderation in accordance with (a) international standards, (b) applicable national standards that do not violate international standards, and (c) other lawful standards they have agreed to adhere to (such as community rules). If they violate these standards, they should be held legally and ethically accountable. At the same time, states should not be allowed to exercise regulatory powers over social media companies that exceed the powers that international human rights law—including international freedom of expression norms—allows them to exercise. Twitter’s new policy on misleading information and Facebook’s new independent Oversight Board appear to be steps in the right direction: developing more robust substantive norms on content moderation and stronger procedural safeguards for accountability in the event of violation of applicable standards by the platforms. (It is interesting to note in this regard that Facebook’s announcement explicitly refers to international human rights law.)

Ultimately, the IDI/Yad Vashem recommendations advocate measures designed to hold online social media companies to higher standards of conduct, accountability and transparency, while protecting them internationally as a critical space for the exercise of basic rights and freedoms. This offers a path to a new regulatory equilibrium that can more effectively govern the dissemination of online speech and reduce hateful content.

The author thanks Tehilla Shwartz Altshuler, Karen Eltis and the Lawfare editorial team for their useful comments on a previous draft.


Professor Yuval Shany is the Hersch Lauterpacht Chair in International Law and former dean of the Law Faculty of the Hebrew University of Jerusalem. He currently serves as a senior research fellow at the Israel Democracy Institute and was a member of the U.N. Human Rights Committee from 2013 to 2020. He received his LL.B. cum laude from the Hebrew University, his LL.M. from New York University and his Ph.D. in international law from the University of London.
