
Facebook’s White Paper on the Future of Online Content Regulation: Hard Questions for Lawmakers

Evelyn Douek
Tuesday, February 18, 2020, 11:27 AM

The company’s new white paper is a thoughtful document that raises serious questions that regulators, and the rest of us interested in the future of online content regulation, need to reckon with.

Facebook CEO Mark Zuckerberg onstage at Facebook's 2017 developers conference. (Maurizio Pesce, https://flic.kr/p/V5XXzf; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)


I once wrote, after Australia passed some particularly terrible legislation, that maybe Facebook CEO Mark Zuckerberg should have been more specific when he publicly asked for more regulation of “harmful content.” It seems that Zuckerberg has finally gotten around to clarifying what he meant. His recent comments at the Munich Security Conference, where he suggested that online platforms should be treated as something between telecoms and media companies, did more to muddy the waters than to clear them. But the white paper Facebook released today, authored by Monika Bickert, the company’s vice president of content policy, is a more constructive step forward.

Facebook’s motivation in releasing the paper is obvious. The next year is looking to be critical for the governance of online speech: Regulators around the world are considering legislation and rethinking their previously hands-off approach. The paper focuses on regulatory structures “outside the United States,” which are likely to be more aggressive, given that most other jurisdictions tolerate more governmental regulation of speech than the First Amendment allows. Facebook obviously has an interest in what that regulation looks like. But motivations aside, the white paper is a thoughtful document that raises serious questions that regulators, and the rest of us interested in the future of online content regulation, need to reckon with.

Overview

The report begins by listing four characteristics of online content that make prior models of regulation a poor fit and that any new framework will need to address:

  • Platforms are global: Many internet platforms have a global user base and straddle jurisdictions with very different legal rules and expectations for what speech is acceptable.
  • Platforms are constantly changing: Internet platforms are not homogeneous—each has its own affordances and dynamics, and all are “constantly changing to compete and succeed.”
  • Platforms will always get some speech decisions wrong: The unfathomable scale of modern internet platforms—which requires millions of speech decisions to be made every day—means that enforcement of platform standards will always be imperfect.
  • Platforms are intermediaries, not speakers: Platforms facilitate speech, but they do not and cannot review every piece of content posted before it is posted—and therefore should not be treated the same as publishers.

The white paper identifies four key questions that need to be answered in order to design a framework that meets these new challenges:

  1. How can content regulation best achieve the goal of reducing harmful speech while preserving free expression?
  2. How should regulation enhance the accountability of internet platforms?
  3. Should regulation require companies to meet certain performance targets?
  4. Should regulation define which “harmful content” should be prohibited on internet platforms?

The white paper then proceeds to engage with these questions in a substantive and reasonably detailed way and is worth reading in full. Here, I’ll highlight some key themes that emerge.

Incentives Matter

The paper emphasizes the importance of ensuring that legislation does not create perverse incentives. When performance targets are enshrined in law, they risk incentivizing actors to focus on meeting the targets themselves rather than on the overarching goal of the regulatory scheme. So regulators need to be careful to create performance targets and metrics that platforms cannot game to reduce their enforcement burdens. The white paper gives a number of examples of how this might occur:

  • Measuring response times to user or government reports of violating content could incentivize companies to define violation categories (like hate speech) narrowly, or to make it harder for users to report violations, thus decreasing the number of reports needing a quick response.
  • Transparency mandates in certain areas (such as the rate at which platforms find content proactively, before users flag it) could incentivize companies to neglect unmeasured aspects of performance (like the accuracy of this detection) in order to boost the measured figures.
  • Hard deadlines for removal (such as “within 24 hours of upload”) could disincentivize removal of content older than the deadline, which is not as likely to be seen by regulators but can still cause harm. This could also disincentivize the creation of better detection tools to scan for this older content.

Incentives are a key consideration for any regulatory design, and Facebook is correct that lawmakers need to get them right. But a number of the problems that Bickert highlights here also apply to Facebook’s current self-regulatory measures. Facebook touts as a sign of progress figures such as the “800 visually-distinct” versions of the Christchurch massacre video it added to the hashing database it uses to remove terrorist content from its services. But there is no external verification of this figure, nor any way to know whether those versions included legitimate uses of the footage, such as in news reporting. Similarly, the company’s transparency reports include figures such as the number of posts taken down for violating Facebook’s rules and how much of that content Facebook detected before a user reported it, again without any assurances of accuracy. The incentives this kind of reporting creates are clear: Ever-higher figures seem “better,” indicating that Facebook is taking more aggressive action to moderate its platform. But there is no guarantee that these figures reflect higher-quality content moderation overall.

Regulation may change transparency mandates or impose auditing and verification measures. In the meantime, though, Facebook should consider its own advice about the incentives its metrics create.

Choosing Which Way to Err

Another large theme of the paper is that, because errors are inevitable in content moderation, regulators need to choose which kinds of errors they prefer. The paper gives the example of time-to-action targets: that is, requirements about how quickly a platform must review content flagged by users or governments. As noted above, these can create a perverse incentive for platforms to over-remove content quickly to meet targets, even though this might lead to more content being removed incorrectly. But the paper also suggests that for some categories of content where the harm occurs even if no one sees the post, such as child sexual exploitation videos, coordination of a live attack or expression of an intent to self-harm, the importance of quick action might justify an approach that incentivizes fast decision-making even if more false positives result.

Calculated, open choices like these about which kinds of error to accept would be a sign of a more mature regulatory approach. Too often, regulators assume a level of technical and enforcement capacity that denies the need to make trade-offs. The suggestion is that if platforms simply “nerd harder,” they will be able to create tools that do not make errors and so do not require trade-offs. This is unrealistic. But platforms themselves are partly to blame for the persistence of this idea: They often tout the abilities of their artificial intelligence tools to reassure regulators that they are making progress. Indeed, Facebook’s emphasis in its transparency reports on ever-higher figures as signs of progress, without any verification of simultaneous increases in accuracy, is an example of an area where Facebook could do more to acknowledge the fallibility of its systems and prompt a more informed conversation about the trade-offs involved. (Zuckerberg did suggest, in an op-ed released at the same time as the white paper, that the company is looking at opening up its content moderation systems for external audit, which would be a big step in the right direction.) There needs to be much more realism and transparency from all sides about what is possible.

Transparency Matters

Content moderation is going through a legitimacy crisis. No one trusts the current online information ecosystem, or believes that those who currently write and enforce its rules (that is, the platforms) have the public interest at heart. The purpose of regulation is, at least in part, to hold the platforms accountable and thereby rebuild this trust. And transparency is a key way of achieving this. The white paper acknowledges that transparency at all stages of rule formulation and enforcement can “ensure that the companies’ balancing efforts are laid bare for the public and governments to see.”

This is ironic, coming from a company whose transparency deficits in content moderation are well known. Still, as Supreme Court Justice Felix Frankfurter wrote, “Wisdom too often never comes, and so one ought not to reject it merely because it comes late.” Facebook’s proposed Oversight Board is one promising way to facilitate this kind of transparency in reasoning, by exposing how Facebook has balanced competing values (like freedom of expression and safety) to arrive at its rules. This is why it is especially disappointing that the board’s remit at the outset will be limited to a small subset of the content decisions Facebook makes: Generally, the board will be empowered only to review individual take-down decisions, plus anything else that Facebook decides to refer to it. The white paper even explicitly references the value of allowing users to appeal both removal and nonremoval decisions; ironically, the latter remains a particularly disappointing omission from the Oversight Board’s jurisdiction at present.

But companies are not the only entities for which transparency is important. The white paper points out that, if governments choose to define for internet companies what content should or should not be allowed on their platforms, they should do so in clear terms and in a way that can be enforced at scale. Vague standards merely outsource judgments to companies and do not enable voters to hold lawmakers accountable for the trade-offs that legislatures choose to make.

The Long-Term Bet

The white paper emphasizes the value of transparency, but Facebook has more immediate power than any regulator to enable meaningful transparency and accountability for its content moderation rules and enforcement. So why wait for regulation? The problem is that transparency enables scrutiny, and scrutiny enables criticism. Therefore, in the absence of regulation, there is limited incentive for companies to be meaningfully transparent.

The white paper suggests that “companies taking a long-term view of their business incentives will likely find that public reactions to their transparency efforts—even if painful at times—can lead them to invest in ways that will improve public and government confidence in the longer term.” Facebook’s Oversight Board is, in theory, exactly this kind of long-term bet on transparency and legitimacy. But in the absence of regulation, Facebook retains the ability to keep making ad hoc disclosures and reporting metrics that suit its own narratives, whatever transparency the Oversight Board offers. Meanwhile, other platforms can make entirely different calculations and continue to hand down opaque and seemingly arbitrary decisions.

Regulation is coming, and almost everyone—including Facebook—now seems to accept this is a good thing. But two wrongs do not make a right, and regulators themselves should not move fast and break things. The questions and concerns that Facebook raises in this white paper are real and difficult, and regulators should take them seriously.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
