
The Oversight Board Moment You Should’ve Been Waiting For: Facebook Responds to the First Set of Decisions

Evelyn Douek
Friday, February 26, 2021, 1:00 PM

Facebook said it committed to action as a result of nearly two-thirds of the FOB’s recommendations. This is too rosy a picture, but the responses do show promise and the value of a more open dialogue about content moderation.

The Facebook logo on a keyboard. (Pixabay, https://pixabay.com/service/license/)

Published by The Lawfare Institute in Cooperation With Brookings

The most important determinant of whether the Facebook Oversight Board (FOB) contributes, well, anything, to content moderation is how Facebook responds to the voluntary policy recommendations the FOB makes. The scope of the FOB’s “binding” authority is extremely limited—Facebook is only required to do what the FOB says about the individual piece of content at issue in a case. This isn’t much: A single H2O molecule in the ocean of millions of content moderation decisions Facebook makes every day. The FOB’s impact therefore hinges on whether and how Facebook responds to its non-binding policy recommendations. And the moment has finally arrived when the public can start evaluating that contribution: Facebook responded to the FOB’s first set of decisions yesterday.

The good news is the response is not a big “eff you” to the FOB. Facebook made 11 commitments in response to the FOB’s recommendations (that tally is based on Facebook’s own framing; my count is lower, for reasons I will return to below), said it would assess the impact of five other recommendations, and refused only one: The FOB’s recommendation that Facebook should take “less intrusive measures” in response to COVID-19 misinformation where harm is identified but “not imminent.” Facebook says it is committed to its robust response during the pandemic: It has built out its COVID rules in consultation with global public health experts and won’t let the FOB overrule them. Fair enough.

But Facebook’s responses still leave a lot to be desired. Some of the “commitments” are likely things Facebook already had in train; others are broad and vague. And while the dialogue between the FOB and Facebook has shed some light on previously opaque parts of Facebook’s content moderation processes, Facebook can do much better. Lawfare’s FOBblog has Facebook’s responses to each case here. This post offers some higher-level takeaways.

The Good Parts (And the Questions They Raise)

Facebook responded and accepted some of the broader recommendations! Well done, Facebook. Even clearing that low bar is better than many skeptics expected.

Some of the responses are substantive improvements. In the one case concerning an Instagram post, the FOB pointed out that it wasn’t clear how Facebook’s and Instagram’s rules interacted. As a result, Facebook has begun the process of clarifying their relationship and committed to providing more detail. The comprehensiveness of Facebook’s Community Standards compared with the more bare-bones Instagram Community Guidelines, and the sometimes inconsistent enforcement between the two platforms, have always been a puzzle. The FOB has prompted progress (again, admittedly from a low baseline).

There were other promising signs. Facebook has committed to providing more detail in the notifications it sends to people whose content is removed. It will also reassess how automated moderation is used and how users are told when a decision about a piece of content was made by AI rather than a human. Facebook has promised more detail on policies in a new Transparency Center, including clarifying its Dangerous Individuals and Organizations policy and the definitions of “praise,” “support” and “representation.” It has also consolidated information about its policies on COVID-19 misinformation, which was previously dispersed across various blog posts and webpages. This is something researchers have been asking Facebook to do for a while. The FOB delivers what frustrated researchers’ tweets could not.

Facebook’s responses were also educational. I watch the content moderation space and Facebook pretty closely, and I learnt things.

For example, Facebook explained the trade-off between error types it has to calibrate when using automated tools to detect adult nudity while trying to avoid taking down images raising awareness about breast cancer (something at issue in one of the initial FOB cases). Facebook detailed that its tools can recognize the words “breast cancer,” but users have used those words to evade nudity detection systems, so Facebook can’t simply leave up every post that says “breast cancer.” Facebook has committed to providing its models with more negative samples to decrease error rates.

Facebook also mentions that it typically only uses automated tools where they are “at least as accurate as content reviewers.” The platform explains that it does not want to implicitly “overrepresent the ability of content reviewers” by telling users that automated tools have been used. It’s a good, and often overlooked, point: Humans are also fallible, and a human in the loop will not solve everything. But this opens the door to lots of questions. Tell us more about the “ability of content reviewers,” please! In how many areas do your tools beat the humans? How bad are the humans? Is the content moderation singularity imminent?

Facebook also noted that it is not always possible to classify a decision as simply “automated” or not, because sometimes an initial decision is made by a human reviewer and then automation is used to “detect and enforce on identical copies.” Knowing more about when and how Facebook combines different forms of automation and human review would provide more insight into how mistakes happen. Researchers rightly want more transparency from Facebook, but it’s helpful to know what kind of transparency (what stats to ask for, in other words) would actually be useful.

Controversially, the FOB had taken aim at Facebook’s pandemic policies, suggesting that Facebook was applying its rules too broadly in taking down misinformation about cures where physical harm was not necessarily “imminent” and the comments were directed at criticizing government policies. This was the one suggestion Facebook explicitly and loudly rejected, noting that public health authorities have made clear that “imminent physical harm” can have a broader meaning in the context of a public health emergency. Facebook also disagreed with the FOB’s argument that harm was not imminent because the harmful drug was unavailable in the country where the user was posting (in the case at issue, France), countering that “readers of French content may be anywhere in the world, and cross-border flows for medication are well established.” To my mind, this is a fair response, and Facebook’s declining to follow this recommendation does not undermine its commitment to the FOB experiment.

Some of Facebook’s responses were also quite candid, noting, for example, that “[g]iven the frequency with which we update our policies conducting a full human rights impact assessment for every rule change is not feasible.” It would be good to know, then, what the threshold is for a given policy change to get this kind of assessment.

Overall, this kind of dialogue between Facebook and the FOB is useful. Despite progress in recent years, Facebook’s content moderation remains fundamentally opaque. These responses lift the curtain less than an inch, but they do tell the FOB and researchers where and how to keep asking more informed questions.

The Bad Parts

There’s a lot missing from Facebook’s responses, though.

First, the headline number of recommendations Facebook says it is accepting is too rosy. It likely overstates the degree to which certain initiatives are responses to the FOB’s decisions, rather than steps Facebook was taking anyway. The launch of a new Transparency Center in the coming months and efforts to connect people with authoritative information about COVID-19 vaccines, for example, are listed as commitments to action, but the FOB’s impact here was likely marginal at best. These things were probably going to happen even if the FOB didn’t exist.

In response to the FOB’s request for a specific transparency report about Community Standards enforcement during the COVID-19 pandemic, Facebook said it was “committed to action.” Great! What “action,” you might ask? It said that it had already been sharing metrics throughout the pandemic and would continue to do so. Oh. This is actually a rejection of the FOB’s recommendation. The FOB knows about Facebook’s ongoing reporting and found it inadequate. It recommended a specific report, with a range of details, about how the pandemic had affected Facebook’s content moderation. The pandemic provided a natural experiment and a learning opportunity: Because of remote work restrictions, Facebook had to rely on automated moderation more than normal. The FOB was not the first to note that Facebook’s current transparency reporting is not sufficient to meaningfully assess the results of this experiment.

By my count, then, at least three of Facebook’s 11 commitments are illusory.

Another critical detail missing from Facebook’s responses is the extent to which Facebook implemented the FOB’s decisions with respect to “identical content with parallel context.” Let’s take a step back. The FOB’s bylaws state that Facebook will follow the FOB’s decisions with respect to the individual piece of content being reviewed, and will take action on “identical content with parallel context” where it has the “technical and operational capacity” to do so. The amount of wiggle room in each of these words is evidently enormous. Facebook has provided no detail as to how broadly it has interpreted its obligations here. Its responses simply note, “We’ve started the process of reinstating identical content with parallel context in line with the board’s decision. This action will affect not only content previously posted on Facebook but also future content.” Does that mean it reinstated three similar posts? Thirty? Three thousand?

This information is obviously important for assessing the breadth of the FOB’s impact, but it’s valuable for another reason. On a call with stakeholders about Facebook’s responses (which I was on in my academic capacity), Facebook said that it anticipated the number of instances of identical content to be quite low because the FOB’s decisions had been so context- and fact-specific. I have been critical of how much hair-splitting about facts and context the FOB engaged in: Context is vital, but taking an unduly fine-grained approach will make the FOB’s decisions impossible to implement at scale. The issue of “identical content with parallel context” shows another flaw in this granular approach: It confines the FOB’s direct impact even more in each case. More transparency from Facebook about the effects of this might impact how the FOB approaches its job in the future by prompting it to think more about how to make its decisions scalable.

Another important detail missing from Facebook’s responses is deadlines for updates on the commitments it has made (whether you count them at eight or 11). Far too often, platforms make commitments and that’s the last the public hears of them. Perhaps they’re implemented; perhaps not, and observers simply forget. Deadlines, even provisional ones for when Facebook will provide further progress updates, would show that Facebook is meaningfully committed to follow-through. (Of course, some commitments, like reviewing automation in enforcement and appeals systems and making “adjustments where needed,” are so amorphous and sweeping that they are not conducive to hard timelines.)

Finally, many commitments are extraordinarily vague: Facebook will “continue to invest in making our machine learning models better”; “work on tools to connect people with authoritative information”; “assess whether there are opportunities to strengthen the inclusion of human rights principles.” This may be the nature of the beast: Systemic change is amorphous and impossible to completely specify in advance. At the same time, it means these responses do not shed as much light as they could. Outsiders often do not have enough information to specify good metrics by which to judge progress, and Facebook hasn’t provided any here. As currently written, the responses could be good-faith commitments to seriously reckon with the FOB’s recommendations, or they could be carefully worded corporate fluff. Facebook should put more thought into how to specify the goals of its commitments; no doubt it has internal metrics against which it judges its progress.

The upshot is that even after these responses it’s still not entirely possible to tell how broad the FOB’s impact will be. It’s not as large as Facebook has tried to make out, but it’s definitely not nothing. The suspense about whether Schrödinger’s Oversight Board is dead or alive continues.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
