
What Do the Facebook Oversight Board’s First Decisions Actually Say?

Jacob Schulz
Friday, January 29, 2021, 1:17 PM

The grand experiment yields its first set of decisions. What's in them?

Someone using Facebook (leakhena khat/https://flic.kr/p/XENkM5/Public Domain)


The FOB has spoken. The Facebook Oversight Board (FOB)—a nascent court-like review board—has unveiled its first-ever set of decisions. The box score doesn’t look pretty for Facebook: Four of the five verdicts overturn the company’s moderation decisions.

But the four-one split isn’t as interesting as the substance of the decisions themselves. These contain the first building blocks of the FOB’s jurisprudence, and some also spell out non-binding policy recommendations for Facebook. The five verdicts read like short legal opinions, though they blissfully dispense with the footnotes and bluebooking. They pull from different sources of “law.” They try to carve out the bounds of the relevant set of rules. They use the word “particularized.” And they give rise to a host of important normative questions, some of which Evelyn Douek has already tackled in Lawfare. But there’s also the more basic question: What do the opinions actually say?

The Oversight Board turned in the bulk of its first batch of work on time. It announced its docket on Dec. 3, 2020, and pinged out its final decisions 56 days later, 34 days before its 90-day limit for rendering opinions. I wrote in detail about the first docket earlier this month, but it’s worth recalling some of the specifics. The board initially picked six different cases. Five were user appeals, and one was a referral from Facebook. Things shuffled up a bit after one of the six cases got mooted because a user voluntarily took down the post to which the content in question was attached. The FOB replaced the mooted case with a new referral from Facebook, and the panel hasn’t yet wrapped up its deliberations on that replacement case. Of the five cases the FOB ruled on this week, four concern content posted on Facebook, while one comes from Instagram. The board punted on tackling any major U.S. political controversies with its first six picks and instead took a tour of global scandals. There’s China’s treatment of Uyghur Muslims, the Syrian refugee crisis, historical Armenian churches, nipples, Joseph Goebbels, and, of course, hydroxychloroquine.

Case 1: Anti-Muslim Hate Speech From Myanmar

The board’s first opinion deals with a Facebook image post of a dead child “lying fully clothed on a beach at water’s edge.” The Oversight Board’s announcement of the case didn’t say much else to describe the post, noting only that “The accompanying text (in Burmese) asks why there is no retaliation against China for its treatment of Uyghur Muslims, in contrast to the recent killings in France relating to cartoons.” Per the docket announcement, the post ran afoul of Facebook’s hate speech rules and so it got taken down. The Oversight Board reversed Facebook’s decision, noting that “[w]hile the post might be considered offensive, it did not reach the level of hate speech.”

The opinion helps to shed some light on what was actually going on in the post. The Oversight Board’s announcement noted that the post was in Burmese, and the decision clarifies that the user is from Myanmar. And it turns out the post wasn’t a status update but a post in a Facebook group—and not just any Facebook group, but a group “which describes itself as a forum for intellectual discussion.” The opinion also clarifies that the post included “two widely shared pictures of a Syrian toddler of Kurdish ethnicity who drowned attempting to reach Europe in September 2015.” Most importantly, the FOB’s opinion gives a clearer picture of what the post actually said: “the accompanying text begins by stating that there is something wrong with Muslims (or Muslim men) psychologically or with their mindset” and “concludes that recent events in France reduce the user’s sympathies for the depicted child, and seems to imply the child may have grown up to be an extremist.” In short, it’s a post with two images of a deceased Kurdish child in which the user suggests that Muslims tend to become terrorists, citing as evidence two terror attacks in France.

The original docket announcement mentioned that Facebook took the post down for violating the platform’s “hate speech” rules, but the opinion itself clarifies what Facebook had in mind. It wasn’t the jarring images that took the post over the line, it turns out, but instead the phrase that there is “something wrong with Muslims psychologically.” The opinion explains that Facebook’s Community Standards prohibit “generalized statements of inferiority about the mental deficiencies of a group on the basis of their religion,” so Facebook nixed the post and removed it from the “intellectual discussion” forum.

The opinion also spells out the nature of the user’s appeal. The appellant used the most tried and true of online harassment defenses: I was just kidding! Can’t you take a joke? The FOB reports that “[t]he user explained that their post was sarcastic and meant to compare extremist religious responses in different countries.” The user also took aim at Facebook’s translation abilities, claiming that “Facebook is not able to distinguish between sarcasm and serious discussion in the Burmese language and context.”

The FOB’s analysis here, as in all of the five cases, leans on three different sources of “law”: Facebook’s Community Standards, Facebook’s “Values,” and international human rights standards. By each pathway, the Oversight Board arrives at the same conclusion.

On the first prong, the FOB concludes that the takedown represented an overreach by Facebook’s Community Standards police. The user’s critique of Facebook’s Burmese language facility wasn’t far off, the Oversight Board concludes: Facebook translated the text as “[i]t’s indeed something’s wrong with Muslims psychologically,” but the Board’s own translators read it as “[t]hose male Muslims have something wrong in their mindset” and the board “suggested that the terms used were not derogatory or violent.” The translation discrepancy flows in part from the FOB’s decision to look to broader linguistic context. There’s a lot of anti-Muslim hate speech in Myanmar, but “statements referring to Muslims as mentally unwell or psychologically unstable are not a strong part of this rhetoric.” This means that the post isn’t covered by the part of the Hate Speech community standard that bans “attacks about ‘[m]ental health.’” The Oversight Board instead arrives at an alternative interpretation of the post, one that makes it compliant with the Hate Speech rules: “the Board believes that the text is better understood as a commentary on the apparent inconsistency between Muslims’ reactions to events in France and in China.” As such, it is opinion and can stay up, whereas generalized attacks on the “mental” deficiencies of a group cannot.

The FOB offers a more cursory analysis of the post’s relationship to Facebook’s values. The “Values” spell out a balancing test of sorts: “‘Voice’ is Facebook’s paramount value, but the platform may limit ‘Voice’ in service of several other values, including ‘Safety.’” Safety, according to the Values Update, means that “[e]xpression that threatens people has the potential to intimidate, exclude or silence others and isn’t allowed on Facebook.” It’s the same basic idea as in many domestic free speech regimes: The government, in this case Facebook, can curtail free speech in the interest of preventing certain harms. The FOB concludes that Facebook gave too much weight to the “Safety” consideration here: “this content did not pose a risk to ‘Safety’ that would justify displacing ‘Voice.’”

The opinion further concludes that no international human rights obligation mandates taking the post down. The post does not amount to “advocacy of religious hatred constituting incitement to discrimination, hostility or violence” under the International Covenant on Civil and Political Rights (ICCPR), and “the Board does not consider its removal necessary to protect the rights of others.” Instead, the FOB gestures at some human rights considerations that would weigh in favor of keeping the post up. Article 19 of the ICCPR spells out a “right to seek and receive information, including controversial and deeply offensive information”; the U.N. Special Rapporteur on freedom of expression has recently affirmed that international human rights law “protects the rights to offend and mock”; and there’s value in keeping up “commentary on the situation of Uyghur Muslims” because such information “may be suppressed or under-reported in countries with close ties to China.”

So the post goes back up.

Case 2: Historical Armenian Churches and an Anti-Azerbaijani Slur

The second case concerns a Facebook post that Facebook took down because it contained a slur directed at Azerbaijanis. Here, the FOB okays Facebook’s original judgment and lets the removal stand. The November 2020 post included images of “historical photos described as showing churches in Baku, Azerbaijan” and text in Russian claiming that “Armenians had built Baku and that this heritage, including the churches has been destroyed.” One problem: The user deployed the epithet “тазики” (“taziks”) to describe Azerbaijanis, who the user claimed are nomads with no history comparable to that of Armenians.

The opinion explains that Facebook removed the post because it violated the platform’s hate speech rules. Specifically, Facebook doesn’t allow slurs “to describe a group of people based on a protected characteristic,” which in this case is national origin. The term literally means “wash bowl,” but the FOB explains that “it can also be understood as wordplay on the Russian word ‘азики’ (‘aziks’), a derogatory term for Azerbaijanis.” The latter meaning features on “Facebook’s internal list of slur terms … which it compiles after consultation with regional experts and civil society organizations.” Facebook determined that the whole post and its broader context—perhaps the fact that the post came just after Armenia and Azerbaijan signed a ceasefire to end a hot war—made clear that the user posted the slur in order to “insult Azerbaijanis.” Put another way, the slur was on the Bad Words List, so the post came down.

This user reached for another canonical internet defense to appeal the takedown: The post only got reported because the other side was trying to censor it. The opinion explains that the user appeal “claimed that the post was only removed because Azerbaijani users who have ‘hate toward Armenia and Armenians’ are reporting content posted by Armenians.” The FOB didn’t waste much breath giving this argument any kind of detailed response. The appeal also claimed that the post wasn’t hate speech but instead “was intended to demonstrate the destruction of Baku’s cultural and religious heritage.”

The Oversight Board explains that all three prongs—the Community Standards, Facebook’s Values and international human rights norms—support keeping the post off the platform. To evaluate the Community Standards violation, the FOB “commissioned independent linguistic analysis” to take a look at the slur. The resulting report “supports Facebook’s understanding of [the] term as a slur.” The opinion recalls that the Community Standard’s slur prohibition isn’t absolute and that it’s possible that “words or terms that might otherwise violate [its] standard are used self-referentially or in an empowering way.” Not the case here, the FOB explained, as “context … makes clear” that the slur “was meant to dehumanize its target.”

The opinion deploys the “Values” balancing test here, too, although it arrives at a different result from the first case. It holds, “In this case, Facebook was permitted to treat the use of a slur as a serious interference with the values of ‘Safety’ and ‘Dignity.’” Here, the FOB explains, context matters: “the content in question was posted to Facebook shortly before a ceasefire went into effect” between Armenia and Azerbaijan and “the danger of dehumanizing slurs proliferating in a way that escalates into acts of violence is one that Facebook should take seriously.” This contextual analysis leads the FOB to conclude that “the removal was consistent with Facebook’s values of ‘Safety’ and ‘Dignity,’ which in this case displaced the value of ‘Voice.’”

The FOB also signs off on the takedown on human rights grounds. Here, the Oversight Board borrows the three-part test of “legality, legitimacy, and necessity and proportionality” inscribed in Article 19, paragraph 3 of the ICCPR. To meet the mark on “legality,” the Oversight Board explains, “any rule setting out a restriction on expression must be clear and accessible.” Users have to know what is and isn’t out of bounds. The opinion holds that the Community Standard meets the mark here. The poster of the slur “attempted to conceal the slur from Facebook’s automated detection tools by placing punctuation between each letter,” which, the FOB notes, “tends to confirm that the user was aware that they were using language that Facebook prohibits.” If you’re trying to dupe an algorithmic censor, chances are that you know what you’re posting runs up against the rules. Restrictions on speech must also pursue one of the “legitimate aims” listed in the ICCPR. The board holds that Facebook’s takedown satisfied a bunch of these “legitimate aims,” including “protect[ing] the right to security of a person from foreseeable and intentional injury.”

The FOB splinters on the question of necessity and proportionality. This standard “require[s] Facebook to show that its restriction on freedom of expression was necessary to address the threat, in this case the threat to the rights of others, and that it was not overly broad.” Some Oversight Board members dissented here, but the majority argued that the removal satisfied the standard: “Less severe interventions, such as labels, warning screens, or other measures to reduce dissemination, would not have provided the same protection.” The majority highlights that Facebook might have dropped the hammer even harder on the user but “did not take more severe measures also available to them, such as suspending the user’s account, despite the user seemingly re-posting offending content several times.”

This opinion closes with a policy recommendation for Facebook. The FOB admonishes Facebook to “[e]nsure that users are always notified of the reasons for any enforcement of the Community Standards against them, including the specific rule Facebook is enforcing.” Facebook shot itself in the foot here by not being transparent with the user, which “left its decision susceptible to the mistaken belief that it had removed the post because the user was addressing a controversial subject or expressing a viewpoint Facebook disagreed with.” Unlike the verdict, this recommendation doesn’t bind Facebook.

Case 3: Female Nipples and Breast Cancer Awareness Month in Brazil

The dispute between the Oversight Board and Facebook picks up steam in the next case. In this case, a Brazilian user posted an Instagram picture with a Portuguese caption describing how the post intended “to raise awareness of signs of breast cancer,” in honor of the “Pink October” campaign. The picture comprised eight images displaying different breast cancer symptoms, five of which included “visible and uncovered female nipples.” The opinion explains that a “machine learning classifier trained to identify nudity in photos” flagged the picture and Instagram pulled the post off its app for violating “Facebook’s Community Standards on Adult Nudity and Sexual Activity.” The user appealed to Facebook. The FOB doesn’t say what the result of that appeal ended up being, but hints that it may never have gotten a careful review: “In public statements, Facebook has previously said that it could not always offer users the option to appeal due to a temporary reduction in its review capacity as a result of COVID-19. Moreover, Facebook has stated that not all appeals will receive human review.”

Things start to get interesting here. The user took the case to the Oversight Board and the dispute made the cut for the first docket. But Facebook “reversed its original removal decision” and restored the post to Instagram right after it popped up on the docket. The FOB explains that Facebook fessed up that its AI moderator committed an “enforcement error,” and emphasizes that Facebook only made things right thanks to the Oversight Board’s choice to take the case. Facebook tried to clear up its position on the discrete moderation issue at play, probably with the goal of stopping the FOB from thinking too seriously about the underlying structural problems that caused it. It clarified that the Facebook Community Standards do indeed apply on Instagram and that female nipples “are allowed for ‘educational or medical purposes.’”

But Facebook went even further and argued that its mea culpa meant that the FOB no longer had jurisdiction to decide the case. The case is moot, Facebook argued, as “there is no disagreement that [the post] should stay on Instagram.”

Not so fast, says the Oversight Board. This isn’t federal court, and the mootness rules before the FOB are somewhat relaxed. The FOB’s charter “only requires disagreement between the user [and Facebook] at the moment the user exhausts Facebook’s internal process”; there’s no requirement that the disagreement be currently unresolved, as there would be in federal litigation. The FOB continues, “For Facebook to correct errors the Board brings to its attention and thereby exclude cases from review would integrate the Board inappropriately to Facebook’s internal process and undermine the Board’s independence” (emphasis in original). In a bit of hyperbole, the opinion also emphasizes that the trigger-happy AI already caused what it terms “irreparable harm”: “Facebook’s decision to restore the content in early December 2020 did not make up for the fact that the user’s post was removed for the entire ‘pink month’ campaign in October 2020.”

The opinion walks through the ways in which the takedown faux pas contravened the three relevant sources of “law.” The post “falls squarely within the health-related exception” to the Community Standards ban on female nipples; the takedown betrayed both Facebook’s “Voice” and “Safety” values; and the removal violated an assortment of international human rights standards.

The catchiest part of the opinion comes in the policy advisory recommendations. Half of the recommendations deal with flaws in the automated moderation system that Facebook has leaned on more heavily during the pandemic. The FOB suggests, for example, that Facebook tell its users when their posts get taken down by a robot. But the policy statement also includes some loftier recommendations: It suggests “ensur[ing] users can appeal decisions taken by automated systems to human review” in cases where material gets taken down for violation of the Adult Nudity and Sexual Activity rules. What’s more, to the delight of researchers, it proposes “[e]xpanded transparency reporting to disclose data on the number of automated removal decisions per Community Standard, and the proportion of those decisions subsequently reversed following human review.” Facebook doesn’t give up much about its moderation practices, so as Evelyn Douek has described, the proposal is “ambitious and important.”

The other recommendations prod Facebook to clear up what the rules are for Instagram. The opinion proposes two more fact-specific reforms, but then comes the zinger: “Clarify that the Instagram Community Guidelines are interpreted in line with the Facebook Community Standards, and where there are inconsistencies the latter take precedence.” In other words, stop making users guess which set of rules is king on Instagram. As Douek writes, if Facebook makes this change “Facebook’s and Instagram’s rules would, for all intents and purposes, be explicitly harmonized.” Facebook has 30 days to sort out its response to these challenges.

Case 4: The Joseph Goebbels Misquote

Next up are the Nazis. The Oversight Board ruled in favor of a U.S. appellant who reshared a post he had made two years prior that included an English translation of an “alleged quote” from the Nazi propagandist Joseph Goebbels about how “arguments should appeal to emotions and instincts,” rather than intelligence. The user had been prompted by Facebook’s “On This Day” feature to reshare the post. The FOB clarifies that the user wasn’t exactly a historian and had erroneously attributed the quotation to Goebbels. Facebook took down the post for a violation of its Dangerous Individuals and Organizations rule, but the Oversight Board directed the platform to reinstate the post.

The user argued that he didn’t mean to invoke the Third Reich propagandist in admiration, but rather intended to “draw a comparison between the sentiment in the quote and the presidency of Donald Trump.” Using a style that mirrored some of the rhetorical preferences of the man the user sought to criticize, the appeal notes that the content of the quote was “VERY IMPORTANT right now in our country as we have a ‘leader’ whose presidency is following a fascist model.”

The Oversight Board explains how Facebook deals with quotations (real or imagined) attributed to “dangerous individuals.” Facebook’s default assumption is that if you quote Adolf Hitler, you mean to quote Hitler in support of Hitler. A user earns a more holistic interpretation only if he or she “provide[s] additional context to make their intent explicit.” The user in this case didn’t bother to frame his alleged Goebbels quote, so Facebook took it down (though the FOB notes that, again, Facebook didn’t inform the user why the post got taken down). In its response to the Oversight Board, Facebook also confirmed that “the Nazi party … has been designated as a hate organization since 2009 by Facebook internally. Joseph Goebbels, as one of the party’s leaders, is designated as a dangerous individual.” This point might seem obvious, but it actually speaks to one of the FOB’s biggest value propositions: surfacing and making public tidbits from the immensely opaque platform’s internal policies.

The FOB, however, takes a different interpretive path in evaluating the post. It suggests that easily available context—comments on the user’s post—indicates that “the post sought to draw comparisons between the presidency of Donald Trump and the Nazi regime.” Consequently, the takedown fails the Community Standards assessment.

The FOB holds that the takedown also betrayed Facebook’s values in overweighting “the minimal benefit to the value of ‘Safety’” while “unnecessarily undermin[ing] the value of ‘Voice.’” Under the international human rights standards, the FOB determines that the takedown actually satisfies the “legitimate aim” prong insofar as the Dangerous Individuals and Organizations policy has the “legitimate aim[]” of “protect[ing] individuals from discrimination and protect[ing] them from attacks on life or foreseeable intentional acts resulting in physical or mental injury.” But the removal fails both the “legality” and the “necessity and proportionality” tests. The FOB notes that the Dangerous Individuals and Organizations rule contains troublingly vague language. But more foundationally, the FOB chides Facebook for failing to make public “a list of individuals and organizations designated as dangerous, or, at least examples of groups or individuals that are designated as dangerous.” A user might reasonably assume that Nazis make the cut for any bad persons list, but the Oversight Board pushes the platform to offer a more direct indication of that fact. On “proportionality,” the FOB notes that Facebook’s decision to instruct moderators to ignore contextual clues “resulted in an unnecessary and disproportionate restriction on expression.” The FOB considers a final human rights pillar, “equality and non-discrimination,” and holds that “removing content that sought to criticize a politician by comparing their style of governance to architects of Nazi ideology does not promote equality and non-discrimination.”

The Goebbels misquote yields some policy recommendations from the FOB too. Again, the Oversight Board urges Facebook to “[e]nsure that users are always notified of the reasons for any enforcement of the Community Standards against them, including the specific rule Facebook is enforcing.” The FOB also suggests that Facebook clear up some of the ambiguity in the Dangerous Organizations and Individuals rules. Among other things, it prods the platform to spell out who’s earned a spot on the list, or at least give up some “illustrative examples.”

Case 5: Hydroxychloroquine, a French Doctor and “Imminent” Harm

Last but not least, the FOB had to figure out what to do with a post that celebrates the hydroxychloroquine torch-bearer. The case concerns a video posted in a public French Facebook group “related to COVID-19.” The post alleged “a scandal” at France’s version of the Food and Drug Administration, which wouldn’t budge on its refusal to authorize a hydroxychloroquine cocktail to treat COVID-19. The post claimed that “Raoult’s cure”—a reference to the wacky French doctor, Didier Raoult—was “harmless” and was saving lives in other countries. Facebook users watched the video 50,000 times before the platform pulled it for violating its Violence and Incitement Community Standard, which incorporates both its misinformation and imminent harm rules. In Facebook’s assessment, the video erroneously claimed there was a cure for COVID-19, and content purporting to offer a guaranteed cure “could lead people to ignore preventive health guidance or attempt to self-medicate.” Facebook doesn’t draw these lines on its own, but consults with the World Health Organization.

No need for a user appeal here. Facebook referred the case directly to the Oversight Board, “citing [it] as an example of the challenges of addressing the risk of online harm that can be caused by misinformation about the COVID-19 pandemic.” The platform confessed that the case creates awkward tensions in Facebook’s Values. “Voice” and “Safety” pull pretty strongly in opposite directions; there’s a free expression interest in allowing people to “discuss and share information about the COVID-19 pandemic,” but also an obvious harm-reduction interest in stamping out health misinformation.

The Oversight Board tells Facebook it has to reinstate the video about “Raoult’s cure.” The opinion holds that the video didn’t violate Facebook’s relevant Community Standard because the platform didn’t “demonstrate[] how this user’s post contributed to imminent harm in this case” (emphasis in original). The FOB characterizes Facebook’s response as slapping the “imminent” label on any misinformation about COVID-19 treatments and cures. Imminence is always a slippery legal concept. The FOB gestures at that reality and spills a lot of ink trying to tease out exactly what might give a hint as to whether the hydroxychloroquine post went over the line. The panel reaches out to experts, who explain that the precise cocktail prescribed in the post isn’t available over the counter in France, so “it is unclear why those reading the post would be inclined to disregard health precautions for a cure they cannot access.” Facebook didn’t do its homework, the FOB argues. It “did not address the particularized contextual factors indicating potential imminent harm with respect to such users,” and the failure to point to any supporting context means that Facebook “did not act in compliance with its Community Standard.”

The FOB is more laconic on the Values issue. In two quick sentences, it notes that Facebook didn’t show that the “Safety” interest outweighed the “Voice” interest.

On the human rights front, the FOB chides the “vague prohibitions” that Facebook deployed here, citing a U.N. Special Rapporteur report that cautions that amorphous rules like the ones at issue here “provide authorities with broad remit to censor the expression of unpopular, controversial or minority opinions.” The opinion also takes aim at the freeform way that Facebook announced its COVID-19 policy changes. Instead of updating its Community Standards, Facebook promulgated new policies through “Newsroom” blog posts, which “sometimes appear to contradict the text of the Community Standards.” This “patchwork” makes it “difficult for users to understand what content is prohibited.” The rules satisfy the “legitimate aim” prong, but the takedown flounders on the “necessity and proportionality” test.

The FOB has a lot of constructive criticism for Facebook in this case. It’s time to create a “clear and accessible” Community Standard about health misinformation, the Oversight Board suggests. The opinion also proposes that Facebook detail in its Community Standards the range of enforcement options available for health misinformation and rank them “from most to least intrusive” based on how they interfere with freedom of expression. Notably, the Oversight Board also tells Facebook it ought to take a less aggressive approach on content about scientifically unproven COVID-19 treatments, suggesting that Facebook add informational labels or “downrank” such posts instead of pulling them off the site. The FOB closes with another nugget for researchers, urging Facebook to publish a “transparency report on how the Community Standards have been enforced during the COVID-19 global health crisis.”

Facebook has six more days to act on the FOB’s verdicts, and 30 days to chew over how it will respond to the FOB’s ambitious policy suggestions.


Jacob Schulz is a law student at the University of Chicago Law School. He was previously the Managing Editor of Lawfare and a legal intern with the National Security Division in the U.S. Department of Justice. All views are his own.
