
Facebook’s Responses in the Trump Case Are Better Than a Kick in the Teeth, but Not Much

Evelyn Douek
Friday, June 4, 2021, 4:32 PM

Today was the 30-day deadline for Facebook’s responses to the policy recommendations in the FOB’s decision on the suspension of Trump’s account. The responses are underwhelming. 



One of the many ways that the Facebook Oversight Board (FOB) is different from many courts of law, let alone the Supreme Court, is that except on the narrowest of issues (the fate of the individual piece of content or account in a case) it doesn’t have the last word. The bulk and most consequential parts of the FOB’s decisions are non-binding policy recommendations; Facebook need not abide by them but does have an obligation to respond to them within 30 days. Despite often getting far less attention than the release of the decisions themselves, these responses are the most important part of the whole FOB experiment. It’s Facebook’s responses that make clear whether it’s engaging in the FOB process in good faith or whether the FOB is just Facebook’s very expensive agony aunt whose advice it only follows when it feels like it.

Today was the 30-day deadline for Facebook’s responses to the policy recommendations in the FOB’s decision on the suspension of Trump’s account. When the decision came out, much of the attention focused on the fact that the FOB kicked the ball back to Facebook to decide what to do with Trump’s account, giving Facebook six months to work it out. Today, Facebook bit the bullet early: it suspended Trump’s account for two years. Much of the public attention on Facebook’s responses has focused on that suspension.

But this post is not about the politics of the decision to suspend Trump’s account for two years. There will be more than enough commentary about that. If anything, I read the two-year suspension announcement as a distraction from the disappointing nature of the rest of what Facebook had to say in response to the Oversight Board decision. So this post is about the 20-page document buried at the end of Facebook’s announcement today, which constitutes a largely milquetoast response from Facebook to the FOB’s substantial policy recommendations.

The headline is that Facebook says it is “committed to fully implementing” 15 out of 19 recommendations, adopting one recommendation in part, still assessing two recommendations and taking no action on one. But the devil is in the details.

I’ve previously written about how Facebook’s interpretation of its own commitments is more generous than mine. That holds true of Facebook’s “responses” this time around. Facebook counts statements like “We are working to enhance our automated tools to improve our proactive review of content that could potentially impact public safety” as fully implementing a recommendation. But isn’t this really just a general thing that Facebook should be doing anyway? There is no deadline for follow-up and no metric for determining success. Saying it will “continue to consider the broader context of content from public figures” is an unsatisfactory response to the FOB’s recommendation that Facebook do better in resisting “pressure from governments to silence their political opposition and consider the relevant political context” in making these decisions. Many of the responses are of a similar nature.

Here, I focus on the more significant recommendations and responses. But the bottom line is that in many of them, Facebook gives itself a gold star for what is really a borderline pass at best.

Change in “Newsworthiness” Policy

The response that is getting the most attention, and which leaked yesterday, is that Facebook is ending its presumption that speech from politicians is inherently “newsworthy,” and thus that it is generally in the public interest for their posts to remain on the site even if they break Facebook’s rules. This special treatment for politicians has long been one of Facebook’s most controversial policies. So it’s understandable that there was a lot of buzz about this “sharp” or “major” reversal.

But this interpretation misses that Facebook’s decision on this is not at all surprising and could result in little substantive change.

It’s not surprising because it was basically the only option Facebook had unless it wanted to give a giant middle finger to the FOB. In its decision the FOB could not have been clearer (well, actually, it could have, but we’ll come back to that). It said explicitly: “The same rules should apply to all users of the platform,” including political leaders. To reject this recommendation would have been to reject the entire premise of the FOB experiment that Facebook has invested so much in marketing and propping up: that for the most difficult decisions, Facebook should not be making calls on its own and the FOB is the best institution to provide guidance. Facebook also said it will start disclosing when it has left content up that otherwise violated its rules because that content is newsworthy. But this was also not news: it was announced in June last year.

(As a side note: Twitter still has a public interest policy for speech by world leaders that it has been calling for input on, and YouTube has not deigned to tell the public how or when it will decide what it’s doing with Trump’s account.)

So that’s why Facebook’s change is not surprising. Why might it also not be a big deal? First, Facebook did not actually end the most controversial part of its policy and will still not submit politicians’ claims to fact-checking. Most lying is not per se against Facebook’s rules, so there will still often be no consequence for politicians who lie. It might be good that Facebook doesn’t take down political speech purely on the basis that it is false—that’s a fraught path to walk. But by preventing politicians’ claims from being labeled when a fact-checker has determined them to be untrue, as might happen to any other user, Facebook is clearly treating politicians differently and declining to take even this more timid measure.

Second, and more consequentially, in cases other than falsehoods, it’s not clear that the responses have substantially altered the balancing exercise Facebook will conduct when determining what to do with a politician’s speech. In its submission to the FOB for the Trump case, Facebook said it “has never applied the newsworthiness allowance to content posted by the Trump Facebook page or Instagram account.” This statement defied belief at the time. Facebook had regularly gestured at this policy when people called for Trump’s deplatforming or the removal of individual posts, and referenced it in the referral of the Trump case to the FOB. It also told the FOB that 20 pieces of content on Trump’s account were flagged as violating Facebook’s rules but “ultimately determined to not be violations.” But those reversals apparently weren’t made because the posts were newsworthy.

It turns out the statement that the newsworthiness policy had never been applied to Trump was unbelievable because it wasn’t true. Facebook conceded in its response today that it “discovered” it had applied the policy to content on Trump’s account twice.

The upshot is this: Facebook itself isn’t even sure how significant the newsworthiness policy was in its decisions about Trump. And even with the new policy, little has changed: a case about politicians’ speech will still go into the black box that is the Facebook policy enforcement machine and something will get spit out and no one will ever really know why. The factors going into each individual decision are still broad and leave plenty of discretion, making it hard to predict or challenge how they are applied in any particular case.

This is a key area where the FOB itself should have stepped up and provided more concrete guidance—instead, it left Facebook with almost unfettered discretion in coming up with its new policy. Worse, it gave conflicting signals. While it said in the Trump decision that Facebook should treat all users the same, it also said “Political speech receives high protection under human rights law” and “it is important to protect the rights of people to hear political speech.” On the other hand, it said that because political leaders are more influential, their speech might be considered more persuasive and dangerous. So the FOB set out three considerations that pull in different directions: treat all users the same, but value political speech (which, by definition, politicians’ speech will often be) more highly, but also be wary of the influence of political figures. Oh, and also take into account a number of other factors from the Rabat Plan of Action. The FOB was silent on how heavily to weigh each factor. In short, the FOB’s guidance was confusing.

So while Facebook’s response has said that the platform will no longer put its thumb directly on the scale for politicians, considerations about political speech, and thus speech by politicians, will still be considered under other aspects of the balancing test that Facebook undertakes to weigh the public interest against the risk of harm. Indeed, the new policy still says “the speaker may factor into the balancing test.” Frustratingly, after all this, Facebook’s policy on political speech is still a vague, amorphous, multifactorial balancing test. Facebook has outlined more severe sanctions it might consider for public figures during periods of civil unrest, but Facebook’s Vice President of Global Affairs Nick Clegg has already said it only expects to apply its new enforcement protocols “in the rarest circumstances.”

The significance of understanding how this policy will apply in the future is not primarily about the account of Donald J. Trump. This is about politicians everywhere, many of whom will have been watching Facebook’s decision with interest. Indeed, a number of countries (and one U.S. state) have considered or passed legislation to try to prevent Facebook from doing the same thing to their politicians. This is about Brazilian President Jair Bolsonaro, Philippine President Rodrigo Duterte, Australian Member of Parliament Craig Kelly, Congresswoman Marjorie Taylor Greene—it’s about every politician on Facebook for as long as there’s a Facebook.

The FOB noted in its original decision that “the lack of transparency regarding these decision-making processes appears to contribute to perceptions that the company may be unduly influenced by political or commercial considerations.” Given the nebulous nature of the “new” policies, it’s not clear that nearly six months of back-and-forth about Trump’s account has addressed this underlying problem.

The “cross-check” process

Facebook’s responses almost entirely whiffed on giving more detail about the “cross-check” system it applies to some “high profile” accounts. The existence of the previously undisclosed process came to light in the FOB’s decision, and the system is a bad look for Facebook. While the FOB’s description of the system in its decision was vague, when paired with stories that have come out over the years about how higher-ups within Facebook have interfered with content decisions regarding certain high-profile users for political reasons, it suggests a special escalation (and protection) process for cases that might prove controversial.

Facebook’s responses today say “cross check simply means that we give some content from certain Pages or Profiles additional review” and that it normally applies this to “content that will likely be seen by many people.” The supposed documentation of this in the new “Transparency Center” sheds no further light. It’s still entirely unclear who weighs in on these exceptional cases to check or overrule frontline moderators. This distinction matters. If content is escalated beyond the normal policy teams to people closer to Facebook’s government relations teams, for example, the considerations and priorities brought to bear on a post will be entirely different and potentially not aligned with the neutral application of Facebook’s rules.

Worse still, Facebook says in its response that it’s “not feasible to track” the relative error rates of decisions made through the cross-check process compared with ordinary enforcement procedures, despite these being only “a small number of decisions.” This is poor form. Such data is basic quality assurance. In some cases “the failure to collect information necessary to … evaluation of procedural adequacy” can itself be a failure of due process. This is such a case.

Clarity on Facebook’s strikes policies

Facebook’s extra clarity on its strike and account penalty policies is marginal progress. The specifics aren’t worth getting into here (you can go read them in Facebook’s new Transparency Center), but the FOB had rightly criticized Facebook’s strikes and penalties system as not giving users “sufficient information to understand when strikes are imposed … and how penalties are calculated.” The importance of clear policies is not just a matter of due process to provide users notice of the rules they will be subject to. It can also help shape user behavior toward compliance if the rules and penalties are clear in advance.

The policies that Facebook added to the Transparency Center today are definitely more concrete and detailed than what it had previously disclosed, and this is welcome. But they are also littered with “Facebook may do X.” There are eight “mays” in Facebook’s policy on restricting accounts alone, and it says that its strike system will apply “for most violations.” Facebook isn’t committing itself in any particular case.

These policies have been a mystery for far too long. Just to name one example: in the wake of the Christchurch massacre shooter live-streaming his attack on Facebook, COO Sheryl Sandberg said Facebook was “exploring restrictions on who can go Live depending on factors such as prior Community Standard violations,” but there was never any further update. Facebook’s announcement today still fails to completely remedy that. Users “may” be restricted from using Facebook Live for periods that “normally range” from 1-30 days.

In all these cases, there is no insight given into what considerations determine when the regular policy may not apply.

Regardless, Facebook should treat the release of a more detailed policy as a chance for a natural experiment. With the greater detail on its policies now public, it should collect data on whether and how this changes user behavior. Releasing that data could benefit content moderation more generally, as other platforms could learn from Facebook’s experience.

Facebook’s Investigation of its Role Leading Up to Jan. 6

One of the most consequential recommendations the FOB made was for Facebook to “undertake a comprehensive review of its potential contribution to the narrative of electoral fraud and the exacerbated tensions that culminated in the violence in the United States on January 6, 2021.” BuzzFeed has reported that even before the FOB’s decision, Facebook had created such a report internally, but the FOB’s recommendation was specifically for an “open reflection.” Interestingly, the FOB’s recommendation became a focal point for public pressure: for example, Bob Bauer, who advised Biden’s presidential campaign and served as White House Counsel during the Obama administration, called on Facebook CEO Mark Zuckerberg to make “an unequivocal commitment to the complete and public review suggested by the Oversight Board.”

Unfortunately, neither Bauer’s plea nor the FOB’s recommendation worked to any substantial extent. Facebook did not commit itself to any further public reflection on the role it played in the election fraud narrative that sparked violence in the United States on January 6, 2021. It pointed to existing partnerships with independent researchers and did expand the amount of data it will provide them. Facebook also highlighted its previous enforcement actions against groups like QAnon. But it said “the responsibility for January 6, 2021, lies with the insurrectionists and those who encouraged them,” and its only further commitment is to “continue to cooperate with law enforcement and US government investigations related to the events on January 6.” This is extremely disappointing. Of course the blame for Jan. 6 does not lie entirely, or perhaps even primarily, with Facebook. Other institutions also desperately need to hold themselves accountable. But the dramatic failure of other institutions does not mean that Facebook should not have seized this opportunity to do better and to add to the public record about what enabled the insurrection to happen.

Is the FOB working?

The Trump-Facebook saga is still not over. The FOB split the baby in its decision last month by upholding Facebook’s original decision to suspend Trump but referring the matter back to Facebook for a determination on what to do next. Facebook split the baby again by suspending Trump for two years, which is a substantial period of time but, coincidentally, not enough to last past the next U.S. presidential election cycle. This quartered baby may well end up at the FOB again in 2023.

Still, with Facebook’s responses to the FOB’s most high-profile decision now in, it’s an opportunity to reflect on the overall Oversight Board experiment.

Despite some frustration with aspects of the FOB’s decisions so far, I had thought there were promising signs. The FOB’s recommendations have been targeted at making a broader impact on Facebook’s systems than its own limited remit would suggest, and Facebook had been committing to following through on a number of them. But this time, as with all the other cases, don’t be distracted by the news about Trump’s account or the headline figures Facebook itself releases: the devil is in the details, and the details are disappointing. It’s discouraging that the FOB itself has said it “is encouraged that Facebook is adopting many of the Board’s policy recommendations.” This is too generous.

I have never been as concerned about the non-binding nature of the recommendations as most people. Indeed, given that content moderation is a recursive process that requires feedback about whether and how policies on paper hold up against the practical realities of moderating content at scale, I think the flexibility is important. The membership of the FOB is not well-equipped to be making final calls on the day-to-day logistics of content moderation. The main benefit is the open discussion of this process between Facebook and the FOB, and that has delivered so far: I am learning a lot. The Trump case alone brought insights into Facebook’s treatment of Trump specifically, its cross-check process and its strikes system. Other cases about other countries have been similarly enlightening.

Still, today shows where the non-binding nature of the recommendations can fall down. Facebook provided a little more detail on some policies and gave itself a tick. But in every case the policies it published today leave it substantial room to move. Of course, there will always be a role for discretion in individual cases, and policies can never be completely determinate. But it’s not clear that anything today changed the status quo. The FOB itself is partially responsible, having refused to give more concrete guidance in its decision.

It’s still early in the Oversight Board experiment. The FOB may in the future not shirk making more hard calls itself. Facebook may get sick of the FOB’s audacity and start ignoring it entirely. Public attention may wane, decreasing whatever leverage the FOB has, or public pressure may build if Facebook continues to ignore the FOB’s most important demands. The FOB might prove more useful in bringing attention to global issues, or it might start getting Facebook into too much trouble with governments around the world—it has already pushed Facebook to resist political pressure in places like Turkey, India and now potentially Brazil. And, of course, there’s 2023 to look forward to. After the last year in content moderation, it’d be foolish to try to predict what might happen.

The FOB was never going to fix Facebook overnight or entirely, and the FOB is still better than a kick in the teeth. But today also makes clear the limits of what the FOB can do if Facebook drags its feet. An email from Facebook announcing its responses today claimed that “we are the only social media company holding ourselves accountable so significantly.” That’s arguably still true. It just shows how bad the content moderation governance status quo remains.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
