
Facebook, It’s Time to Put the Rules in One Place

Carly Miller
Friday, March 5, 2021, 2:59 PM

Facebook’s policies on health misinformation stretch across blog posts, different sections within the Community Standards, and now in its Help Center. This must change.

Facebook like icons. (Pixabay, https://pixabay.com/service/license/)


On Jan. 28, Facebook responded to the Facebook Oversight Board’s first set of policy recommendations. These nonbinding recommendations included a suggestion that Facebook should “create a new Community Standard on health misinformation, consolidating and clarifying the existing rules in one place.” Plainly stated, the Oversight Board told Facebook that its policies are confusing and hard to follow. The board was right: the Community Standards—Facebook’s key policy document—did not even mention the word “vaccine” until recently.

For the past few months, my team at the Stanford Internet Observatory has been zealously attempting to piece together Facebook’s policies. In the fall of 2020, we focused on tracking policy updates on election-related content for the Election Integrity Partnership (EIP). Recently, we’ve focused on vaccine-related content for the Virality Project. To keep track of policy changes, we scoured Facebook Newsroom blog posts and tweets by executives, and even used the Internet Archive’s Wayback Machine for side-by-side comparisons of changes.
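For readers curious what that kind of tracking looks like in practice, here is a minimal sketch of how one might pull archived snapshots of a policy page from the Wayback Machine and compare them. It uses the Internet Archive’s public CDX API; the page URL, date range, and diffing approach are illustrative assumptions, not the tooling the EIP or the Virality Project actually used.

```python
# A minimal sketch of comparing archived versions of a policy page via the
# Internet Archive's public CDX API. The page URL and date range below are
# illustrative placeholders, not the partnership's actual tooling.
import difflib
import requests

CDX_API = "http://web.archive.org/cdx/search/cdx"
PAGE = "https://www.facebook.com/communitystandards/"

# List archived captures of the page in a given window, collapsing
# captures with identical content so only real changes remain.
resp = requests.get(CDX_API, params={
    "url": PAGE,
    "from": "20200901",
    "to": "20201103",
    "output": "json",
    "filter": "statuscode:200",
    "collapse": "digest",
}, timeout=30)
rows = resp.json()

if len(rows) > 2:  # first row is the header, e.g. ["urlkey", "timestamp", ...]
    ts_index = rows[0].index("timestamp")
    first, last = rows[1][ts_index], rows[-1][ts_index]

    def fetch(ts: str) -> str:
        """Download the archived HTML for one capture timestamp."""
        return requests.get(f"http://web.archive.org/web/{ts}/{PAGE}", timeout=30).text

    # Produce a unified diff between the earliest and latest captures.
    diff = difflib.unified_diff(
        fetch(first).splitlines(), fetch(last).splitlines(),
        fromfile=first, tofile=last, lineterm="",
    )
    print("\n".join(list(diff)[:40]))  # show only the first few changed lines
```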

That tracking experience taught me that an average user does not have the time to track down all of Facebook’s policies on any given subject—they’re scattered and hard to follow. With the average user in mind, the Oversight Board recommended synthesizing a new Community Standard for health misinformation. Facebook responded last week that it had already taken action by consolidating its policies into a Help Center blog post, which it links to from the Community Standards. This is a step in the right direction, but it does not accomplish what the board asked for: consolidating and clarifying the existing rules in one place. Facebook’s policies on health misinformation still stretch across blog posts, different sections of the Community Standards, and now its Help Center. Here’s how Facebook can address the Oversight Board’s concerns: Put all of your policies in one place.

In the grand scheme of content moderation, where Facebook puts its policies on its website may seem like a minor issue, a simple user experience fix. But policies not only need to exist; they also need to be accessible to be meaningful. Both Facebook and other platforms should take this obligation more seriously, and not just for coronavirus-related policies.

Facebook’s Community Standards: Uneven Terrain

To understand why Facebook lacks policy consolidation, it’s first important to look at what Facebook’s Community Standards say and what they don’t.

When Facebook presented its Community Standards to the public for the first time in April 2018, the platform framed the move as “publishing our internal enforcement guidelines.” But these guidelines do not include all mechanisms the platform uses to take action on violative content.

Over the years, Facebook and other platforms have developed moderation strategies beyond a binary decision to keep or remove content. As a result, Facebook’s enforcement mechanisms generally fall into three categories: removing content, reducing content distribution, and/or informing users about how and why their content violates Facebook’s policies. Determining the best approach involves iterating and learning from users’ responses in different scenarios. As Facebook modified these enforcement techniques in response to real-world events, it introduced the majority of the new policies through blog posts. However, some of these policy updates never made it into the Community Standards.

Facebook began to roll out policies on coronavirus misinformation in late January 2020 and updated them through a series of blog posts. It first mentioned the coronavirus vaccine in early December 2020, though a collection of blog posts from 2019 had spelled out Facebook’s previous policies about vaccine-related content. Before the Oversight Board’s decision, the coronavirus and vaccine-related policies existed almost exclusively in blog posts. In response to the board, Facebook added a few bullet points to its Community Standards, which then link to the more extensive blog post on coronavirus and general vaccine misinformation in its Help Center. In other words, Facebook’s policies are literally all over the place.

But coronavirus misinformation is not the only area where Facebook has this problem: Facebook’s election-related policies also lack consolidation. Leading up to the 2020 U.S. election, Facebook updated its policies through blog posts in September and October 2020. While Facebook updated its Community Standards each time, certain types of content the platform promised to take action on in those blog posts were not included in the Community Standards. Specifically, Facebook’s policy to address content that delegitimizes election results or prematurely claims victory for a candidate is still absent from the Community Standards. In retrospect, both of these policies were significant: The Election Integrity Partnership monitored multiple social media platforms around the election, and in our recently published report, we found that most of the claims we flagged to platforms during our election monitoring related to attempts to delegitimize the election. It is also unclear what will happen to these policies now that the U.S. election is over. Will they apply to other countries? Are they specific to the 2020 U.S. election? Adding the policies to Facebook’s Community Standards would clear this up.

Even during the election, Facebook should have had one centralized location to track these updates. While the Election Integrity Partnership tried its best to consolidate them, as a multibillion-dollar company Facebook should have been able to provide more guidance; for example, Facebook could have linked to its election policies from its Voting Information Center. Or Facebook could have followed Twitter’s or TikTok’s lead and provided more transparency about how it created and enforced its election-related policies. Twitter, for example, showed a timeline of its policy iterations related to election integrity, and TikTok produced a timeline of the different content it took action on every day leading up to the election. Policy consolidation would have been especially valuable during the election.

Diving deeper, some policies concerning platform features such as Facebook Live are also missing from the Community Standards. After the Christchurch shootings in 2019, Facebook published a blog post restricting users who violate the platform’s “most serious policies” from using Live “for set periods of time.” Which of Facebook’s policies are the “most serious”? In its Community Standards, the only restriction on Live appears in the rules on violent and graphic content, which state that users cannot post “Live streams of capital punishment of a person.”

The Oversight Board’s decision brought attention to the lack of transparency and consolidation around Facebook’s coronavirus misinformation policies, but this is a recurring issue. Facebook is capable of this much-needed consolidation: In some policy areas, like suicide and self-injury, Facebook does a good job of incorporating commitments expressed in blog posts into its Community Standards. Facebook needs to do the same for the policies that remain unconsolidated and for new policy updates in the future.

Facebook should adopt the spirit, and not just the letter, of the Oversight Board’s recommendation, and consolidate these other areas as well.

Evaluating Facebook’s Solution—the Help Center Blog Post

If Facebook’s model for consolidating policies is the Help Center blog post published the week of Feb. 8, it’s worth examining that post’s strengths and weaknesses as a model for other types of policies in need of consolidation.

Facebook’s new policy does have some upsides. It manages to cut through the different content categories around which the Community Standards are structured. For example, the Help Center post examines how Facebook’s vaccine policy intersects with its policies on coordinated harm and hate speech, demonstrating how challenging it is to address content by placing it in mutually exclusive categories; researchers note, for instance, the prevalence of political conspiracies overlapping with vaccine misinformation. The Help Center post also provides considerable detail on the types of claims Facebook will act on and the authoritative sources it consults in making these policies, and it offers insight into how the platform decides to act on content that is told through personal experiences or anecdotes.

However, the Help Center solution falls short on increasing transparency in three important ways. First, Facebook fails to put everything into “one place,” as the Oversight Board recommended. Facebook’s policy additions related to the coronavirus vaccine in the Community Standards are scattered across subsections. And although Facebook links to the Help Center in its Community Standards, the link appears as a sub-bullet beneath another bullet, burying it on the page.

It’s unclear why Facebook did not add the contents of its Help Center post to the Community Standards. One of the main benefits of the Community Standards is that you can easily see what has been updated and when: Facebook highlights new policy language and changes. The Help Center post does not do this.

Perhaps Facebook views its coronavirus policies as short term and thus doesn’t want to incorporate them permanently into its Community Standards. But Facebook should not treat its new policies as something with an expiration date—conspiracy theories on the coronavirus are unlikely to disappear after the pandemic ends. Facebook will need these policies for years to come.

Second, the Help Center blog post reduces transparency into Facebook policies that had been laid out in 2019 blog posts about vaccines. Prior to the coronavirus pandemic, Facebook’s policies on vaccine-related misinformation centered on reducing the distribution and recommendation of anti-vaccine pages and groups that promoted this type of misinformation. Facebook specified that this reduction took place in the user’s News Feed and that accounts propagating vaccine misinformation would not be recommended or show up “in predictions when you type into Search[.]” Facebook also specified that on Instagram, which Facebook owns, the app won’t recommend vaccine misinformation on the “Explore” page—where users can come across content from accounts they don’t follow—or on “hashtag” pages—where users can look up content under a particular hashtag. In Facebook’s Help Center post, these enforcement details are condensed, so the public loses transparency into where this moderation takes place. Facebook’s new policy states: “Pages, Groups, profiles, and Instagram accounts that repeatedly post misinformation related to COVID-19, vaccines, and health may face restrictions, including (but not limited to) reduced distribution, removal from recommendations, or removal from our site” (emphasis added). Users who have not read the earlier Newsroom blog posts may not know what these restrictions look like in practice. And the word “may” is doing a lot of work here.

Third, the Help Center post is scoped solely around the coronavirus and vaccines. Other types of medical misinformation, such as “[p]romoting or advocating for harmful miracle cures for health issues,” are mentioned in Facebook’s Community Standards yet fall under a different section of the rules: “Coordinating Harm and Publicizing Crime.” There’s an easy fix: creating a new section in the Community Standards titled “Health Misinformation” that includes all of these issues would meet the Oversight Board’s recommendation for a “new Community Standard on health misinformation.” As this post emphasizes, Facebook says it has adopted the board’s recommendations on this point, but it has done so only partially and in a way that does not address all of the board’s concerns.

The Larger Impact Moving Forward—What Will the Transparency Center Bring?

These dispersed policies were adopted in high-profile moments of public pressure (coronavirus misinformation, election integrity policies, responses to Christchurch). Because of that, they were announced through blog posts or Mark Zuckerberg’s Facebook posts in order to be as visible as possible. But this only underlines the need for them to be consolidated. A system of rules that requires researchers to remember whether a particular policy lives in a blog post or in a Community Standard—perhaps many years after the fact—leads to confusion about what the rules are at any given time, inadequate notice to users about what they can or cannot post, and difficulty holding Facebook to account for enforcing its policies.

Because it is one of the biggest social media platforms, how Facebook approaches content moderation sets a precedent for other platforms. While tracking different platforms’ policy updates during the election, I noticed that updates across platforms were often published only a few days apart and focused on the same type of content. Facebook would announce a rule change, a few days later Twitter would announce a similar update, and TikTok would follow—though not necessarily in that order. As it stands, TikTok has stopped warehousing all of its policies in its Community Guidelines and now puts some policies in its Safety Center. As with Facebook, it is unclear what separates TikTok’s Safety Center from its Community Guidelines. In addition, Twitter seems to be missing a general policy on health misinformation and, like Facebook, has only a coronavirus-related policy. But this narrower scope may make sense for Twitter, since its policies read like blog posts and show both the content that is not allowed and the actions Twitter will take to address it.

In its response to the Oversight Board on March 4, Facebook announced that in the coming months it will launch a new Transparency Center to provide more insight into its Community Standards and how they are enforced. If well designed, the center gives Facebook the opportunity to consolidate the enforcement mechanisms it has introduced throughout its blog posts and to provide the necessary link between the types of content allowed on its platform and how the platform will deal with violations. This may mean the Oversight Board (and I) don’t fully get our wish to have everything in one place, but Facebook, I’m looking to you to seize this opportunity.


Carly Miller is a research analyst at the Stanford Internet Observatory (SIO). She most recently co-led the policy analysis for the Election Integrity Partnership, a coalition of researchers from SIO, the University of Washington’s Center for an Informed Public, the Atlantic Council’s Digital Forensic Research Lab, and Graphika. Before joining SIO, Carly was a team lead at the Human Rights Investigations Lab at Berkeley Law School. Carly received her BA in political science from the University of California, Berkeley, in May 2019.
