
The Lack of Content Moderation Transparency: The Cloudflare and Kiwi Farms Example

Paul Rosenzweig, Katie Stoughton
Thursday, September 8, 2022, 9:31 AM

Nobody knows how often, or why, some content moderation decisions are made.

Hand against blue screen of numbers. (Source: Kai Stachowiak, CC0, via Wikimedia Commons)


Cloudflare has withdrawn services from Kiwi Farms.  

That sentence probably doesn’t mean anything to most readers, so let’s unpack it. Cloudflare provides internet security services (protecting websites, for example, against cyberattacks) and also speeds access to websites by allowing sites to be cached around its network (that is, stored in more than one network location). Kiwi Farms, an online forum that has facilitated harassment campaigns, employed Cloudflare to protect its site. According to the Washington Post, at least three suicides resulted from activity on Kiwi Farms, with “many on the forum consider[ing] their goal to drive their targets to suicide.”
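To make concrete what sitting “behind” Cloudflare looks like from the outside, here is a minimal, illustrative sketch in Python (standard library only). It checks whether a site’s HTTP responses carry headers Cloudflare commonly adds, such as “server: cloudflare” and “cf-ray”; the target URL and the header heuristic are our own assumptions for illustration, not a definitive test and not anything drawn from Cloudflare’s or Kiwi Farms’s own materials.

    # Illustrative heuristic only: Cloudflare-fronted sites commonly return
    # "server: cloudflare" and a "cf-ray" response header, but headers can be
    # absent or altered, so treat this as a hint rather than proof.
    import urllib.request

    def appears_cloudflare_fronted(url: str) -> bool:
        with urllib.request.urlopen(url) as resp:
            headers = {name.lower(): value for name, value in resp.getheaders()}
        return headers.get("server", "").lower() == "cloudflare" or "cf-ray" in headers

    # Hypothetical usage:
    print(appears_cloudflare_fronted("https://www.cloudflare.com"))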

In a Cloudflare blog post on Aug. 31, Cloudflare’s leadership argued that they see the company’s provision of basic security and caching services as infrastructure, like internet connectivity, and that they should not be held responsible, absent judicial proceedings, for the content their services protect. Cloudflare contrasted this work with website hosting, which it said should come with increased responsibility and discretion. Based on this self-conception, Cloudflare initially resisted calls to block Kiwi Farms.

Later in the week, however, Cloudflare changed its stance. Though Cloudflare’s decision to withdraw services was clearly tied to increased public pressure, the company ascribed its change of heart to escalating threats of violence by Kiwi Farms users responding to critics. In other words, Cloudflare became increasingly concerned that Kiwi Farms was hosting content that might lead to violence and consequently terminated its service-side support for the site. At its heart, this was a content moderation decision, and a major one.

Cloudflare’s decision may seem insignificant, but it has tremendous impact. Approximately 20 percent of the internet uses Cloudflare for internet security services, and by dropping Kiwi Farms, Cloudflare can effectively, and unilaterally, shut the site down (at least until Kiwi Farms finds another security service provider, if it can find one at all). In the modern internet ecosystem, no hosting provider would host an unprotected site; the dangers would be too great.

This recent move reflects the ongoing reality that many content moderation decisions are increasingly being made outside the traditional confines of websites and service providers, such as Twitter and Facebook. The challenge is that most of those content moderation decisions are obscure—companies are not obliged to make their decisions transparent and have no real economic interest in doing so.

The Cloudflare/Kiwi Farms case is almost paradigmatic. To be sure, Cloudflare has an abuse policy. But there is no way of knowing how it is applied. Cloudflare publishes a transparency report of its activities (the latest is here, and its archives going back to 2018 are here, with further archives back to 2013 here), but those reports deal exclusively with legal requests (primarily requests to remove copyrighted material under the Digital Millennium Copyright Act and responses to subpoenas). Nothing in those reports details how frequently Cloudflare blocks websites based on content, nor how it applies its own publicly expressed standards. We know (because the company has said so publicly) that Cloudflare has done this before; it previously blocked the Daily Stormer and 8chan, but that’s about it. The how, the why, and the who of Cloudflare’s content moderation decisions remain unclear.

None of this is to say that the Kiwi Farms decision is wrong. Indeed, from what we can see on the public record, it seems well justified, especially given the manifest risk of physical violence and the apparent efforts by Kiwi Farms users to inflict real-world harm on their targets. But it is to say that many parts of the internet information ecosystem make content moderation decisions, and nobody really has any sense of how they are made.

Cloudflare is not alone in this regard. Our recently completed research suggests that fewer than half of the companies in the “internet space” publish any transparency reports at all, and of those that do, almost none provide transparency about their content moderation decisions. These companies exercise great power through their content moderation decisions. If Kiwi Farms cannot find another service provider, it is effectively dead. Regardless of how one feels about Kiwi Farms, the fact that companies within the online information ecosystem wield this power, and that we know almost nothing about how they use it, is troubling.

Cloudflare may think that its status as an “infrastructure provider” means that it should be held to a different standard. Perhaps so. But the reality is that we can’t know how to judge that claim without greater transparency about what Cloudflare does, and how.


Paul Rosenzweig is the founder of Red Branch Consulting PLLC, a homeland security consulting company, and a Senior Advisor to The Chertoff Group. Mr. Rosenzweig formerly served as Deputy Assistant Secretary for Policy in the Department of Homeland Security. He is a Professorial Lecturer in Law at George Washington University, a Senior Fellow in the Tech, Law & Security program at American University, and a Board Member of the Journal of National Security Law and Policy.
Kathleen Stoughton is a third-year Juris Doctor candidate at American University’s Washington College of Law. She received her B.A. in Political Science from Carleton College in 2020.
