Countering Harmful Content: A Research Agenda

Paul Rosenzweig
Friday, March 11, 2022, 8:01 AM

Platform-based content moderation is filled with false-positive takedowns and false-negative failures. Is there a better way to approach the moderation of harmful content online?

The steady stream of disinformation and harmful content within the online information ecosystem is nearing flood stage. To date, efforts to ameliorate the scourge have been, in a word, ineffective. Why is that?

One possibility is that the efforts have been misfocused. Most, if not all, of America’s policy approach has looked to online platforms (like Facebook and Twitter) and asked them to do a better job of moderating harmful content. But that approach is simply inadequate.

Consider: Dean Reuter recently wrote a book entitled “The Hidden Nazi.” It is the untold story of SS General Hans Kammler and how the United States may have been complicit in burying the evidence of his monstrous deeds. 

Reuter had a problem, though. The short YouTube video he put together to promote his book, which contained nothing unusual, was taken down. Though it is hard to say for sure, the best guess is that the word “Nazi” in the title triggered one of Google’s algorithms. It took two weeks for the video to be reinstated, and, of course, it has since benefited from the Streisand effect.

I mention all of this not to recommend the book (though Reuter, who is a friend, will no doubt be overjoyed if you bought a copy) but because of what Reuter’s experience says about content moderation at scale. His experience is far from unique and is a reminder that platform-based content moderation is filled with false-positive takedowns.

Likewise, and equally problematic (if not more so), the information space is filled with false-negative failures to control disinformation and harmful content. Despite best efforts, the amount of child sexual abuse material (CSAM) online is far greater today than it has ever been and shows no sign of slowing its exponential growth. Meanwhile, Russia’s campaign to sow misinformation in the 2016 U.S. presidential election was so effective that it became a template for domestic efforts in 2020. The list of false and fictitious information on the network is almost endless (a recent example is the Ghost of Kyiv, the mythical Ukrainian fighter pilot), and it seems impossible to keep up.

Why is this so? At scale, the content moderation process is cumbersome and lethargic. This is not for lack of effort—the platform operators are throwing billions of dollars at the problem. Rather, it is a reflection of the reality that content moderation on platforms is, fundamentally, a Sisyphean task.

There has to be a better way. Or, at least, everyone should hope that there is one. 

The way forward has to start with a reconceptualization of the problem. Implementing measures on the social media platforms is not the only way to moderate the harmful content coursing across the internet. There is an entire ecosystem out there with multiple layers of control and levers for influencing content—and policymakers have yet to explore how that entire ecosystem might be engaged in the process of defending against disinformation campaigns and malicious content. That is the focus of the ambitious “Addressing Harmful Content Online” project (and related podcast) of the Tech, Law & Security Program (TLS), where I am a senior fellow.

Last year, two of my TLS colleagues, Jenna Ruddock and Justin Sherman, began the effort to think outside the box with a survey of the information stack, “Widening the Lens on Content Moderation.” They outlined the structure of an “online information ecosystem” that included a host of entities that might provide an access point for content moderation: 

  • Logical services: The services necessary for accessing, browsing, delivering, hosting and securing information online. These include internet service providers (ISPs), virtual private network (VPN) operators, Domain Name System (DNS) operators including registrars and registries, content delivery networks (CDNs) like Cloudflare and Akamai, cloud service providers (like AWS), web hosting platforms, DDoS (distributed denial of service) mitigation services, and web browser systems. 
  • Content services: There are many venues for direct content moderation, including the well-known platforms (such as Twitter or Facebook), but this category also includes search engines (like Google) and app stores (like Google Play). 
  • Financial services: Finally, the ecosystem also includes the financial systems that facilitate monetary exchange (like PayPal), which lie at the core of much online information exchange.

That earlier paper was descriptive, but it made clear that many other entities within the online information ecosystem can exert—and have exerted—control over content. Recall, for example, that 8chan was disrupted at least in part because its DDoS mitigation provider—Cloudflare—declined to continue providing services.

The next step in the process, currently underway, is a rigorous examination of the full ecosystem of harms online—including off-platform harms—to assess if, when, how, and according to what standards infrastructure owners can or should exert such control. For example, today the providers in the infrastructure stack exert control almost exclusively through private contractual requirements, which may or may not be the optimal methodology.

TLS’s research agenda looks at this question holistically, with the overall objective of determining whether interventions beyond platform-based content moderation would be more effective at controlling harmful content (while being less threatening to legitimate content). The rest of this post describes how such a research program might proceed.

First. An initial question might be, what concerns about content moderation at the infrastructure or ecosystem level could reasonably be addressed by non-content reforms? In other words, what tools exist other than content moderation for addressing harmful content? Here are some thoughts:

An answer to this question might be found by examining cultural and economic factors within the information ecosystem. Here one thinks, particularly, of questions about corporate form and culture, market power, and cultural competency in countries of operation. Historical experience in the platform content moderation space suggests that these are foundational issues that: 

  • Allow small groups of companies’ content-related decision-making to have an outsized impact on the ecosystem as a whole. 
  • Lock in a default or unacknowledged U.S.-EU-centric perspective, since infrastructure companies in particular don’t see themselves as needing content-related competency in their countries of operation. 
  • Often obscure the fact that decision-making about whether infrastructure companies should be involved in content-related decisions is currently far from democratic and disproportionately excludes those most impacted by harmful content online. 

While a great deal of research about these dynamics exists at the platform layer (such as the culture at Facebook), there is very little such research at the infrastructure layer. Several high-profile content interventions by infrastructure companies have clearly been informed by companies’ exposure to and grasp of hyperlocal political dynamics, particularly in the United States—take, for example, Cloudflare’s decision to cease providing services to The Daily Stormer, or New Zealand and Australian ISPs’ response to platforms hosting the livestream of the Christchurch terror attack. How can companies assess whether similar interventions are justified in other countries where they operate, particularly in regions where English is not a dominant language used online, if they do not even know what to look for? 

Building on the structural and cultural analysis, the next focus of research would apply that understanding to a case study—perhaps of the payment processing and CDN/cloud hosting spaces. My hypothesis is that the examination would confirm the perception that market concentration results in a small group of service providers’ decision-making having an outsized impact on large parts of the ecosystem, increasing barriers to entry for others, and increasing the collateral consequences of any one service choosing to moderate certain content. One might also suspect that the structure of this infrastructure space locks in a U.S.-EU-centric approach to infrastructure policymaking and has the effect of obscuring how little oversight there is of that decision-making and how little input the public and affected communities have into it.

If that hypothesis were borne out, a third part of the inquiry would examine the need for more proactive oversight and civil society engagement on the issue of infrastructure-level content questions and ask what the structure for that engagement might look like.

Second. Another line of inquiry could look at the tools that might be available to infrastructure providers that would allow them to make content-related decisions in a more transparent and accountable way. My anecdotal perception is that infrastructure components of the information ecosystem are already exercising such control in nonsystematic ways, principally through the use of contractual limitations in their terms of service. When, for example, several ISPs shut down access to Proud Boy websites after the Christchurch attack, they did so under their preexisting terms of service. While this indicates how such control might be exercised, there is little, if any, research on the current scope of these non-platform efforts. That research should include:

  • A survey and compilation of the terms of service for the largest infrastructure providers in the various verticals (access, hosting, browsing, distributing and facilitating) to identify commonalities and differences among them.
  • A review of the (very limited number of) transparency reports released by information ecosystem infrastructure providers about their acts (and failures to act) in disrupting the spread of harmful content.
  • Again, a useful case study or two of instances in which infrastructure actors used their powers to cut off problematic content.

Here, research might find that infrastructure companies are moderating content with far less transparency and far fewer public-facing guidelines than their platform counterparts. This, in turn, diminishes accountability for their efforts and increases the potential collateral costs of more frequent infrastructure-level content interventions. One result of this area of research might be a proposal for a model set of terms of service and a set of guidelines and accountability mechanisms.

Third. Most ambitiously, having explored the culture and the existing rule set for content control, I believe that further research would be useful to answer two additional questions: 

The first question is, beyond private contractual controls, which, if any, governmental interventions (through standards, regulations, taxes, subsidies or laws) would be most effective in addressing malicious content? In other words, is the current tool set across the online information ecosystem sufficient, or should it be supplemented with more aggressive intervention? This inquiry would center on the reality that in highly concentrated areas of the internet infrastructure, already-dominant companies will be in a much better position to comply with regulatory burdens than potential new entrants—which is not a cycle that should be reinforced.

A related question is which of the various aspects of the infrastructure stack is the Coasean least-cost avoider? That is to say, where would an intervention be most effective in disrupting disinformation, with the least cost to privacy and freedom of speech? Though the answer is unclear, it is at least plausible that content management will scale more effectively at a different level than the platform. Indeed, one can hypothesize that, all else being equal, there would be efficiencies of operation (and, ultimately, greater transparency and accountability) higher up the information stack than platform content control as it currently operates.
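One rough way to make that least-cost-avoider question concrete (a purely illustrative sketch; the layers and cost terms below are my own hypothetical labels, not anything drawn from the TLS research) is as a constrained minimization over the layers of the stack:

\ell^{*} \;=\; \arg\min_{\ell \,\in\, \{\mathrm{ISP},\ \mathrm{DNS},\ \mathrm{CDN},\ \mathrm{host},\ \mathrm{platform},\ \mathrm{payments}\}} \big[\, C_{\mathrm{operational}}(\ell) + C_{\mathrm{speech}}(\ell) + C_{\mathrm{privacy}}(\ell) \,\big] \quad \text{subject to} \quad D(\ell) \ge D_{\min}

Here D(\ell) stands for the expected disruption of the targeted harmful content from intervening at layer \ell, and the C terms stand for the operational burden and the collateral costs to legitimate speech and privacy. The hypothesis sketched above is that, for at least some classes of harm, this minimum sits higher up the stack than the platform layer.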

To be sure, all of this is an ambitious research agenda. But it cannot be gainsaid that the current approach to disinformation and the moderation of harmful content is inadequate. Every day brings more reports of false positives (like “The Hidden Nazi”) and false negatives, as coronavirus-denialism and false election fraud claims suffuse the information space. Perhaps there is no better answer—but I have, I hope, at least outlined some important questions to ask before reaching that depressing conclusion.


Paul Rosenzweig is the founder of Red Branch Consulting PLLC, a homeland security consulting company, and a Senior Advisor to The Chertoff Group. Mr. Rosenzweig formerly served as Deputy Assistant Secretary for Policy in the Department of Homeland Security. He is a Professorial Lecturer in Law at George Washington University, a Senior Fellow in the Tech, Law & Security program at American University, and a Board Member of the Journal of National Security Law and Policy.
