Cybersecurity & Tech

Online Regulation of Terrorist and Harmful Content

Jacob Berntsson, Maygane Janin
Thursday, October 14, 2021, 8:04 AM

The surge of global internet regulation to combat terrorism and other harmful digital content continues to pose a risk to freedom of expression online and the rule of law, while leaving many questions about its effectiveness unanswered.

Apps on an iPhone. (Pixabay, https://pixabay.com/service/license/)

Global efforts to regulate the internet have increased dramatically over the past four years as government concerns about terrorist content or various other “online harms” continue to grow. We at Tech Against Terrorism specialize in helping tech companies, ranging from social media platforms to smaller file-sharing sites, messaging apps, and financial technology platforms, improve their response to terrorist content while safeguarding human rights. And we know how hard it can be, for smaller tech platforms in particular, to keep up with the fast-changing regulatory landscape. We therefore set out to provide a guide to the online regulation emerging worldwide for the tech platforms we work with. To this end, we analyzed more than 60 pieces of legislation in 17 jurisdictions and examined how relevant initiatives were viewed by more than 100 experts from across civil society and the law.

The initiatives have ranged from imposing short removal deadlines to incentivizing more automated content moderation practices. These efforts have the potential to rewrite some of the ground rules of the internet and online speech without, in our assessment, effectively tackling terrorist content online. Furthermore, many of the examined approaches risk harming freedom of expression, the rule of law, and sectoral competition and innovation. Unfortunately, democratic states risk setting a poor example in this regard.

The Overlapping Features Among the Laws and Proposals

The assessment of the global initiatives revealed numerous insights into the various approaches, as well as their shortcomings. Here we highlight some common features.

Governments have imposed short removal deadlines for illegal or harmful content. Once a government authority has issued a takedown notice after identifying potentially illegal content, companies face short deadlines ranging from one hour (as in the EU regulation on terrorist content), to four hours (Turkey), to 24 hours (in France’s proposed—and subsequently struck down—cyber hate speech law). A company will often face financial penalties for failing to take down the content in a timely manner.

Governments, predominantly in Europe, have proposed, and in some cases implemented, outsourcing to tech platforms the responsibility for determining whether content is illegal. This means that companies—as opposed to courts or other judicial bodies—are required by law to assess whether content is illegal following notification by the authorities or, in certain instances, by users. This outsourcing trend, also present in Australia, raises fundamental questions about whether independent judiciaries or private tech companies are best placed to be the arbiters of illegality.

Governments have suggested holding platforms legally liable for illegal or “harmful” user-generated content. Such proposals have been introduced in the U.S., where discussions to reform Section 230 of the Communications Decency Act—which currently protects tech platforms from legal liability for content hosted on their services—have focused on holding platforms liable for failing to curb harmful and illegal content shared by users. In Indonesia, platforms can also face legal liability for user-generated content if they fail to comply with the country’s “social media law” passed in 2020. In other legislative proposals, such as in the U.K. and India, liability would extend to designated employees if a platform fails to comply with certain provisions of the laws.

Platforms have been incentivized to increase their reliance on automated content moderation. Such reliance is either explicitly encouraged or compelled by law, as in the EU’s terrorist content regulation, Pakistan’s 2020 Citizens Protection Rules, and India’s 2021 Guidelines for Intermediaries and Digital Media Ethics Code. Platforms are also subtly incentivized through short removal deadlines, which push companies to deploy automated tools that can identify and remove content at scale quickly enough to comply.

Governments have also pushed to compel the extraterritorial enforcement of national laws by mandating that platforms remove content beyond their borders for violating domestic law. Pakistan and Brazil are among the countries that have passed such legislation. Such measures could effectively see national speech codes implemented globally, raising questions about territoriality.

Other proposals have focused on mandating transparency and accountability measures for platforms. Such provisions may form part of regulations that seek to improve action on illegal content and call for transparency reporting on a tech company’s compliance with those laws. The U.K.’s draft Online Safety Bill, the French cyber hate speech law, and the EU terrorist content regulation all contain similar provisions. Other legislative initiatives, such as the EU’s draft Digital Services Act (DSA) or India’s 2021 Guidelines, focus more on systemic accountability and transparency, such as calling for algorithmic transparency (DSA) or for an easy-to-understand articulation of moderation policies and regular transparency reports (2021 Guidelines).

Lack of Understanding of Current Industry Practices

In examining global regulatory initiatives, we are concerned that policymakers underestimate the current efforts being made by the global tech industry. While there is certainly room for improvement, and no platform is perfect in its response, most larger (and an increasing number of smaller) companies already have policies and enforcement practices in place for content that may be “harmful” but—depending on the jurisdiction—nonetheless legal. Many companies did so long before the recent regulatory calls to mandate the removal of such material. And despite popular misconceptions about terrorist content running amok online, most large platforms automatically remove 95 percent or more of terrorist content (based on available tech company transparency reports), and most smaller platforms respond to takedown requests within hours of notification. Data from the Terrorist Content Analytics Platform—which we are building to support tech companies in actioning terrorist content swiftly—shows that since November 2020, 96 percent of URLs pointing to verified terrorist content were removed by smaller platforms after we alerted them to it. Likewise, we have in the course of our engagement with tech companies seen encouraging results in general efforts to improve the policies, enforcement practices, and transparency measures required for effective online counterterrorism. In our Mentorship Programme, we have worked with more than 25 different platforms, 12 of which have updated their counterterrorism policies, with five vastly strengthening their content moderation capability. Nevertheless, a small minority of platforms remain reluctant to engage with content moderation requests, and some of them are “alt-tech” platforms catering specifically to content rejected by more mainstream platforms.

The point is not to absolve platforms of responsibility, or to pretend that the threat is not serious, but rather to examine the basis in data for the recent regulatory wave and the specific provisions within it. Few governments provide concrete evidence justifying the introduction of specific legislation or regulatory provisions.

Implications for Countering Terrorism Online

Many laws aimed at regulating the online space appear to be drafted with the Facebooks of the world in mind. This myopic approach accounts neither for the fact that smaller platforms do not have the same resources at hand nor for the reality of how terrorists use online platforms to disseminate content. Smaller tech companies already face the greatest threat of terrorist exploitation, and governments’ failure to account for smaller platforms’ capacity constraints when tackling terrorism only exacerbates the problem.

Many of these smaller platforms are operated by just one person or, in some cases, a handful of people. While the larger companies have in the past few years established dedicated subject matter expert teams to tackle terrorist use of their platforms, smaller platforms often cannot staff dedicated trust and safety teams at all. Further, due to financial constraints, many of the smallest platforms are unable to build and deploy automated content moderation systems that could alleviate some of this burden. Terrorist organizations seek to exploit this lack of resources, and while terrorist actors certainly want to use the large platforms and go to great lengths to circumvent their moderation systems—whether by sanitizing their rhetoric, blurring logos, or posing as legitimate news outlets to avoid automated identification—they are far more successful at building a stable presence on smaller sites.

Governments that overlook the role of smaller platforms may gravely undermine the effectiveness of these global regulatory approaches. For example, the EU’s terrorist content regulation, which will enter into force in June 2022, requires public content-sharing platforms to remove suspected content within one hour. If a platform is staffed by only one person—which, for the platforms most exploited by terrorist groups, is not an unusual scenario—how can it comply around the clock with a one-hour removal deadline? If several of the most at-risk platforms cannot comply, what effect will the law have? And how can smaller platforms remain competitive—a central preoccupation of policymakers around the globe—under the weight of increased liability and government interference?

Regulatory approaches to online counterterrorism are similarly marred by governments’ failure to appreciate smaller platforms’ constraints when it comes to transparency. In our Mentorship Programme, we have supported several platforms in producing their first-ever transparency reports. This is often an arduous process, and one with which smaller companies struggle because of their limited capacity to generate the relevant data. While the drive for improved tech-sector transparency is unequivocally good, transparency requirements on smaller platforms risk being overburdensome and ultimately ineffective without adequate support mechanisms in place. To improve transparency reporting on both sides, we developed the Tech Against Terrorism Guidelines, which set a benchmark for companies and call for increased meaningful government transparency—something that is currently more or less nonexistent.

The Risks to Human Rights and Freedom of Expression


Our analysis shows that, while the aims might be laudable, online regulation is often deeply flawed in its defense of freedom of expression online and in its observance of the rule of law, and that democratic states are no less guilty of such lapses. Other researchers have corroborated this finding—most recently Freedom House, which highlighted misguided online regulatory efforts as a key reason why global internet freedoms decreased for the 11th consecutive year. To be clear, and as many lawmakers will point out, terrorism and violent extremism also pose severe risks to freedom of expression online and offline. But acknowledging that threat does not conflict with calling on governments to improve their own countermeasures.

The concerns for freedom of expression are clear when platforms are compelled to remove content within a short time frame on pain of penalties and liability. Furthermore, when governments outsource to private tech companies the task of evaluating what violates the law, a task that should fall to independent judicial authorities, the damage to democratic principles such as the rule of law is substantial.

Moreover, several of the legislative proposals examined are overly broad in scope and contain vague definitions of the targeted content categories. In Kenya, the Communication Authority guidelines require tech platforms to curb the spread of content that “contain[s] offensive, abusive, insulting, misleading, confusing, obscene, or profane language[,]” and in India the 2021 Guidelines prohibit content that “threatens the unity, integrity, defence, security or Sovereignty of India[.]” Further, some definitions risk being inoperable, with those of terrorist content bordering on the circular. This is the case in the U.K. Online Safety Bill, which defines terrorist content as content that leads to a terrorist offense.

Vague definitions of central concepts make it less likely that platforms will be able to comply with legislation by implementing moderation measures that are both appropriate and proportionate. Vagueness also entails a significant risk to freedom of expression: When platforms err on the side of caution to avoid penalties, opting for broad interpretations to cover all bases, they are likely in the process to remove speech that is legal or otherwise legitimate. Lawmakers might stress that some degree of flexibility is required to future-proof regulation, but this nonetheless risks impeding tech companies’ evaluation and implementation of any given legislative regime.

There is also a wider question concerning the extraterritorial enforcement of national legislation. Several of the legislative initiatives we examined violate principles of territoriality by requiring the removal of content across platforms beyond their national jurisdictions. The legislation to regulate the online space considered in Brazil, for example, could be used to force platforms to block content worldwide. Likewise, in Pakistan, the Citizens Protection Rules seeking to tackle online harms state that the provisions set out by the rules apply to all Pakistani nationals, regardless of where they are based. While governments may operate speech codes that reflect domestic customs and concerns, they should not enforce such codes beyond their borders to silence legal speech elsewhere.

The Replication of Ill-Fit Laws

The drive toward global online regulation raises a more fundamental question concerning the poor regulatory example set by democratic countries, and the concomitant risk of creating a fragmented regulatory landscape.

In general, democratic states are introducing laws that lack proper safeguards for freedom of expression and digital rights, outsourcing legal adjudication to private tech companies, and pressuring platforms to remove content without sufficient time to make an informed and responsible decision.

Our analysis shows considerable overlap between the online regulations promulgated by states with solid democratic indices and those with less impressive records. Laws passed by democratic governments occasionally lack proper mechanisms of accountability and redress and would benefit from clearer explanations of how their provisions can be enforced while observing both accountability and established liberties. This is all the more concerning when legislation passed in democratic countries is held up elsewhere as an example to follow. Rwanda, Mauritania and Uganda, for instance, have all pointed to legislation proposed or passed in Europe to justify regulating the online space. Unfortunately, this trend extends to end-to-end encryption, which several democratic states have signaled a willingness to break, despite a large body of evidence—including a new Tech Against Terrorism report—suggesting that this would have a detrimental impact on digital security, and that it is possible to investigate terrorist groups online without infringing on privacy. Though nothing may compel nondemocratic states to desist from introducing draconian legislation, the fact that democratic states are introducing laws with such clear risks to fundamental freedoms limits both their ability and their authority to resist such trends in the wider world.

The wave of legislation emerging to regulate the global online sphere also creates a fragmented regulatory landscape. For platforms operating on a global scale, this means having to comply with a multitude of (sometimes contradictory) legal requirements across jurisdictions. Consider the recent news that Brazil will now punish platforms for removing legal speech—an unwelcome infection of online regulation by the culture wars (pioneered by Poland) that could limit platforms’ ability to remove disinformation and hate speech. Platforms are thus likely to face penalties for removing legal speech in one jurisdiction and for failing to remove it in others. For smaller platforms with limited resources, navigating this minefield of legislation is particularly challenging. As legislation continues to multiply, the global diversity of platforms will suffer, and smaller platforms that cannot comply will increasingly risk having to cease operation in certain countries.

Next Steps for Policymakers

The public should demand more strategic thinking from policymakers to tackle terrorist use of the internet. Online regulation could be part of the solution, but the issue will not be solved by penalizing smaller platforms or by creating regulation that constitutes censorship. Policymakers should explore to what extent existing legislation and legal instruments might be repurposed for the online space, such as increasing consensus around definitions of terrorist content via improved designation, before adding to the mass of regulations with which companies already have to contend. In our work with tech platforms, we find that when terrorist groups are legally designated as such, it is easier to prompt action. However, the process of designation is slow and it is not always clear what eventual designation means for policing a group’s online presence. Policymakers can elicit improved action by tech companies by consolidating and streamlining the processes of designation and proscription.

Lawmakers must continue to address the root causes of terrorism and radicalization, and acknowledge that content moderation at scale is next to impossible to get right without error. Public statements from policymakers sometimes imply that tech companies have a “stop terrorism” button they simply choose not to press, and that they can, in the words of Evelyn Douek, just “nerd harder” to solve the challenges of content moderation. The reality is much more complex. Terrorism is a threat that long predates the internet and originates offline. In seeking the requisite holistic approach to counterterrorism, we must not overlook the qualities that make the internet vibrant and diverse.


Jacob Berntsson is the Head of Policy & Research at Tech Against Terrorism. Tech Against Terrorism is a public-private partnership supported by the United Nations Counter-Terrorism Executive Directorate (UN CTED) that works with the global tech industry to disrupt terrorist use of the internet whilst respecting human rights.
Maygane Janin is a Senior Research Analyst at Tech Against Terrorism.
