Two Calls for Tech Regulation: The French Government Report and the Christchurch Call
The rush to bring law and order to online spaces is well and truly on. Two important documents on the topic of online speech regulation have come out of Paris in the past week alone.
The first is a French government-commissioned report exploring a “general framework for the regulation of social networks.” The mission team that wrote the report spent two months working with representatives of Facebook, which the French government hailed as “unprecedented collaboration with a private operator.” This interim report, submitted to the French secretary of state for digital affairs, and the final report due by June 30 will inform French and broader European regulatory responses to the increasingly pervasive problems of content moderation on social media platforms.
The second document is a nonbinding compact, named “The Christchurch Call to Action,” signed by 18 governments and eight companies so far and unveiled in Paris on May 15. The pledge calls for urgent action to “eliminate terrorist and violent extremist content online” in the wake of the livestreamed Christchurch attacks on March 15.
The two are very different documents. The first is a cautious survey of the thorny issue of how to manage government regulation of speech in the new platform era; the second is a high-level pledge to prevent the kinds of abuse of an open internet that occurred when the Christchurch shooter broadcast his massacre in a way designed to go viral. But they are both evidence of the growing momentum of moves to regulate the major tech platforms.
The French Government Report
The French report is notably more measured than other recent government forays into this area, such as a recent U.K. report on online harms and Australian legislation that criminalized the hosting of “abhorrent violent material.” Rather than blaming social networks for abuses, the report frames the problem in terms of abuses committed by isolated individuals or organized groups, to which social networks have failed to provide an adequate response. Though the report states that this justifies government intervention, it is careful to acknowledge the civil liberties at stake—in particular, the need for government regulation of speech to be minimal, necessary and proportionate in order to comply with human rights obligations. It notes that public intervention in this area requires “special precautions” given the “risk of the manipulation of information by the public authorities.” This is a welcome acknowledgment of one of the trickiest issues when it comes to regulating tech platforms: unaccountable private control of important public discourse is problematic, but so too is government control. Extensive governmental regulation might be a cure worse than the disease.
As an interim report, the document leaves many details to be filled out later. But it bases its initial proposal for regulation around five pillars, the gist of which is as follows:
- Public regulation guaranteeing individual freedoms, platforms’ entrepreneurial freedom and the preservation of a diversity of platform models.
- An independent regulatory body charged with implementing a new prescriptive regulation that focuses on the accountability of social networks, based around three obligations:
  - Algorithmic transparency;
  - Transparency of terms of service and content moderation systems; and
  - An obligation to “defend the integrity of users,” analogous to a “duty of care” to protect users from abuse arising from attempts to manipulate the platform.
- Greater dialogue between stakeholders including platforms, the government and civil society.
- An independent administrative authority that is open to civil society and that, while lacking jurisdiction to regulate online content directly, has the power to enforce transparency and to issue fines of up to four percent of a platform’s global turnover.
- European cooperation.
The key and most interesting pillar is the second one, which describes the approach to regulation of platforms. The report proposes a model it calls “accountability by design,” which seeks to capitalize on the self-regulatory approach already being used by platforms “by expanding and legitimising” that approach. It notes that while self-regulation has the benefit of allowing platforms to come up with “varied and agile solutions,” the current system suffers from a severe legitimacy and credibility deficit. The main reason for this is what the report tactfully calls “the extreme asymmetry of information” between platforms on the one hand and government and civil society on the other. Others have called it “the logic of opacity,” where platforms intentionally obscure their content moderation practices to avoid criticism and accountability. To an extent, this tactic has worked—the French government report notes that without adequate information, observers are reduced to highlighting individual examples of poorly moderated content and are unable to prove systemic failures. For this reason, the report emphasizes that future regulation needs to enforce greater transparency both of moderation systems and of algorithms.
The report then discusses the balance between a “punitive approach”—the dominant model adopted by countries so far, which focuses on imposing sanctions both on those who post unlawful content and on the platforms that host it—and “preventative regulation.” In what is likely a reference to heavily criticized German laws, the report notes that the punitive approach incentivizes platforms to over-censor content to avoid liability and therefore “does not seem to be a very satisfactory solution.” It recommends instead a preventive “compliance approach” that focuses on creating incentives for platforms to build systems that prevent content moderation failures. The report draws an analogy to the financial sector: Regulators do not punish financial institutions simply because they have been used for unlawful behavior such as money laundering but, instead, punish those that fail to implement prescribed prevention measures, whether or not that failure actually leads to unlawful behavior.
The report recommends implementing this approach “progressively and pragmatically” depending on the size of operators and their services, with more onerous obligations for larger platforms. This is a careful acknowledgment of the need to avoid entrenching the dominance of the major platforms by imposing compliance burdens that only the most well-resourced companies can meet.
To date, most discussion of Facebook’s proposed Oversight Board for content moderation has treated the project as a quirky (if promising) experiment. But laws that require platforms to show good-faith efforts to create a robust content moderation system may encourage the use of mechanisms like this, which perform a kind of quality-assurance and transparency-forcing function for a platform’s greater content moderation ecosystem. Facebook’s Board, along with other measures such as increased transparency and human resources, is cited by the report as evidence of the progress Facebook has made in the past 12 months—progress that convinced the authors of the benefits of the self-regulatory model.
Overall, the report reflects a nuanced attempt to grapple with the difficult issues involved in preventing the harm caused by bad actors on social media while adequately protecting freedom of expression. The benefits of the French government’s collaboration with Facebook are evident throughout the report, which notes that the mission learned about the wide range of possible responses to toxic content used by Facebook and focuses on the need to make these more credible through transparency rather than by fundamental rethinking. This contrasts with the adversarial relationship between Facebook and the U.K. government: CEO Mark Zuckerberg refused three invitations to appear before a U.K. parliamentary committee, which responded by calling the company a “digital gangster” in its final report. It is also a contrast with the Australian approach, which creates vague but severe liability in a way that does not seem to appreciate the technical challenges involved and endangers free expression.
The Christchurch Call
Slightly more rushed was the Christchurch Call, spearheaded by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron. Although nonbinding, the Call reflects the newfound urgency felt by governments to address terrorist and violent extremist content online in the wake of the horror in Christchurch.
The Christchurch Call also opens by acknowledging the importance of a free, open and secure internet and respect for freedom of expression. The actions pledged by governments are all very vague and broad, including countering terrorism and extremism through education, enforcing existing laws and supporting responsible reporting. As to future laws, governments commit to consider “[r]egulatory or policy measures consistent with a free, open and secure internet and international human rights law” that will prevent the online dissemination of terrorist and violent extremist content. This could hardly be stated more open-endedly.
These vague pledges show that the Christchurch Call is the start of the conversation, not the end. Indeed, the last line of the Call is a list of upcoming forums, including the G-7 and G-20 summits, where the Call will be discussed further.
Similarly, online service providers commit to taking “transparent, specific measures” to seek to prevent the upload and dissemination of terrorist and violent extremist content, “without prejudice to law enforcement and user appeals requirements, in a manner consistent with human rights and fundamental freedoms.” The suggested measures are equally high level: “technology development, the expansion and use of shared databases of hashes and URLs, and effective notice and takedown procedures.” Service providers also commit to greater transparency, enforcing their terms of service, mitigating the risk posed by livestreaming, reviewing algorithms that may amplify this harmful content and greater cross-industry coordination.
Microsoft, Twitter, Facebook, Google and Amazon all signed the pledge and released a joint statement and nine-point action plan in support. The nine points are all indisputably desirable but, like the Call itself, vague: “enhancing technology,” “education,” and “combating hate and bigotry” are hard to disagree with but could all mean any number of things. Just before the Call, Facebook also announced tighter restrictions on livestreaming, including locking users out of livestreaming for specified periods after violations of its terms of service, which the company says would have prevented the Christchurch shooter from broadcasting his crimes live.
The point on “enhancing technology” will be particularly important. I have previously written about how Facebook’s failure to remove the Christchurch shooting footage in the immediate aftermath of the attack was not for lack of trying but, instead, was due to evasive actions by those wishing to spread it and other technological challenges that became apparent only during the unprecedented events. The current state of detection tools risks both over- and under-removal: Some civil society groups expressed concerns that there is no way for current tools to automatically remove terrorist content in a “rights-respecting way,” given these tools are likely to remove a large amount of legitimate speech and have built-in biases. At the same time, the day after the Call, a researcher still found copies of the video on both Facebook and Instagram.
Tech sector sign-on to the Call is not surprising. The joint statement echoes points made in the days following the Christchurch attack by Microsoft President Brad Smith, who wrote in a blog post titled “A Tragedy That Calls for More Than Words” that the tech sector needed to learn from and “take new action” based on what happened in Christchurch. It was clear even then that the way the horror played out online would be a moment of reckoning for the industry.
Both Smith and the Call look to investing in and expanding the Global Internet Forum to Counter Terrorism (GIFCT). The GIFCT is an industry-created body that maintains a shared database of “hashes”: When a participating company identifies terrorist content, it can add a digital fingerprint of that file to the database, so that other participants can automatically detect when a user tries to upload the same content to their platforms. But as Emma Llansó has argued, the database has long-standing problems:
No one outside of the consortium of companies knows what is in the database. There are no established mechanisms for an independent audit of the content, or an appeal process for removing content from the database. People whose posts are removed or accounts disabled on participating sites aren't even notified if the hash database was involved. So there's no way to know, from the outside, whether content has been added inappropriately and no way to remedy the situation if it has.
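To make the mechanism concrete, the sketch below shows the basic logic of a shared hash database: one platform contributes a fingerprint of flagged content, and another platform checks incoming uploads against the shared set. It is purely illustrative; the names are hypothetical, and production systems such as the GIFCT database rely on perceptual hashes that can match visually similar media, rather than the exact cryptographic hash used here.

```python
import hashlib

# Hypothetical shared database of fingerprints contributed by participating
# platforms. In practice this is a shared service, and the fingerprints are
# perceptual hashes (robust to re-encoding and small edits), not exact digests.
shared_hash_database: set[str] = set()


def fingerprint(file_bytes: bytes) -> str:
    """Compute a digital fingerprint of a file (exact SHA-256, for illustration only)."""
    return hashlib.sha256(file_bytes).hexdigest()


def flag_content(file_bytes: bytes) -> None:
    """A platform identifies terrorist content and shares its fingerprint."""
    shared_hash_database.add(fingerprint(file_bytes))


def screen_upload(file_bytes: bytes) -> bool:
    """Another platform checks an incoming upload; True means it matches
    content a participant has already flagged."""
    return fingerprint(file_bytes) in shared_hash_database


# Illustrative use: platform A flags a file, then platform B detects a re-upload.
flagged_video = b"<bytes of a video identified as terrorist content>"
flag_content(flagged_video)
print(screen_upload(flagged_video))            # True: known match, can be blocked
print(screen_upload(b"<an unrelated video>"))  # False: no match in the database
```

Exact hashing of this kind is easily defeated by re-encoding or trimming a file, which is one reason real systems use perceptual hashing; the sketch also illustrates why the opacity Llansó describes matters, since any entry added to the shared database propagates blocking across every participating platform.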
In calling for the use of the GIFCT to be expanded, the Christchurch Call risks entrenching a body that runs counter to the very “accountability by design” principles laid out in the French government report, although the Call does seek greater transparency in general.
Notably absent from the signatories of the Christchurch Call was the United States, reportedly due to concerns that the commitments described in the Call might run afoul of the First Amendment. It is true, as representatives of civil society have noted, that “terrorism and violent extremism” is a vaguely defined category, one that may be open to abuse by governments seeking to clamp down on civic space. However, the broad wording and voluntary nature of the Call would not have compelled the U.S. to restrict any First Amendment-protected speech: When “considering” future regulations, the U.S. could have based them on First Amendment doctrine. This has led some observers to express disappointment that the United States did not sign the Call.
At the same time, the United States is an international outlier when it comes to freedom of speech: Current doctrine undoubtedly does protect most terrorist and extremist content, and while private platforms can choose to remove this material from their services, the U.S. government generally cannot mandate that they do so. The American refusal to sign the pledge is consistent with its behavior toward global treaties implicating speech rights since the founding of the United Nations. The U.S. famously has a reservation, on First Amendment grounds, to the article of the International Covenant on Civil and Political Rights that prohibits propaganda for war and advocacy of national, racial or religious hatred constituting incitement to discrimination or violence. Long-running attempts by states—most prominently the Soviet Union—to create a treaty outlawing propaganda in the aftermath of World War II were always opposed by the U.S. for the same reasons. And the government is not alone in expressing concerns about the Call’s implications for freedom of expression: Members of civil society also worry that it seeks to push censorship too far into the infrastructure layer of the internet; that legitimate speech (including reporting and evidence of crimes) will be swept up by censorship efforts; and that it will impede efforts at counterspeech, research and education.
Regulation Is Coming
The upshot of the two Parisian documents is clear: Regulation is coming to online spaces. The French government report represents a bottom-up approach, presenting findings after a long period of work with a platform on the ground to learn about how content moderation works in practice. The Christchurch Call comes from the opposite direction, starting with high-level goals and calling for urgent solutions. But both are responding to the same reality: There is a lot of vile content online, and the current approaches to dealing with it are inadequate. Changes are in the near future.