
Zuckerberg’s New Hate Speech Plan: Out With the Court and In With the Code

Evelyn Douek
Saturday, April 14, 2018, 12:00 PM


Facebook CEO Mark Zuckerberg testified before Congress this week for over ten hours, but he had very little new to say. The overwhelming theme of the questions from lawmakers on the Senate Judiciary and Commerce committees and the House Energy and Commerce Committee was that, as Senate Judiciary Chairman Chuck Grassley put it, “the status quo no longer works.” Consensus, or even concrete proposals for what to do about it, however, were noticeably lacking.

Zuckerberg did shed some light on Facebook’s current thinking about how it will combat hate speech on its platform. The plan epitomizes technological optimism: in five to ten years, Zuckerberg said, he expects artificial intelligence will be able to proactively monitor posts for hateful content. In the meantime? Facebook is hiring more human content moderators.

Other than that, there was no indication of any significant changes to current practice. Zuckerberg said, “We’re working as quickly as we can … But some of this stuff is just hard.” Or in the language of social media: ¯\_(ツ)_/¯

Zuckerberg did not repeat the thought-bubble he floated earlier this month that a “Supreme Court” of Facebook might be set up to make calls on contested moderation decisions. In this week’s telling, those calls would be made by code instead.

This aspect of Zuckerberg’s testimony was an outlier during the hearings. The overall public-relations strategy appeared to be to make as little news as possible by releasing a flurry of announcements in the preceding days: launching a data-abuse bounty; creating an independent research group to study Facebook’s effects on democracy; committing to new political ad transparency and accountability measures; tightening up its data practices; making its privacy policies clearer; and removing dozens more accounts and pages created by the Russian “troll factory” known as the Internet Research Agency. These recent changes allowed Zuckerberg to answer many questions from lawmakers without providing new information: He simply detailed previously announced initiatives instead.

But hate speech is indeed, as Zuckerberg said, hard. I wrote last week about why hate speech presents a particularly intractable problem for global platforms like Facebook: Legal rules and norms on what constitutes hate speech vary vastly around the world, but almost all cases have in common the fact that evaluation of hate speech is highly contextual. This is a particularly difficult combination for a platform, like Facebook, attempting to do content moderation at scale.

On the other hand, it is a problem that cannot be ignored. The violence occurring in Myanmar—which U.N. investigators have accused Facebook of facilitating through its role in the spreading of hate speech—provides a particularly potent example of the tragic costs of failing to address the issue. And hate speech is not just a serious problem; it’s also a pervasive one. But experts—including those who work for the major technology platforms—suggest that faith in artificial intelligence is misplaced. Why, then, did Zuckerberg profess such ardent commitment to it?

The False Promise of Artificial Intelligence

In his testimony, Zuckerberg formally acknowledged that Facebook is “responsible” for the content on its platform. While this got some media attention, it is more an acknowledgment of reality than a concession and, therefore, came with little cost. Accepting responsibility in general terms does not translate into legal liability. That Facebook feels some sense of responsibility for content was already evident from its small army of content moderators, which it is committed to increasing to 20,000 by the end of 2018. Indeed, one of the pillars of U.S. internet law—Section 230 of the Communications Decency Act—is premised on the idea that platforms should be shielded from legal liability exactly so as to encourage them to be “Good Samaritans” and make efforts to clean up their services.

Zuckerberg said on Tuesday that he was “not that familiar” with the legal language of Section 230, but from his testimony it seems he fully embraces its underlying ethos. It’s no longer enough to just build tools, he said—Facebook needs to take a broader view of its responsibility and “a more proactive role in policing our ecosystem.” The platform that he built from his oft-mentioned dorm room has relied on a “reactive” model, where users could flag problematic content and Facebook would then evaluate it for removal. The platform is very different now, Zuckerberg said, and must proactively remove bad content before it spreads.

A.I. is the bedrock of this model. In certain areas, Zuckerberg said, the implementation of this model has been “very successful”: for example, 99 percent of Islamic State and al-Qaeda content on Facebook is taken down by A.I. before any user sees it. But others see a less rosy picture. Civil society groups have contested this success rate, and during the hearing, Rep. Susan Brooks (R.-Ind.) drew Zuckerberg’s attention to a number of terrorist pages discovered in the week before the hearings. Others have noted that the indiscriminate nature of algorithmic removal means that it is overinclusive: Facebook is not only removing terrorist content but also posts by civilians, activists and journalists that could be used as evidence of war crimes. So the success story presented by Zuckerberg does not actually inspire much confidence in artificial intelligence’s capacity to handle the more difficult, more context-dependent case of hate speech.

Zuckerberg stated his belief that artificial intelligence will be able to develop the necessary understanding of “linguistic nuances” to identify context-dependent hate speech. But there are good reasons to be skeptical of this claim—reasons voiced not only by a legal community well-versed in the decades of debate over how to define hate speech, but also by those most familiar with AI’s technical capacities. At a major conference on content moderation only a few months ago, representatives from the major platforms stressed the continued need for human moderators to determine the context of speech, as noted by Data & Society researcher Robyn Caplan. Zuckerberg did leave open the option that AI would simply flag content for review by human moderators rather than automatically remove it. But in a situation in which human moderators are making moderation decisions thousands of times a day, the design of the gatekeeper algorithm will be highly consequential.
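To see why, consider a minimal, hypothetical sketch of the threshold logic such a gatekeeper might use. Nothing below describes Facebook’s actual system; the score, thresholds and actions are illustrative assumptions only.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    AUTO_REMOVE = auto()   # taken down before any user or moderator sees it
    HUMAN_REVIEW = auto()  # queued for a human moderator's judgment
    LEAVE_UP = auto()      # never surfaced for review at all

@dataclass
class GatekeeperConfig:
    # Hypothetical thresholds: where these lines are drawn determines which
    # posts a human ever sees and which are silently removed or ignored.
    auto_remove_threshold: float = 0.95
    review_threshold: float = 0.60

def gatekeep(hate_score: float, config: GatekeeperConfig = GatekeeperConfig()) -> Action:
    """Route a post based on a classifier's estimated probability that it is hate speech."""
    if hate_score >= config.auto_remove_threshold:
        return Action.AUTO_REMOVE
    if hate_score >= config.review_threshold:
        return Action.HUMAN_REVIEW
    return Action.LEAVE_UP

# Lowering review_threshold sends more borderline speech to humans; raising
# auto_remove_threshold shrinks the set of posts removed without any review.
print(gatekeep(0.97))  # Action.AUTO_REMOVE
print(gatekeep(0.70))  # Action.HUMAN_REVIEW
print(gatekeep(0.30))  # Action.LEAVE_UP
```

Under a design like this, everything scoring below the review threshold never reaches a human at all, so the choice of where to draw the lines is itself a policy judgment, made in advance and embedded in code.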

Even assuming technical capacity to understand nuance and context, there are three more fundamental, overlapping problems with using artificial intelligence to moderate hate speech proactively.

First, proactive and preemptive removal of content might not be inherently desirable precisely because it hides what is being censored. With terrorist propaganda and child pornography, there is a more compelling argument to be made that it is appropriate for Facebook to seamlessly remove this content before it’s ever seen. But the case is less clear in politically sensitive areas. Jonathan Zittrain long ago wrote about the many problems that can arise from the “perfect enforcement” of laws that is enabled when “code is law,” to use Lawrence Lessig’s formulation: that is, when rules can be embedded in the hidden architectures created by code. This “perfect enforcement” can amplify and lock in mistakes, prevent a useful interface between the law’s terms and its application that can enable greater public understanding, and make decisions more difficult to challenge.

Zittrain’s concerns are particularly apropos given Facebook’s well-noted lack of transparency around content moderation. It is often difficult to hold platforms accountable for material they remove, because by design fewer people know about that content in the first place. An example of this played out recently in Sri Lanka: Country-wide censorship of anti-Muslim posts and pages by a small group of Sinhala-speaking moderators left people uncertain of what exactly was being removed and what biases the moderators had, and exacerbated concerns that Facebook had been unduly influenced by governmental pressure. There would have been even less visibility into these issues if there had been no decision-making trail at all due to the smooth efficiency of algorithms.

Second, there is the issue of algorithmic bias. A growing area of research has explored how opaque “black box” algorithms can perpetuate existing societal biases, through mechanisms such as the unconscious bias of engineers or skewed training data. In the hate speech context, this may result in the views and values of demographic majorities deciding what should be censored. This would pose a problem within any one country, but it is even more concerning when considering Facebook’s global presence. It could be further compounded by the lack of diversity in Facebook’s workforce, a matter brought up by several members of Congress during the hearings.
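To make the skewed-training-data mechanism concrete, here is a deliberately toy sketch. The posts, labels and dialect marker are invented for illustration and have nothing to do with Facebook’s actual data or models; the point is only that a classifier trained on labels skewed against one dialect learns to treat the dialect itself as evidence of hate speech.

```python
# Toy illustration only: biased labels produce a biased classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training data. Neutral posts containing the (hypothetical) dialect
# marker "yo" were disproportionately labeled hateful by annotators.
posts = [
    "have a great day",                # labeled not hateful
    "this community is wonderful",     # labeled not hateful
    "yo have a great day",             # neutral, but labeled hateful
    "yo this community is wonderful",  # neutral, but labeled hateful
    "i hate this group",               # hateful
    "they should all leave",           # hateful
    "yo i hate this group",            # hateful
    "yo they should all leave",        # hateful
]
labels = [0, 0, 1, 1, 1, 1, 1, 1]

vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(posts), labels)

# The model now treats the dialect marker itself as a signal of hate speech:
# the same neutral sentence scores higher when written in the dialect.
test = ["yo what a lovely morning", "what a lovely morning"]
scores = model.predict_proba(vectorizer.transform(test))[:, 1]
print(dict(zip(test, scores.round(2))))
```

The toy numbers do not matter; the dynamic does. If annotators or training data reflect majority assumptions about what hateful speech sounds like, the model reproduces those assumptions automatically and at scale.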

Finally, and perhaps most importantly, use of artificial intelligence cannot avoid the fundamental truth that deciding what constitutes hate speech necessarily involves a value judgment. That judgment may be made ex ante through the design of the algorithm, but it is inescapable.

This is in fundamental tension with something else Zuckerberg found himself saying many times this week. He is deeply committed to Facebook being a “platform for all ideas” and wants to allow the “broadest spectrum of free expression,” he declared in response to conservative lawmakers who grilled him on whether Facebook was biased against conservative ideas. But to define “hate speech” is to choose what to remove from the “platform for all ideas” and to define those ideas as unacceptable. Good judgment here is hard, and requires weighing a number of things that are difficult to reduce to code: societal norms, history, context and values. Making it harder is that, as Zuckerberg said, this is an area where “society's sensibilities are also shifting quickly.”

So What Was Zuckerberg Thinking?

Zuckerberg is surely more aware than most of the limitations of artificial intelligence and the difficulties of moderating hate speech: He knows the past mistakes of his company and its algorithms, and this week was made to answer for many of them. What explains his AI evangelism, then?

Perhaps Zuckerberg wanted to sound proactive about how he was going to tackle an intractable problem, knowing that lawmakers’ understanding of artificial intelligence technology would be limited enough that they would not challenge him on the technology’s limitations with respect to his company. For many people, especially members of Congress with little technological expertise, artificial intelligence and machine learning is “the amorphous super-technology of science fiction.” It sounds objective—after all, it’s a machine. It is difficult to challenge something you do not understand. Indeed, no member of Congress gave Zuckerberg any serious pushback on his optimism that AI would eventually be up to addressing the issue.

Or perhaps Zuckerberg is genuinely at a loss as to how to deal with hate speech and just wants someone or something to tell him how to define it—last week it was a “Supreme Court,” and this week it was AI. It’s clear that he desperately doesn't want to be seen as making value judgments from Silicon Valley on charged political issues, especially when he was being hammered by conservative members of Congress about Facebook's perceived ideological bias. There were moments in the hearing where Zuckerberg seemed sincerely daunted by the stakes and searching for guidance. A particularly striking statement came when he was under fire from Sen. Ben Sasse (R.-Neb.) about the line between hate speech and legitimate political debate:

As we’re able to technologically shift towards especially having A.I. proactively look at content, I think that that’s going to create massive questions for society about what obligations we want to require companies to fulfil. And I do think that that’s a question that we need to struggle with as a country, because I know other countries are, and they’re putting laws in place. And I think that America needs to figure out and create the set of principles that we want American companies to operate under.

Zuckerberg is right that these are questions each society needs to deal with. But he can’t sit around waiting for “America” or a Supreme Court of Facebook to tell him how to design his algorithm. He seems to want to abdicate responsibility to a degree that’s untenable. During the hearings, Zuckerberg couldn’t even give his own company’s definition of hate speech when asked by Sasse to define it, deflecting with “that’s a really hard question.” He told Rep. Richard Hudson that speech that “might make people feel just broadly uncomfortable or unsafe in the community” was subject to moderation—it doesn’t take much thought to see that this is far too broad a standard for content removal, is not consistent with being the “platform for all ideas” and is certainly not how Facebook operates today.

Zuckerberg said many times throughout the hearings that Facebook is an “idealistic and optimistic company.” But at some point, idealism becomes willful blindness. This problem is not going away, and five to ten years is too long to wait for AI to somehow develop the capacity to make everyone agree on the bounds of acceptable discourse.

A Truly Proactive Approach

Facebook needs to become more proactive about monitoring for hate speech now, not in five to ten years when better artificial intelligence tools may become available. This is especially important in countries with volatile ethnic tensions that manifest in violence. (Facebook arguably also has a special responsibility in developing countries such as Myanmar, in which it effectively created an online public sphere through its “Free Basics” program.) If Facebook is concerned that proactive monitoring of hate speech in these environments would violate the wider norms of freedom of expression held by its U.S. users, it could limit the effect of removal decisions to local users through geoblocking. Facebook generally prefers “borderless” communication, but it has made exceptions in the past—and it should at least do so in situations of ongoing violence.
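Purely as an illustration of the mechanics (the data structures and country codes here are hypothetical, not a description of Facebook’s systems), geoblocking a removal decision rather than applying it globally could look something like this: a post found to violate local hate speech rules is hidden from viewers in the affected countries while remaining visible elsewhere.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    text: str
    # Countries in which local reviewers determined the post violates hate
    # speech rules; an empty set means no geoblock applies.
    blocked_in: set = field(default_factory=set)

def visible_to(post: Post, viewer_country: str) -> bool:
    """A geoblocked removal hides the post only from viewers in the affected countries."""
    return viewer_country not in post.blocked_in

post = Post("p1", "example post", blocked_in={"MM"})  # hypothetical: blocked in Myanmar
print(visible_to(post, "MM"))  # False: hidden for local users
print(visible_to(post, "US"))  # True: still visible elsewhere
```

The design choice this encodes is that the judgment call, and the consequences of getting it wrong, stay local: reviewers with the relevant language skills and context make the decision, and a mistaken removal is contained to the places where the risk of violence justified acting.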

Facebook should also invest more resources in engaging people with local knowledge and language skills who can help make context-sensitive determinations. Zuckerberg suggested during the hearings that Facebook would do so, hiring dozens more Burmese-language content reviewers, working with civil society in Myanmar to take down accounts of “specific hate figures,” and creating a product team to implement (unspecified) product changes in Myanmar and other countries that may have similar issues in the future. This is a start, even if it comes only after repeated public rebukes by frustrated local organizations. “Dozens” more moderators will help, but it’s hard to see how a number that low could be sufficient. What’s more, violence should not have to rise to the level of ethnic cleansing for Facebook to take these steps. Other countries need similar attention.

None of this is to deny that hate speech is hard. But hard problems require a collaborative, patient approach, not merely casting around for a simple fix.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
