Internet Platforms: Observations on Speech, Danger, and Money

Daphne Keller
Friday, June 15, 2018, 8:48 AM

Public demands for internet platforms to intervene more aggressively in online content are steadily mounting. Calls for companies like YouTube and Facebook to fight problems ranging from “fake news” to virulent misogyny to online radicalization seem to make daily headlines. British Prime Minister Theresa May echoed the prevailing political sentiment in Europe when she urged platforms to “go further and faster” in removing prohibited content, including through the use of automated filters.

My new essay in the Hoover Aegis series discusses the foreseeable downsides of deputizing private platforms to police internet users’ speech. The first cost is to internet users’ free expression rights. Platforms enforcing unclear laws under threat of stiff penalties will play it safe and err on the side of deletion, and reliance on flawed filtering technologies will only make the problem worse. The essay discusses what limits the U.S. Constitution might place on laws of this sort. While precedent in the internet context is scarce, Supreme Court cases about “analog intermediaries” like bookstores define important constitutional constraints.

The second likely cost is to security. Removal errors by platforms racing to take down Islamist extremist content in particular will predictably affect internet users who are speaking Arabic, discussing Middle Eastern politics, or talking about Islam, adding fuel to existing frustrations with the governments behind such laws and with the platforms that appear to act as their proxies. Lawmakers making serious calculations about ways to counter real-world violence, not just online speech, need to factor in real-world experience with both online radicalization and platform content removal campaigns.

The third cost is to the economy. There is a reason the technology-driven economic boom of recent decades happened in the United States: our platform liability laws had a lot to do with it. Such laws remain particularly important to startups and small companies, which, unlike Facebook or YouTube, cannot avoid legal risk by hiring armies of moderators. Platform liability laws also affect the economic health of ordinary businesses that find customers through internet platforms.

The point is not that platforms should never take content down. They already do take things down, and this is often very much in the public interest. The point is that the upsides of platform removal efforts come with well-known downsides. Neither platforms nor lawmakers can throw a switch and halt the flow of particular kinds of speech—not without substantial collateral damage. Delegating speech law enforcement to private platforms has costs. Lawmakers need to understand them, and plan accordingly, when deciding when and how the law tells platforms to take action.

Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center. Her work, including academic, policy, and popular press writing, focuses on platform regulation and Internet users' rights in the U.S., EU, and around the world. She was previously Associate General Counsel for Google, where she had responsibility for the company’s web search products. She is a graduate of Yale Law School, Brown University, and Head Start.
