
No, Facebook and Google Are Not State Actors

Alan Z. Rozenshtein
Tuesday, November 12, 2019, 8:30 AM

Jed Rubenfeld is incorrect; the Good Samaritan provision of Section 230 does not turn internet platforms into First Amendment state actors.

A stretch of wall covered in chalk marks, reading, "The Facebook Wall: Write Something..." (InVision, CC BY 3.0)


In a provocative recent piece, Jed Rubenfeld argues that Section 230 of the Communications Decency Act of 1996 transforms technology companies into state actors for purposes of the First Amendment. If accepted by the courts, this argument would revolutionize online speech, as Rubenfeld acknowledges; technology companies would no longer have carte blanche to censor content. Rubenfeld’s argument has the benefit of being pleasingly ironic, since the very same law that was meant to protect technology companies from government interference would end up reimposing massive government regulation through application of the First Amendment. Appealing as Rubenfeld’s argument is, it is flawed throughout.

Section 230 has two important provisions. Section 230(c)(1) holds that platforms are not to be “treated as the publisher or speaker of any information provided by” their users. Thus, if someone posts something defamatory about you on Facebook, you can sue that person, but you can’t sue Facebook. Subsection (c)(1) is by far the most litigated and discussed part of the law, but Rubenfeld’s argument focuses entirely on Section 230(c)(2), commonly referred to as the “Good Samaritan” provision, which immunizes platforms from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” For example, if Twitter deletes one of your tweets because it disagrees with the politics behind your message, the Good Samaritan provision prevents you from suing Twitter. By contrast, were the government to prevent you from tweeting in the first place, or to command Twitter to remove the tweet, that would almost certainly violate the First Amendment’s prohibition on “abridging the freedom of speech.”

Rubenfeld’s argument is that Section 230(c)(2) therefore violates a key constitutional law “principle”: “Some kind of constitutional scrutiny,” he argues, “has to be triggered if legislators, through an immunity statute, deliberately seek to induce private conduct that would violate constitutional rights if state actors engaged in that conduct themselves.”

But no such principle exists. Rubenfeld relies mostly on the Supreme Court’s decision in Skinner v. Railway Labor Executives’ Association, but he omits an earlier, and far more relevant, decision: Flagg Brothers, Inc. v. Brooks. In Flagg Brothers, the Supreme Court refused to find state action when a New York law authorized a private party to sell the stored furniture of a woman who, after being evicted from her residence, had failed to pay storage fees. The court refused to hold that the government’s “mere acquiescence in a private action converts that action into that of the State.” Instead, the court held:

Here, the State of New York has not compelled the sale of a bailor's goods, but has merely announced the circumstances under which its courts will not interfere with a private sale. Indeed, the crux of respondents' complaint is not that the State has acted, but that it has refused to act. This statutory refusal to act is no different in principle from an ordinary statute of limitations whereby the State declines to provide a remedy for private deprivations of property after the passage of a given period of time.

Section 230(c)(2) presents the same situation. It does not require internet platforms to censor user content. It simply prevents the government from interfering with such private censorship.

Nor does Skinner help Rubenfeld’s case. In Skinner, the Supreme Court considered federal regulations that let railroads perform drug tests on employees and then share the results of those tests with the government. The court held that the regulatory scheme turned the railroads into “an instrument or agent of the Government” and thus triggered the Fourth Amendment’s prohibition on unreasonable searches and seizures (although the court ultimately upheld the testing as reasonable). Rubenfeld identifies several similarities between the regulations in Skinner and Section 230. Specifically, he notes that Section 230 explicitly preempts state and common law; further, just as the railroad employees in Skinner could not refuse a drug test, users have no way of fighting back when their content is censored.

But Rubenfeld either ignores or underplays the more significant differences between Skinner and Section 230. He overstates the extent to which Section 230, at least as it is currently interpreted by courts and used by companies, expresses “the government’s strong preference for the removal of ‘offensive’ content.” Contra Rubenfeld, the fact that Section 230 permits both content moderation (under subsection (c)(2)) and non-moderation (under subsection (c)(1)) means that there is no single government preference. Rubenfeld argues that “functionally the situation in Skinner was probably no different,” but that’s incorrect. As Skinner observed, “[A] railroad may not divest itself of, or otherwise compromise by contract, the authority conferred by [the regulation]. As the FRA explained, such ‘authority ... is conferred for the purpose of promoting the public safety, and a railroad may not shackle itself in a way inconsistent with its duty to promote the public safety.’” By contrast, technology companies could divest themselves of the Good Samaritan immunity by clearly stating in their terms of service that they will not censor content.

Rubenfeld also underplays an important part of Skinner: that the regulatory scheme facilitated information sharing between the private sector and the government. The regulations provided that, whenever railroads drug tested their employees, the employees would be deemed to have consented to having the results of those tests shared with the government. Rubenfeld claims that something analogous may be going on with Section 230, noting that “we don’t know the extent to which the federal government has requested or even required Google and Facebook to reveal information about users attempting to post objectionable content.” But though the government does request information from technology companies in the course of law enforcement and counterintelligence operations, in no way does Section 230 make that process easier—it does not, for example, lower the requirements for disclosures to the government under the Stored Communications Act, nor does it substitute for the user consent that companies would need before voluntarily sharing information with the government on a regular basis.

Rubenfeld seems to tacitly recognize that his principle is inconsistent with settled constitutional law, since the hypotheticals he offers—which concern government immunization of conduct that would otherwise clearly be illegal—support, at best, a much narrower principle. For example, Rubenfeld imagines a state legislature passing the “Childbirth Decency Act, immunizing against any legal liability individuals who barricade abortion clinics, blocking all access to them.” Or a populist Congress passing an “Email Decency Act [that] immunizes from all legal liability any hackers who break into the CEOs’ email files and transfer them to public databases.” Or an anti-gun Congress passing “the Firearms Decency Act[,]” which “authorizes nongovernmental organizations to hire private security contractors to break into people’s homes in order to seize and dispose of any guns they find there, immunizing all parties from liability.” In other words, the actual principle Rubenfeld is defending is this: State action occurs when the government incentivizes otherwise illegal private action by immunizing it. Barricading abortion clinics is trespass. Stealing someone’s email is hacking. Breaking into people’s homes is burglary. And so on.

Whatever the validity of this narrower principle, it’s simply inapplicable to the case of content moderation. When a private company censors the posts of its users, that’s perfectly legal, irrespective of Section 230, by virtue of the terms of service—specifically, the “community guidelines”—that give platforms the right to censor whatever content they want. Rubenfeld never points to a plausible cause of action that litigants would have to challenge content-moderation decisions, even in the absence of the Good Samaritan provision, especially on platforms that are free for users. In fact, any attempt by the government to impose an affirmative obligation on platforms to host content that they want to remove would run smack into the First Amendment itself, because it would regulate the platforms’ own editorial judgments. In his search for the legal basis of content moderation, Rubenfeld ignores what matters most: background principles of contract and constitutional law.

Rubenfeld misreads Section 230(c)(2)—vastly overstating the role that the Good Samaritan provision plays in day-to-day content moderation—in part because he ignores the statute’s legislative history. Section 230 was enacted as a response to Stratton Oakmont, Inc. v. Prodigy Services Co., in which a New York court found Prodigy, an early internet platform, liable for the defamatory posts of its users because Prodigy partially moderated its platform but had failed to remove the posts at issue. The decision was roundly criticized in Congress as incentivizing internet platforms to not engage in any content moderation, lest they be held liable for the remaining content that got through. Chris Cox, Section 230’s co-sponsor, called the decision “backwards” and stated that the legislation’s Good Samaritan provision was chiefly intended to “protect [internet platforms] from taking on liability such as occurred in the Prodigy case in New York that they should not face for helping us and for helping us solve this problem”—namely, preventing children from accessing pornography.

In other words, the main job of Section 230(c)(2) is to handle those cases in which internet platforms have moderated content incompletely. Admittedly, the text of subsection (c)(2) goes beyond the specific concern that animated the legislation, but even in the rare cases in which the Good Samaritan provision is litigated directly, the disputes concern matters peripheral to Rubenfeld’s core free speech concerns, such as whether content moderation can be challenged as anti-competitive conduct or as causing economic harm to third parties. The key point is that Section 230(c)(2) has never served as the main legal justification for censoring user content. That’s been the job of terms of service.

Even if Rubenfeld’s principle, in either its broad or its narrow form, could be defended, he would have to further establish that “pressure” from the government has played any role, let alone a substantial one, in companies’ content-moderation decisions. Rubenfeld cites some congressional grandstanding—as when one congressman, remarking on hate speech policies, urged companies to “[f]igure it out ... [b]ecause you don’t want us to figure it out for you.” Rubenfeld claims that the government is “threatening these companies with death-sentence regulatory measures, including an antitrust break-up and public-utility-style regulation,” but he doesn’t point to any concrete proposals or formal government action. Most importantly, he ignores the fact that companies have gotten mixed messages from members of Congress and the executive branch, which complain just as often about too much censorship—as when Senate Republicans allege a Silicon Valley conspiracy against conservative views—as about too little. Rubenfeld is right that, at some point, jawboning can become state action, but we’re nowhere near that point.

Finally, there’s the issue of the remedy. Even if the Good Samaritan provision of Section 230 created a First Amendment problem, the proper judicial response would not be to treat technology companies as state actors for First Amendment purposes but, rather, to strike down the Good Samaritan provision. As Rubenfeld himself recognizes, courts have uniformly rejected arguments that platforms are public forums. Reversing course would commit courts to a complex, ongoing project of applying an open-ended constitutional provision to a massive and fast-changing portion of the economy. Why allow Section 230 to make an end-run around settled law when courts could far more easily invalidate the Good Samaritan provision instead? Invalidation would also cause the least disruption to the industry—which could, as noted above, continue to censor content based on its terms of service.

Rubenfeld’s motivation is to preserve free speech in the digital age. This is a worthy project, and, as his follow-up post demonstrates, he has creative ideas for how to make that happen. But there’s a reason why, as Rubenfeld himself admits, the argument that Section 230 turns tech giants into state actors “seems to have escaped litigants and judges alike.” They should continue to avoid the argument.


Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, a senior editor at Lawfare, and a term member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
