Making Sense of the Supreme Court’s Social Media Decisions
The Supreme Court recently issued two opinions regarding government efforts to influence how social media companies decide what content appears on their feeds: Moody v. NetChoice and Murthy v. Missouri. While both decisions were resolved on fairly technical grounds (the former was remanded so the lower courts could apply the proper analysis for facial First Amendment challenges, the latter dismissed for lack of Article III standing), the opinions nonetheless provided guidance, with significant implications, on how courts should evaluate efforts to regulate or otherwise influence social media companies. Here are the top five takeaways.
A social media company’s content moderation decisions can constitute expressive activity, just like a newspaper deciding which op-eds to publish.
Despite remanding so that lower courts could properly apply the standards for evaluating facial challenges, the NetChoice majority nonetheless found it “necessary to say more about how the First Amendment relates to the laws’ content-moderation provisions” and, in particular, determined that social media companies engage in “expressive activity” when curating their feeds. In reasoning that is arguably dicta, the Court repeatedly analogized social media feeds to the editorial pages of newspapers. In the Court’s words, government “restrictions on the platforms’ selection, ordering, and labeling of third-party posts … interferes with expression” because “expressive activity includes presenting a curated compilation of speech originally created by others.”
On this point, the Court painted with a fairly broad brush: Platforms’ decisions about their feeds will be treated as expressive regardless of why, how, or how often platforms exercise control over the content or appearance of their feeds. And the Court’s view that platforms are engaged in expressive activity would not “change[] just because a compiler includes most items and excludes just a few.” The Court went on to suggest that even excluding a single speaker, while hosting the speech of all others, is sufficient for a platform to be engaged in “expressive activity.” Nor does it matter whether a platform’s content moderation decisions express a “narrow, succinctly articulable message.” In separate concurrences, Justices Samuel Alito, Amy Coney Barrett, and Ketanji Brown Jackson each countered that the Court should require social media companies to make individual showings as to how their feed curation and content moderation decisions reflect expressive editorial judgments, with Justices Alito and Barrett suggesting that the extent to which those decisions are made by human beings versus algorithms trained by artificial intelligence could make a difference. But the majority, led by Justice Elena Kagan, did not appear moved by those points. Indeed, although Justice Kagan opens her opinion with an ode to how social media companies have changed the world, the opinion retreats to the familiar analogy of a print newspaper’s editorial page without much consideration of how the two forms of media may differ.
It will be difficult for governments to regulate social media content moderation.
The Court did not decide whether it will apply strict or intermediate scrutiny to regulations of social media content moderation decisions, but it found that the government does not have a compelling, substantial, or even “valid” interest in “changing the content of the platforms’ feeds.” The problem was that Texas’s objective, though phrased in terms of preventing platforms from engaging in “viewpoint discrimination,” was essentially “to correct the mix of speech that the major social-media platforms present,” which, by its nature, is “very much related to the suppression of free expression.” The Court similarly objected that “[t]he reason Texas is regulating the content moderation policies that the major platforms use for their feeds is to change the speech that will be displayed there.”
The Court did stress that “[m]any possible interests relating to social media” could be substantial or even compelling and that “nothing said here puts regulation of [social media companies] off-limits as to a whole array of subjects.” But it is hard to see how many other interests that governments might assert to regulate social media companies (e.g., reducing misinformation related to public health or elections, or limiting content that is harmful to children) would not likewise seek “to correct the mix of speech that the major social-media platforms present” or “to change the speech that will be displayed” on social media platforms.
NetChoice’s protections would apply equally to platforms that wish to engage in less content moderation than the government would prefer.
While the Court focused primarily on the issue presented in the NetChoice case—that is, the First Amendment problems with requiring platforms to host speech—it also spoke more broadly about the First Amendment problems that are triggered whenever the government’s “objective is to correct the mix of speech that the major social-media platforms present.” As the Court explained, “[a]n entity exercising editorial discretion in the selection and presentation of content is engaged in speech activity,” and “[w]hen the government interferes with such editorial choices … it alters the content of the compilation” and thereby “creates a different opinion page … bearing a different message.” And the Court’s concerns would appear to apply equally to platforms that engage in very little content moderation, as the Court suggested that an editorial choice to host the speech of “all but one candidate” would be viewed as a “focused editorial choice [that] packs a peculiarly powerful expressive punch.” Accordingly, NetChoice would seem to protect not only platforms wishing to engage in content moderation but also those seeking to resist government efforts to compel them to engage in more content moderation.
Governments seeking to influence social media content moderation are better off exerting informal pressure than legislating or regulating.
In stark contrast to NetChoice, which declared invalid any legislative attempt “to correct the mix of speech that the major social-media platforms present,” Murthy left in place informal executive efforts to do just that. There, two states and five individual social media users sued dozens of executive branch officials and agencies, alleging that they pressured the social media platforms to censor their speech in violation of the First Amendment. In short, several executive branch officials had direct contacts with social media companies that were intended to, and in at least some instances did, cause the companies to do more to combat perceived misinformation on their platforms, particularly in relation to elections and public health. According to the district court that enjoined the executive officials’ conduct, the officials crossed the line from persuading the social media companies to censor speech to “coerc[ing]” or “significantly encourag[ing]” them to do so, including by hinting that the government might take action against the social media companies for unrelated alleged antitrust violations or move to reduce their protections under Section 230 of the Communications Decency Act.
The Supreme Court ruled, 6-3, that the plaintiffs lacked standing because they could not show either a sufficient causal connection between the government conduct and any specific adverse actions taken against them by the social media companies, or a likelihood that the district court’s injunction would provide them relief from future harm. Justice Barrett’s majority opinion required a fairly close nexus between government pressure and social media action. As she explained, because plaintiffs’ theories of harm depend on the actions of the third-party platforms, they run into the “bedrock principle that a federal court cannot redress injury that results from the independent action of some third party not before the court” (quotations omitted). While the Court has sometimes accepted standing theories that rely on the independent actions of third parties, it has done so only where the plaintiff can show that those third parties “will likely react in predictable ways” (quotations omitted). Here, while “the record reflect[ed] that the Government defendants played a role in at least some of the platforms’ moderation choices,” the record also “indicate[d] that the platforms had independent incentives to moderate content and often exercised their own judgment.” Accordingly, the Court held that the plaintiffs did not have standing because, inter alia, they could not link “any discrete instance of content moderation” taken against them to any specific government “communications with the platforms.”
Any social media user seeking to bring a similar suit faces the same problem of tying particular adverse platform decisions to particular government communications. Accordingly, government officials need not worry much about such suits going forward, at least so long as they keep their requests to social media companies general and any threats veiled. Similarly, it is not clear what recourse social media companies that are subject to such government pressure tactics may have. A mere government threat, without follow-through, is unlikely to give rise to an Article III injury. And if the government makes good on a threat by initiating an enforcement action, or by taking legislative or regulatory action, it could be very difficult for the social media company to prove the connection. Perhaps the best recourse for a social media company that wishes to resist government pressure is to make precisely the record that Murthy says is necessary to show Article III standing: one attributing particular content moderation decisions to particular government pressure tactics. But social media companies may be resistant to doing so for a host of reasons.
Individuals may have few legal rights to challenge, or even to access, social media content moderation decisions.
NetChoice made abundantly clear that governments cannot regulate social media content moderation for the sake of ensuring that the substance of those decisions treats individual users even-handedly. Such regulation would “forc[e] a private speaker to present views it wished to spurn in order to rejigger the expressive realm.” But, in addition to regulating the substance of social media content moderation decisions, the Florida and Texas laws created procedural rights for individuals whose social media posts are censored, such as requiring a platform to give reasons to a user if it removes or alters her posts. Since content moderation is expressive activity, the Court held, such procedural requirements are permissible only if they are not “unjustified” or so “unduly burdensome” that they chill protected speech. While the fate of the procedural regulations will be determined on remand, the U.S. Court of Appeals for the Eleventh Circuit previously held that requiring platforms to explain millions of content moderation decisions per day was indeed “unduly burdensome.” Still, Justice Alito’s concurrence countered that many platforms already provide a notice-and-appeal process for their removal decisions. Accordingly, whether social media users will enjoy such procedural rights will depend on their feasibility and on whether platforms decide to voluntarily provide their users with procedural rights and transparency—which, to be fair, some platforms have already done.