
Justice Thomas's Misguided Concurrence on Platform Regulation

Berin Szóka, Corbin Barthold
Wednesday, April 14, 2021, 10:30 AM

The justice’s speculations on the possibilities for regulating social media platforms are already changing the tone of the debate on the political right—but he makes a weak argument.

The courtroom of the U.S. Supreme Court. (John Marino, https://tinyurl.com/3f6wnwub; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)

After months of delay, on April 5 the Supreme Court finally granted certiorari and ruled in Biden v. Knight—the case, renamed after President Biden took office, concerning whether the First Amendment prevented then-President Trump from blocking his critics on Twitter. The justices vacated the ruling by the U.S. Court of Appeals for the Second Circuit and instructed the lower court to dismiss the case as moot.

That could have been that. But Justice Clarence Thomas issued a concurrence in the case that could have implications well beyond the Twitter accounts of politicians. The justice’s speculations on the possibilities for regulating social media platforms are already changing the tone of the debate on the political right, where commentators have pointed to unsubstantiated claims of political bias by social media platforms in order to push for greater regulation. Thomas’s concurrence is just a nonbinding statement, issued without briefing, in which one of the court’s nine justices speculates about what legal theories might justify curtailing social media websites’ First Amendment rights—but conservatives are celebrating it as a “roadmap” for “reining in the social media giants.”

It is no such thing. Thomas raises three questions about the legal status of social media websites. First, are they de facto state actors subject to First Amendment restrictions? Second, might they be compelled, as common carriers, to carry speech against their will? And third, might they be barred, as public accommodations, from “discriminating” against certain content or viewpoints? In an effort to promote the idea that the sites’ right to exclude speech might be permissibly curtailed, Thomas treats these questions as though they are unexplored, unsettled, even wide open. As we will explain, however, the answer to all three questions is no.

“Applying old doctrines to new digital platforms is,” Thomas submits, “rarely straightforward.” Yet in the case before him, it really was. When the government opens a space to free expression, it creates a “designated public forum” in which it may not discriminate based on content or viewpoint. At issue in the case was whether Trump, by using his Twitter account for government business, leaving the account open to replies, and then blocking certain users, had discriminated among viewpoints in a designated public forum. The Second Circuit concluded that Trump had done so and that the First Amendment barred him from blocking the individual plaintiffs in the case.

While the government’s petition for certiorari was pending, the parties agreed that the case was moot—though they disagreed about why. The government argued that the mootness arose from Trump’s ceasing to be president. The respondents contended that it arose when Twitter suspended Trump’s account following the Jan. 6 riot.

In Thomas’s view, the suspension of Trump’s account informs the merits of the case. “It seems rather odd,” he proposes, “to say that something is a government forum when a private company has unrestricted authority to do away with it.” But it’s actually not odd at all. Suppose a mayor regularly offered commentary on his administration at events, open to the general public, held in a large conference room at a local Hilton. The room would constitute a designated public forum, yet Hilton, a “private company,” would still retain “unrestricted authority to do away” with that forum. If the mayor used the room to incite a riot, for example, Hilton would have every right to kick him out.

Thomas seems to think that Twitter is not like the Hilton because “digital platforms” are “highly concentrated” and have “enormous control over speech.” Both propositions are dubious. On the one hand, a mayor who got himself booted by Hilton, Marriott and Hyatt hotels might find himself quickly running out of large conference rooms in his city. On the other, Trump can easily speak, and attract widespread attention for his speech, from an alternative social media website, a new network of his own, or even his own personal website.

The key question in the case at hand was whether the “interactive space” in Trump’s Twitter account—where an unblocked user can respond to his tweets—was a designated public forum. As the Second Circuit explained, the “space” clearly met that standard: it was “intentionally opened for public discussion when [Trump], upon assuming office, repeatedly used [his account] as an official vehicle for governance and made its interactive features accessible to the public without limitation.” But Thomas focuses on an entirely distinct question in discussing Twitter and public-forum doctrine: whether the whole of Twitter is a public forum. That question turns not on any action Trump took in regard to his account, but on the very different issue of whether Twitter itself is a de facto state actor.

Thomas acknowledges that because Twitter had “unbridled control of [Trump’s] account,” the First Amendment restrictions that restrain the government, in the operation of a public forum, “may not” apply to Twitter. In fact, in Manhattan Community Access Corp. v. Halleck—a decision Thomas joined—the Supreme Court confirmed that only the equivalent of a state actor can be deemed to operate a public forum, and that a private entity that “opens its property for speech by others is not transformed by that fact alone into a state actor.”

As Halleck explains, “a private entity can qualify as a state actor” in only “a few limited circumstances.” One is when “the private entity performs a traditional, exclusive public function”—and there is nothing either “traditionally” or “exclusively” governmental about running a social media website. Another circumstance is “when the government compels the private entity to take a particular action.” Thomas speculates that “plaintiffs might have colorable claims against a digital platform if it took adverse action against them in response to government threats.” He acknowledges, however, that “no threat is alleged here,” and that it’s “unclear” what sort of government threat could turn the likes of Twitter into a state actor. Thomas cites cases holding that the threat must be so coercive that the private party’s action is “not voluntary” and is in effect “that of the State.”

The public forum doctrine is the sole topic at issue in the case at hand. The doctrine, however, is not even the primary subject of Thomas’s concurrence. Thomas devotes most of his attention to exploring two legal theories that might allow greater government control over content moderation. The first is common carriage. Riffing on a single academic article by Adam Candeub, Thomas suggests that digital media might be like toll bridges, railroads or telephone networks—which must “offer service indiscriminately and on general terms.”

By contrast, newspapers actively curate content. “The presentation of an edited compilation of speech generated by other persons is a staple of most newspapers’ opinion pages,” the Supreme Court has said, describing its landmark decision in Miami Herald Publishing Co. v. Tornillo. Thus, newspapers cannot be compelled to carry speech they find objectionable. Their editorial judgments fall “squarely within the core of First Amendment security,” wrote the court. The same goes for social media, which actively exercise editorial judgment in moderating content—and thus deserve the same constitutional protections as newspapers. As Justice Antonin Scalia once declared: “[T]he basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary when a new and different medium for communications appears.”

On multiple levels, social media sites are more like newspapers than any of the examples Thomas cites. Unlike newspapers or social media, railroads and telephone networks hold themselves out as serving everyone equally, without editorial intervention. In 1974, the Federal Communications Commission (FCC) extended traditional common carriage regulation to nascent cellular telephony—but not to wireless “dispatch services such as those operated by police departments, fire departments, and taxicab companies, for their own purposes.” The U.S. Court of Appeals for the D.C. Circuit upheld the classification of the latter as private carriage: “What appears to be essential to the quasi-public character implicit in the common carrier concept is that the carrier ‘undertakes to carry for all people indifferently….’” Likewise, the FCC’s 1980 Computer II order created the distinction that still undergirds telecommunications law: Services that offer “pure transmission” are common carriers while those offering “data processing” are private carriers. The key, as Thomas explained in his 2005 Brand X decision, is “how the consumer perceives the service being offered.”

Thomas argues that, even absent such perception, common carrier regulation “may be justified … when a business, by circumstances and its nature, … rise[s] from private to be of public concern,” quoting a 1914 decision involving insurance regulation. He also cites an 1894 decision in which telegraph network operators demanded limitations on their liability as a benefit of traditional common carriage regulation. Neither case says when communications platform operators are not merely “conduits,” but speakers with their own speech rights—like newspapers.

Where courts have upheld imposing common carriage burdens on communications networks under the First Amendment, it has been because consumers reasonably expected those networks to operate as conduits. Not so for social media platforms. To understand why, consider net neutrality.

In 2015, the FCC reissued rules requiring most mass-market internet service providers (ISPs) not to block or throttle lawful internet traffic—and formally classifying them as common carriers. The D.C. Circuit upheld the order, and, concurring in the denial of rehearing en banc, the two judges who wrote the panel decision explained that the order did not implicate the First Amendment because it applied only insofar as broadband providers represented to their subscribers that their service would connect to “substantially all Internet endpoints.” This merely “requires ISPs to act in accordance with their customers’ legitimate expectations.” Conversely, the judges wrote, ISPs could easily avoid the burdens of common carriage status, and exercise their First Amendment rights: “[T]he rule does not apply to an ISP holding itself out as providing something other than a neutral, indiscriminate pathway—i.e., an ISP making sufficiently clear to potential customers that it provides a filtered service involving the ISP’s exercise of ‘editorial intervention.’”

Every social media service provides just that kind of filtered service, spelling out detailed terms of service that expressly reserve the right to remove content that violates those terms. Although subscribers to standard broadband service might legitimately expect to obtain access to all lawful internet content, users of a social media service cannot reasonably expect that they may use the service to say whatever they want.

Thomas cites Turner Broadcasting v. FCC, in which the Supreme Court upheld forced carriage under the First Amendment. In that case, the court ruled that cable companies “must carry” local broadcasters’ channels for free. Turner seems to parallel conservatives’ contemporary arguments about Big Tech: “When an individual subscribes to cable, the physical connection between the television set and the cable network gives the cable operator bottleneck, or gatekeeper, control over most (if not all) of the television programming that is channeled into the subscriber's home…. A cable operator, unlike speakers in other media, can … silence the voice of competing speakers with a mere flick of the switch.”

But the comparison between cable companies and social media platforms doesn’t hold water. Prior to the advent of direct broadcast satellite television, cable operators controlled the only pathway for bringing multichannel video programming services to consumers. This was thanks, in part, to exclusive local franchises granted by municipalities, which controlled access to rights of way—clear state action. Today, no platform controls the only pathway to expression, and the government confers no monopoly privileges on any particular tech service.

What’s more, Turner is not, fundamentally, a speech case. Although the law at issue in Turner gave some broadcasters a right to cable carriage (and therefore favored their speech over the cable providers’), the majority nonetheless concluded that the law was not content based. The cable providers had not objected to any content or viewpoints expressed in the broadcasters’ programming; rather, as the majority noted, cable operators suffered an economic loss from not being able to charge for the one-third or so of their channel capacity allotted to broadcasters. The majority therefore applied only intermediate scrutiny.

When it comes to the regulation of speech on social media, however, the presumption of content neutrality does not apply. Conservatives present their criticism of content moderation as a desire for “neutrality,” but forcing platforms to carry certain content and viewpoints that they would prefer not to carry constitutes a “content preference” that would trigger strict scrutiny.

Under strict scrutiny, any “gatekeeper” power exercised by social media would be just as irrelevant as the monopoly power of local newspapers was in Miami Herald. Ironically, Thomas himself wanted to apply strict scrutiny in Turner because, as a dissent he joined put it, Congress’s “interest” in platforming “diverse and antagonistic sources” was not “content-neutral.” Yet a platform mandate for “diverse and antagonistic sources” is essentially what many conservatives are arguing for now. Whether “must carry” for cable was really content neutral in Turner was debatable—the majority saw no “subtle means of exercising a content preference”—but the agenda behind “must carry” for social media is unmistakable.

Thomas asserts, in his Knight concurrence, that common carriage could be imposed on social media companies “especially where a restriction would not … force the company to endorse the speech.” But a second reason Turner did not apply strict scrutiny was its conclusion that forcing cable companies to carry local broadcasters’ channels would not “force cable operators to alter their own messages to respond to the broadcast programming they are required to carry.” Noting that the FCC had first instituted some form of must-carry mandate in 1966, the Supreme Court concluded: “Given cable’s long history of serving as a conduit for broadcast signals, there appears little risk that cable viewers would assume that the broadcast stations carried on a cable system convey ideas or messages endorsed by the cable operator.” Similarly, Thomas alludes to Pruneyard Shopping Center v. Robins, which forced a mall to let students protest on its private property. “The views expressed by members of the public” on the mall’s property, Pruneyard declared, “will not likely be identified with those of the owner.”

Although users cannot reasonably expect social media services to operate as pure conduits, they can and do associate websites with the content they allow. Like newspapers, and unlike telephone networks, social media sites are increasingly held accountable for the consequences of the speech they carry. They are regularly boycotted by users—and, increasingly, by advertisers, under growing pressure from their own investors—for refusing to take down objectionable content. This is business reality for Facebook, as reflected in the multiple references in its most recent quarterly report to “risk factors” related to how the company’s handling of content is perceived. In Facebook’s last quarterly earnings call, CEO Mark Zuckerberg spent most of his time explaining how the company would handle misinformation about the then-impending election.

Section 230 of the Communications Decency Act allows platforms to moderate what shows up on their services without fear of liability—whether they choose to leave content up or take it down. Clearly, Congress did not want social media to be forced to function as mere conduits (like telegraph and telephone networks) for the speech of others.

But Thomas makes another argument, too. “Even if digital platforms are not close enough to common carriers,” he suggests, “legislatures might still be able to treat digital platforms like places of public accommodation.” But in two key cases that Thomas’s concurrence does not address, the Supreme Court ruled that anti-discrimination laws could not trump private entities’ First Amendment rights to speak, to refrain from speaking, or to decline to associate with others’ speech. The same goes for newspapers and social media companies.

In Masterpiece Cakeshop v. Colo. Civil Rights Commission, the Supreme Court ruled that the commission violated the First Amendment’s Free Exercise Clause through its hostility toward the religious beliefs of a baker whom it sanctioned for refusing to create a custom cake for a same-sex wedding because of those beliefs. “[A]s a general matter,” Thomas opined, in a concurrence, “public-accommodations laws do not target speech but instead prohibit the act of discriminating against individuals in the provision of publicly available goods, privileges, and services.” Thomas drew this language from a ruling that, in turn, invoked Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, a landmark decision barring the city of Boston from dictating which signs or messages a private organization had to allow at its St. Patrick’s Day parade. Notably, Thomas cites neither Masterpiece Cakeshop nor Hurley in his Biden v. Knight concurrence.

Much as activists today press for more detailed social media moderation policies, LGBT rights groups had complained that the parade lacked written procedures for selecting participants, and that the procedures that did exist were not applied uniformly—resulting in discrimination against LGBT groups wishing to participate. Although the state courts accepted these objections, the Supreme Court held that in doing so, they had, in effect, improperly turned the parade sponsors’ “speech itself” into a public accommodation. In excluding LGBT signs, the sponsors had decided “not to propound a particular point of view,” the Supreme Court concluded, “and that choice,” whatever the sponsors’ reason for it, lay “beyond the government’s power to control.”

After quoting Miami Herald’s affirmation of a newspaper’s First Amendment right to compile, curate, and edit opinions as it sees fit, Hurley rejected the notion that a parade is “merely a conduit for the speech of participants,” rather than “itself a speaker.” The parade sponsors were “intimately connected with the communication advanced” in the parade. Letting the LGBT groups use the parade to “disseminat[e]” a view “contrary” to the sponsors’ “own” would, the Supreme Court ruled, compromise the sponsors’ First Amendment “right to autonomy over the[ir] message.” Again, the same goes for social media platforms.

So which decision—Turner or Hurley—applies to social media? Are social media platforms more like cable companies, which can be compelled to carry others’ speech, or more like parade sponsors, which cannot? Like the parade sponsors in Hurley, social media operators all refuse to carry certain content and viewpoints. The cable operators in Turner, by contrast, raised no such objections. They had, the record showed, “an incentive to drop local broadcasters and to favor affiliated programmers.” The more “channels over which [they] exercise[d] unfettered control,” therefore, the higher their profits. Their complaint turned on their bottom line; they raised no argument about their right to free expression.

That cable operators never objected to the content of broadcast channels is unsurprising. Broadcast content is usually highly sanitized—policed by the FCC for indecency and by broadcasters themselves for anything that might offend advertisers targeting mass audiences. Halleck expressly declined to address the constitutionality of forcing cable operators to carry objectionable content. If cable operators object to carrying, say, QAnon content, the case will be altogether different from, and harder than, Turner.

Much as parade organizers decide who may march, under what conditions, and in what order, social media sites algorithmically rank, order, and present a newsfeed “parade” of user-generated content. And just as organizers can exclude some would-be marchers whose views are antithetical to the message of the parade, social media moderators ban certain content, users, and groups whose views are antithetical to the message of the site.

Hurley itself raised another important distinction between parades and cable. “Unlike the programming offered on various channels by a cable network,” it said, while discussing Turner, “the parade does not consist of individual, unrelated segments that happen to be transmitted together for individual selection by members of the audience.” Although usually composed of distinct units, Hurley observed, a parade is expressive of “a common theme.”

Do social media sites have such a “common theme”? The platforms themselves clearly think so. Facebook sees itself as “a place for expression,” one that “give[s] people a voice.” Twitter, for its part, says that it aims to enable people to “participate in the public conversation freely and safely.” While these “themes” might make for a dull parade, they are nonetheless the makings of a specific, curated, expressive message—a message that is destroyed if calls for violence, harassment, misinformation and the like are allowed. Hurley should therefore protect the right of social media to decide what messages not to associate themselves with.

These are just some of the legal questions and factual details that Thomas does not address. More questions remain, such as what role the Takings Clause might play in any legislation that follows Thomas’s proposed model; indeed, the dissent Thomas joined in Turner specifically noted that Fifth Amendment issues would have to be addressed before cable networks could be treated as common carriers. Only when the arguments Thomas raises make their way to the Supreme Court—perhaps after a state legislature enacts the kind of law he proposes—will the justices have a complete legal and factual record on which to base sound and impartial analysis.

Correction: An earlier version of this article incorrectly attributed two quotes to the Supreme Court’s decision in Miami Herald Publishing Co. v. Tornillo. The article has been updated to reflect that the quotes were from a later decision's description of Miami Herald.


Berin Szóka is President of TechFreedom, a think tank dedicated to technology law and policy. Before founding TechFreedom in 2010, Berin was a Senior Fellow and the Director of the Center for Internet Freedom at The Progress & Freedom Foundation. Previously, he practiced telecommunications and Internet law at Latham & Watkins LLP and Lawler Metzger Milkman & Keeney, LLC, and clerked for a federal district judge. He is a graduate of the University of Virginia School of Law.
Corbin Barthold is Internet Policy Counsel at TechFreedom, a think tank dedicated to technology law and policy. Corbin has also served as Senior Litigation Counsel at Washington Legal Foundation, a public-interest law firm in Washington, D.C., and as a partner at Browne George Ross LLC, a litigation boutique in Los Angeles. He lives in the San Francisco Bay Area.
