
Social Media Isn’t a Public Function, but Maybe the Internet Is

Nick Nugent
Tuesday, March 14, 2023, 11:31 AM

Revisiting the public function doctrine is central to the task of protecting users from internet exclusion at the hands of private parties.

In 1946, the U.S. Supreme Court found that the First Amendment still applied in the privately owned company town of Chickasaw, Alabama. (cecouchman, https://tinyurl.com/2p9878b8; CC BY-SA 3.0, https://creativecommons.org/licenses/by-sa/3.0/)


If the buzz over the “Twitter Files” has revealed anything, it’s that the prospect of finding state action in the workings of social media companies can be alluring indeed. For those who think today’s biggest platforms have gone too far in pressing their thumbs on the scales of public debate, state action, if found, provides a silver bullet. Simply import the First Amendment’s prohibitions against viewpoint discrimination, and then watch with satisfaction as social media companies are suddenly forced to welcome back countless viewpoints and users they previously banished. No need to muck about with messy private governance boards, build alternative platforms, or persuade capricious centibillionaires to purchase and reform your favorite websites. The First Amendment offers instant results.

But when it comes to private online services, attempts to find state action have generally failed. Clever arguments centered on the public function doctrine, joint action, and even jawboning have seen their day in court, so to speak, and been found wanting. The public-private dichotomy is alive and well when it comes to online spaces, and that seems unlikely to change anytime soon.

And yet, if we consider the current trajectory of online content moderation—in particular, its movement toward deep, infrastructural deplatforming—we begin to see the need for a new theory of state action, one that I believe has not yet been considered. That social media isn’t a public function seems clear enough, but what if the internet itself is? As I argue in a forthcoming article (from which this piece is adapted), the law may soon need to protect users from being excluded from the internet entirely at the hands of private parties. Revisiting the public function doctrine might prove central to that task.

Public Function

Before unpacking this internet-as-public-function idea, it’s worth analyzing previous attempts to find state action in cyberspace. Most public function arguments begin, as they should, with the 1946 case of Marsh v. Alabama, which concerned the plight of Grace Marsh, a Jehovah’s Witness who was arrested for trespassing after she refused to cease proselytizing on the streets of Chickasaw, Alabama. Her arrest stemmed from the fact that Chickasaw was a “company town” fully under the ownership of the Gulf Shipbuilding Corporation. Marsh, therefore, seemingly enjoyed no more right to speak and evangelize on its streets than she would have enjoyed in the lobby of any private office building. Yet the Supreme Court held that Marsh did not shed her constitutional rights the moment she set foot in town. Inaugurating the public function doctrine, the Court explained, “The more an owner, for his advantage, opens up his property for use by the public in general, the more do his rights become circumscribed by the statutory and constitutional rights of those who use it.”

Drawing on this language, some users have accused social media companies of violating their First Amendment rights by virtue of “censoring”—that is, moderating—their posts, tweets, or videos. As one litigant put it, because Google “hold[s] YouTube out to the public as a forum for ‘freedom of expression,’” Google is “engaged in state action under the ‘public function’ test.” Aggrieved users have made similar arguments in court with respect to Facebook’s content moderation policies and, decades earlier, against AOL’s email filtering practices.

But these attempts have failed because Marsh’s language does not in fact provide the current standard for the public function doctrine. After reaching its zenith in Food Employees v. Logan Valley Plaza, which recognized a First Amendment right to protest in private shopping malls, the Court walked back from Marsh’s broad language and from the result in Logan Valley, holding instead that public functions are limited to those activities performed traditionally (and in the Court’s most recent formulation, exclusively) by the state. As a result, only a handful of private activities have met this high bar—among them, administering elections, exercising eminent domain, and operating a municipal park—and none is likely to do so in cyberspace. Even if certain internet functions were historically performed by government agencies, no such functions are now performed exclusively by the state. And social media, the sole creation of scrappy entrepreneurs, has never been the province of the state, traditionally, exclusively, or otherwise.

Joint Action

Under the joint action test, state action may be found if the state “knowingly accepts the benefits derived from unconstitutional behavior.” Although it did not employ the label, one of the more creative arguments that has been offered for finding state action in cyberspace relies on this test. Writing in Lawfare, Jed Rubenfeld asserted that Section 230 (of Communications Decency Act fame) might have converted certain private technology firms into state actors by encouraging them to remove objectionable content while immunizing them for doing so. As Rubenfeld pointed out, a state could not enact a “Firearms Decency Act” that allowed private organizations to confiscate guns as an end run around the Second Amendment. Nor could Congress leverage an “Email Decency Act” to bypass the Fourth Amendment by protecting individuals who hack into CEOs’ inboxes and share the discovered emails with the government. In the same manner, the First Amendment prevents Congress from outsourcing to private platforms the task of censoring protected user speech by granting them immunity from private suit in return.

But as Alan Rozenshtein explained, Section 230 differs from Rubenfeld’s hypothetical decency acts in one crucial respect: Platforms have a preexisting right, under the First Amendment, to take down content they find objectionable. Moreover, the primary consideration that motivated lawmakers to pass Section 230 was the need to immunize providers for carrying content—for example, defamatory statements made by users—that might otherwise subject them to secondary liability and thereby encourage them to over-censor. Section 230, therefore, no more converts platform operators into state actors than do countless other laws that add protections around other forms of lawful conduct.

Jawboning

What about jawboning, which occurs when the government pressures a private party to block content that the government could not itself censor? Unlike other state action tests, jawboning arguments have seen some success when it comes to online actors.

In Backpage.com v. Dart, the U.S. Court of Appeals for the Seventh Circuit held that a sheriff had violated the First Amendment when he implicitly threatened to prosecute credit card companies unless they stopped processing payments for a website that hosted adult content. Results in cases like Backpage have therefore motivated some observers to scour the Twitter Files for evidence of government pressure. Their hope seems to be that if Matt Taibbi and Bari Weiss, two of the journalists at the center of the Twitter Files, simply surface enough emails from the FBI asking Twitter to take action against specific lawful user accounts, then the jawboning threshold will be crossed, and the First Amendment can do its work.

But jawboning has two main drawbacks that prevent it from converting social media into a free speech free-for-all. First, at least under one line of cases, jawboning requires an element of coercion. State actors targeting a private actor must leave that actor with “no choice but to act in a way compelled by the state.” More than strongly suggesting a course of conduct, public officials must issue “enforceable threats.” Thus far, despite juicy morsels from the Twitter Files like Adam Schiff’s eventually successful attempt to get a journalist suspended from Twitter, no evidence has yet surfaced that Twitter felt legally coerced into taking down content that it would have otherwise preferred to leave up.

It’s true, as Genevieve Lakier explains, that another line of cases has “interpreted the First Amendment to prohibit government pressure tactics that seek to intimidate rather than persuade private actors into suppressing objectionable speech, even when those tactics are not so strong as to leave their target with essentially no choice but to comply.” But even if the recently revealed interventions of the Department of Defense, FBI, and other government agencies are analyzed under this more relaxed standard, the second problem is the issue of remedies.

Even if a clean case of social media jawboning presented itself—suppose Attorney General Merrick Garland threatened to prosecute Twitter unless it removed my tweets—I might persuade a court that I had been unconstitutionally censored and get an injunction to have my tweets restored. But if, after the threat of prosecution had long passed, Twitter decided of its own volition to delete my content, it would be within its rights to do so.

In other words, a finding of jawboning might provide an injunctive remedy in individual cases (and for so long as the platform wants to host content that the state is pressuring it to remove). But jawboning by no means converts a private party into a state actor for all time. Put differently, if Donald Trump thinks that a finding of jawboning with respect to some other user’s content will force YouTube to reinstate his own account, he is mistaken.

Viewpoint Foreclosure

Case law aside, there’s another reason why we don’t need the Constitution to protect our tweets from private, sometimes misguided, content moderation: the availability of alternatives. If Twitter suspends your account, you can post on LinkedIn. If LinkedIn boots you, maybe Reddit will be more tolerant. Even if all the major platforms refuse to host your viewpoints, you can revert to secondary platforms like Rumble, ThinkSpot, or Mastodon. These might be poor substitutes for the reach of a YouTube or a TikTok, but they are avenues nonetheless for expressing yourself and potentially growing an audience. Or, as a last resort, if no one will have you, you can build your own website and speak online using that platform. This diversity of options stands in stark contrast to true censorship, which, by virtue of having the force of law behind it, is plenary.

But what if you suddenly found yourself booted from the internet entirely? What if you were prevented even from operating your own website because of the viewpoints you express (or merely allow) on it? If that were to happen, then private “content moderation” might indeed start to look far more like public censorship—at least as to its plenary effects on the internet—perhaps requiring us to take a fresh look at the issue of state action.

To be sure, such a thought experiment seems fanciful. Countless alternative platforms exist, at least some of which will host just about any kind of content. And even if they didn’t, nothing stops you from creating a new platform yourself and setting your own rules.

But while these statements accurately describe the current state of the internet, there are reasons to question whether this state will always exist.

Traditionally, content moderation took place almost exclusively within the application layer, the topmost layer of the internet stack, where user-accessible services like websites, mobile apps, and chat services can be found. Thus, the decision whether to host certain user speech ultimately fell to the operator of the hosting website. If Tumblr wanted to allow or disallow a given conspiracy theory or unpopular viewpoint on its site, Tumblr would make the call, and that would be that. Others might oppose Tumblr’s decision by publicly criticizing it or “voting with their feet,” but critics’ options generally stopped there. Provided it otherwise complied with the law, Tumblr could make its own content moderation decisions.

But within the past five years or so, this dynamic has begun to change as content moderation has moved down into the lower layers of the internet stack. For example, in 2018, Joyent, a cloud hosting provider and subsidiary of Samsung, booted Gab, a fringe social network, for permitting toxic (albeit lawful) user content. Cloudflare, which provides content hosting and security services for websites, likewise terminated security services for 8chan after the latter was exposed as a haven for overtly racist user content, including speech that celebrated violence.

Joyent and Cloudflare—unlike Tumblr, Reddit, and Snapchat—are not application providers. As denizens of the infrastructure layer, they provide important infrastructural resources—computing, storage, security, and the like—on which many applications depend. Thus, Joyent’s and Cloudflare’s actions could be said to represent a kind of second-order cancellation. They took down toxic user speech not by persuading Gab and 8chan to change their content policies but by removing key infrastructural resources those platforms needed to stay online.

To be sure, infrastructural providers have a valid interest in not being associated with platforms that host extremist content (however defined). And booted platforms still have options for staying online. They can find other hosting providers (although the universe of hosting providers is far smaller than that of application providers). Failing that, and assuming they can bear the capital and operating expenses, they can build their own infrastructure. They can purchase their own servers and other hardware on which to run their sites, housing such equipment in their own buildings or in colocation centers. In other words, they can vertically integrate.

But this continued availability of alternatives vanishes once we reach the bottommost layer of the internet stack, what we might call the core infrastructure layer. In this layer reside the key resources that infrastructure providers themselves need to stay online. Think networks, IP addresses, and domain names.
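
To make this layered picture concrete, the rough Python sketch below models the three layers described above, along with the escape routes available to a deplatformed site at each layer. It is an illustrative simplification of my own, not a taxonomy drawn from any provider’s documentation, and the categories listed are generic placeholders rather than specific companies.

# Illustrative model of the internet stack as described in this article.
# The examples are generic categories of providers, not specific companies.
INTERNET_STACK = [
    {
        "layer": "application",
        "examples": ["social networks", "video platforms", "forums"],
        "escape_route": "move to another platform or run your own website",
    },
    {
        "layer": "infrastructure",
        "examples": ["cloud hosting", "CDNs", "DDoS protection"],
        "escape_route": "switch providers or vertically integrate with your own servers",
    },
    {
        "layer": "core infrastructure",
        "examples": ["IP address allocation", "domain names", "physical networks"],
        "escape_route": None,  # as argued below, no practical alternative exists here
    },
]

for level in INTERNET_STACK:
    print(level["layer"], "->", level["escape_route"] or "no realistic escape route")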

Even if deplatforming is now making a home in the mid-stack infrastructure layer (and even if some observers contend that the internet has always had a measure of infrastructural content moderation), it could be argued that the core infrastructure layer (again, the bottommost layer of the stack) has long remained neutral. Yet that too may be starting to change.

In November 2018, incels.me, a forum for “involuntary celibates,” was taken offline permanently after its domain name was revoked. Although domain name registrars like GoDaddy have recently taken to suspending domain names associated with lawful but offensive websites, this particular suspension came courtesy of the entity responsible for administering the .me top-level domain. Because only one such entity controls that top-level domain, incels.me could not simply move to another registrar. It and its website remain offline to this day.
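
From a technical standpoint, registry-level revocation is straightforwardly fatal: the name simply stops resolving, and no change of registrar or hosting provider brings it back. The short Python sketch below, using only the standard library, illustrates the effect; the second hostname is a deliberately unresolvable placeholder under the reserved .example domain, not a test of any particular site.

import socket

for hostname in ("example.org", "revoked-site.example"):
    try:
        # A working domain resolves to an IP address and remains reachable.
        print(hostname, "->", socket.gethostbyname(hostname))
    except socket.gaierror:
        # NXDOMAIN (or equivalent): browsers simply cannot find the site by name.
        print(hostname, "-> does not resolve; the site is unreachable by name")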

In January 2021, a small internet service provider (ISP) in Priest River, Idaho, informed customers that it would block Facebook and Twitter by default. Although the ISP claimed to be responding to customer demand for such blocking, its actions were clearly intended as a counterstrike against popular social networks for suspending then-President Donald Trump following the Jan. 6 Capitol riot. Although ISPs have, on occasion, blocked access to illegal content, and sometimes to lawful content for competitive reasons, the Idaho ISP’s actions broke new ground in the United States by blocking customers’ access to lawful content based solely on ideology—namely, the ISP’s moral objection to Twitter’s and Facebook’s content moderation policies. Such actions mimic those of Telus, Canada’s second-largest telecommunications company, which was found to have blocked access to a website supporting a labor strike against the company.

Most concerning of all, however, was an event in the Parler saga that attracted little public attention. While countless articles were written alternately criticizing or defending Amazon Web Services’s (AWS’s) decision to terminate cloud hosting services to Parler, less notice was paid to a far more consequential development. After being kicked off AWS, Parler eventually managed to find a new host in DDoS-Guard, a Russian cloud provider that has served as something of a refuge for exiled platforms. But in January 2021, Parler again went offline after the DDoS-Guard IP addresses it relied on were revoked by the Latin American and Caribbean Network Information Centre (LACNIC). The result was that DDoS-Guard was relieved of more than 8,000 IP addresses, taking down Parler and doubtless many other DDoS-Guard-hosted websites in the process.

These developments—where content moderation moves into the lowest layer of the internet stack—differ from previous deplatforming efforts in that they present the real possibility of viewpoint foreclosure, the complete banishment of certain lawfully expressed viewpoints from the internet. The reason is simple. Whereas the application and infrastructure layers boast ample alternatives, options begin to dwindle as one moves deeper down the stack. Five regional internet registries alone control the allocation of the world’s IP addresses. Although hundreds of different top-level domains exist, domain name administration ultimately rolls up to a single entity: the Internet Corporation for Assigned Names and Numbers (ICANN). And it matters little that some ISP somewhere in the world might be willing to provide you with commercial internet service to get your website online if your particular geographic region is serviced by only three ISPs, none of which will return your calls.
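
For readers who want to see this concentration for themselves, the RDAP protocol (the modern successor to WHOIS) lets anyone look up which regional internet registry is authoritative for a given IP address. The hedged Python sketch below assumes the public rdap.org bootstrap service is available and redirects queries to the appropriate registry (ARIN, RIPE NCC, APNIC, LACNIC, or AFRINIC); the address 8.8.8.8 is used only as a familiar example.

import json
import urllib.request

def authoritative_registry(ip):
    # Query the rdap.org bootstrap service, which (assuming it is reachable)
    # redirects to the regional registry responsible for this address block.
    with urllib.request.urlopen(f"https://rdap.org/ip/{ip}") as response:
        record = json.loads(response.read())
    # The legacy WHOIS server name, when present, identifies the registry.
    return record.get("port43", "registry not identified in response")

print(authoritative_registry("8.8.8.8"))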

Nor can a beleaguered platform simply vertically integrate down just one more layer to become a “full-stack rebel,” that mythical online creature capable of withstanding all attempts to deplatform it. ICANN is not obligated to grant you your own top-level domain, there are no vacancies among the ranks of regional internet registries, and building your own terrestrial fiber network is far beyond the capabilities of most website operators. Put differently, given the current architecture of the internet, no rebel can truly be full-stack.

This is not to say that all of these core infrastructure providers are currently itching to use their control over internet architecture to stamp out any and all speech they dislike or that they wouldn’t have to go through nontrivial policy changes to become the internet’s content police. But it is to say that the current architecture of the internet theoretically concentrates such control in only a handful of powerful entities and that content moderation has been moving in this direction.

The Internet as a Public Function

All of this brings me to my central claim: If we reach the stage where certain unpopular yet lawfully expressed viewpoints are effectively banished from the internet due to the actions of private intermediaries entrusted with administering the core infrastructural resources of the internet, then the law should step in to prevent that exclusion. Moreover, if statutory law fails to provide for such a basic right to the internet for purposes of lawful expression, then it would not be an unreasonable extension of the public function doctrine to protect that right under the First Amendment. In other words, exclusion from any individual private website should not trigger constitutional scrutiny, but exclusion from the internet entirely should.

Of course, this internet-as-public-function theory suffers from some of the same flaws as previous attempts to find state action in cyberspace. There’s no evidence of joint action in the foregoing examples of deep infrastructural deplatforming. To my knowledge, U.S. agencies aren’t currently pressuring providers to cancel domain names and IP addresses used by offensive websites. And although certain core internet functions were traditionally performed by the state—in particular, the internet was born out of government-sponsored ARPANET and NSFNET research projects, and the Department of Commerce ran the early domain name system through federal contractors—none of these functions is now performed exclusively by the state. Not to mention the fact that users booted from the internet remain free to speak in offline fora, such as parks, streets, and traditional print media.

Still, one way of interpreting the concern animating the Marsh Court was that private parties might so thoroughly replicate and displace seemingly public spaces that, having drawn public activities completely onto private property, owners could then shut down those activities they disapproved of, including constitutionally protected speech and religious exercise. While critics of Marsh could quibble about this or that doctrine of property law, if one simply steps back to look at the broader picture, it seems hard to defend the proposition that if certain Chickasaw residents wanted to exercise their constitutional rights, they just had to move to another town.

The same is true of the internet. Just as the state of Alabama ceded operation of Chickasaw to a private corporation, the federal government incrementally handed over control of the public internet to private parties until no meaningful public cyber-territory remained. As a result, individuals now use the internet solely at the pleasure of private entities that can proscribe constitutionally protected speech and, similar to the Gulf Shipbuilding Corporation in Marsh, literally prosecute noncompliant users for trespass. While users remain free to express themselves offline, we should once again take a step back to ask whether it’s any easier to defend the proposition that if certain users want to exercise their constitutional rights, they just have to leave the internet.


Nick Nugent is a lecturer at the University of Virginia School of Law and the Program Director for the Karsh Center for Law & Democracy. He will join the University of Tennessee as an assistant professor of law in August 2023.
