The Real Takeaway From the Enjoining of the Florida Social Media Law
A federal judge was right to block Florida’s social media law. But that doesn’t mean the First Amendment bars all government regulation of platform content-moderation decisions.
On June 30, Judge Robert Hinkle of the U.S. District Court for the Northern District of Florida preliminarily enjoined a controversial Florida law targeting social media platforms. Senate Bill 7072, enacted in May, would levy fines and impose other penalties on social media platforms that block or otherwise inhibit content from political candidates and media organizations. The technology news website Ars Technica reported excitedly that the injunction “tears Florida’s social media law to shreds”; other media reports have been similarly complimentary of the decision. While Hinkle was correct to enjoin this particular law, and his novel analysis of the First Amendment position of social media platforms is a welcome contribution, the real lesson of the Florida law is that government restrictions on the content moderation decisions of private platforms are more constitutionally viable than has generally been assumed.
The substance of Hinkle’s order is worth a close look. The first ground he identified for enjoining the law is that much of it is preempted by Section 230 of the Communications Decency Act, specifically the provision of Section 230 that prevents the imposition of liability on social media platforms for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” This “Good Samaritan” immunity is generally assumed to grant platforms effectively unlimited discretion in what they choose to moderate and remove (although some, including Justice Clarence Thomas, have recently argued that immunity for removing content should be construed more narrowly, including when platforms censor political speech).
Hinkle reserved the bulk of his opinion for the First Amendment issues. The most interesting part of the opinion is the judge’s nuanced analysis situating social media platforms along the spectrum of speech platforms, from hands-on publishers to neutral conduits.
On the publisher end are entities that exert substantial editorial control over the speech that occurs on their platforms and, in so doing, come to be identified with that speech. The classic example is a newspaper, which not only individually chooses each story it prints but also gives its imprimatur to those stories as worthy of public dissemination and discussion. As Hinkle notes, in Miami Herald v. Tornillo (1974), the Supreme Court struck down a “right of reply” statute (coincidentally also in Florida) that obligated newspapers that criticized a political candidate to print that candidate’s reply. Hinkle cited two other Supreme Court decisions striking down laws that raised similar First Amendment concerns. In one, the court held that a privately organized parade could not be forced to include a gay rights group—in the court’s reasoning, the parade organizers could not help but be identified with each of the parade participants, so the organizers had to be permitted to exercise discretion in choosing who could participate. In the other decision, the court held that the government could not force a public utility to include the view of a third party with which the utility did not agree in the same mailing envelope that the utility routinely used to communicate with its customers its opinions on matters of public concern.
It makes sense that the First Amendment would provide strong protections in these situations: The sorts of government compulsions in these cases have a high potential to distort public discourse. They can crowd out other speech by consuming a limited resource: with newspapers, editorial capacity; with parades, room for participants; and with mailings, the physical space to deliver a message. Affirmative publication obligations can also disincentivize entities from speaking to begin with. For example, a newspaper may choose not to cover politics at all so as to avoid right-of-reply obligations; parade organizers may decline to hold a parade at all rather than include groups they dislike; and a utility may forgo customer mailings so as to avoid having to include the messages of its opponents. And if affirmative publication obligations create the impression that an entity endorses speech it is required to carry but with which it does not agree, public discourse is distorted because listeners are misled as to who supports what speech.
On the other end of the spectrum of speech platforms are entities that operate as passive and generally available conduits for the speech that flows through them. Here the Supreme Court has upheld laws that require private platforms to give access to would-be speakers, and Florida’s argument in this case relied heavily on those precedents. In Pruneyard Shopping Center v. Robins (1980), the court held that a privately owned shopping mall did not have a First Amendment right to exclude members of the public who were canvassing for petitions, leaning primarily on the grounds that the shopping mall remained free to disavow any connection with the canvassers and that “the views expressed by members of the public in passing out pamphlets or seeking signatures for a petition thus will not likely be identified with those of the owner.” Similarly, in Rumsfeld v. FAIR (2006), the Supreme Court upheld a law that denied funding to universities that did not permit on-campus military recruiters (usually because the universities objected to the then-ban on gays in the military); it held that the funding condition did not infringe on the First Amendment rights of universities because universities were free to communicate their opposition to the military policy and a reasonable observer was unlikely to confuse the presence of a recruiter on campus with the university endorsing the recruiter’s policies.
The logic of these cases is the natural converse of the logic of cases like Tornillo. Here, the government action increased the amount and diversity of speech—there was little chance that the private entities would shut down so as to avoid having to host speech they disagreed with, and there was little danger of misleading the public as to who actually supported the speech in question.
Combining these two groups of cases gives a set of factors that we can use to determine when a government restriction on a platform’s ability to exclude speech is constitutionally problematic. And the question thus facing Hinkle was where on the spectrum of private speech platforms to put social media companies like Facebook and Twitter. His conclusion—that social media platforms fall somewhere in the “middle” of this divide and that “it cannot be said that a social media platform, to whom most content is invisible to a substantial extent, is indistinguishable for First Amendment purposes from a newspaper or other traditional medium”—is the most important part of the opinion, because it recognizes, in ways that legal and political discourse around online content moderation rarely has, that social media platforms cannot easily be shoehorned into traditional First Amendment rules. Rather, thinking about the legal architecture supporting content moderation requires going back to the underlying principles of the First Amendment—its goal of increasing individual autonomy, creating a marketplace of ideas, and encouraging democratic self-government—and crafting a new set of doctrinal rules for these new kinds of speech platforms.
In his attempt to undertake that analysis for social media companies in the context of the Florida law, Hinkle got it partly right and partly wrong. On the one hand, he identified the most constitutionally problematic features of the Florida statute. For example, under the Florida law a social media platform may not “censor” any “journalistic enterprise based on the content of its publication or broadcast,” where censorship includes “post[ing] an addendum to any content or material posted by a user.” But this means that a social media platform could not attach a disclaimer to content that it finds offensive or with which it does not want to be identified. Platforms couldn’t even label content, which has become an important tool in the fight against misinformation and foreign interference. This provision thus offends the First Amendment interest, described above, in ensuring that the public can accurately tell who supports what speech.
Hinkle also convincingly demonstrates the deeply partisan motivations of the Florida government, pointing to Florida Governor Ron DeSantis’s signing statement railing against “the leftist media and big corporations.” Given the dangers of propaganda for democratic self-government, First Amendment concerns are at their highest when the government engages in viewpoint-based discrimination, including when it compels speech to favor a particular side of the argument.
Finally, Hinkle highlights the many drafting problems with the law, which “is riddled with imprecision and ambiguity”—not to mention an embarrassing carveout for theme parks, which in the judge’s view shows a legislature that, while purportedly deeply concerned about censorship, is “apparently unwilling to subject favored Florida businesses to the statutes’ onerous regulatory burdens” (and has led to tongue-in-cheek calls for social media platforms to start their own theme parks). Given all these problems, it’s not unreasonable to enjoin the entire law rather than hunting through it for parts that can be salvaged.
But this analysis doesn’t necessarily extend beyond the Florida law. While these problems may have been central to the Florida law, they are not inherent to all attempts to limit the extent to which social media platforms can control their users’ speech. Unfortunately, the extreme problems with the Florida law led Hinkle to overstate the general case against such regulations, in a classic case of hard cases making bad law—or at least bad dicta.
Throughout the opinion, Hinkle undervalues the government interest behind laws limiting content moderation. For example, he dismisses Florida’s invocation of First Amendment values as “perhaps a nice sound bite,” but nothing more, because, under “accepted constitutional principles,” the First Amendment “does not restrict the rights of private entities not performing traditional, exclusive public functions.” The point about state action is correct, but that doesn’t mean that there’s no broader societal First Amendment value in limiting the First Amendment rights of social media platforms. Nor is legislation regulating private companies so as to increase the expressive capabilities of ordinary people a rarity. Rather, it has been so common throughout the 19th and 20th centuries as to constitute, in Genevieve Lakier’s phrase, a whole “non-First Amendment law of freedom of speech.”
Failing to appreciate this point leads Hinkle to mischaracterize the government interest at stake in limiting content moderation. He argues that “leveling the playing field—promoting speech on one side of an issue or restricting speech on the other—is not a legitimate state interest.” Whatever the partisan motives of the Florida law (and they were certainly front and center), part of the concern it was tapping into was not simply that conservative political voices are disadvantaged online (assuming this is even true, which appears doubtful) but rather that the playing field is not level because the social media gatekeepers are so powerful relative to users. As Jack Balkin has observed, while traditional speech regulation occurred when the government tried to censor individuals directly, the rise of digital intermediaries has led to a “triadic” relationship in which social media platforms play a potentially even greater role than do governments in dictating what people can and cannot say. But this means that, as a practical matter, the ability of users to express themselves may depend on the ability of the government to restrain platform censorship. Given the size and power of these platforms, only the government can function as an adequate counterweight. The problem with the Florida law is not that Florida decided to act as that counterweight on behalf of its citizens but, rather, the way it did so.
Perhaps most problematically, Hinkle mischaracterizes the relationship of market power and barriers to entry to the constitutionality of government must-carry regulations, arguing that the concentration of market power among large social media providers does not change the governing First Amendment principles. For this argument, Hinkle relies heavily on Tornillo, which recognized the problem of newspaper monopolies but nevertheless struck down the right-of-reply statute.
But Tornillo is a poor guide for applying the First Amendment to the content moderation decisions of social media platforms. The part of the decision relevant to limiting social media content moderation, a context in which platforms do not face the same limitations of space and editorial capacity as traditional media outlets, is famously conclusory and under-reasoned. Here is the argument in full from Tornillo:
Even if a newspaper would face no additional costs to comply with a compulsory access law and would not be forced to forgo publication of news or opinion by the inclusion of a reply, the Florida statute fails to clear the barriers of the First Amendment because of its intrusion into the function of editors. A newspaper is more than a passive receptacle or conduit for news, comment, and advertising. The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time. Accordingly, the judgment of the Supreme Court of Florida is reversed.
Other than a footnote quoting a scholar worrying that “editorial selection opens the way to editorial suppression,” that’s the entirety of the analysis. But the court provides no reason, except for the footnote’s oblique reference to a slippery-slope argument, for why editorial regulation always undermines First Amendment values. Even editorial “suppression” is sometimes fine. For example, anti-discrimination law prohibits newspapers from carrying racially discriminatory housing advertisements—a limitation that could directly impact newspapers’ bottom lines—but no one seriously argues that this constitutes, or is leading to, impermissible limits on editorial discretion.
This failure to explain when government interference in editorial decision-making is impermissible is particularly problematic for Hinkle’s injunction order, because Tornillo’s expansive rhetoric cannot be read literally as a correct statement of First Amendment doctrine (let alone the First Amendment itself). Five years before Tornillo, the Supreme Court upheld, in Red Lion Broadcasting v. FCC (1969), the “fairness doctrine,” which required TV and radio broadcasters to present both sides of controversial issues in such a way as to “accurately reflect[] the opposing views.” Although the court emphasized that government regulation of broadcast media was inevitable given the limited amount of spectrum and thus the need for government licensing, it also stressed the broader danger of media concentration to public discourse, precisely the stance that Tornillo seemed to reject:
It is the right of the viewers and listeners, not the right of the broadcasters, which is paramount. It is the purpose of the First Amendment to preserve an uninhibited marketplace of ideas in which truth will ultimately prevail, rather than to countenance monopolization of that market, whether it be by the Government itself or a private licensee.
Not only did Tornillo not overrule Red Lion, but it didn’t even mention it, so perfunctory was the analysis.
In relying on Tornillo, Hinkle also ignores the fact that Tornillo’s expansively laissez-faire vision did not survive in the doctrine. Two decades after Tornillo, the Supreme Court upheld, in a pair of cases, a government mandate that cable companies carry local broadcast channels. In the first, Turner Broadcasting System v. FCC (1994), the court distinguished Tornillo on the ground that technological differences between cable and newspapers meant that monopoly in the cable market could justify interference with the editorial decisions of cable operators: Because cable providers control the physical infrastructure that enters consumers’ homes, and the economics of laying cable lead to natural monopolies, government regulation is not per se a violation of the First Amendment. This, however, implicitly abrogates Tornillo’s seemingly categorical dismissal of market concentration and monopoly power as relevant to First Amendment analysis. Thus, as Yochai Benkler has explained, the upshot of the Supreme Court’s must-carry First Amendment cases is this:
Government regulation of an information production industry is suspect. But government nonetheless may act to alleviate the effects of a technological or economic reality that prevents “diverse and antagonistic sources” from producing information and disseminating it widely. The necessary inquiry in each case is whether there is enough factual evidence to support the government’s claim that its intervention is needed to prevent centralization of information production and exclusion of “diverse and antagonistic sources.”
In other words, now courts are just haggling over the price, although of course the extent to which the major social media platforms are actual monopolies remains a contested empirical question.
The ongoing saga of SB 7072 offers several useful lessons, chief among them that First Amendment doctrine presents more opportunities for regulation than has generally been assumed. Some prohibitions on certain forms of content moderation may well be permissible. Notably, Eugene Volokh, who previously took an expansive position on the First Amendment rights of internet companies in the context of search engine results, has argued that certain forms of “common carrier” regulations on social media platforms would be constitutional. And Justice Thomas, no opponent of broad First Amendment rights for corporations, has similarly suggested that the government can regulate social media platforms as public accommodations (a concept similar to common carriage). The growing recognition among conservative elites of the need to limit the First Amendment claims of digital platforms is an indication that the multi-decade expansion of corporate First Amendment rights may finally be slowing.
Difficult questions remain, however, about precisely what sort of government regulation of content moderation decisions is permissible. SB 7072 highlights two issues in particular. The first is the extent to which a government restriction interferes with the platform’s ability to operate. For example, a blanket ban on any sort of moderation would quickly turn platforms into cesspools of misinformation, harassment and pornography. Thus, as Hinkle notes, “In the absence [of] curation, a social-media site would soon become unacceptable—and indeed useless—to most users.”
Similarly, the type of restriction on moderation matters. For example, in addition to preventing platforms from kicking political candidates off the platforms entirely, SB 7072 would prevent platforms from “shadow banning”: not removing content outright but making it less visible in other users’ feeds. But the technological sophistication of social media platforms lies largely in their recommendation algorithms. Thus, government restrictions on shadow banning would have to be so fine-grained and intrusive that they may well impose a disproportionate burden on internet platforms and their users. By contrast, SB 7072’s requirements that platforms be transparent about their content moderation decisions and give users the option to view their feeds in chronological order, rather than as sorted by recommendation algorithms, may be more feasible.
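To see why those two kinds of requirements differ in burden, consider a minimal, purely illustrative sketch in Python (the Post, chronological_feed and ranked_feed names are hypothetical, not drawn from any platform’s actual systems). A chronological feed is a single transparent sorting rule that a user or regulator can verify at a glance, whereas a recommendation-driven feed buries any demotion decisions inside a scoring function, which is exactly the component a rule against shadow banning would have to pry open and police.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime

def chronological_feed(posts: List[Post]) -> List[Post]:
    # Newest-first ordering: one fully transparent rule, with no
    # per-post judgment calls for a regulator to audit.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def ranked_feed(posts: List[Post], score: Callable[[Post], float]) -> List[Post]:
    # Ordering driven by an opaque scoring function (engagement
    # predictions, demotion rules, and so on). Any "shadow banning"
    # happens inside `score`, so a ban on the practice would require
    # inspecting and constraining that function itself.
    return sorted(posts, key=score, reverse=True)
```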
The second difficult legal issue is what role content neutrality should play in First Amendment analysis of government restrictions on content moderation. A key concept in First Amendment law is the distinction between content-neutral regulations (also called time, place and manner regulations) on speech and those that are content based. For example, banning parades in the middle of the night is a content-neutral regulation, whereas banning political parades is content based. Content-based restrictions are subject to strict scrutiny—the highest standard for constitutional review—while content-neutral regulations are subject to lower (but still demanding) intermediate scrutiny. The reason for this difference is that content-based restrictions, even if they purport to be neutral with respect to the actual viewpoint of the speech, more acutely raise, as the Supreme Court has noted, “the specter that the government may effectively drive certain ideas or viewpoints from the marketplace.”
The content-neutral versus content-based distinction is important not only when the government restricts speech but also when the government compels it. In particular, in the case of must-carry requirements, the concern with content-based mandates is that they could crowd out other views and perspectives. Thus, in Turner, the main reason that the Supreme Court evaluated (and ultimately upheld) the must-carry requirement for cable companies under intermediate, rather than strict, scrutiny is that the requirement to carry local broadcast stations did not distinguish between the content that those stations carried. But as Hinkle notes, SB 7072, by focusing especially on political speech, is “about as content-based as it gets.”
To this, one might argue that Turner’s focus on content-neutral must-carry requirements should not carry over to social media, because social media platforms are not limited in the amount of speech they can carry in the way that cable companies are—Facebook doesn’t have to jam content into an hour-by-hour programming schedule. The problem with this argument is that it ignores the actual bottleneck for social media platforms: It’s not the ability of platforms to serve content (which is indeed effectively infinite) but, rather, the ability of audiences to consume it. Digital bandwidth is effectively unlimited, but user attention (and the news-feed real estate that channels it) most certainly is not. Thus, by privileging the speech of political candidates, SB 7072 would crowd out user attention for nonpolitical speech.
The better response is to accept that SB 7072 is content based, but to recognize that this is a feature rather than a bug. By limiting the must-carry requirement only to the speech of political candidates and journalistic institutions, SB 7072 imposes a lower burden on social media platforms than a broader, even if content-neutral, regulation would do. In other words, the more content neutral the must-carry requirement is, arguably the less narrowly tailored it is, at least from the perspective of the financial and administrative costs for platforms. Moreover, if there’s any category of speech that is core to the protections of the First Amendment, it is political speech, so it’s not crazy for SB 7072 to single out political (and journalistic) speech for protection.
The complexity of the legal and policy issues around platform must-carry regulations underscores why it’s important not to cede attempts to deal with the problem of platform censorship to the Trumpist right and other bad-faith actors (not to mention Trump himself, who recently sued the major platforms on the dubious theory that they are state actors). It is true that the Florida law is a poorly thought-out pander to the bruised ego of Donald Trump and his political imitators. And so it’s tempting to crow over this setback for the law’s supporters. But this would be a mistake. It should not be a partisan position that, in a free society, it is intolerable for any entity—whether the government or a private company—to control access to the public square. Whatever their downsides, government restrictions on content moderation, whether enacted or merely credibly threatened, may well be necessary to ensure the health of the digital commons. Ignoring that reality simply leaves the policymaking to those least suited to do it responsibly.