The Eleventh Circuit’s Acceptance of a Consumer Protection Approach to Social Media Regulation
The Eleventh Circuit’s ruling provides important guidance to legislators working on social media laws, but the most important takeaway is the vindication of a consumer protection approach to social media content moderation.
Published by The Lawfare Institute
The Florida and Texas social media laws passed in 2021 were widely seen by both their proponents and their detractors as attempts by conservative forces to discipline social media companies for deplatforming President Trump and for refusing to distribute the New York Post’s Hunter Biden story, which criticized then-presidential candidate Joe Biden on the eve of the November 2020 election. More generally, the bills responded to conservative complaints of unfair treatment on social media platforms.
Both bills called for restrictions on the ability of platforms to engage in unfettered content moderation—the Texas bill banned viewpoint discrimination and the Florida bill required platforms to carry political candidates and journalistic enterprises. Both bills imposed various transparency measures such as mandated disclosure of content standards and explanations of content moderation actions.
Not surprisingly, the tech industry filed First Amendment challenges. District courts in Texas and Florida rejected both laws in their entirety as violations of platforms’ free speech rights and stayed implementation pending further court review. Surprisingly, however, on May 11, the U.S. Court of Appeals for the Fifth Circuit lifted the stay on the Texas bill. The industry filed an emergency appeal to reinstate the stay, which the Supreme Court granted on May 31, in a 5-4 ruling that did not detail its reasoning. Justice Samuel Alito, joined by Justices Clarence Thomas and Neil Gorsuch, wrote a dissenting opinion that referenced but did not endorse the Texas attorney general’s defense of the law’s constitutionality.
On May 23, the U.S. Court of Appeals for the Eleventh Circuit released a ruling on the Florida social media law in connection with an appeal of the lower court stay. It is the court’s first nuanced attempt to come to grips with the First Amendment issues in social media regulation. The court properly found that attempts to regulate social media firms cannot avoid First Amendment challenges by invoking the label “common carrier” as if it were a get-out-of-jail-free card. It used a stringent form of constitutional review to strike down all of the content restrictions in the Florida bill, but a less demanding standard to uphold some of the transparency rules. It allowed the provisions that were likely to withstand constitutional scrutiny to go into effect.
This ruling provides some important guidance to legislators working on social media laws. The most important takeaway is the vindication of a consumer protection approach to social media content moderation. Requirements to disclose content moderation standards are likely to survive a First Amendment challenge, the court ruled, because they provide needed information to social media customers. The ruling also suggests that precisely specified requirements for notices, explanations, and complaint processes would be constitutionally sound ways to hold platforms accountable for running fair content moderation programs. User rights to opt out of content curation, transparency reports, access to data for researchers, and risk analyses to identify potential harms to which social media users might be exposed would probably fit under the consumer protection umbrella as well.
Other measures that rely more on prioritizing user speech rights might very well respond to urgent public needs. I would put in this category the provisions in the state laws banning viewpoint discrimination and providing carriage rights for political candidates and journalistic enterprises, which privilege the interests of social media users in receiving a wide variety of information and perspectives. It would be a complicated legislative task to craft such measures in a way that properly balances the conflicting speech rights involved and avoids unconstitutional vagueness, and the state laws didn’t even make a serious attempt to do so.
The Eleventh Circuit opinion, reflecting mainstream First Amendment jurisprudence, found that such measures do not serve any legitimate governmental purpose and counseled policymakers to look elsewhere to protect social media users. But, in a perceptive Lawfare post, University of Minnesota Law School scholar Alan Rozenshtein says this reflects a “cramped” and “stingy” view of what counts as a legitimate governmental purpose, a view not shared by the Alito dissent in the parallel Texas social media case. Legislation privileging user speech rights might have more of a chance to survive court review than mainstream First Amendment “absolutism” would allow.
Are Social Media Companies Common Carriers?
In broad outline, the Florida social media law provides users with certain carriage and opt-out rights and separately mandates certain platform disclosures. The state’s defense against the First Amendment challenge is that these measures treat platforms as common carriers, and that the state can legitimately do so without triggering any First Amendment review at all. This is because common carriers are not engaged in protected speech but act merely as distributors of the speech of others.
The court rejected this reasoning and ruled that the platforms’ content moderation and organizing activity constitutes speech for purposes of First Amendment review. The court, when looking at the actual conduct of social media companies in organizing and arranging user content, concluded that they “should be treated more like cable operators, which retain their First Amendment right to exercise editorial discretion, than traditional common carriers.” It also rejected the idea that legislation can transform communications companies into common carriers without triggering First Amendment review. “Neither law nor logic,” the court wrote, “recognizes government authority to strip an entity of its First Amendment rights merely by labeling it a common carrier.”
Public interest advocate Harold Feld at Public Knowledge argues, probably correctly, that the court’s treatment of common carriage is historically ill informed and overly broad and that it has a hidden agenda of undermining net neutrality rules. But the court was annoyed that Florida seemed to have “bet everything” on its common carrier argument and had not even tried to meet any standard of First Amendment scrutiny. The court is right that merely uttering the magic words “common carrier” does not and should not absolve legislators from doing the hard work of formulating precise purposes for social media regulation and ensuring that its requirements are appropriately restricted to fulfilling those purposes.
“Must Carry” for Candidates and Journalistic Enterprises
The Florida law prevents social media platforms from banning, downgrading, or amplifying material “posted by or about” political candidates. It also bars platforms from taking action to censor, deplatform, or downgrade a “journalistic enterprise based on the content of” its posted material.
The court rejected both of these “must carry” measures under heightened scrutiny, asking whether they seek to advance a substantial or compelling government interest and whether they do so in a narrowly tailored fashion that restricts no more speech than is necessary to accomplish the purpose.
It found “no legitimate—let alone substantial—governmental interest in leveling the expressive playing field” through these provisions. It accepted the different policy goal, articulated in the Supreme Court’s Turner Broadcasting v. FCC case, of “promoting the widespread dissemination of information from a multiplicity of sources” as legitimate and acknowledged that the Florida law’s “must carry” rules would ensure “that political candidates and journalistic enterprises are able to communicate with the public.” But carriage on social media, the court ruled, is not needed for them to reach the public, since these entities could use “other more permissive platforms, their own websites, email, TV, radio, etc.” to reach their audiences.
The court’s conclusions are probably well within the boundaries of mainstream First Amendment jurisprudence. But as Rozenshtein says, the opinion embraces “a highly restrictive view of what counts as a legitimate government interest” that reflects a “stingy view of what users can legitimately expect from their digital town squares.” He correctly says that this “cramped” view, while consistent with some earlier Supreme Court rulings, is not compelled by these precedents. Rozenshtein also points out that rejecting the poorly drafted “must carry” provision for journalistic enterprises did not require the court to hold that the entire question of the extent to which “social media giants have unfettered control over the digital public square” is devoid of “legitimate public and governmental issues.”
The court’s ruling also is hasty in its reasoning. It completely ignores the political broadcasting rules that require TV and radio stations to carry the uncensored campaign ads of qualified candidates for federal office, even though candidates have alternative outlets. And it barely mentions the actual finding of the Turner case, which required cable carriage of local broadcast signals even though these stations were technically able to reach the public using the frequencies assigned to them by the Federal Communications Commission (FCC).
But this is the preliminary injunction stage, and the Eleventh Circuit might be more thorough when it comes to the merits. As Rozenshtein notes, the dissent from Alito in the parallel Texas case certainly indicates that the Supreme Court will have to be more careful when this case gets there.
Review of Disclosure Requirements
The Eleventh Circuit broke new ground in determining that proper First Amendment review of social media laws might not mean strict or even intermediate scrutiny. The court reviewed the Florida bill’s disclosure requirements under a less demanding standard set out in Zauderer v. Office of Disciplinary Counsel. Under that standard, disclosure requirements must be “reasonably related to the State’s interest in preventing deception of consumers,” must not be “unduly burdensome,” and must not “chill speech.”
The court thought these disclosure provisions required only the disclosure of “purely factual and uncontroversial information” about platform conduct toward their users and the “terms under which” their services will be made available. The less stringent Zauderer level of review is appropriate, the court said, because these measures “impose a disclosure requirement rather than an affirmative limitation on speech.” The court acknowledged that this standard is usually applied “in the context of advertising” but thought it broad enough to cover social media disclosure requirements that “provide users with helpful information that prevents them from being misled about platforms’ policies.”
The Florida law requires platforms to publish their content standards, inform users about changes to these standards, provide users with view counts of their posts, and inform candidates about free advertising. The court ruled that these measures are not unduly burdensome and are reasonably related to the legitimate purpose of ensuring that users who are involved in commercial transactions with platforms are “fully informed about the terms of that transaction and aren’t misled about platforms’ content-moderation policies.”
The court rejected a poorly drafted requirement for notices and explanations of content moderation actions. But a more carefully drafted measure with implementation by a regulatory agency would almost certainly pass the test of being reasonably related to consumer protection without being unduly burdensome.
Consistency and User Choice
The Florida bill requires platforms to allow users to opt out of content curation in order to receive “sequential or chronological posts and content.” It also demands that platforms apply their content moderation authority “in a consistent manner among its users on the platform.” The court reviewed these provisions under heightened scrutiny because they are not disclosures of “purely factual and uncontroversial information” but rather “affirmative limitation[s] on speech.” The court found, for instance, that the consumer opt-out provision would prevent platforms “from expressing messages” through their recommendation and content moderation algorithms.
The court found no government interest in these measures and rejected them. As noted above, Rozenshtein correctly observed that this finding reflects an excessively cramped and stingy view of a legitimate government interest. Moreover, it is oddly at variance with the rest of the ruling, since these measures seek to achieve the same consumer protection purpose the court embraced for disclosure requirements.
The consistency requirement works together with the standards disclosure requirement, seeking to prevent platforms from announcing standards that they do not apply to all users, or that they apply arbitrarily. As I’ve noted before, enforcing a consistency requirement might very well raise constitutional issues. But that is a different matter from finding no consumer interest in the consistent application of published content rules to guide consumer expectations.
The opt-out provision allows consumers to avoid platform algorithms they find distracting, confusing, abusive, or annoying, or that limit their access to material they would like to see. This consumer protection measure is an idea whose time has come around the world: it is contained, for instance, in Article 29 of the European Union’s almost-adopted Digital Services Act and in Article 17 of China’s Internet Information Service Algorithmic Recommendation Management Provisions, which became effective in March 2022.
It would be an irony indeed if, in the name of freedom for social media companies to “express messages” through their algorithms, U.S. social media users had less control over their social media feeds than consumers in Europe and China.
The court’s embrace of consumer protection as a basis for social media rules should allow properly crafted consistency and consumer choice provisions to withstand constitutional scrutiny.
Consumer Protection and the First Amendment
If sustained by the Supreme Court, the Eleventh Circuit’s consumer protection approach to content moderation would be a huge win for social media users. As legal scholar Eric Goldman has pointed out, transparency requirements would be unconstitutional as applied to other media. Newspapers do not have to disclose their editorial standards to anyone, and they can change them without notice. Parade operators don’t have to disclose their criteria for deciding which groups participate in their events. Broadcasters and cable operators, which enjoy lower speech rights than newspapers, don’t have to disclose standards for carrying programs or channels. If legislatures required these entities to publish their standards, courts would likely strike the requirements down as chilling the entities’ speech by limiting their unfettered freedom to print, arrange parades, or choose programming as they see fit.
Indeed, in Washington Post v. McManus, the U.S. Court of Appeals for the Fourth Circuit rejected a Maryland law that required, among other things, websites to display the identity of the purchaser of a political campaign ad as “chilling the speech” of the websites.
So, it is welcome news that the Eleventh Circuit recognized the consumer interest in social media disclosure. But it is not the first time that courts have accepted a consumer protection rationale for restraining the First Amendment rights of communication companies.
The U.S. Court of Appeals for the District of Columbia Circuit relied on a consumer protection rationale in its 2016 decision upholding the FCC’s net neutrality rules. It rejected a First Amendment challenge on the grounds that broadband providers held themselves out as providers of neutral, indiscriminate access to websites, not as purveyors of curated websites filtered by editorial criteria such as being family friendly. The court ruled that broadband providers were not required to adopt such a neutral posture, but once they made promises of neutrality to their users, they were bound by those promises. “The First Amendment,” said the court in explaining this “if you say it, do it” theory, “does not give an [internet service provider] the right to present itself as affording a neutral, indiscriminate pathway but then conduct itself otherwise.”
The D.C. Circuit’s invocation of consumer protection law in the case of broadband companies easily extends to social media firms. Consumer protection law generally prevents companies from promising one thing to their customers and then delivering some different product or service. To the extent that social media companies represent themselves in certain ways, to the extent they promise their users certain things, they necessarily limit their First Amendment right to do something different from what they have promised. Social media companies hold themselves out as platforms for the speech of others subject only to compliance with social media content rules. This creates obligations for transparency and due process. Failure to disclose and apply content rules and to provide due process is consumer deception, a material misrepresentation of the nature of the product, which would affect a consumer’s ability to make an informed choice. As in all cases of consumer deception, social media companies do not have a free speech right to tell consumers they will do something and then not do it.
Conclusion
In its reaction to the Eleventh Circuit’s ruling, the Knight First Amendment Institute properly took a victory lap. The court embraced its nuanced approach that allowed for transparency and disclosure measures that respect platform speech rights, and even accepted the Zauderer standard the institute had recommended to assess disclosure measures. I feel somewhat vindicated as well since the court in effect adopted a version of the “consumer protection approach to content moderation” that I urged three years ago at Georgetown Law and that informed my white paper with the Transatlantic Working Group on Content Moderation Online and Freedom of Expression in 2020 and my report this year for the Centre for International Governance Innovation.
Court review imposes proper discipline on legislatures when they seek to regulate speech. It requires them to justify what they are doing in a precise and rigorous way. Policymakers aiming to regulate social media companies should welcome this court review and prepare for it by carefully specifying the interests they are pursuing and explaining with some particularity how the measures they adopt are needed to serve those interests.
But as Rozenshtein notes, the era of First Amendment absolutism may be coming to a close. He draws attention to the section of the Alito dissent saying that “it is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies.” Depending on how the remaining justices vote, the Supreme Court might very well find that First Amendment rules apply differently to social media companies than to newspapers. And in determining this, the Court might rule that legislatures addressing “the power of dominant social media corporations to shape public discussion of the important issues of the day” have considerably more leeway than if they were dealing with parade organizers.
Policymakers should take advantage of the consumer protection road map laid out by the Eleventh Circuit to craft needed social media regulation in a way that will pass constitutional scrutiny. Treating social media regulation as a necessary means toward effective consumer protection will likely enable it to withstand appropriate and inevitable court review.