
TikTok v. Garland Opens the Door to Global Censorship

Anupam Chander, G. S. Hans, Edward Lee
Thursday, February 6, 2025, 1:00 PM

The case sets a dangerous legal precedent, empowering governments to police speech on platforms by invoking national security. 

TikTok app icon on a mobile phone. (Source: Solen Feyissa, CC BY-SA 2.0 <https://creativecommons.org/licenses/by-sa/2.0>, via Wikimedia Commons.)

Published by The Lawfare Institute
in Cooperation With
Brookings

TikTok v. Garland marks a low point for the First Amendment. Unless TikTok is sold, the ruling will harm at least 170 million Americans, many of whom depend on TikTok for their income, creative outlet, and discourse. The decision damages the Supreme Court’s own First Amendment jurisprudence, returning to the dark days of the notorious decisions during the early 1900s and the Red Scare. And it sets a dangerous legal precedent not only in the United States but also abroad.

Other countries will be empowered to police speech on Facebook, X, and YouTube—by disguising their speech restrictions as merely an issue of the U.S. companies’ “ownership.” Citing the Supreme Court’s TikTok precedent, those countries (which frequently cite American law in their own high-profile legal decisions) will target speech by imposing fines and forcing the U.S. companies to sell their operations to a company approved by the foreign government. Divest or shut down. That formula is as simple as it is pernicious: By dictating a platform’s ownership, the government controls its algorithm and what content it promotes or demotes (ironically, exactly what lawmakers accused China of doing to TikTok). To justify the law, just say the magic words “national security.”

The Law at Issue

Let’s examine this new formula for government control over social media platforms. Congress enacted the Protecting Americans from Foreign Adversary Controlled Applications Act (the TikTok law) out of dual national security concerns related to foreign propaganda and espionage. As the Supreme Court described, the law makes it “unlawful for companies in the United States to provide services to distribute, maintain, or update the social media platform TikTok, unless U. S. operation of the platform is severed from Chinese control.” Civil liability arose on Jan. 19, 2025, because ByteDance, the China-based company that owns both TikTok, a U.S. company, and the algorithm TikTok uses, did not divest its ownership of the platform.

The law’s challengers believed, however, that it violates the free speech rights of not only TikTok, the U.S. company, but also the millions of American creators who use TikTok to create and share content. (We, along with 32 First Amendment and internet law professors, filed an amicus curiae brief in support of this position.) In a rushed decision, the Supreme Court disagreed, rejecting the legal challenges. A day after the Court’s decision, TikTok shut down. Apple and Google removed TikTok from their app stores to avoid the “hefty monetary penalties” that could reach into the hundreds of billions of dollars for each company. TikTok went dark. Its service was restored only after a written assurance (initially posted on his own social media platform, Truth Social) from then-President-elect Trump “to extend the period of time before the law’s prohibitions take effect.” Although the ultimate fate of TikTok remains uncertain as of this writing, the Supreme Court’s decision carries serious implications—far beyond TikTok.

The Supreme Court’s Analysis

When Congress enacts a law designed to target or affect what speech circulates on social media—as it admittedly did with the TikTok law—the Supreme Court must consider this purpose of Congress. Even a law that is content neutral on its face may mask a content-based motivation or purpose behind the law, thereby necessitating strict scrutiny.

But, here, the Court sidestepped a major reason Congress enacted the law: the U.S. government’s asserted interest in regulating “covert manipulation” of speech on the platform through TikTok’s algorithm. The TikTokkers’ brief devoted nearly 20 pages to this issue. No one denies that Congress enacted this law, in large part, to stop China from controlling or amplifying content on TikTok—what Sen. Mark Warner (D-Va.), a strong supporter of the law, called “Chinese propaganda.”

Yet the Court imagined the “counterfactual” that Congress did not have this content-based reason—stopping speech manipulated by China through the algorithm to spread its message—in enacting the law. With that hand-waving, the Court concluded “Congress would have passed the challenged provisions based on the data collection justification alone.” Under the Court’s logic, in this so-called mixed justification case, Congress’s purportedly legitimate (non-speech) basis in data security immunized Congress’s speech-discriminatory rationale from any constitutional scrutiny. That is, by relying on the supposed “valid” rationale supporting the law, the Court wholly ignored the fact that the law targeted speech that Congress did not like.

But that illogic puts the cart before the horse. Under the Court’s own precedents, as the TikTok Court itself quoted, a law is treated as content based—and therefore subject to strict scrutiny—when the law was “adopted by the government ‘because of disagreement with the message the speech conveys,’” even when the statute is content neutral on its face. After concluding that the law in question was facially content neutral (a contestable conclusion in itself), the Court was still required to review whether the law was “adopted by the government ‘because of disagreement with the message the speech conveys’” and should therefore be subject to strict scrutiny.

Had the Court done so, it would have found ample evidence of a content-based rationale for Congress’s action. Indeed, statements from members of Congress, including one of the law’s sponsors, and the U.S. government’s own D.C. Circuit brief assert the government interest in policing “covert manipulation” of speech by China through TikTok’s algorithm, such as by “manipulating this country’s public discourse” and “amplifying preexisting social divisions.” Rep. Mike Gallagher (R-Wisc.), a co-sponsor of the law, openly admitted that he considered “the control of information” on TikTok, including “propaganda” of “the Chinese Communist Party” as the “greater concern” than data security. So did Senator Warner, who said that content manipulation was “really concerning me the most.” The Knight First Amendment Institute’s amicus brief before the D.C. Circuit in the TikTok litigation collects numerous statements by legislators who supported the law because of their fears about the content—in their words, “propaganda”—on TikTok. Echoing these lawmakers’ fears of Chinese propaganda on TikTok, the U.S. government even told the Supreme Court “[t]he [Chinese government] already has used social media to conduct ‘a campaign of harassment against pro-democracy dissidents in the United States.’”

Trying to control how an algorithm selects or promotes speech is an editorial decision protected by the First Amendment. As Justice Neil Gorsuch wrote in his concurrence in the judgment, “One man’s ‘covert content manipulation’ is another’s ‘editorial discretion.’” Such editorial decisions are protected under the First Amendment, even when “Americans (like TikTok Inc. and many of its users) may wish to make decisions about what they say in concert with a foreign adversary.” As the Court recognized in Moody v. NetChoice, “we have repeatedly held that laws curtailing … editorial choices must meet the First Amendment’s requirements.” And, as Justice Sonia Sotomayor’s concurrence concluded, citing Moody: “TikTok engages in expressive activity by ‘compiling and curating’ material on its platform.” Strict scrutiny should apply—not the intermediate level the Court used.

If Congress is serious about protecting Americans’ data, it should carefully draft and debate a generally applicable data privacy law protecting all Americans. But it should not use data security as an excuse to regulate speech online. Nor should the Court cite data security as a basis to avoid scrutinizing the government’s attempt to alter what content an algorithm amplifies. Otherwise, all the government has to do to achieve its content regulation goals is to tack on a data security goal and ask the courts to permit the censorship slide in a “counterfactual.” Given that almost all online newspapers or websites gather information, data security is a convenient concern for disguising speech restrictions.

The Fallout

The Supreme Court’s ill-reasoned decision will reverberate through time, regardless of what befalls TikTok. It will join the Court’s notorious decisions from the early 1900s, during the first Red Scare. The Court’s deferential approach in those cases to the government’s assertion of national security to police seditious speech in the United States has not withstood the test of time. Put simply, the First Amendment has no exception for national security. As Justice Oliver Wendell Holmes explained in dissent in Abrams v. United States, “as against dangers peculiar to war, as against others, the principle of the right to free speech is always the same.”

The Court’s decision sets a terrible precedent not only in the United States but also abroad. America will be one of the few democratic nations banning TikTok. India banned TikTok in 2020, but only after a skirmish in the Himalayas where Indian troops died. Critics in India contend the TikTok ban has spurred more government censorship. Albania recently banned TikTok for one year. Opposition leaders describe the temporary ban as “politically motivated,” part of the “prime minister’s crackdown on political dissent after a year of popular unrest.” Now, with the precedent set by the United States, other countries may be emboldened to ban TikTok or any app they dislike, including to stop messages promoting democracy.

And the immediate damage of the Supreme Court’s decision will be felt here at home—absent a permanent reprieve from the president following a sale of TikTok’s U.S. operations or an amendment of the law by Congress. TikTok’s U.S. creators and users—all 170 million Americans—will lose their online communities and followings, which many creators built by devoting countless days and nights during the pandemic. Many TikTokkers are ordinary individuals who discovered not only new communities but also new ways to earn a livelihood based on their videos and creative content. Individuals, businesses, nonprofits, and even U.S. political candidates (including both major-party candidates in the recent U.S. presidential election) who have built followings on TikTok may try to migrate to other platforms. But expecting these Americans to start all over and rebuild elsewhere, losing all their TikTok followers—for some creators, in the millions—is heartless.

Indeed, a U.S. shuttering of TikTok will, whatever the government’s protestations to the contrary, permanently destroy a massive amount of speech. True, the act permits users to download their content before the ban goes into effect. But given the act’s hasty timeline, many TikTok users will not know how to do so, thereby losing their videos, their TikTok followers, and their speech forever. Nor is it true that the shuttering of one speech platform can be cured simply by advising speakers to find new ones. Communities such as #BookTok—which has been a boon for U.S. authors and publishers alike—will be forced to try to find alternative venues to coalesce. But none of the other social media platforms comes close to providing an alternative for short-form video, in part because of TikTok’s pioneering algorithm. As one business told the Wall Street Journal, “there’s no substitute.” TikTok’s and TikTok users’ choice of speech algorithm is protected under the First Amendment. But the Court ignored it.


Anupam Chander is the Scott K. Ginsburg Professor of Law and Technology at Georgetown University.
G.S. Hans is a clinical professor of law and founding director of the Civil Rights and Civil Liberties Clinic at Cornell University.
Edward Lee is a professor of law at Santa Clara Law.
