Oral Argument Preview: Gonzalez, et al. v. Google and Twitter, Inc. v. Taamneh, et al.
On Feb. 21 and 22, the Supreme Court will hear arguments in Gonzalez v. Google and Twitter, Inc. v. Taamneh
On Feb. 21 and 22, the Supreme Court will hear arguments in Gonzalez v. Google and Twitter, Inc. v. Taamneh, a set of companion cases dealing with the liability of platforms for terrorist material hosted on their services. The first case, Gonzalez, concerns the scope of protections provided by Section 230 of the Communications Decency Act of 1996, which provides that “interactive computer service” providers, like social media platforms, cannot be held liable as the “publisher or speaker” of third-party content on their platforms. Gonzalez addresses whether the algorithm-based, personalized recommendations that platforms like YouTube and Twitter provide to users fall within Section 230’s scope.
Taamneh, meanwhile, concerns the applicability and scope of the Anti-Terrorism Act (ATA) and the Justice Against Sponsors of Terrorism Act (JASTA), which together allow victims of terrorism to pursue primary and secondary liability claims against any entity that assisted with an act of terrorism. In this case, Twitter is being sued for aiding and abetting ISIS by failing to remove terrorist content from its platform and by promoting its circulation. In both Gonzalez and Taamneh, the solicitor general will also be arguing on behalf of the United States.
The majority of media and scholarly attention around the two cases has focused on Gonzalez, which offers an opportunity for the Court to rule on the scope of Section 230, a law whose protections for platforms against liability arising from third-party content have been highly contentious in recent years. Gonzalez is the first Supreme Court challenge to Section 230, which has remained largely unchanged since its enactment in 1996. Often called the “26 words that created the Internet,” Section 230 is credited with allowing the internet to flourish by indirectly promoting free speech online, giving platforms the freedom to choose their own content moderation policies without the risk of legal liability.
But more recently, Section 230 has attracted political attention as members of both parties criticize the law. Conservatives often point to Section 230 as unfairly protecting the platforms as they “censor” conservative voices, while liberals accuse the platforms of being insufficiently aggressive in preventing harmful speech, violence, and democratic subversion. In 2020, the U.S. Department of Justice issued four recommendations for Section 230 reform, which included exempting civil lawsuits brought by the federal government from the statute’s protections, and there are multiple congressional proposals to limit Section 230.
Gonzalez Background
On Nov. 13, 2015, Nohemi Gonzalez—a 23-year-old American studying abroad in Paris—was having lunch with friends when three ISIS terrorists fired into the crowd, killing Gonzalez and several others. ISIS executed multiple attacks throughout the city that day, for which the terrorist organization claimed responsibility through written statements and YouTube videos.
Gonzalez’s family filed a complaint on her behalf alleging that YouTube, a Google subsidiary, “has become an essential and integral part of ISIS’s program of terrorism.” The family sued Google under the ATA as amended by JASTA, which allows American victims of international terrorism to sue any person or entity that “aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism.”
Gonzalez’s complaint focused on YouTube’s content recommendation system, arguing that the system provides assistance to ISIS by allowing ISIS to (a) amplify its messaging by directing users to terrorist content, (b) recruit members via materials shared on the platform, and (c) ultimately instill fear through attacks like the one that killed Gonzalez. The complaint also alleged that YouTube, by generating ad revenue via recommended content, materially benefitted from being the host of terrorist content.
In August 2018, a federal district judge in the Northern District of California granted Google’s motion to dismiss. In a prior 2017 ruling, the district court had dismissed similar allegations because Section 230 immunizes platforms against liability for content created by third parties. In reaching its ruling, the court determined that Google was acting as a “publisher” rather than an “information content provider,” and that its algorithm-based recommendations were therefore immune from liability.
On appeal, the U.S. Court of Appeals for the Ninth Circuit affirmed the lower court’s judgment, ruling that Section 230 immunized the platforms against all claims except those that specifically alleged revenue sharing with terrorist groups. The appeals court heard Gonzalez jointly with Taamneh (along with a third case, Clayborn v. Twitter, which sought secondary liability against Google, Twitter, and Facebook after the 2015 shooting in San Bernardino, California). Readers can find a more detailed description of the Ninth Circuit’s analysis elsewhere on Lawfare.
Taamneh Background
On Jan. 1, 2017, Abdulkadir Masharipov, an ISIS terrorist, fired into the Reina nightclub in Istanbul, Turkey, killing 39 and injuring 69 others. Nawras Alassaf, a Jordanian citizen visiting the city with his wife for the New Year festivities, was among those killed. ISIS ultimately claimed responsibility for the attack.
The Taamneh complaint was filed by Alassaf’s relatives and originally named Twitter, Facebook (since renamed Meta Platforms), and YouTube as defendants. The Taamneh complaint mirrors the Gonzalez complaint, stating claims under the ATA and JASTA and alleging that “ISIS uses [Twitter, Facebook, and YouTube] to recruit members, issue terrorist threats, spread propaganda, instill fear, and intimidate civilian populations,” and that the platforms thereby aided and abetted the ISIS attack. The plaintiffs asserted that the platforms provided material support to ISIS by supplying the infrastructure and services that allow ISIS to “promote and carry out its terrorist activities,” and by failing to proactively monitor and remove terrorist content.
The federal court in the Northern District of California ruled that the plaintiffs failed to adequately plead proximate cause for the direct liability claims and dismissed those claims on the grounds that there was no “direct relationship” between the injuries and the actions of the platforms. The indirect liability claims faced a similar fate: The judge rejected Taamneh’s allegations that the tech platforms had given “substantial assistance” to ISIS or played a role “in any particular terrorist activities.”
Although Taamneh and Gonzalez address practically identical fact patterns, the appeal in Taamneh concerned the applicability of the ATA and JASTA rather than, as in Gonzalez, Section 230. Defendants in both cases raised defenses under Section 230 as well as under the ATA and JASTA at the district court level, but the cases diverged with the judges’ rulings on the motions to dismiss. In Gonzalez, the district judge dismissed the case on Section 230 grounds, whereas in Taamneh, the district judge’s reasoning focused on the plaintiffs’ failure to state a claim under the ATA and JASTA. Therefore, the Ninth Circuit—and, now, the Supreme Court—addressed distinct legal questions in each case, with Gonzalez focusing on Section 230 and Taamneh addressing the ATA and JASTA.
Ruling in Taamneh, the Ninth Circuit determined that the plaintiffs “adequately state a claim for aiding and abetting liability,” based on the six factors laid out in Halberstam v. Welch, a 1983 case that has served as the legal framework for determining secondary liability. Two factors primarily drove the Ninth Circuit’s decision: (a) the “substantial” assistance Twitter allegedly provided to ISIS to enable the Reina attack, and (b) the “defendant’s state of mind.” The appeals court found that Twitter was aware it hosted terrorist content and did nothing to prevent its circulation leading up to the Reina attack.
Gonzalez v. Google: Party and Amicus Briefs
The petitioner’s Supreme Court merits brief argues that, in the context of recommendation systems for third-party content, Section 230(c)(1)’s immunity protections should be interpreted more narrowly than lower courts have interpreted them—potentially exposing Google to liability for the recommendations provided throughout its services.
Gonzalez argues that the lower courts have been persistently overbroad in their interpretation of Section 230’s use of the term “publisher,” erroneously applying that definition “to virtually any activity in which such a publisher might engage, including making recommendations.” Gonzalez instead argues that courts should understand “publisher” using the term’s definition under defamation law—meaning a communicator of information, which can be held liable for the communication of that information. This would narrow the scope of Section 230’s protections because recommending information goes beyond merely communicating it. As Gonzalez writes, “a claim seeking to impose liability for a recommendation would not treat the defendant as a publisher if that recommendation did not involve merely disseminating third party material, or if the claim asserted that the recommendation itself was a cause of the injury to the claimant.”
In addition, Gonzalez suggests that Section 230’s protections should not extend to content that the platforms themselves produce. The brief argues that both the Ninth Circuit, in Gonzalez, and the U.S. Court of Appeals for the Second Circuit, in the 2019 case Force v. Facebook, were wrong to extend Section 230’s protection beyond content that users actually request to media that the platform unilaterally generates—such as YouTube’s recommendations. Following this logic, Gonzalez distinguishes between social media sites and traditional search engines, like Google’s. Search engines, Gonzalez argues, respond to user-requested searches and are thus “interactive computer services” under Section 230(c)(1), while recommendations from social media sites are generated by the sites themselves. In addition, the links that search engines display are managed by the party that generates the underlying website, while the links on social media platforms are “created by the site itself.”
Google’s response brief makes two arguments. First, Google notes that, if the Supreme Court reverses the Ninth Circuit’s ruling in Taamneh (on the basis of limiting liability under the ATA), Gonzalez’s suit against Google would fail—since the cases feature “‘materially identical’” claims—and there would be no need to reach the Section 230 question.
Second, Google asserts that its algorithm-based recommendations are protected under Section 230 and that to rule otherwise would be to treat Google “as the publisher or speaker” of third-party content, which the text of Section 230 forbids. If platforms could be held liable for algorithmic recommendations, Google argues, much of the modern internet would become essentially inoperable. Such a ruling by the Court could have “devastating spillover effects” that would create “a disorganized mess and a litigation minefield,” undermining websites’ ability to function. Were the Court to accept Gonzalez’s position and narrow the scope of Section 230 protections, Google warns, the decision would “upend the internet and perversely encourage both wide-ranging suppression of speech and the proliferation of more offensive speech.” Google also counters the United States’s position (described below), arguing that YouTube’s individualized recommendations convey no “‘implicit message’” to users.
Brief of the United States in Support of Vacatur
In its brief, the government asserts that Section 230(c)(1) is “most naturally read” to protect platforms against liability for content produced by third parties and posted on their platforms, but not against liability for the platforms’ own conduct. The government argues that Gonzalez’s claim does not treat YouTube as the “publisher or speaker” of third-party content. Rather, in the government’s view, YouTube’s recommendations to users convey a “distinct message,” separate from the content of any video posted by a third party on the platform. The brief emphasizes that “a video in a user’s queue thus communicates the implicit message that YouTube ‘thinks you, the [user]—you, specifically—will like this content.’” On this basis, YouTube could theoretically be liable for the recommendations it provides, though not for its hosting of the recommended content. The government argues that the Court should vacate the judgment and remand Gonzalez to the appeals court to consider Google’s potential liability under the ATA for its recommendations alone, separate from the underlying content.
Gonzalez v. Google Amicus Briefs
The amicus briefs tend to fall into three broad categories: (a) briefs identifying technological distinctions between the internet as it existed when Section 230 became law in 1996 and the technologies that have emerged since, (b) briefs focusing on the case’s implications for free expression, and (c) briefs advocating for judicial restraint.
Technological Distinctions
One group of amici focuses on whether technological developments, such as the rise of algorithm-generated recommendations, should make a difference in interpreting Section 230. Briefs filed by Sen. Josh Hawley (R-Mo.), the Counter Extremism Project, and a group of former national security officials argue that tech companies that spread illegal content through an algorithm should not be covered by Section 230. Hawley argues that liability arises from “incendiary content” being highlighted, which, he writes, constitutes “actual knowledge” by tech companies. The Counter Extremism Project claims that Google intentionally “promote[s] divisive, extremist content to generate revenue” with nonneutral algorithms and thus should not be shielded by Section 230 for those recommendations.
A group of members of Congress, led by Sen. Ted Cruz (R-Texas), go even further in questioning the lower courts’ approach to interpreting Section 230. They argue that, correctly interpreted, the term “publisher” in Section 230(c)(1) has a narrow, “merely definitional” construction that provides no immunity protection for platforms. The members argue that Section 230(c)(2) would still provide a liability shield: That portion of the statute precludes liability for actions “voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” But the brief takes the view that only removal of the most egregious material would fall under this protection.
Another group of amici rejects these calls for a more aggressive reading of Section 230. For example, the Computer & Communications Industry Association and its partners argue against the notion that organizing third-party content is an “affirmative endorsement or even adoption of particular content,” writing that such an approach, if adopted, would “render Section 230 meaningless.” Amici argue that so-called targeted recommendations are “merely the results of using dynamic content organization to make digital services more user-friendly.” When a digital service organizes third-party content, it does not transform the content into the service provider’s own content. Amici argue that “accepting petitioners’ theory” would “erase the line between mere organization and affirmative endorsement.”
Free Expression
Several briefs focus on the potential free expression implications of limiting Section 230. Multiple amici argue that movement away from Section 230 protection would dramatically change the basic structure of the internet. The online job marketplaces ZipRecruiter and Indeed, Inc. highlight how online platforms need algorithms to organize the content in their giant databases and argue that such algorithmic sorting has no more communicative value “than does sorting millions of documents into filing cabinets and then indexing the location of those materials.” The Center for Democracy and Technology (CDT) and the American Civil Liberties Union (ACLU), meanwhile, contend that selecting and displaying content is a quintessential activity of traditional publishers: Just as a newspaper picks which stories to highlight for readers, each interactive computer service provider must select which content to display, or users will be overwhelmed.
Further, Columbia University’s Knight First Amendment Institute grants that recommendation algorithms can at times have “pernicious effects,” such as amplifying polarizing content. But without these algorithms, the institute argues, the internet would consist of “useless jumbles of information” on an “almost unimaginable scale.” The organization TechFreedom, supporting Google, points to what it sees as a contradiction in the arguments made by critics of Section 230: While some despise Section 230 for enabling hate speech and misinformation, others detest it for enabling “censorship[.]” These “utterly disparate” imagined end results of “curtailing Section 230,” TechFreedom argues, should make the Court skeptical of taking this step. Reddit argues that its own platform is special because one can find content that wasn’t necessarily sought out, and that its unique position demonstrates “why the work of human beings, supported by automated tools” is the most effective way to “moderate content and maximize the Internet’s usefulness.”
Not all First Amendment groups support Google’s position, however. For example, Free Speech for People argues that Google’s recommendations should be treated as speech produced by the “information service provider” itself, even if they are automated. As a consequence, the group argues, the Court should not interpret these recommendations as deserving of Section 230 protection.
The Integrity Institute and Algotransparency jointly argue for a narrow reading of Section 230, urging the Court to distinguish among the multiple algorithmic systems that platforms use and to ground its opinion in specific factual allegations. Common Sense Media and Meta whistleblower Frances Haugen argue in favor of narrowing the scope of protection that Section 230 provides to platforms, emphasizing the harms caused by Google’s recommendations and the vulnerability of teenage users. The Lawyers’ Committee for Civil Rights Under Law argues for a balance between ensuring that the voices of people of color are not impeded and ensuring that civil rights laws can be fully implemented, with a focus on lower federal court practice.
Judicial Restraint
Numerous amici emphasize judicial restraint: If Section 230 is to be revised, they argue, the job should belong to Congress, not the courts. A group of internet law scholars argues that only Congress should make changes to Section 230 and that the logical conclusion of petitioners’ argument would expose users who amplify third-party content to liability. The scholars emphasize that Congress chose to protect a wide range of speech, that Section 230 offers no support for distinguishing between automated recommendations and the recommended content itself, and that all algorithmic recommendations therefore fall within the statute’s scope.
By contrast, the Institute for Free Speech and Professor Adam Candeub argue that neither party’s interpretation of Section 230’s protections is correct. They urge the Supreme Court to remand to the district court, which should develop the record to determine whether YouTube’s recommendations constitute the platform’s own speech (through the “creation or development of information,” which is precluded from protection under Section 230(f)(3)) or merely provide “transmitted information” to users.
Several parties also note issues of statutory construction. The Cyber Civil Rights Initiative argues for a highly limited reading of Section 230’s protections for platforms, interpreting its limited liability protections as a preemption provision that applies solely to speech-based claims. The group urges the Court to hold platforms liable for third-party defamation if they were aware that the speech was defamatory.
Twitter v. Taamneh: Party and Amicus Briefs
Twitter argues that the Ninth Circuit made “two fundamental errors that cannot be squared with the plain language of the statute,” referring to JASTA and the legal framework set out in Halberstam. First, Twitter writes, the Ninth Circuit incorrectly held that the “generalized assistance” Twitter provided to ISIS terrorists in the form of its regular business services met the liability threshold under JASTA, which requires that the assistance have injured the plaintiff by materially contributing to the “specific act of terrorism” in question. Second, according to Twitter, the Ninth Circuit erred in finding that Twitter knowingly provided assistance, a prerequisite for liability. Twitter argues that simply knowing that there are terrorists among billions of Twitter users is not sufficient to establish that it “knowingly” provided assistance or failed to reasonably moderate its platform to prevent the Reina attack.
Respondents Mehier Taamneh, et al., argue that their complaint establishes liability under JASTA, as the “purpose of JASTA is to provide ‘the broadest possible basis’ for seeking relief against defendants who provided support for organizations that engage in terrorist activities against the United States.” Primarily, Taamneh claims that Twitter aided the general terrorist campaign that enabled the Reina attack by disseminating and promoting content, recruiting new members, and raising money. Taamneh claims that JASTA does not limit the scope of assistance to the attack itself but broadly covers any assistance “knowingly” given to a terrorist organization that committed an attack. Further, Taamneh states that Twitter did have the “requisite knowledge” to establish the applicability of JASTA, pointing to statements by American and British officials, reports in many of the nation’s major newspapers and on television networks, and the volume of terrorist content on the platform to argue that Twitter could not plausibly have been unaware that it harbored terrorist content. Finally, Taamneh rejects Twitter’s claim that it cannot distinguish terrorist content from other content on its platform, noting that Twitter has previously pointed to its technical ability to perform content moderation at scale. As such, Taamneh claims that Twitter was aware of the terrorist content on its platform and could have prevented its dissemination—but failed to do so, thereby contributing to the Reina attack.
Twitter v. Taamneh Amicus Briefs
The amicus briefs in Taamneh fall into three categories: (a) briefs focusing on the applicability of the ATA and JASTA to the case, (b) briefs concerned with the case’s potential consequences for national security and foreign policy, and (c) briefs identifying free speech implications of the case.
ATA and JASTA Applicability
A majority of briefs address the applicability of the ATA and JASTA to Taamneh under the legal framework set in Halberstam. Views vary among amici regarding whether Twitter “knowingly” aided and abetted the terrorists involved in the Reina attack, whether the assistance was “substantial” enough to have enabled the Reina attack in particular, and whether Twitter merely enabled a general terrorist campaign. In support of Twitter, the United States argues that Halberstam “equates aiding and abetting with conscious participation in wrongdoing[,]” which cannot occur if the defendant “provided only routine business services in an ordinary manner”—as Twitter did in this case. The United States argues that, because Twitter did not provide atypical services or bend its usual policies to enable the Reina attack, but merely failed to adequately moderate its platform, the circumstances of the case do not satisfy the secondary liability framework under Halberstam.
In support of Taamneh, a brief filed by Sen. Chuck Grassley (R-Iowa) argues that the Ninth Circuit “did not go far enough in recognizing the breadth of JASTA’s cause of action.” A group of scholars of the ATA suggests that Twitter’s provision of regular business services satisfies the statute’s knowledge requirement.
National Security and Foreign Policy
Several briefs highlight potential national security concerns that could result from an overly broad interpretation of JASTA. Facebook and Google, filing a brief as respondents supporting petitioner, point out that JASTA sets out particular conditions under which U.S. victims of terrorist attacks may bring claims against a foreign state that enabled an attack within the U.S., thereby abrogating foreign sovereign immunity. Should the Court adopt the Ninth Circuit’s broad interpretation, Facebook and Google argue, claims could be brought against foreign states for “insufficiently [taking] ‘meaningful’ or ‘aggressive’ steps” against terrorist activity within their borders. A brief filed by former State Department legal advisers warns that, by expanding liability and exposing foreign-state actors, including allies, to costly litigation, the United States can expect reciprocal treatment from other jurisdictions. The brief claims that the “possibility of frivolous lawsuits could chill Americans’ willingness to serve overseas in diplomatic or military capacities,” thereby posing a risk to national security. The U.S. Chamber of Commerce adds that these lawsuits may incentivize companies to cease operations in conflict zones so as to avoid the risk of litigation. The brief argues that this result is “contrary to U.S. government policy that recognizes the benefits of commercial engagement in such regions.”
Free Speech
In contrast to the Gonzalez briefs, relatively few amici in Taamneh focus primarily on the free speech implications of the case. One brief, filed jointly by the CDT, the ACLU, the Knight First Amendment Institute, and other partner organizations, worries that an overly broad interpretation of JASTA could constrain speech online. The groups suggest that holding “online intermediaries” liable for terrorist content would encourage such platforms to adopt “overly cautious moderation” measures to avoid liability, introducing “blanket bans on certain topics, speakers, or specific types of content.” A brief filed by the Computer and Communications Industry Association complements the CDT’s points, arguing that the market conditions that produced the current model of content moderation allow platforms to improve at identifying harmful content by learning from mistakes—implying that removing this corrective function could make content moderation systems worse.