
New and Old Tools to Tackle Deepfakes and Election Lies in 2024

Kenneth Parreno, Christine Kwon, Victoria Bullock, John Langford
Tuesday, October 1, 2024, 2:00 PM
How new statutory prohibitions and long-standing tort claims can combat pernicious election lies when voter intimidation laws fall short.


It will come as no surprise to Lawfare readers and listeners that deepfakes and election lies pose a serious and growing risk to voters and the integrity of our elections. In 2016, for example, Douglass Mackey attempted to suppress the vote by posting memes encouraging voters to cast ballots by text. In 2022, vigilantes in Arizona spread the lie that voters could legally deposit only their own mail-in ballots at drop boxes and accused voters who (legally) deposited multiple ballots of being “mules.” And this election cycle—after years of warnings from legal scholars and law enforcement—we are beginning to see generative artificial intelligence (AI) deployed in malicious efforts to deceive voters through deepfakes.

By deepfakes, we mean image, video, or audio content that has been either technologically created or altered from its original in nonobvious ways using AI, such as the January robocall emulating President Biden’s voice and encouraging New Hampshire’s Democratic primary voters to stay home. Although there is a legitimate debate about the extent to which deepfakes will actually disrupt the electoral process, the stakes are too high to ignore the growing risks. Bad actors have a growing array of tools at their disposal to create and distribute harmful deepfakes to a wide audience. This cycle, we have seen—and expect to continue to see—numerous deepfakes apparently aimed at deceiving voters: from dozens of fake images of former President Trump surrounded by Black voters (aimed at courting communities of color), to an image of former President Trump on Jeffrey Epstein’s plane (intended to dissuade would-be Trump voters).

While voter suppression and election disinformation are not new, many tools traditionally deployed to combat falsehoods and voter intimidation are not particularly well suited to countering, in a timely way, two narrow but pernicious categories of election falsehoods: (a) lies about the time, place, and manner of elections unattached to broader voter intimidation efforts; and (b) deepfakes. Traditional tort claims (like defamation) take time to litigate and don’t cover falsehoods unattached to a specific person’s reputation or privacy, and voter intimidation laws don’t apply to speech that doesn’t rely on fear or coercion to deter electoral participation.

Rather than trying to fit a square peg in a round hole, litigators can use some very new and very old tools that may better fill the gap this cycle and in elections to come: new statutory prohibitions on deepfakes, a new statutory prohibition on narrow election falsehoods, and the old tort of “interference with the right to vote.” These tools could prove valuable in combating deepfakes aimed at intentionally subverting the electoral process, while abiding by the protections of the First Amendment.

The Limits of Traditional Tools to Combat Non-Threatening Election Falsehoods

Tort law is the most obvious tool for private litigants to deter falsehoods. In the preelection context, however, the most traditional claims are not well suited to combating election disinformation before the election happens and the damage is done.

Defamation cases often take years to litigate, and those cases are viable only if the lies concern a person who is willing to bring suit. The same goes for other dignitary torts and related statutory claims that might also be deployed against election falsehoods, such as intentional infliction of emotional distress, false light, invasion of privacy, and appropriation of name and likeness claims. To be sure, famous artists have routinely taken, and continue to take, action to stop the unauthorized use of their work by political campaigns via demand letters, but we have yet to see political candidates threatening legitimate defamation or right of publicity claims over malicious deepfakes. As for other election disinformation unattached to claims about an individual (e.g., “Republicans vote for president Wednesday, not Tuesday”), the traditional dignitary torts, which arise from harm to specific individuals’ reputations and privacy, are not in play.

Existing voter intimidation law also has its limits. Section 11(b) of the Voting Rights Act prohibits efforts that “intimidate, threaten, coerce, or attempt to intimidate, threaten, or coerce any person for voting or attempting to vote.” Similarly, the “force, intimidation, or threat” prong of Clause 3 of 42 U.S.C. § 1985(3)—one of the “support or advocacy” clauses of the Ku Klux Klan (KKK) Act of 1871—allows plaintiffs to obtain damages from those who conspire to “prevent by force, intimidation, or threat” a voter from giving their support to a federal candidate for office. Section 11(b) and the “force, intimidation, or threat” prong of the KKK Act can be very effective tools to combat election disinformation when a voter suppression effort includes both disinformation and activities that look more like traditional voter intimidation or when the disinformation consists of a threat. But election lies and deepfakes can suppress and disenfranchise the vote through means other than intimidation.

Consider the robocall directed to New Hampshire Democrats in January, urging them, in President Biden’s voice, to skip voting in the primary because “[y]our vote makes a difference in November, not this Tuesday.” It’s not clear that courts will view that kind of message as a “threat” or “intimidation” actionable under traditional voter intimidation law. We may soon have an answer when a New Hampshire district court rules on the merits of the League of Women Voters of New Hampshire’s Section 11(b) lawsuit against those behind the January deepfake. Nor are federal regulators likely to stop similar conduct online. While the Federal Communications Commission (FCC) took action after the New Hampshire deepfake to penalize the perpetrators and stop those kinds of robocalls, the FCC doesn’t regulate content on social media; in the meantime, the relevant prohibition enforced by the Federal Election Commission (FEC) covers only impersonation of one candidate by another, and the FEC has so far refused to explicitly clarify that the prohibition extends to deepfakes.

New Tools to Combat Election Disinformation

Spurred on by a combination of significant advances in generative AI and a growing public appetite for election lies, legislatures around the country have enacted new protections against the most pernicious election falsehoods. There are two important categories of new protections: deepfake statutes and a novel election disinformation statute in Minnesota. 

Deepfake Statutes

California moved early on deepfakes with a 2019 prohibition that has, to our knowledge, gone unused thus far. But this year, thanks in no small part to a push by Public Citizen, a raft of states have introduced and enacted statutes creating private rights of action to combat the use of deepfakes that spread disinformation around elections.

California’s law set the stage for many of the newest statutes. It provides that, within 60 days of an election, one may not “distribute, with actual malice, materially deceptive audio or visual media ... of the candidate with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.” This content is defined as appearing “to a reasonable person to be authentic,” causing them “to have a fundamentally different understanding or impression” than if they were consuming the original media. While only depicted candidates can sue, at least they can seek both injunctive relief and money damages. Importantly, California’s prohibition “does not apply if the audio or visual media includes a disclosure stating: ‘This [image/video/audio recording] has been manipulated.’” The statute also carves out a number of other circumstances consistent with First Amendment principles, including media constituting satire or parody.

This month, California enacted new legislation, AB 2839 and AB 2655, to broaden these protections. For example, AB 2839—an urgency measure that is in effect for the 2024 general election—expands the class of individuals eligible to bring suit (including any recipient of materially deceptive content or an elections official) and the enforcement window, such that the prohibition now applies beginning 120 days before any election and, in some cases, through 60 days after the election. In addition, AB 2655 authorizes some individuals—including candidates for elected office, elected officials, and elections officials—to seek, under certain circumstances, injunctive relief against large online platforms for their failure to identify and remove materially deceptive content.

New statutes in other states enacted this year mirror California’s laws in some key respects. They include a safe harbor from any prohibition under the statute, insulating a distributor from liability if the media includes a disclaimer indicating that the media has been technologically manipulated. They also allow at least certain private parties to file suit, rather than restricting that ability to only government officials. Finally, they explicitly authorize at least some private parties to seek injunctive or other equitable relief against the individuals distributing the deepfakes.

But these statutes also contain notable differences: 

First, there is real variability in the statutory definition of what counts as an actionable deepfake. For example, some statutes require the actionable “materially deceptive media” to be an advertisement, while others do not impose that limitation for bringing a suit. Further, some statutes explicitly carve out parody and satire from liability, while others do not; but as we discuss below, the First Amendment likely requires courts to read that exception into statutes even where it is absent from the text. And by specifying the role AI must play for media to be actionable, some of these new statutes potentially do little to address so-called cheapfakes—fakes that utilize less sophisticated technology—that may be similarly harmful. But all the statutes that private litigants could use provide some version of an objective viewer test, requiring a litigant to demonstrate that an objective or reasonable viewer would be deceived.

Second, the statutes also have a range of enforcement windows. For example, Hawaii’s prohibition applies beginning on Feb. 1 of an election year, while New Mexico’s and Michigan’s kick in 90 days before a primary or general election.

Third, many of the statutes enacted this year authorize not only depicted candidates but also other individuals to bring suit. For example, in some states, organizations that represent the interests of voters likely to be deceived by the distribution of the deepfakes can file suit.

Fourth, although each of these statutes permits certain private parties to seek injunctive relief, some of the statutes explicitly prohibit any eligible private party from seeking preliminary injunctive relief. In addition, some statutes restrict the circumstances in which a plaintiff may seek any form of injunctive relief. Further, some of these statutes do not authorize a claim for damages. 

Fifth, although all the statutes include safe harbor provisions, the requirements under those provisions for any disclaimer vary from state to state and, in some cases, by format of the media. For example, some statutes do not specify the exact language required for the disclaimer, though they do provide some substantive guidelines.

Finally, there is variability across these statutes on their intent requirements. For example, in Michigan, a plaintiff must show that a defendant “intends” to distribute deepfakes to harm a candidate’s electoral prospects and to deceive and thus change electors’ voting behavior. By contrast, in Hawaii, a plaintiff need only show that the defendant “recklessly distribute[d]” the deepfake. 

The differences noted here are by no means exhaustive. But they highlight the various ways that states have begun combating deepfakes around the election season. 

Election Disinformation Statutes

For years, many states have had statutory prohibitions on spreading narrow falsehoods about the mechanics of elections. But these prohibitions generally impose criminal—not civil—penalties and generally authorize only government officials to enforce them.

However, in 2023, Minnesota enacted the Democracy for the People Act, which prohibits a broader range of election disinformation efforts than just manipulated media and permits certain private parties to file suit. The statute prohibits a person from intentionally impeding or preventing another person from exercising the right to vote by transmitting information the transmitter knows to be materially false within 60 days of an election. This prohibition “includes but is not limited to information regarding the time, place, or manner of holding an election; the qualifications for or restrictions on voter eligibility at an election; and threats to physical safety associated with casting a ballot.” The statute further prohibits conspiracies to violate this prohibition. 

Finally, the statute specifically provides a private right of action to “any person injured by an act prohibited by this section” and permits claims for both damages and injunctive or other equitable relief. 

A Novel Use of a Very Old Tool: Tortious Interference With the Right to Vote

In addition to bringing claims under the newly minted state statutes mentioned above, litigators challenging election-related deepfakes and other disinformation might consider reviving a centuries-old but firmly established tort: interference with the right to vote. The tort allows plaintiffs to maintain actions against those who intentionally deprive or seriously interfere with their right to vote. Reviving a 300-year-old tort to protect against deepfakes and sophisticated election lie campaigns would be a novel application, but the tort is uniquely well suited for this moment.

First articulated in the foundational 1703 English case Ashby v. White, the tort was quickly and widely recognized in the early United States. As the U.S. Supreme Court explained in 1927: “That private damage may be caused by such political action”—denying a qualified voter his right to vote—“and may be recovered for in suit at law hardly has been doubted for over two hundred years, since Ashby v. White[.]” The Supreme Court has reiterated the point no less than 11 times, most recently in 2021.

We’ve surveyed all 50 states and the District of Columbia, and at least 31 jurisdictions have historically recognized tortious interference with voting rights in some form. Indeed, tortious interference with the right to vote has already been so widely recognized that it appears in the Second Restatement of Torts. As Section 865 provides: “One who by a consciously wrongful act intentionally deprives another of a right to vote in a public election or to hold public office or seriously interferes with either of these rights is subject to liability to the other.” Applying this articulation and existing body of case law to a contemporary context, deploying a tortious-interference-with-voting-rights claim against election-related deepfakes and other election disinformation requires (a) a consciously wrongful act (including fraud), (b) that the act is done with intent to deprive another of a right, and (c) a victim who possesses a right to vote. Plaintiffs may recover against both private parties and state actors and need not prove actual damages to do so.

Some courts have specifically recognized that fraudulent efforts to deprive individuals of their voting rights give rise to a claim. A Maryland appellate court upheld a judgment against elections officials who rejected a plaintiff’s vote based on his political party affiliation, holding the officials liable for fraudulently interfering with the plaintiff’s right to vote. Thus, plaintiffs may be able to use the tort against bad actors who publish or disseminate knowingly false or deceptive election-related deepfakes or other disinformation. While there will undoubtedly be complexities in effectively deploying a claim for tortious interference with the right to vote directly, any development in this area of law should be welcomed. 

In fact, the tort may not need to be deployed directly to be useful. As pointed out in a Lawfare article and in an amicus brief for the criminal prosecution of Douglass Mackey, the tort of interference with the right to vote provides a clear cabining principle to distinguish disinformation conspiracies that may be prosecuted consistent with the First Amendment from those that cannot. That is so because the Supreme Court has held that “injure” was meant to reach injuries recognized at common law—including intentional interference with another’s voting rights. For the same reasons, tortious interference with the right to vote opens another tool for private litigants suing those who conspire “to injure any citizen in person or property on account of such support or advocacy” of a candidate under the KKK Act. When it is clear that a group has conspired to use election lies to deprive individuals of their right to vote, litigators should be able to point to tortious interference with the right to vote as grounds for bringing a KKK Act claim based on the conspiracy itself. 

First Amendment Considerations

Any litigator pursuing deepfakes and election lies unattached to more traditional forms of voter intimidation is likely to confront First Amendment defenses. Indeed, just this week and less than 24 hours after Gov. Gavin Newsom signed California’s new AI election bills, a content creator filed suit arguing that two of the bills violate the First Amendment and, a day later, filed a motion seeking to preliminarily enjoin AB 2839. We addressed three narrow points in an amicus brief in support of neither party; although the court denied our motion for leave to file the brief with no explanation, you can read our brief here. Addressing every iteration of First Amendment defenses and challenges is well beyond the scope of this article, but four points are worth touching on briefly here.

First, a footnote in the Supreme Court’s Minnesota Voters Alliance v. Mansky decision strongly suggests that punishing or enjoining lies about the time, place, or manner of elections is unlikely to violate the First Amendment, as Richard Hasen has often pointed out. Even those supporting Mackey in the U.S. Court of Appeals for the Second Circuit agree on that point. And just this week, a federal district court upheld Minnesota’s new statute against a facial First Amendment challenge, citing Mansky.

Second, whether the First Amendment protects deepfakes that do not fit within one of the “historic and traditional” categories carved out from First Amendment protection (e.g., fraud, defamation, and true threats)—and that are primarily problematic for the act of impersonation itself—is a closer call. Statutes narrowly barring deepfakes from impersonating political candidates might well fall on the right side of the First Amendment for the same reasons that laws barring all impersonations of government officials and agencies do. Alternatively, statutes that mandate disclosure might be constitutional for the same reasons the Supreme Court upheld the campaign finance disclosure requirements in Citizens United v. Federal Election Commission. Or litigants might draw on ballot access case law, where the Supreme Court has repeatedly held that there are compelling government interests in “avoiding voter confusion” and protecting the “integrity” of elections. Perhaps most pertinently, the Court has held that the state “has a compelling interest in ensuring that an individual’s right to vote is not undermined by fraud in the election process.” These cases suggest that the First Amendment does not bar narrow restrictions on deepfake impersonations of candidates. 

Third, deepfakes that could reasonably be construed as parody or satire may well be protected under the First Amendment. Applying a parody label can make it difficult to hold a parodist liable, while the Supreme Court has made clear that there is no reason to require a parodist to label parody that is reasonably perceived as such. Although there is still a line between parody and impersonation, it remains to be seen how courts will distinguish protected parody from unprotected impersonation in the deepfake context and whether labeling requirements hold up. 

Fourth, prior restraint doctrine does not preclude enjoining unprotected election lies and deepfakes. The Supreme Court has “never held” that all injunctions of speech are impermissible. The typical problem with a prior restraint is instead “that communication will be suppressed, either directly or by inducing excessive caution in the speaker, before an adequate determination that their speech is unprotected by the First Amendment.” Thus, if a plaintiff can show they are likely to succeed in proving that speech is unprotected, a preliminary injunction directing a defendant to remove that speech does not violate the First Amendment (assuming certain enforcement protections remain in place). In the context of election lies, this precedent is undergirded by the Supreme Court’s recognition that the government may take prophylactic measures to protect elections and need not wait for subversion to act. 

All told, when pursuing election lies using these tools, litigators will have to exercise caution in challenging only those election falsehoods that are (a) clearly malicious, (b) clearly material (i.e., deceiving viewers about important facts as opposed to ancillary details), (c) highly likely to deceive recipients, and (d) not reasonably characterizable as parody. Particularly with respect to the new deepfake statutory prohibitions, it is important to avoid creating precedent that overly cramps or even strikes down these new statutory tools on First Amendment grounds right out of the gate.

Conclusion

As we enter the era of deepfakes and stealthily or mass-distributed lies about the time, place, and manner of elections, litigators need new tools to fight back against those who would undermine our democracy. This piece suggests the deployment of new prohibitions on deepfakes and election lies, as well as tortious interference with the right to vote claims, as two possible mechanisms. But it is equally important not to undermine important First Amendment protections or allow overzealousness to lead to bad decisions—striking down or severely cabining these new and old protections against election lies.


Kenneth Parreno is counsel at Protect Democracy. He previously was a Skadden Fellow and Staff Attorney at the Mexican American Legal Defense and Educational Fund and served as a law clerk to the Hon. Edgardo Ramos of the U.S. District Court for the Southern District of New York and the Hon. Debra H. Lehrmann of the Supreme Court of Texas. He graduated from Harvard Law School.
Christine Kwon is counsel at Protect Democracy. She previously served as Lecturer in Law and San Francisco Affirmative Litigation Project Fellow at Yale Law School and as a law clerk to the Hon. Kim McLane Wardlaw on the U.S. Court of Appeals for the Ninth Circuit. She graduated from Yale Law School.
Victoria Bullock is a paralegal and impact associate at Protect Democracy, focusing on advocacy and impact initiatives in key states. She previously worked in nonprofit management and with an immigration law firm helping children recently arrived in the U.S. to navigate immigration court, family court, and various federal agencies.
John Langford is counsel at Protect Democracy. He previously spent three years at Yale Law School as a Clinical Lecturer in Law in Yale’s Media Freedom & Information Access Clinic and served as a law clerk to the Hon. Robin S. Rosenbaum on the U.S. Court of Appeals for the Eleventh Circuit. He received his J.D. from Yale Law School.
