
Self-Regulation Won’t Prevent Problematic Political Uses of Generative AI

Samuel Woolley, Inga K. Trauthig
Wednesday, July 17, 2024, 12:00 PM
Interviews with digital political marketing experts show that AI is now a core tool for U.S. campaigns, but that its use is ungoverned.
Code on a computer screen. (https://www.piqsels.com/en/public-domain-photo-jcunm)

Published by The Lawfare Institute in Cooperation With Brookings

Employees of OpenAI recently blew the whistle on the company’s “culture of recklessness and secrecy.” They say that the Microsoft-partnered firm is overlooking a range of dangers associated with the technology in favor of outpacing competitors. While OpenAI’s usage policy states that its flagship generative artificial intelligence (GenAI) product, ChatGPT, should not be used for political campaigning, disinformation, or disenfranchisement, there is evidence that some candidates, campaigns, and foreign political entities are getting around such guidelines.

U.S. presidential candidate Robert F. Kennedy Jr.’s campaign leveraged interconnected AI technology from Microsoft to build a (since deleted) chatbot. The bot told journalists that the candidate believed that the CIA was involved in the assassination of President John F. Kennedy and that vaccines cause autism. In May, OpenAI shared evidence that groups in Russia, China, Israel, and Iran used its tools in attempts to spread disinformation.

Our research team at the University of Texas at Austin recently released a report detailing how U.S. campaigns are leveraging GenAI like ChatGPT in the lead-up to the 2024 elections. Most of the candidates, digital strategists, and political technology vendors we spoke to said that they were abiding by an “unwritten” set of industry guidelines as they experimented with the tech: not using it to spread false content, not using it to mislead people about how to vote, and not deploying it without making clear to targeted users that they were interacting with AI.

But interviewees across the political spectrum were also quick to point out that the space lacked genuine, meaningful regulation. Many spoke about serious concerns they had with the current regulatory state of affairs and made it clear that little stood in the way of people seeking to use GenAI for political manipulation.

Our Propaganda Research Lab team comprises social and computer scientists, open-source intelligence researchers, and media technology experts and is based within UT Austin’s Center for Media Engagement. We study how emergent technologies are used around the world for the purposes of political communication. We are particularly concerned with illuminating covert influence campaigns and anti-democratic uses of digital tools, while also developing solutions and identifying positive use cases for new technologies.

Interviews we conducted with U.S. political groups building and using GenAI, and five years of related research, make something abundantly clear: Self-regulation among creators of AI will not effectively prevent malicious and illegal uses of the technology. Our studies in the United States and numerous other countries have consistently found political organizations leveraging AI, GenAI, and related automated technologies to muddle understandings of voting processes, harass opposition, and attack protected groups. Oftentimes, the harms extend beyond the political realm and flout standing policies from the technology firms and trade groups aimed at curbing misuse.

Campaigns looking for an edge have an incentive to use whatever tools will help their cause, and generative AI is the tool du jour. The will to win often overrides the norms discussed by our interviewees because such strictures are nonstandardized, are opaque, and lack repercussions. Those who build custom GenAI systems for political use pointed out that online anonymity and gaps in social media firms’ terms of use can allow surreptitious, norm-busting uses to fly under the radar. Changes to authentication on X (formerly Twitter), for instance, now allow even nontransparently automated or AI-driven accounts to operate with the company’s blue checkmark of approval. Broad acceptance of anonymity (sans some kind of behind-the-scenes identity verification by firms) on that platform—and many others, including Reddit, TikTok, and Telegram—makes space for astroturf politics, often amplified by technological scaffolding such as social media bots and strategies like coordinated influence campaigns.

After the now-infamous fake Biden robocalls in New Hampshire, the Federal Communications Commission (FCC) imposed a limited ban on similar AI-calling efforts. Our interviewees emphasized how easy it now is to create robocalls that no longer sound like robots but instead sound uncannily like real people. While the Ashley robocaller (used in New Hampshire) was intentionally created to sound like a robot for transparency purposes, Ilya, an AI developer who builds tools for political campaigns, noted, “[AI-generated voice] already sounds almost exactly the same [as a human]. It will sound absolutely exactly the same.” And about such AI chatbots, he said, “There will be more, there will be many more, some of them will have robotic voices and some of them probably won’t.”

Ilya’s assertion raises the question: Will voters always know when they are speaking to a robot? Probably not. And the malicious use of AI chatbots—online cousins of the robocalling bots—is not addressed by the recent FCC moves. Meanwhile, the FCC is attempting to move forward with disclosure requirements for radio and TV that would require transparency about political content created using AI. But the agency’s authority to enact these regulations is being questioned by the Federal Election Commission (FEC). FEC Chair Sean Cooksey sent a letter to the chair of the FCC arguing that the proposal would undermine the FEC’s role as the main enforcer of federal campaign law.

As a result, control over AI use in and around political campaigns is caught up in power struggles among federal authorities—struggles that run partially along partisan lines. They are likely to dismay the Biden administration, which wanted to move rapidly on AI policy and hence issued an executive order in October 2023 directing federal agencies to swiftly draft regulations on the use of AI technologies.

Computational propaganda—the use of automation, AI, and algorithms in efforts to manipulate public opinion over social media—has been a point of concern for U.S. lawmakers, civil society groups, academics, journalists, and others since at least 2016. That year, a number of foreign adversaries and domestic political groups leveraged social media platforms and tools including bots to spread misleading, inflammatory content about the voting process and elections. Tied to these influence operations were complementary efforts by organizations like Cambridge Analytica to illicitly gather and parse data about voters to then target them with highly personalized propaganda.

Afterward, policymakers and experts issued numerous warnings that innovation in artificial intelligence was likely to supercharge both online deception campaigns and the surveillance capitalism that makes them possible. Not only could AI massively accelerate political data analysis, many cautioned, it could also ramp up and refine content generation and digital voter engagement. Of course, such uses could benefit democracy by allowing campaigns to reach more voters and increase turnout. But they could also be used to mislead, divide, and incite.

We need concrete federal regulations in the United States aimed at curbing political misuses of AI and other emerging technologies. Such rules would install safeguards for the American electorate, which sits at the receiving end of a fast-approaching era of GenAI political campaigning. Given the evolving debate around AI regulations and the First Amendment, regulators can even avoid weighing in on the partisan spread of disinformation and misinformation and instead focus on clearly illegal uses of the technology, such as leveraging it to amplify voter disenfranchisement campaigns. We have documented many circumstances in the United States and across the globe in which AI, automation, and other digital tools are actively used to suppress people’s speech through both targeted trolling campaigns and sheer noise.

The various federal commissions involved in regulating communication technologies, elections, and trade must work together to clarify jurisdictional questions. When possible, they should work together from their various points of focus to triangulate efforts to regulate misuses of AI and digital technology. If needed, the president, as enforcer of the law, and the judiciary, as interpreter of the law, should provide clarity surrounding the parameters of this enforcement.

The FCC and the FEC, in particular, should work hand in hand to update existing guidelines—and consistently enforce penalties that work as deterrents. With these measures, political groups that have, until now, readily embraced deceptive uses of emergent technologies will be reined in. The use of GenAI for political manipulation by foreign political groups might not be as easily tackled, but American democracy would have at least installed a bulwark against misuse by homegrown entities.

Congress should move forward with bipartisan legislation like the AI Transparency in Elections Act and the Protect Elections from Deceptive AI Act. The former would require disclaimers on public communications across multiple media formats (including online and over social media) “substantively generated” using AI. The latter would prohibit the spread of “materially deceptive” political ads created by AI.

Our recent research underscores that the day AI became a tool in the political campaigner’s arsenal has come and gone. Top digital consultants, technology builders, and vendors working for a range of Republican and Democratic groups described using GenAI, in particular, for back-end voter analytics as well as front-end communication efforts. They spoke about its notable usefulness in microtargeting people at a local level, about local issues. They also said that GenAI and large language models (LLMs) could be refined, customized, and even bespoke-built in order to target minority voters in a variety of languages and with a high level of sociocultural nuance. “We have a motto, ‘look like, act like, talk like,’” said Craig, the co-founder and CTO of another political AI company.

Experience from 2016, but also from the 2020 and 2022 U.S. elections, suggests that malicious political groups—foreign and domestic—are especially willing to use the latest digital tools to target the most vulnerable in our democracy with disinformation and politically motivated harassment. Recent work suggests that AI is now playing a central role in similar operations aimed at upending elections.

While none of our interviewees openly admitted to using GenAI for such purposes, most had serious concerns about their opponents doing so. Mike, a Democratic consultant, said he had little faith in other political vendors abiding by ethical guidelines because, in his words, “all they care about is making money.” Many of the politicos we spoke to said they felt compelled to use the technology because they feared being outpaced by competitors.

These motivations, combined with financial incentives, are also driving companies’ ongoing, relatively unfettered innovation efforts in the GenAI space—as the recent OpenAI whistleblower revelations suggest. Taryn, another digital political consultant, told us that the campaign AI space is “the wild west.” The AI lawlessness in the political arena is deeply tied to a lack of both internal and external regulation on the commercial side.

OpenAI, though at the root of recent reporting about ethical failures in the AI space, has actually issued public disclosures about influence operations using its tools. But this practice, and the sharing of data related to such AI-driven deception campaigns, is far from the industry norm. Meanwhile, our interviewees said that the tacit nature of ethical guidelines for AI use in the political space meant that they could—and would—be bent when needed. And there were discrepancies in how they even defined such guidelines.

Some digital political consultants, for instance, excitedly discussed creating AI avatars that could stand in to speak for political candidates when they were otherwise engaged. Others denounced such practices as misleading. Several interviewees said GenAI was a strong tool for fundraising. But others pointed out corresponding legal and ethical concerns related to transparency and deception in and around AI-driven campaign finance efforts.

Based on what we discovered in our interviews, we advocate for regulations on using GenAI for political fundraising. However, we recognize that such efforts are unlikely to gain traction right now, since all sides of the political spectrum have already used the technology for fundraising and seen its potential.

A continued lack of digital regulation, alongside persistent disagreement between the parties about what potential regulation should look like, may allow the current free-for-all to continue well past 2024. When discussing tech policies, however, it is important to focus not only on the technology itself but also on the humans behind it—the people who develop and use it. What are their goals and intended outcomes? What are the attendant harms to society—to our trust in institutions and one another and, more generally, to our safety and privacy?

As Republican consultant Joe said, we should worry not about “the driving,” but about “the guy who decided to get drunk and get into his car because he doesn’t care about laws on the road.” We, therefore, join other researchers in calling for policies that ban and effectively penalize voter suppression, incitement to imminent lawless action or violence, defamation, fraud, and other harms that GenAI may facilitate, rather than for outright bans on the technology’s use in politics. Further, disclosure and transparency surrounding the use of generative AI must extend beyond content creation to include data collection and data analysis as well. Lawmakers and regulators should mitigate such risks by creating and consistently maintaining (in step with technological innovation) legal and technical safeguards against illegal misuse.

But our research also makes it clear that the use of GenAI in U.S. political campaigns is still in its early stages. There is still time for regulatory intervention. While many have predicted the mass use of deepfakes and other, similarly conspicuous, AI manipulation efforts, the most common uses our interviewees described were more mundane, under-the-hood applications of the technology for data analysis.

Those we spoke to who were working on federal election campaigns said they were reluctant to engage in front-end use of generative AI for communication with voters, even though it could help put voter outreach on steroids. They told us that they were looking to local and state-based campaigns, which were already innovating in the voter communications space, to see what worked and what didn’t. They said that after rife misuses of tech during past political contests, researchers’ and reporters’ magnifying glasses were firmly focused on the national electoral stage. Local races avoid such scrutiny—but are no less important to democracy and, in particular, people’s daily experience of civic life. Finally, widespread and more creative manipulative uses of GenAI in politics will likely emerge in future cycles, after the 2024 U.S. election. As Democratic consultant Craig said, “The really extremely bad stuff is coming in 2026. We’re gonna see the beta version of it this year.”

This highlights the need for meaningful federal legislation that guides fair campaigning this year and in the future. Transparency and disclosure requirements around ads are a central policy point in this regard.

Currently, a patchwork of state laws aims to curb the deceptive use of deepfakes and other political misuses of AI in particular jurisdictions. At the federal level, Congress has not prohibited the use of generative AI in political campaigning—although legislators have introduced bills to promote transparency regarding the use of GenAI in political campaigns.

Similar to child safety online, however, this policy angle carries potential for bipartisan action.

While the Republicans and Democrats we spoke to had different perspectives on regulation, they agreed on the untenability of the current situation. They should be encouraged not only by our report but also by the majority of Americans who, according to a YouGov poll, are concerned about the use of GenAI in politics.


Samuel Woolley, PhD, is an associate professor of Journalism and Media and the Program Director of the Propaganda Research Lab in the Center for Media Engagement at the University of Texas at Austin. Woolley is the author of four books about how emerging technologies are used for both control and democracy. He has written on related subjects for the New York Times, Foreign Affairs, The Atlantic and numerous other publications.
Inga Kristina Trauthig is the head of Research of the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin. She received her PhD in Security Studies from King’s College London, where she is a senior affiliate fellow and adjunct faculty.
