
What Does It Mean to Ensure Election Integrity in 2024?

Quinta Jurecic, Eugenia Lostri
Friday, November 1, 2024, 2:10 PM
The government has leaned forward on countering foreign influence. But election workers are struggling to respond to homegrown rumors.
Election workers in a polling place in Des Moines, Iowa, during the 2020 Democratic caucus (Photo: Phil Roeder/WikiMedia Commons, https://commons.wikimedia.org/wiki/File:Des_Moines_Precinct_61_(49485944058).jpg, CC BY 2.0)

Published by The Lawfare Institute
in Cooperation With
Brookings

Two weeks before Election Day, a video began to circulate on social media purporting to show presidential ballots marked for Donald Trump being destroyed in Bucks County, Pennsylvania. Quickly, the Bucks County Board of Elections and the county’s Republican Party released statements debunking the video as a fake—though comments responding to the GOP’s post on X (formerly Twitter) were filled with blue-checkmarked users wondering how the county could be so sure. The next day, the federal government weighed in as well: in a joint statement, the Office of the Director of National Intelligence (ODNI), the FBI, and the Cybersecurity and Infrastructure Security Agency (CISA) announced the intelligence community’s assessment that “Russian actors” had “manufactured and amplified a recent video.” “Local election officials have already debunked the video’s content,” the statement clarified. 

This sequence of events says a great deal about how platforms, election officials, and the federal government are thinking about responding to election rumors during the 2024 vote. Compared to 2020, the intelligence community has adopted a more aggressive posture, releasing public information about foreign election interference with unusual candor and at a strikingly rapid clip. Local election officials are doing their best to debunk falsehoods by speaking directly to their communities.

Yet it can be difficult for these fact-checks to break through in an increasingly fragmented and opaque social media ecosystem: According to NPR, the fake video received hundreds of thousands of retweets, while the corrections posted by the Board of Elections and the county GOP received orders of magnitude fewer. And a close reading of the ODNI statement shows hints of how the federal government has pulled back from the assistance it offered in 2020. Compared to that year, when CISA leaned forward on addressing election falsehoods originating both domestically and abroad, the agency is now focused on foreign disinformation. In communicating to the public whether something is true or false, it is also relying more heavily on election workers themselves—hence the statement’s closing note that Bucks County officials had already debunked the fake video.

In February, we examined a worrying trend: the federal government, under political pressure from Republicans in the House of Representatives and state governments across the country, had pulled back from post-2016 partnerships with technology companies and local election officials designed to protect the integrity of the vote. Now, with the clock ticking down toward Election Day, we’ve taken another look. We found that government agencies have resumed contact with social media platforms and leaned forward on exposing foreign influence operations, but remain skittish when it comes to rumors and conspiracy theories originating within the United States.

The 2020 systems were by no means perfect. But the mechanisms that exist today to counter domestic falsehoods don’t seem to have been rebuilt out of a genuine wish to improve on what didn’t work in 2020. Instead, the federal government looks motivated more by a desire to avoid partisan backlash.

To understand where things stand today, it’s necessary to recall what they looked like in 2016 and 2020—and why they’ve changed. The systems built to protect the 2020 election were in many ways a response to the Russian election interference of 2016. Russia’s successful use of social media platforms like Facebook and Twitter (now X) to distribute incendiary messages highlighted the lack of coordination between these platforms and the federal government—and both Silicon Valley and Washington, D.C., were determined to prevent similar shortcomings. The years following 2016 saw the development of a network of collaboration among technology companies, government agencies, and independent researchers that worked to identify and respond both to foreign influence campaigns and to election rumors originating within the U.S. itself. That network wasn’t perfect—among other things, it failed to adequately respond to the growing belief in the Big Lie of 2020 election fraud that would curdle into the violence of Jan. 6—but on the whole, it notched a great number of successes.

After the 2022 midterms, though, this work came under attack from the political right, which increasingly characterized such efforts as censorship. The standard-bearer of this campaign is Rep. Jim Jordan (R-Ohio), whose Republican-led House Select Subcommittee on the Weaponization of the Federal Government has deluged scholars with harassing subpoenas and worked to transform their research into something politically toxic. In court, a group of Republican attorneys general brought a lawsuit challenging government communication with platforms as unconstitutional “jawboning”—that is, government pressure on private entities—forbidden by the First Amendment. At the same time, social media platforms, led by Elon Musk’s X, began to pull back from moderating their platforms and laid off many of the employees who had worked on these efforts.

The result was that, when we reviewed the state of play in February, the systems built to protect the 2020 election from harmful falsehoods had been largely dismantled or frozen in place. According to Meta, the FBI-run Foreign Influence Task Force stopped communicating with the company entirely as of July 2023. CISA, too, went dark as far as outreach to social media companies was concerned. CISA Director Jen Easterly also indicated that the agency was reorienting away from misinformation and disinformation to focus more exclusively on cybersecurity issues. And the hits kept coming: in June, Stanford University pulled the rug out from under its flagship Internet Observatory and the Election Integrity Partnership, a coalition of researchers that provided a clearinghouse for reporting and responding to election falsehoods and rumors in 2020.

The system that exists to counter falsehoods today—the same one that sprang into action following the fake video of ballots in Bucks County—has been built up around the legal and political limitations imposed by this backlash, much like tiptoeing around the edge of a pit that suddenly opened up in the floor. In March 2024, NBC reported that the FBI’s Foreign Influence Task Force had resumed at least some contact with social media companies—“in a way that reinforces that private companies are free to decide on their own whether and how to take action on that information,” according to an FBI spokesperson. (A Justice Department Inspector General report released in July confirmed that the department had worked with the FBI to design procedures regulating the bureau’s interactions with tech firms, which were implemented in February.) In June, the Supreme Court threw out the “jawboning” lawsuit, Murthy v. Missouri, on standing grounds. The decision provided little guidance as to how much government pressure on a platform might be too much, but the majority appeared skeptical of the plaintiffs’ argument that the mere fact of communication between the government and a platform might in itself be constitutionally impermissible.

This ruling, indeterminate as it was, appears to have allowed the government to exhale a little. The bureau released its new guidelines publicly in August, and administration officials told the New York Times that same month that renewed coordination between platforms and the government had led companies to disrupt two foreign influence operations using their services. An international effort to disrupt a Russian bot farm promoting pro-Russian messages on social media included the voluntary suspension of bot accounts by X. And when the Justice Department announced the indictment of three Islamic Revolutionary Guard Corps actors for the hack-and-leak operation targeting the Trump campaign, it noted the assistance provided by private-sector partners: Google, Microsoft, Yahoo, and Meta. In both cases, the foreign actors were attempting to influence the American public, whether by spreading disinformation or by attempting to undermine a presidential campaign.

This insight into the intelligence community’s assessment of foreign influence schemes goes well beyond what the federal government has provided in past elections. It’s part of the same aggressive, quick-turnaround posture of disclosure that resulted in that ODNI statement on Bucks County. Speaking to the New Yorker, one anonymous official noted that the disclosures by the Office of the Director of National Intelligence’s Foreign Malign Influence Center demonstrated a level of transparency that “is like standing there naked compared to what we have done in the past.”

That said, these new efforts bear the marks of Murthy and the Republican-led controversy over “censorship.” Again and again in its public statements, the intelligence community has emphasized that platforms can make their own decisions on how to handle information about these threats, and that it is responding only to foreign influence campaigns—not infringing on the domestic speech that was at issue in Murthy.

That makes sense, given the intelligence community’s mandate to address threats from abroad. Though the FBI can step in if a crime occurs, the bureau and other agencies in the intelligence community have no authority to take action against falsehoods that are domestic in origin. What’s striking, though, is that CISA—an agency that did work to address rumors and lies originating from within the U.S. during the 2020 election—is also emphasizing a focus on foreign dis- and misinformation and has moved away from addressing domestic falsehoods. Over the last few months, CISA has repeatedly joined ODNI and the FBI in statements detailing Iranian election influence efforts, akin to the statement issued on Bucks County. And when it has details about how malicious actors intend to undermine confidence in the election or sow partisan discord, CISA regularly publishes public service announcements.

But CISA’s work responding to domestic falsehoods looks very different—and far thinner on the ground—than it did in 2020. Part of the coordination effort in 2020 involved providing a more centralized clearinghouse for election workers seeking to engage with platforms, through the Stanford Election Integrity Partnership and the Election Infrastructure Information Sharing and Analysis Center (EI-ISAC), a public-private partnership developed in coordination with CISA. But the Stanford project is gone and EI-ISAC has reoriented its work to focus on cybersecurity, according to NPR. And as a Department of Homeland Security Inspector General report from this summer indicates, CISA “discontinued its efforts to work directly with social media companies to counter disinformation after the 2022 election.” The agency told the inspector general that it had “changed its efforts to counter disinformation based on the evolution of the disinformation mission during that time.” 

In place of the direct engagement with platforms and election workers on mis- and disinformation that it provided in 2020, CISA now presents itself as focused on public education, offering high-level materials that address only “common disinformation narratives” about how elections are conducted. Responding to the inspector general report, Easterly wrote that CISA would also be “amplifying accurate election security-related information shared by state and local officials.” Similarly, in an interview on Lawfare Daily, Easterly’s senior advisor Cait Conley explained, “In this incredibly fractured information environment … this signal through all of that noise is your state and local election official.”

For their part, election workers are making a concerted push on public communications. “I think we’ve seen, over the last several years, a more concerted effort from election workers to proactively communicate about elections,” Amy Cohen, the executive director of the National Association of State Election Directors, told Lawfare in a phone interview. And as the Bucks County example shows, this work can be effective in responding to falsehoods. But it also leaves significant gaps.

CISA is not wrong that election workers themselves are the best sources of information on election administration—after all, they’re the people on the ground doing the work. The problem is that many election workers simply don’t have the time or the budget to play this public communications role. Local election administrators “have faced major budget shortfalls that have compounded in recent years as the scope of the role and threat landscape expands,” Keara Mendez of the Center for Tech and Civic Life told us. Her organization worked to distribute grants to election officials in 2020—and became the subject of right-wing conspiracy theories about election fraud that led to many states banning this kind of private funding, leaving election offices further strapped for cash. Election officials “urgently need resources to communicate at scale,” Mendez said. 

What’s more, election officials are also up against a social media ecosystem that is tilted against them. NBC reported recently that election workers trying to provide factual information on X are struggling to break through after Elon Musk reworked the platform to favor the spread of incendiary political content—including by boosting falsehoods himself. A quick fact-check from a local election office may not get the same circulation as a rumor boosted by a big account. As Renee DiResta—formerly of the Stanford Internet Observatory—put it in an interview with CNN, election officials are at a “structural disadvantage.”

Even when the system works as intended, the fractured information environment can create a different set of challenges. In Georgia, a fake video showing Haitian immigrants being given American IDs so they could fraudulently vote began circulating on Oct. 31. The video was quickly debunked by experts, who linked it to a Russian actor. Georgia Secretary of State Brad Raffensperger backed up the debunking, sharing a statement on social media informing the public that this was a false narrative and likely foreign disinformation. He also asked X to take down the video, and ODNI, the FBI, and CISA soon released a statement confirming Russia’s involvement and emphasizing that “The Georgia Secretary of State has already refuted the video’s claims as false.” So far, so good—yet X did not remove the video until the morning of Nov. 1, by which point the account that spread it had gained 650,000 followers. Not only was the false video allowed to spread, but even the request from election officials was met with backlash from the right as a “censorship demand.”

While X is the most extreme case, other platforms have also become less eager to collaborate with election workers and less aggressive about responding to election falsehoods—part of the same broader retrenchment that we describe above. There are legitimate questions to be raised about whether the federal government should be communicating with platforms behind the scenes—but in the absence of the systems that characterized 2020, it can be more difficult for election workers to flag issues for platforms in the first place. Now, election workers can still submit complaints through the public portals that are available to everyone, but there’s far less assurance that a platform will see that complaint or take action on it. Meanwhile, platforms have also rolled back transparency measures that allowed researcher access to data in 2020, meaning that public understanding of what rumors are traveling and how has been sharply curtailed. 

This doesn’t mean that all is lost. Even if CISA has stepped back, civil society groups are working to fill some of the gap: the University of Washington’s Center for an Informed Public is tracking rumors on social media, and a Brennan Center program provides clarifications about election falsehoods and examines which public messaging might be most persuasive. If coordination took place partly behind the scenes in 2020, this time around it’s right out in the open. Ideally, that could both better inform the public and defuse conspiratorial allegations about shadowy forces working to censor Americans—though as the Raffensperger example shows, even statements made entirely in public can be misrepresented by ideological actors looking to sow doubt in the integrity of the election.

In our last article, we raised the question of whether this movement away from addressing domestic misinformation might create problems for public-private partnerships in the cybersecurity space as well. So far, the federal government’s retrenchment has actually resulted in a greater focus on cybersecurity. Conley emphasized in her podcast interview that ensuring the cybersecurity of election infrastructure is one of CISA’s top priorities. Beyond the information that election officials can find on CISA’s one-stop-shop website, the agency’s field officials have carried out exercises covering scenarios that election officials may face, focused on improving the cybersecurity and preparedness of the election community, and even conducted physical security assessments of election offices—the last measure being particularly significant at a time when election administrators are increasingly targeted with threats. Easterly herself has highlighted this work in multiple media appearances, and has even taken the opportunity to inform Elon Musk about the safeguards in place to prevent and detect attacks on election infrastructure.

There are good reasons for the government to focus its efforts on cybersecurity: protecting the security of the election is, obviously, important. But it may also be politically easier to reorient around cybersecurity and away from mis- and disinformation. Both are ways of protecting the election, but cybersecurity is less controversial. 

Or, at least, it is for now. The line between misinformation and cybersecurity is not as clear as it might initially seem. Just as public calls to remove material from foreign influence campaigns can be framed as censorship, even run-of-the-mill cybersecurity work can become politicized. The announcement that Department of Homeland Security and CISA officials might attend a conference featuring a pre-conference cybersecurity exercise on Nov. 5, Election Day, led to baseless accusations of election interference from right-wing influencers.

CISA is trying to convey the message that Americans should feel confident in the security of their elections, countering narratives suggesting that the systems citizens will use to cast their votes are somehow compromised. Along these lines, ODNI issued an election security update 15 days before Election Day, reassuring the public that foreign actors are unlikely to actually attempt to compromise the electoral process.

But the same ODNI update warns that “[t]he IC furthermore expects foreign actors to continue to conduct influence operations through inauguration denigrating U.S. democracy, including by calling into question the results of the election.” From what we’ve seen, it seems likely that the federal government, alongside local election officials, will stand ready to reassure voters. What happens, though, if claims about hacked votes originate with domestic political actors, as they did in 2020? If a second “Big Lie” about election fraud surfaces and gains traction within the Republican Party, will the government be confident enough to take a stand against it?

At the end of the day, ensuring the security and integrity of elections is not just a matter of physical resilience—it’s also a matter of trust. If foreign actors are planning to call the integrity of the election into question, as ODNI suggests, it is because they understand that to be an area ripe for exploitation domestically. The collaborations among federal, state, and local governments, the private sector, and civil society should be encouraged, not rolled back. Everyone has a stake in protecting the democratic system. Allowing those partnerships to be undermined by conspiracy theories and domestic political pressure only helps malicious actors attempting to further divide the country.


Quinta Jurecic is a fellow in Governance Studies at the Brookings Institution and a senior editor at Lawfare. She previously served as Lawfare's managing editor and as an editorial writer for the Washington Post.
Eugenia Lostri is a Senior Editor at Lawfare. Prior to joining Lawfare, she was an Associate Fellow at the Center for Strategic and International Studies (CSIS). She also worked for the Argentinian Secretariat for Strategic Affairs, and the City of Buenos Aires’ Undersecretary for International and Institutional Relations. She holds a law degree from the Universidad Católica Argentina, and an LLM in International Law from The Fletcher School of Law and Diplomacy.
