
Iran Hack Illuminates Long-Standing Trends—and Raises New Challenges

Renée DiResta
Monday, August 26, 2024, 10:50 AM

Iran’s sustained digital interference in U.S. elections now includes hack-and-leak tactics. Here’s how its strategy has evolved over time.

The Iranian flag. (Blondinrikard Fröberg, https://www.flickr.com/photos/blondinrikard/14450902921; CC BY 2.0, https://creativecommons.org/licenses/by-nc-nd/2.0/)


On July 22, someone named “Robert” reached out to Politico to share internal communications from a senior Trump campaign official. The campaign alleged that the source of the materials was a hack by Iran, a claim corroborated by Microsoft and Google reports and later confirmed by the FBI and other government agencies. A few days later, more details emerged: The hackers had targeted Roger Stone, who had previously encouraged the release of documents from a different state’s hack-and-leak operation in 2016, even exchanging direct messages with the hacker. After compromising Stone’s Hotmail account, the suspected Iranian hackers conducted a spear-phishing attack on current campaign officials. They also made similar, though unsuccessful, efforts to target the Harris campaign.

The irony is cinematic: The tactics that Stone and the Trump campaign once cheered have been deployed against them. Yet the hack is a reminder of the ongoing reality of foreign election interference—a challenge that the media and policymakers alike must reckon with regardless of the perpetrator or target. Understanding Iran’s evolving strategy is crucial: Though often overshadowed by Russia, Iran has become a persistent player in election interference. As the final stretch of the 2024 campaign approaches, it’s clear that influence operations are here to stay—U.S. adversaries are evolving, and policymakers and the media must respond accordingly.

State actors have long used all available tools to conduct influence operations, both overtly through state media and covertly via front media outlets and infiltrators. With the rise of social media, these efforts have become cheaper and more targeted, as anonymous accounts can easily reach specific audiences. As U.S. political discourse fractured into factional online battles, foreign actors seized the opportunity to exploit these divisions, not primarily by persuading their targets to believe something new, but by deepening and amplifying existing rifts.

Russia was one of the first state actors to deploy this strategy effectively. Recall the Internet Research Agency, the troll factory of deceased oligarch Yevgeny Prigozhin. It directly targeted the American public: spinning up dozens of front media websites, hundreds of Facebook and Instagram pages, and thousands of Twitter accounts to infiltrate communities and produce sustained divisive influence operations. This propaganda effort operated throughout the hotly contested 2016 presidential election, amplifying unwitting Americans, manipulating social movements, and occasionally managing to incite real-world protests to advance the Kremlin’s interests.

But the fake persona accounts, news websites, and agents provocateurs were only part of the action. The Russians also attempted to hack voting machines. And, perhaps most impactfully, a division of Russian military intelligence known as APT28—Fancy Bear—apparently hacked the Clinton campaign and the Democratic National Committee, reached out to journalists and influential figures like Roger Stone, and dumped documents on WikiLeaks. The first batch of campaign chairman John Podesta’s emails was leaked almost immediately after the release of Trump’s “grab ’em by the pussy” tape, shifting focus from that scandal. Media covered the emails in detail; in 2016, Politico set up a live blog. The raw emails also provided the foundation for Pizzagate, an inane conspiracy theory alleging that emails about ordering pizza were really coded messages about procuring underage children. Although the Russian efforts didn’t deliver Trump his victory, the hack-and-leak did influence major media news cycles, and the discovery of the Russian troll campaign undermined confidence in the integrity of the election among many on the left.

The recent hack by Iran does not represent a significant departure from state practice. Although it hasn’t drawn the same attention as Russia, Iran—well resourced and adept at state propaganda—has repeatedly used similar strategies with some creative twists. Ayatollah Khamenei prioritized digital manipulation early on; studies of Iranian capabilities describe “content promotion tactics” and “cyber battalions” active before 2017. The first formal attribution of a network of Iranian fake accounts was in August 2018: Facebook, acting on a tip from cybersecurity company FireEye, took down accounts linked to Iran’s state media outlet Press TV. These efforts mirrored Russia’s, featuring fake American personas and domains, but were smaller in scale and impact. The Iranian operation, Facebook noted, also included attempts “to hack people’s accounts and spread malware.”

In the years to follow, Iran conducted dozens of influence operations. By August 2024, Facebook reported that it had suspended 30 of these operations by Iran—a close second to Russia, at 39. (China came in third, with 11.) Many of the Iranian efforts on social media platforms focused on Middle East regional politics: undermining Saudi Arabia, boosting Palestinian rights, and positioning Iran as a bulwark against a neocolonialist West. In these regional operations, Iran worked to bolster a perception of the regime as a formidable force and a defender of Muslim values. But in the elections of 2020 and 2022, Iran attempted novel strategies to sow discord and sway U.S. voters in its preferred direction.

In 2020, a research effort that I helped to lead, the Election Integrity Partnership (EIP), received outreach from the NAACP noting that some of its members had received threatening emails claiming to be from the Proud Boys. Media reported on the broader effort, describing messages sent to Democrats that told them to “vote for Trump or else.” Some of the emails demanded that the target change their party registration, claiming that the sender was in possession of personal information, or had “gained access to the entire voting infrastructure.” An investigation revealed that the emails used spoofed addresses and compromised systems—they did not come from the Proud Boys. A video also circulated, claiming to show live-action footage of voter data being hacked; it, too, was a deception. The FBI and the Cybersecurity and Infrastructure Security Administration attributed the effort to Iran. Two Iranian nationals were subsequently indicted.

This wasn’t Iran’s only bold move: The indicted individuals also sent fake Proud Boys messages to Republican lawmakers and Trump campaign members, falsely claiming that the Democratic Party was going to exploit insecure voter registration websites to “edit mail-in ballots or even register non-existent voters.” Another Iranian effort (which also imitated the American far right) involved creating a hit list website, “Enemies of the People,” that published the names, addresses, and photos of 38 government and election officials who’d become the targets of right-wing conspiracy theories about Trump’s loss. And, of course, Iran operated social media accounts on Facebook and Twitter that pretended to be Americans, posting messages about elections, COVID-19, the Black Lives Matter protests, and other divisive issues. A declassified report about foreign threats to the 2020 election suggested that Iran was operating with the twin goals of sowing unrest and undermining President Trump.

By the 2022 election, Twitter (X) had changed hands. Elon Musk still had a team working to disrupt election interference, even as his fans increasingly decried such efforts as censorship. In 2022, the EIP investigated two Chinese, one Russian, and three Iranian networks operating on the platform. One Iranian network made efforts to chat with, endorse, and boost unwitting progressive-left candidates in multiple congressional and down-ballot races, involving itself in hashtag conversations dedicated to specific states and districts. It created a front civil advocacy organization to “sponsor” petitions and protests. One account in the network even apparently managed to become a high-karma moderator in r/Political_Revolution. In these efforts, Iran appeared to be working to boost candidates who it thought might be useful in advancing its preferred policy objectives—the hashtag #FreePalestine was present, as was content related to Afghanistan. The down-ballot activity was particularly interesting; if deliberate, it perhaps suggests a longer-term vision of cultivating a pipeline of aligned candidates.

Ultimately, this spate of election interference efforts, while at times quite novel, was not particularly impactful when it came to generating attention or engagement—which is why it has been interesting to see hack-and-leak operations in service to an election added to Iran’s tactical toolkit. Hacking has long been undertaken to obtain information that might benefit the attacker, whether financially or by yielding trade secrets or classified knowledge; indeed, the same Iranian hacking group is suspected to have targeted energy-related sites belonging to the state of Utah. Hack-and-leak efforts serve a different objective today: They are a demonstrably effective tool for attention capture. We might term this the “Found Documents” trope: In the movies, hacked documents usually reveal some sort of hidden scandal, secret evil government program, or nefarious cabal. In real life, the substance of most emails is anodyne and boring—the pizza orders of Clinton’s campaign, for example—but the fact that the messages were “secret” and illegally obtained transforms them into something fascinating. Freedom of Information Act trolls and people who write blogs for audiences of conspiracy theorists know this well: The mere existence of an email exchange can be spun into a scandal by “just asking questions.”

Which brings me to a second point about what’s new in this effort by Iran: The media reacted very differently. It did not run the material. This inspired considerable consternation among some audiences, particularly given the Trump campaign’s pivot from 2016’s “Russia, if you’re listening” to 2024’s “Any media or news outlet reprinting documents or internal communications are doing the bidding of America’s enemies.”

The strategic shift in coverage is no accident. After 2016, journalism scholars, technology companies, and media outlets made a concerted effort to reevaluate the ethics of how to cover hacked materials. Some suggested frameworks for guidance: “Focus on the why in addition to the what.” Don’t bulk-dump emails. Determine what is newsworthy and contextualize it. Report the story of the hack or disinformation campaign itself. Be aware of adversarial efforts to redirect focus from one newsworthy event to another.

These calls for a shift in procedure, however, became the target of not only outrage but a full-blown conspiracy theory among Trump’s political allies when platforms and journalists exercised caution (at times too much of it) as the contents of Hunter Biden’s laptop emerged. Breathless congressional testimony before Jim Jordan’s Select Subcommittee on the Weaponization of the Federal Government insinuated that academics who had proposed these new policies, and organizations that had “wargamed” what a 2020 hack-and-leak operation might look like, were actually part of a vast “deep state” plot to preemptively suppress the contents of the laptop. To date, Jim Jordan et al. have not made this same argument about media outlets electing not to release the Trump documents.

With Russia likely favoring Trump, and Iran seemingly leaning toward the progressive left (while still targeting Harris—perhaps opportunistically), we should anticipate more influence operations targeting all sides, and perhaps further efforts to galvanize protests. These operations will not necessarily be impactful, but social media platforms, researchers, and the government must continue to disrupt them. Other technology providers are finding themselves in a position similar to that of social media companies: OpenAI has reported that state actors, including Iran, have begun to misuse its ChatGPT platform (ironically, one domain Iran was creating articles for was a site we’d analyzed in 2022). Hack-and-leak efforts, as well as more novel methods, will continue, and certain influencers or newsletters will publish the material—whether for ideological reasons or because the temptation of an exclusive is overwhelming.

It’s important to recognize that this is not a partisan issue: Regardless of where one stands politically, all Americans should agree that foreign interference in our elections is unacceptable. The Trump hack wasn’t a “win”—and given that the Harris campaign was also targeted, it should be clear that Iran is not acting to help Democrats. Iran—like Russia, an authoritarian and oppressive regime—is acting to help itself, and it is willing to incite social discord and potentially even political violence within the United States if necessary. Regardless of the actor, target, or methodology, the objective of foreign interference in our elections is always to weaken and destabilize our democracy.


Renée DiResta is the technical research manager at the Stanford Internet Observatory, a cross-disciplinary program of research, teaching, and policy engagement for the study of abuse in current information technologies. Her work examines the spread of narratives across social and media networks; how distinct actor types leverage the information ecosystem to exert influence; and how policy, education, and design responses can be used to mitigate manipulation.
