A Government Practitioner’s Guide to Countering Online Foreign Covert Influence

Gabriel Band
Thursday, July 22, 2021, 1:35 PM

How can the practitioner actually counter online foreign covert influence operations?

Kremlin, 2012. (Larry Koester, https://flic.kr/p/srp8yT; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)

Emerging from the fog of two decades dominated by counterterrorism, another national security wind is sweeping through public discourse: online covert influence operations. Like terrorism, covert influence is a threat that can be mitigated but never eliminated; both require persistent countermeasures; responses to both are often perceived as inflating the threat; and the solutions typically offered are broad and abstract. Yet counterterrorism has discernible tactics. What is the corollary for countering foreign covert influence?

Discourse surrounding “counterinfluence” often focuses on broad yet critical policies like advancing media literacy, but right now, today, how can the practitioner actually counter online foreign covert influence operations? There are specific steps and strategies to effectively neutralize the threat, and that is the scope of this exploration.

The Online Threat

First, an overview of key threat trends the online counterinfluence practitioner faces. Foreign adversaries covertly weaponize information to alter opinions, exacerbate tensions, or simply create enough noise to inspire doubt in all sources of information. They do this in three primary ways: amplifying “misinformation” (unintentionally false information) already circulating in the target environment; introducing “disinformation” (intentionally false narratives weaponized to inflict harm); and exploiting “malinformation” (truthful information surfaced to achieve a disruptive goal, such as a “hack and leak”).

Personas and Platforms

While online covert influence is often characterized as inauthentic social media accounts—bots or trolls/personas spewing noise into the void—this is an incomplete mental model, and whether such tactics significantly impact target audiences is an open question, especially as social media companies become more adept at disrupting the behavior. Adversaries still employ bots and trolls, including as components of an operation, but researchers have begun to scrutinize—and ought to continue scrutinizing—websites that engage in “narrative laundering,” such as those revealed by the State Department’s Global Engagement Center. Nefarious actors, including foreign intelligence services and their proxies, push narratives to target audiences through networks of sites that coordinate—whether wittingly, unwittingly, or somewhere in between—and amplify each other’s content, often referencing or linking back to each other’s platforms. This technique obfuscates the original foreign source of the content (deceiving the reader into believing the content reflects authentic voices) and creates a false perception of credibility through the sheer volume of sites promoting the same views (also referred to as “boosterism”), a behavior studied and detailed extensively by Renee DiResta and Shelby Grossman at the Stanford Internet Observatory.

These ecosystems can increase the reach of foreign operations, as sites with greater readership can drive target audiences to lower-visibility sites, and they are more resilient to private-sector actions: independent websites can endure while social media companies play whack-a-mole removing accounts that violate terms of service against “coordinated inauthentic behavior.” Online platforms can also serve as components of covert influence operations executed by an adversary’s human assets—hosting content that actors attempt to inject into target audiences—or as stand-alone operations entirely, often making inflammatory or alarmist claims to sow discord.
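To make the mutual-amplification pattern concrete, here is a minimal sketch of how an analyst might surface candidate narrative-laundering clusters from crawl data. Everything specific in it is an assumption for illustration: the "links.csv" input and its column names are invented, the thresholds are arbitrary, and the graph handling uses the open-source networkx library. A cluster flagged this way would be a lead for human review (content overlap, shared infrastructure, timing analysis), not an attribution.

```python
# Minimal sketch: flag clusters of sites that repeatedly link to one
# another, the mutual-amplification signature of narrative laundering.
# Assumes a hypothetical crawl export "links.csv" with columns
# source_site,target_site (one row per observed hyperlink).
import csv

import networkx as nx  # open-source graph-analysis library

G = nx.DiGraph()
with open("links.csv", newline="") as f:
    for row in csv.DictReader(f):
        src, dst = row["source_site"], row["target_site"]
        if src == dst:
            continue
        # Count repeated links; a single stray hyperlink is weak evidence.
        weight = G[src][dst]["weight"] + 1 if G.has_edge(src, dst) else 1
        G.add_edge(src, dst, weight=weight)

MIN_LINKS = 5  # illustrative threshold, not an operational standard

# Keep only pairs of sites that link to each other repeatedly, then
# inspect the clusters that remain.
mutual = nx.Graph()
for u, v, data in G.edges(data=True):
    if (G.has_edge(v, u) and data["weight"] >= MIN_LINKS
            and G[v][u]["weight"] >= MIN_LINKS):
        mutual.add_edge(u, v)

for cluster in nx.connected_components(mutual):
    if len(cluster) >= 3:  # three or more sites boosting one another
        print("candidate laundering cluster:", sorted(cluster))
```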

“Cyber” as a Tool for Influence

Adversaries may also leverage real or alleged cyber operations to influence audiences, as demonstrated by Iran in the 2020 U.S. elections. In what has been termed a “perception hack,” foreign actors may compromise sensitive systems without damaging or manipulating any content or processes that rely on the targeted infrastructure, but with the intent to have the hack discovered. The simple fact of the compromise can cause the target audience to doubt the integrity of the outcomes relying on the compromised systems. Adversaries can use such cyber compromises to fuel narratives that undermine public confidence, and they can launch disinformation operations making false allegations of such compromises to achieve the same effect. One can argue that a cyberattack—an operation that actually disrupts or degrades infrastructure and its associated processes—conducted to influence a target audience, rather than solely to achieve an outcome based on the disruption, is itself an online influence operation (and can be covert).

Prioritizing the Threats

Determining the relative priority of online covert threats can be difficult, since these operations inherently employ obfuscation, and the relevant data that drive assessments are often nonpublic and owned by private companies. But there are several parameters the counterinfluence practitioner can use to guide limited resources; a rough scoring sketch follows this paragraph. First, target audience and degree of reach: Is the operation designed to target the population of concern, and has it broken into public discourse? The latter can be difficult to measure, but metrics on social media engagement, website traffic, and collaboration with industry and academia can help. Be cautious in assessing audience reach, since an operation with low visibility can be amplified by a more popular online actor, as discussed previously. Second, the use of citizens of the target audience to wittingly or unwittingly execute the operation: This tactic makes foreign-launched operations harder to disrupt (and even to identify), given speech protections in a democratic society, and may lend the operation perceived credibility by exploiting authentic voices from the target community. Other parameters to consider include how professional the online operation appears; the likelihood that the operation may achieve virality through amplification by online influencers or elected officials; and the potential for the operation to weaponize truthful information, since that may garner significant attention and cause damage.
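To make the triage concrete, the parameters above can be collapsed into a crude scoring rubric. The sketch below is hypothetical: the field names, weights, and any cutoff are invented for illustration and would need to be grounded in an organization’s own threat model and data.

```python
# Hypothetical triage rubric for ranking covert influence operations.
# Every field name, weight, and score here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Operation:
    targets_population_of_concern: bool  # aimed at the audience we protect?
    reached_public_discourse: bool       # engagement/traffic show breakout?
    uses_domestic_cutouts: bool          # co-opts citizens, witting or not
    professional_production: bool        # polished sites, fluent copy
    viral_amplification_risk: bool       # influencers/officials may boost it
    weaponizes_true_information: bool    # e.g., hack-and-leak material

def priority_score(op: Operation) -> int:
    """Weight the parameters discussed above; higher means act sooner."""
    weights = [
        (op.targets_population_of_concern, 3),
        (op.reached_public_discourse, 3),
        (op.uses_domestic_cutouts, 2),
        (op.viral_amplification_risk, 2),
        (op.weaponizes_true_information, 2),
        (op.professional_production, 1),
    ]
    return sum(w for flag, w in weights if flag)

op = Operation(True, False, True, True, False, False)
print(priority_score(op))  # 6; compare against a locally chosen threshold
```

A binary rubric like this is deliberately simplistic; in practice each parameter would be a graded judgment backed by engagement metrics and analyst review.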

Counterinfluence Toolbox

As Thomas Rid’s comprehensive history of Soviet “active measures” demonstrates, the covert influence tactics seen today are far from new, but the online environment necessitates new countermeasures. Disrupting a foreign covert influence operation usually requires layered operations from multiple action arms (across government and the private sector), as part of an intentionally phased campaign to degrade and defend against the threat. As such, effective coordination requires built-in means of consistent dialogue, including interagency liaisons across government partners to facilitate collaboration and agility. The countermeasures typically comprise exposure, disruption, and defensive tactics, with exposure serving as a key pillar that both represents and enables the other two. In my experience, the key ingredient is an action-oriented approach.

Exposure

Exposing covert influence prevents the actor from operating in the shadows. Once the government or private sector unveils the threat—through a variety of different exposure tactics—the public can choose to disengage from the online activity, and the private sector can enable defenses and prevent the foreign actor from using commercial platforms for the operation. In the face of this pressure, the actor may cease the operation or moderate incendiary narratives to avoid further scrutiny. A culture of consistent exposure may also deter adversaries from engaging in such behavior in the future, for several reasons. First, this culture messages to existing and prospective influence actors that these operations are easily disrupted, despite the significant time and resources poured into them. Second, the cumulative effects of public shaming can tarnish a threat actor’s image within its own government and create diplomatic rifts that may alter the overall risk calculus. Executing exposure by focusing attention on the threat actor rather than on the operation can maximize this latter benefit while mitigating the attention funneled to weaponized narratives. Appropriately designed exposure—especially when it highlights an operation’s failure—can lead to these constructive outcomes, which outweigh the cons of potentially driving ephemeral attention to the covert influence operation itself.

Exposure: Gravity of Threat

Critically, counterinfluence exposure must be commensurate with the gravity of the threat, and this is where the practitioner must select the appropriate strategy. The most significant foreign influence operations—those that successfully reach the target audience and impact public discourse—may warrant an authoritative government statement making a public splash.

For less significant threats, government or private-sector actors should expose the activity in a way that mitigates attention to the operation and its associated narratives. The counterinfluence action arm can accomplish this by exposing multiple operations at once, thereby spreading out the attention to each individual operation; focusing the spotlight on the actor behind the operation rather than on the operation or narratives; or quietly exposing the activity through private industry notifications, defensive briefings to the foreign actor’s prospective co-optees, and diplomatic channels to the foreign government. Such “quiet” exposure mitigates the attention driven to covert influence operations, and it carries numerous benefits, including advancing private-sector, academic, and independent investigative analysis; disrupting a foreign actor’s efforts to co-opt citizens as operational tools; and possibly prompting the adversary to cease the activity or moderate weaponized narratives to avoid scrutiny.

Foreign influence operations are often ineffective at reaching and engaging the target audience in impactful ways. Inflating the threat may encourage the adversary to continue the malign behavior, deflect public attention from potentially greater threats such as domestic disinformation, and evoke public concerns of politicization. The exposure strategy must be responsibly tailored and should highlight the adversary’s failure and ineffectiveness when appropriate.

Exposure: Government or Private Sector

There are varying benefits to the government and the private sector serving as vehicles for exposure, and the strategy to counter a specific operation should factor in which may be more appropriate. Government exposure may (a) lend additional credibility, as the government may be perceived as having access to authoritative evidence, (b) be coupled with legal ramifications that have follow-on disruptive effects on the operation and on individuals supporting the operation, and (c) generate significant public attention, which may be warranted to better inform the public depending on the threat level.

Government exposure offers another critical ingredient for disrupting covert influence: detailed attribution of the foreign actor behind the activity. Such attribution can go beyond a “foreign nation-state” or a specific foreign country, down to the specific department of an intelligence service, for example. Detailed attribution may be hard to establish without a government’s unique sources, and it lends further credibility to the exposure, imposes costs on foreign actors that may prefer to operate in the shadows, and supports the open-source community’s analysis and understanding of the threat picture.

In some cases, private industry exposure may be the most appropriate vehicle. When the threat is not great enough to warrant significant public attention, private-sector exposure may make a smaller splash in the information environment; and when the influence operation is politically charged, the public may deem the exposure more credible from the private sector, since the government may be perceived as playing partisan politics. Of course, when building a counterinfluence strategy, government entities cannot rely on private-sector actions, since the latter operate independently from the government in free societies. However, bidirectional communication can enable each side to make informed decisions.

When crafting an exposure plan, the acting entity should explain why knowledge of the foreign adversary activity matters. For private-sector entities, simply being transparent about actions taken against abusive behavior on their platforms contributes to trusting relationships with their customers. But for government entities, the exposure plan should include some hook into current events: What is the threat actor doing, and why is the malign covert influence a public threat? Exposure should make clear why the disruption is relevant to the audience, rather than simply publicizing the operation.

Exposure is sometimes criticized as a tactic because it drives attention to an operation that may be inconsequential. While there may be risks associated with this countermeasure, they can be mitigated, and the pros of appropriately tailored exposure usually outweigh the cons.

Disruptive and Defensive Countermeasures

Governments and industry possess numerous levers to disrupt and defend against foreign covert influence operations and the actors behind them. Government action arms can impose legal ramifications for domestic-based support to foreign operations (such as through the Foreign Agents Registration Act); impose punitive financial costs to disrupt the foreign actor’s resources; collaborate with allies to promote disruptive actions against threats emanating from their soil if those threats violate local laws; and execute other actions—possibly asymmetric in nature—against the foreign adversary as a punitive response to the covert influence. This latter option works as a countermeasure only if it is designed and executed so that the adversary understands the actions are a response to its malign operations; otherwise, the adversary may miss the message or draw the wrong one.

The private sector possesses obvious powers to disrupt and defend against foreign inauthentic online activity. For example, social media companies can strictly enforce terms of service against “coordinated inauthentic behavior,” aggressively removing accounts used for covert influence operations on their platforms (“deplatforming”); algorithmic adjustments to decrease virality, and other technical solutions such as link bans, can reduce the ability of these operations to reach target audiences; web hosting companies can rescind and deny services to foreign covert influence actors and their online operations for terms of service violations, as they have in other contexts (unrelated to foreign influence); and companies can tag accounts or pages with a foreign government label, in an enduring way, to better inform audiences about what they are seeing.
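As a toy illustration of one of these levers, a link ban amounts to refusing to propagate posts that reference denylisted domains. The sketch below rests on assumptions: the banned domain is invented, and real platforms would also resolve URL shorteners and redirects and maintain appeals processes. It sketches the idea, not any platform’s actual implementation.

```python
# Toy sketch of a link ban: refuse to propagate posts that reference
# denylisted domains. Real systems would also resolve URL shorteners
# and redirects before matching.
import re
from urllib.parse import urlparse

BANNED_DOMAINS = {"laundering-example.com"}  # invented entry

URL_RE = re.compile(r"https?://\S+")

def allowed(post_text: str) -> bool:
    """Return False if the post links to a banned domain or subdomain."""
    for url in URL_RE.findall(post_text):
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in BANNED_DOMAINS):
            return False
    return True

print(allowed("read this: https://laundering-example.com/story"))  # False
print(allowed("benign link: https://example.org/news"))            # True
```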

In the run-up to the 2020 elections, numerous government agencies, including the Office of the Director of National Intelligence, the Department of Homeland Security, the FBI, the Treasury Department, and the State Department, exposed Russian and Iranian covert online influence operations, including detailed attribution of the foreign actors involved. Notably, in response to influence operations targeting the 2020 elections, the Treasury Department sanctioned Russian covert influence platforms that engage in narrative laundering, spreading the exposure—with detailed attribution—across multiple actors. These exposure actions enabled subsequent defensive and disruptive effects: disrupting financial resources as a result of sanctions; deterring witting or unwitting contributors from continuing to write for covert influence websites; and interrupting the narrative laundering network, as other sites in the ecosystem distanced themselves from the exposed and sanctioned foreign operations. In addition, social media companies such as Facebook and Twitter actively exposed foreign influence actors and implemented defensive measures, sometimes with the assistance of media, academia, and other organizations such as Graphika and DFRLab. These private-sector exposure actions were coupled with comprehensive deplatforming, link banning, and labeling, degrading the tools that the foreign actors wielded to covertly reach target audiences. The totality of such actions can cause the actor to cease the operation, especially if the disruption occurs early in the operation’s development.

Key Takeaways for Counterinfluence Strategy

In the counterinfluence space, several key principles increase effectiveness. First, the counterinfluence discipline, both in and out of government, should be action oriented. This mindset shifts the approach from describing the problem to countering it, while messaging to adversaries that covert manipulation is intolerable and subject to punitive countermeasures; as discussed, not all threats merit the same attention, but responses can be prioritized and tailored.

Second, exposure should highlight failure and ineffectiveness when appropriate. Many foreign online operations attain far less reach than desired and thus fail to meaningfully impact the target audience, even in often-cited cases of online covert influence such as the Internet Research Agency’s trolling activity in the 2016 elections. Highlighting the adversary’s failure may indeed be the most accurate characterization, but it may also degrade the actors’ standing within their own organizations or countries.

Third, the practitioner should strive to build measures of performance and effectiveness for counterinfluence operations. These assess the degree to which a countermeasure achieved its intended effect and goal, respectively, and they enable the practitioner to adjust countermeasures as necessary; a simple before/after sketch follows this paragraph. Did the countermeasure moderate or disrupt the adversary’s behavior? Did it degrade the actor’s ability to reach target audiences? While these measures can be challenging to ascertain, depending on the nature of the threat and the response, they are critical for assessing which tactics work and for identifying evolving adversary behavior; still, an inability to build these measures into a strategy should not be a reason to take no action.
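As one assumed way to frame such a measure: compare engagement with the operation’s content before and after the countermeasure. The sketch below invents its inputs, a mapping of dates to daily engagement counts and a 30-day window, purely to illustrate the before/after framing; real assessments would control for seasonality and for the operation simply migrating elsewhere.

```python
# Minimal sketch of a measure of performance: did engagement with the
# operation's content drop after the countermeasure? Assumes a
# hypothetical mapping of ISO dates to daily engagement counts.
from datetime import date, timedelta

def mean_engagement(series: dict[str, int], start: date, end: date) -> float:
    """Average daily engagement over [start, end); missing days count as 0."""
    days = (end - start).days
    total = sum(series.get((start + timedelta(d)).isoformat(), 0)
                for d in range(days))
    return total / days

def performance_delta(series: dict[str, int], action_date: date,
                      window: int = 30) -> float:
    """Fractional change in mean engagement after vs. before the action."""
    before = mean_engagement(series, action_date - timedelta(window), action_date)
    after = mean_engagement(series, action_date, action_date + timedelta(window))
    return (after - before) / before if before else float("nan")

# Invented example: 100 engagements/day before a takedown, 60/day after.
series = {(date(2021, 7, 1) + timedelta(d)).isoformat(): (100 if d < 30 else 60)
          for d in range(60)}
print(round(performance_delta(series, date(2021, 7, 31)), 2))  # -0.4
```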

Fourth, counterinfluence requires layered operations, likely from multiple entities, intentionally phased to build on each other. The practitioner should not assume the threat is neutralized after one countermeasure, nor be discouraged from continued efforts when one countermeasure is ineffective. The practitioner should approach the problem with a campaign mindset: Assess the relative priority of the threat, determine appropriate countermeasures, build the coalition of relevant action arms, and execute the campaign in an intentional way for the countermeasures to augment and enhance each other’s effects, assessing success along the way and maintaining pressure as necessary.

Fifth, government-academic partnerships should be promoted. Both sides of the equation have unique insights and skills to bring to the table, and effective partnership between government and academia leads to better insight on the threats and more informed countermeasures. While government actors may have unique methods to generate insight into adversaries, academia reigns supreme in understanding the open-source information environment and identifying new tactics.

Lastly, the practitioner should prioritize identifying and disrupting foreign adversary attempts to co-opt witting or unwitting citizens of the target audience to execute the covert influence operation; as discussed previously, using citizens of a free and open society poses challenges for identifying and disrupting the operation.

Unresolved Questions

The aforementioned strategies nevertheless reveal challenges—many of which other authors have discussed in depth. First, private-industry actions can exacerbate the fracturing of the online space, pushing actors and targets to online environments that afford less visible speech and more anonymous activity, such as 4chan, Gab, Parler, and encrypted apps. This fracturing may be unhealthy for society in its own right, and it also confounds future efforts to identify and counter covert operations. Second, since the “online highway” moves quickly and information is ephemeral, exposure must be enduring and discoverable. How will new users who stumble onto a covert influence platform be aware of prior public service announcements? Lastly, despite the debate about Section 230 reform, regulatory efforts may miss the core design features of Big Tech platforms—which are built for scale, promote virality, and permit inauthenticity—that allow these operations to reach audiences. Absent a change in the algorithmic approach to content spread, efforts to meaningfully disrupt the covert adversary may keep falling short.

Final Thoughts

Many of the above countermeasures apply not just to online influence operations but to all sorts of malign adversary activity. The point is that there are specific strategies to effectively combat foreign covert influence—the discipline just requires tactics tailored to the threat and an inclination toward taking action to neutralize it. Ignoring this adversary behavior risks a relentless threat actor burrowing into a vulnerable audience to wage low-level, corrosive information operations that amplify or alter perceptions, and sometimes go viral. It is the government’s job to protect its people from foreign threats, and it is incumbent on private industry to prevent its platforms from serving as nation-state weapons. Appropriate exposure and other defensive and disruptive measures can neutralize these foreign asymmetric warfare tactics. As information travels ever faster, through a growing number of suspect sources, to increasingly divided populations, the opportunity space for foreign actors to exploit divisions with weaponized narratives is growing.


Gabriel Band worked on 2020 election security at the U.S. Department of Defense. All views and opinions expressed are his own and do not necessarily represent the views of the Department of Defense or the U.S. government.
