Developing Responses to Cyber-Enabled Information Warfare and Influence Operations
This article defines information warfare and influence operations as the deliberate use of information by one party on an adversary population to confuse, mislead and ultimately influence the actions that the targeted population takes. Information warfare and influence operations are a hostile activity, or at least an activity conducted between two parties whose interests are not well-aligned. At the same time, information warfare and influence operations do not constitute warfare in the Clausewitzian sense (nor in any sense recognized under the U.N. Charter or the laws of war); this accounts for the use of the term “influence.” Information warfare and influence operations have connotations of soft power: propaganda, persuasion, cultural and social forces, confusion and deception. If the patron saint of traditional warfare is Clausewitz, Sun Tzu, who wrote that “The supreme art of war is to subdue the enemy without fighting,” is the comparable figure of importance to information warfare and influence operations. By definition, information warfare and influence operations take place without kinetic violence, and they are necessarily below any threshold of armed conflict.
In the context of U.S. military doctrine, information warfare and influence operations are most closely related to information operations or military information support operations. But the primary focus within U.S. military doctrine on these operations is tactical. A canonical example would be psychological operations such as leafleting or propaganda broadcasts intended to induce adversary troops to surrender or to weaken their will to fight.
By contrast, the focus of this article is information warfare and influence operations that seek to affect entire national populations. A first lesson is that there are no noncombatants—every single individual in the adversary’s population is a legitimate target. Moreover, cyber-enabled information warfare and influence operations in the 21st century strongly leverage the capabilities of modern computing and communications technologies such as the Internet to achieve their effects.
A broad theory of cyber-enabled information warfare and influence operations is presented in a paper that I co-authored with Jaclyn Kerr. This article focuses on the weakest part of that paper—the paucity of remedies for information warfare and influence operations (IWIO)—and proposes a research agenda for such remedies. (IWIO should for the remainder of this article be understood to mean cyber-enabled information warfare and influence operations.)
For a nation defending against an IWIO onslaught, it is helpful to consider three separate but interrelated areas of focus: detecting information warfare and influence operations in progress; reducing the impact of IWIO on the targeted nation, such as through defensive measures; and U.S. use of information warfare and influence operations against adversaries in response to their use of IWIO against the United States.
Detection
A necessary but not sufficient condition for responding to IWIO is knowing that an adversary is conducting such a campaign. When an adversary uses kinetic weapons, a country usually knows that it has been attacked, at least if the kinetic attack has caused significant death or destruction. A cyber campaign may not be detectable. The internal workings of computers are invisible to the naked eye, and a malfunctioning computer, even when noticed, may be malfunctioning for reasons other than hostile cyber activities (such as user error). A successful cyber campaign may become noticeable if its effects were intended to be noticed, even if the cause of those effects remains hidden—such would be the case if the cyber campaign sought to cause physical damage. But a successful cyber campaign may not be noticeable if its effects were intended to be kept secret (e.g., if espionage were the goal of the cyber campaign).
An adversary’s IWIO campaign—if successful—is likely to be invisible, because a primary goal of IWIO is to persuade the target’s population that its desires and preferences are well-aligned with those of the adversary. That is, the adversary seeks through manipulation to turn much of the target’s population into unwitting accomplices: They serve the adversary’s interests but do not know that they are being duped.
Of course, societal preferences evolve, and not all changes in societal orientation and preference are the result of information warfare and influence operations. But IWIO is a concerted effort by a foreign adversary (state or non-state) to alter indigenously driven evolutionary processes and thereby gain greater influence over the target society’s destiny.
Accordingly, the identification of a foreign hand in such efforts is a central aspect of detecting an IWIO campaign in progress. One element of such detection is recognizing parties that might have something to gain from conducting such campaigns. Mere recognition of who gains is not evidence that a party is undertaking an IWIO campaign, but a party that does not stand to gain from such a campaign is unlikely to be involved in one.
A second signal could be the detection and identification of automated IWIO weapons in use. For example, the rapid emergence of large numbers of automated social chat bots promulgating similar political messages could signal the start of a concerted campaign, since such bots merely amplify existing discourse rather than contributing new content. Researchers at Indiana University and the University of Southern California have developed a mechanism to distinguish trending memes that emerge organically from those that have been promoted through advertising, and research continues on this problem.
More generally, one might imagine that some combination of volume (messages per day), content, platform and so on could identify, with high probability, automated IWIO weapons such as chat bots carrying divisive or inflammatory messaging. A related class of research could point to ways of identifying the national affiliations of the parties operating such bots. The rapid emergence of automated IWIO weapons is not a definitive signal in itself but could be suggestive of an IWIO campaign. Again, this topic is the focus of some researchers today.
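To make the idea concrete, the following is a minimal illustrative sketch, in Python, of how volume-, content- and timing-based signals might be combined into a rough bot-likelihood score. The feature names and thresholds are assumptions chosen for illustration and are not drawn from any deployed detection system.

```python
# Minimal sketch of a heuristic bot-likelihood score. The features and
# thresholds are illustrative assumptions, not any platform's actual
# detection logic.
from dataclasses import dataclass


@dataclass
class AccountActivity:
    messages_per_day: float         # average posting volume
    duplicate_ratio: float          # fraction of posts that are near-duplicates (0 to 1)
    interarrival_stddev_sec: float  # variability of time between posts
    account_age_days: float


def bot_likelihood(a: AccountActivity) -> float:
    """Return a rough 0-to-1 score; higher suggests automated amplification."""
    score = 0.0
    if a.messages_per_day > 100:        # humans rarely sustain this volume
        score += 0.35
    if a.duplicate_ratio > 0.6:         # mostly reposting near-identical content
        score += 0.35
    if a.interarrival_stddev_sec < 30:  # suspiciously regular posting rhythm
        score += 0.20
    if a.account_age_days < 14:         # newly created account
        score += 0.10
    return min(score, 1.0)


suspect = AccountActivity(messages_per_day=250, duplicate_ratio=0.8,
                          interarrival_stddev_sec=12, account_age_days=5)
print(f"bot likelihood: {bot_likelihood(suspect):.2f}")  # prints 1.00
```

Real detection systems would rely on far richer features (network structure, coordination patterns, linguistic cues), but even this crude combination shows why volume, repetition and timing are useful signals.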
A third element of an IWIO campaign would be found in efforts to undermine the legitimacy of institutions that provide societal stability and continuity. Under normal circumstances, citizens argue over politics and the meaning of various events. But under IWIO attack, citizens do not necessarily agree on the events that have transpired—each side may well have its own version of the facts to drive its narratives and its own logic and paradigms to frame those narratives. Recognition that legitimizing institutions are under attack is an element of detection. Such institutions often include media outlets that adhere to professional journalistic standards and ethics, as well as historically respected universities.
Coordination among intelligence-gathering agencies is also likely to improve capabilities for detecting IWIO campaigns. Identification of similar tactics, technologies, attack origins and content across different suspected venues of conflict may provide early warnings of an impending IWIO campaign in yet another venue.
Of course, IWIO campaigns are unlikely to be accompanied by strong and unambiguous indications of their presence, especially if—as would be expected—the originators of such campaigns wish to leave no trace of their identity. Thus, the subjective judgments of intelligence analysts and the intelligence community will have substantial influence on whether a campaign is detected. In part because of this subjectivity, any determination that the nation is under IWIO attack will have a strong political dimension. Political considerations are not supposed to influence intelligence judgments, but it would be naive to believe that the likelihood of such influence is zero. Such concerns can be ameliorated only by the development of relatively nonpolitical indicators of attack; it follows that research intended to develop these indicators should be one focus of counter-IWIO research.
Defense
Any useful defense against information warfare and influence operations, even a partial defense, has to start with this fundamental fact: Although the volume and velocity of information have increased by orders of magnitude in the past few decades, the architecture of the human mind has not changed appreciably in the last few thousand years, and human beings have the same cognitive and perceptual limitations that they have always had.
Possible defensive measures against IWIO fall into two basic categories: measures to help people resist the operation of IWIO weapons targeted at them; and measures to degrade, disrupt or expose an adversary’s arsenal of IWIO weapons as they are being used against a target population.
Helping humans resist IWIO
The subfield of social psychology that focuses on how people make intuitive judgments was pioneered by Amos Tversky and Daniel Kahneman. Kahneman’s 2011 book, “Thinking, Fast and Slow,” provides an introduction to the field’s findings and conclusions, while a 2002 book edited by Thomas Gilovich, Dale Griffin and Kahneman entitled “Heuristics and Biases: The Psychology of Intuitive Judgment” provides a scholarly review of the field. The one-line summary of the field’s conclusions is that humans are subject to a variety of systematic cognitive and emotional biases that often distort their ability to think rationally and clearly.
Lin and Kerr argue that these biases are at the root of human vulnerabilities to IWIO. If this is true, understanding these biases may be a good starting point for developing techniques for helping individuals resist IWIO while they are exposed to such campaigns or thereafter.
One approach to remediating the effects of such biases is to find ways that make it easier for people to engage their capabilities for rational thought—an approach often described in the psychological literature as “debiasing.” For example, it may be possible to “inoculate” target audiences against fake news. One form of such inoculation, described by John Banas and Stephen Rains, consists of delivering an initial message together with a preemptive flagging of the false claims that are likely to follow and an explicit refutation of those anticipated responses. Going somewhat further, Troy Campbell, Lauren Griffin and Annie Neimand argue in their 2017 paper, “Persuasion in a ‘Post-Truth’ World,” that messaging must do more than simply relay factual information. These researchers argue that messaging must also present information in ways that do not threaten the core beliefs and values of the intended audiences, use storytelling to bypass the audience’s mental defenses, and demonstrate how the audience’s core beliefs remain compatible with new ways of seeing a problem.
Tools that make it easier to ascertain the truthfulness or falsity of claims made online (whether by traditional media or IWIO perpetrators) are unlikely to impact the thought processes of hard-core partisans but may facilitate more rational thought for individuals who have not yet become impervious to reason and fact. Along these lines, researchers at Indiana University have provided an example of tools to support computational fact-checking that help humans to rapidly assess the veracity of dubious claims.
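As a purely illustrative sketch of the general idea behind graph-based computational fact-checking (scoring a claimed relationship between two entities by how closely they are connected in a knowledge graph), consider the toy example below. The graph and scoring rule are assumptions invented for illustration, not the Indiana University tool itself.

```python
# Toy sketch of graph-based fact-checking: score a claimed link between two
# entities by their proximity in a small, hand-built knowledge graph. This is
# an assumed illustration of the general approach, not an actual tool.
import networkx as nx

kg = nx.Graph()  # nodes are entities; edges are accepted facts
kg.add_edges_from([
    ("Paris", "France"),
    ("France", "European Union"),
    ("Tokyo", "Japan"),
])


def claim_plausibility(subject: str, obj: str) -> float:
    """Crude 0-to-1 score: entities connected by shorter paths score higher."""
    if subject not in kg or obj not in kg:
        return 0.0
    try:
        distance = nx.shortest_path_length(kg, subject, obj)
    except nx.NetworkXNoPath:
        return 0.0
    return 1.0 / (1.0 + distance)


print(claim_plausibility("Paris", "European Union"))  # connected via France, about 0.33
print(claim_plausibility("Paris", "Japan"))           # no path in the graph, 0.0
```

The point of such tools is not to render verdicts automatically but to give human readers a quick, transparent signal about whether a dubious claim is even consistent with an existing body of accepted facts.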
The second general approach is to find ways to use these biases to counteract the effects of an adversary’s IWIO weapons. For example, Paul Slovic and others have described the “affect heuristic” (see here and here), which drives people to make decisions and draw conclusions based on how they feel about a situation. When a person’s feelings toward an activity are favorable, the affect heuristic predisposes the person to judge the risks of the activity as low and the benefits as high; when the person’s feelings toward the activity are unfavorable, the opposite judgment occurs—the risks are seen as high and the benefits as low. Slovic has suggested that one way to combat an adversary’s exploitation of the affect heuristic through positive feelings about a situation is to create a negative feeling about the same situation. Slovic speculates (in a personal conversation with me) that individuals would then have to reconcile two contradictory feelings and thus might shift to a more deliberative mode of thought; this question is, of course, researchable.
It is also possible to take biases into account when formulating a response strategy. For example, the repetition of false statements often increases belief in those falsehoods, even when such repetition occurs in the context of refuting the statements. This is why social scientists often advise fact-checkers to emphasize truth (such as saying “Obama is Christian”) and to downplay rather than emphasize a false statement (that is, refrain from saying something like “Report that ‘Obama is Muslim’ was faked”).
A second example might involve responding to commentators who point to a doubling of the likelihood that a bad event will occur in a given time frame (that is, a probability p becoming 2p). But if p is small in magnitude, the likelihood of the bad event not happening, or 1-p, is not very different from the quantity 1-2p. Emphasizing the latter point may prove more effective in political discourse than arguing about the consequences of the former. A claim that the risk of failure has doubled (when the original likelihood of failure is 1 percent) simply means that the chances of success are still very good (going from 99 percent to 98 percent). Again, the value of this type of reframing is researchable.
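The arithmetic behind this reframing is simple enough to show directly. The short sketch below, using the 1 percent baseline assumed in the example above, merely prints the two presentations of the same numbers.

```python
# Worked illustration of the reframing described above: a "doubled risk" of
# failure can also be presented as a nearly unchanged chance of success.
# The 1 percent baseline is the assumed figure from the example in the text.
p = 0.01            # original probability of the bad event
doubled = 2 * p     # the "doubled risk" a commentator might emphasize

print(f"risk framing:    {p:.0%} -> {doubled:.0%} (the risk has doubled)")
print(f"success framing: {1 - p:.0%} -> {1 - doubled:.0%} (success barely changes)")
```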
A third approach calls for educational efforts to improve the ability of a populace to think critically about their consumption of media. For example, Nina Jankowicz argued in a 2017 New York Times op-ed that to defeat disinformation campaigns, “the United States should work to systematically rebuild analytical skills across the American population and invest in the media to ensure that it is driven by truth, not clicks.” She went on: “The fight starts in people’s minds, and the molding of them. In K-12 curriculums, states should encourage a widespread refocusing on critical reading and analysis skills for the digital age. Introductory seminars at universities should include a crash course in sourcing and emotional manipulation in the media.”
Degrading the operation of IWIO
The second broad category of measures to defend against IWIO involves measures to degrade, disrupt or expose the arsenal of weapons being leveraged against a target population. For the most part, the parties responsible for taking these measures would be infrastructural entities in the information environment: social media companies, news organizations and the like.
One category of such measures includes support for fact-checkers. For example, the Poynter Institute, a nonprofit entity that includes a journalism school, has established what it calls the International Fact-Checking Network and sought to promulgate a code of principles that promote excellence in fact-checking and, in turn, accountability in journalism. According to the Poynter website, these principles include commitments to nonpartisanship and fairness; transparency of sources; transparency of funding and organization; transparency of methodology; and open and honest corrections. In late 2016, Facebook announced that being a signatory to this code is a condition for providing fact-checking services to Facebook users. Facebook has also introduced a button that makes it much easier for users to signal that they regard a given story as fake news. By combining such indicators with other signals, Facebook seeks to identify stories that are worth fact-checking and to send those stories to fact-checking organizations. Stories that these organizations identify as fake will be flagged as disputed and ranked lower in users’ Facebook feeds.
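One can imagine, in rough outline, how user flags might be combined with other signals to prioritize stories for human review. The sketch below is speculative; the signal names and weights are assumptions for illustration and do not describe Facebook’s actual ranking system.

```python
# Speculative sketch of combining user "fake news" flags with other signals to
# prioritize stories for human fact-checkers. The signals and weights are
# assumptions for illustration, not Facebook's actual system.
from dataclasses import dataclass


@dataclass
class StorySignals:
    user_flags: int               # number of users who flagged the story
    shares: int                   # how widely the story is spreading
    repeat_offender_source: bool  # publisher previously had stories rated false


def review_priority(s: StorySignals) -> float:
    """Higher score means the story should reach fact-checkers sooner."""
    score = s.user_flags + 0.01 * s.shares
    if s.repeat_offender_source:
        score *= 2.0
    return score


stories = [
    StorySignals(user_flags=40, shares=5_000, repeat_offender_source=False),
    StorySignals(user_flags=10, shares=20_000, repeat_offender_source=True),
]
for s in sorted(stories, key=review_priority, reverse=True):
    print(round(review_priority(s)), s)
```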
A second category consists of measures to disrupt the financial incentives for providing fake news. For example, the New York Times reported in November 2016 on a commercial operation in Tbilisi, Georgia, intended to make money by posting a mix of true and false stories that praised Donald Trump. The site’s operator reportedly expressed amazement that “anyone could mistake many of the articles he posts for real news, insisting they are simply a form of infotainment that should not be taken too seriously.” In response, Facebook announced it was eliminating the ability to spoof domains (thus reducing the prevalence of sites that pretend to be real publications) and also analyzing publisher sites to detect where policy enforcement actions might be necessary. Google has also announced plans to prevent its advertisements from appearing on fake news sites, thus depriving them of revenue.
A third category involves measures to reduce the volume of automated amplifiers of fake or misleading information (e.g., automated entities that retweet fake news on Twitter or “like” fake news on Facebook). Of course, a precondition for any such measure to work is the ability to identify automated amplifiers (as discussed above). To date, such studies have been performed by independent researchers. It stands to reason, however, that infrastructure providers such as Twitter and Facebook would be in a better position to identify automated accounts, and there is no particular reason that a different set of enforceable rules (i.e., terms of service) could not apply to automated versus human-operated accounts. Research is needed to help providers distinguish more effectively between legitimate and illegitimate automated accounts and to determine how different terms of service might be applied to them.
A fourth approach aims at greater transparency of political traffic carried on social media. For example, the Washington Post reported as early as 2014 that Facebook was seeking to develop the capability to show tailored political ads to very small groups of Facebook users. With this capability, it is possible to convey entirely different messages to different groups of people, enabling political campaigns to target messages precisely calibrated to the particular hot-button (and motivating) issues of interest to specific groups. Indeed, such messages could even be contradictory, and if the ads were not made public, the broader population would never know.
To increase transparency, Facebook has imposed a set of requirements on political advertising. Specifically, ads with political content appearing on Facebook are required to include information about who paid for them. All such ads appearing after May 7, 2018, will also be made available for public perusal in a Facebook Ad Archive. Displayed along with each ad is information about the total amount spent, the number of ad impressions delivered, demographic information (age, location, gender) about the audience that saw the ad and the name of the party that paid for the ad.
An interesting question is the definition of a political ad. Ads that mention particular candidates for office are an easy call. Harder-to-handle cases include issue-oriented ads that say nothing about particular candidates or political parties but are nevertheless intended to promote or detract from one side or another in an election. Facebook has developed an initial list of topics that it regards as political. These topics include abortion, budget, civil rights, crime, economy, education, energy, environment, foreign policy, government reform, guns, health, immigration, infrastructure, military, poverty, social security, taxes, terrorism, and values, but Facebook explicitly notes that this list may evolve over time. The research question is whether it is possible to craft a definition for “political issue ad” that does not depend on explicit enumeration.
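For illustration, the enumeration-based approach can be sketched as a simple keyword classifier. The topic keywords below are invented for the example; the sketch also makes plain why a definition that does not depend on explicit enumeration is the harder research problem.

```python
# Illustration of the enumeration-based approach: an ad is flagged as political
# if its text touches a listed topic. The topic keywords are invented for this
# example; the open research question is how to avoid such explicit lists.
POLITICAL_TOPICS = {
    "immigration": ["immigration", "border", "visa"],
    "guns": ["gun", "firearm", "second amendment"],
    "health": ["health care", "medicare", "medicaid"],
    "taxes": ["tax", "irs"],
}


def political_topics(ad_text: str) -> list[str]:
    """Return the listed topics whose keywords appear in the ad text."""
    text = ad_text.lower()
    return [topic for topic, keywords in POLITICAL_TOPICS.items()
            if any(keyword in text for keyword in keywords)]


print(political_topics("Tell Congress: secure the border and cut taxes now!"))
# prints ['immigration', 'taxes']
```

Keyword lists of this kind are easy to circumvent and inevitably incomplete, which is precisely why the definitional question raised above matters.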
Facebook also requires that a political advertiser pass an authorization process that verifies his or her identity and residential mailing address and also discloses who is paying for the ad(s) in question. To deal with advertisers that should have gone through the authorization process but did not, Facebook is investing in artificial intelligence tools to examine ads, adding more people to help find rogue advertisers, and encouraging Facebook users to report unlabeled political ads. However, the database on which these tools will be trained to recognize political ads has not been made public, so it is impossible to judge the efficacy of such tools. Moreover, it is unlikely that such a database will ever be made public, as it might provide valuable hints to a party seeking to circumvent Facebook’s identification requirements for sponsors of political ads.
A fifth approach, likely more relevant for future rather than current IWIO threats, focuses on forensics to detect forged emails, videos, audio and so on. During the 2016 election campaign, a variety of emails were leaked from the Democratic National Committee and from Hillary Clinton’s campaign chairman, John Podesta. While the authenticity of those emails was not an issue at the time, consider the potential damage if altered or otherwise forged documents had been inserted into those email dumps. Such messages might contain damaging information that would further the goals of those behind the IWIO campaign, and conflict would arise when public attention was later called to them.
According to the University of Toronto’s Citizen Lab, this approach was demonstrated against David Satter, a prominent journalist and a longtime critic of Russia. In October 2016, Satter’s email credentials were compromised in a phishing attack. His emails were subsequently stolen and later published selectively with some intentional falsifications—what Citizen Lab calls a “tainted leak.” The falsifications suggested that prominent Russian anti-corruption figures were receiving foreign funding for their activities. Russian state-owned media reported on the leaked emails, saying that they showed a CIA-backed conspiracy to start a “color revolution” in Russia and that reports on corruption within Vladimir Putin’s inner circle were part of a deliberate disinformation campaign on behalf of foreign interests.
This tactic would allow those behind IWIO campaigns to work faster than the news cycle. While legitimate investigators, analysts and journalists pored over the documents, the adversary would be able to point directly to the falsely incriminating emails. The hacking victim or victims, meanwhile, would have a difficult time persuading the (mostly inattentive) public that the messages were false, because they were found in the context of legitimate emails.
It is not difficult to imagine similar techniques being used to create false video or audio clips of officials or candidates saying or doing things that would be damaging to their campaigns or political standing—what is often known as the “deepfake” problem. Widely distributed clips of this kind would be even more difficult to refute because of the widespread predisposition to trust the reliability of visual and audio information. Visual and audio records of specific events once had dispositive value for authenticating events, conversations and other exchanges, but with Photoshop and audio and video editing software widely available—and constantly advancing—this assumption of certitude is simply no longer valid, and the authenticity of images and recordings will increasingly be debated rather than automatically trusted.
Offensive Responses
The use of IWIO by the United States is controversial, and official U.S. policy for many years has sought to avoid any hint that the government engages in such activities. It is U.S. policy even during armed conflict that the United States does not send false information into its adversary’s public information channels, out of concern that such information might find its way back to the American people and mislead U.S. citizens. (This is not to say that the United States has never engaged in such operations, only that doing so is officially eschewed.)
The United States has traditionally engaged in open information warfare and influence operations under the rubric of public information programs—the development of narratives to counter the messages delivered through adversary IWIO. (Of course, the targets of such U.S. efforts regard the operations as hostile propaganda.) Some analysts cite with approval the activities of the Active Measures Working Group, an interagency group founded in 1981 and led by the State Department and later by the U.S. Information Agency to counter aggressive Soviet propaganda and disinformation during the Cold War. The group’s counter-disinformation efforts included public descriptions of Soviet active-measure techniques and disinformation themes and supporting examples; a series of State Department Foreign Affairs Notes that were distributed to journalists, academics and others both within the United States and abroad; press conferences to discuss Soviet disinformation campaigns (at which evidence of these campaigns was distributed to attending journalists); reports on Soviet active measures; frequent presentations describing Soviet disinformation activities; and an annual NATO meeting concerning Soviet disinformation activities.
More recently, the State Department Global Engagement Center was established by executive order in 2016 to “lead the coordination, integration, and synchronization of Government-wide communications activities directed at foreign audiences abroad in order to counter the messaging and diminish the influence of international terrorist organizations.” Funding for this center was increased from $5 million per year to $80 million per year in the fiscal 2017 National Defense Authorization Act.
Another type of offensive response seeks to impose costs on the perpetrators of information warfare as individuals. For example, Executive Order 13757, issued on Dec. 28, 2016, provides for economic sanctions to be levied against persons engaging in cyber-enabled activities with the purpose or effect of interfering with or undermining election processes or institutions. If a few people are responsible for a substantial portion of election-related disinformation or influence activities, financial sanctions directed at these individuals could have a disproportionately large effect on the perpetrators while also deterring others from engaging in such activities. Other, more forceful sanctions could be levied against key perpetrators if not under the auspices of this executive order then perhaps under covert-action authority. In all cases, of course, policymakers should be mindful of the risks of escalation.
A third type of offensive response redirects the techniques of IWIO against perpetrators. For example, after a large number of Macron campaign emails were leaked at a pivotal point just before the French presidential election in May 2017, the campaign announced that it had preemptively defended against those seeking to compromise its emails by inserting significant amounts of falsified information into email accounts it knew would be compromised. In this instance, as the Daily Beast reported, the use of IWIO techniques against the perpetrators created uncertainty about the information they were trying to publicize and reduced the effectiveness of their manipulations.
Discussion and Conclusion
The above sketch of possible responses to cyber-enabled information warfare and influence operations is, of course, not exhaustive. Nevertheless, it is possible to suggest that some are less likely to succeed than others.
Absent other consequences, “naming and shaming” actors responsible for such campaigns is unlikely to have much impact. White IWIO campaigns—that is, those conducted with open acknowledgement of the source—are by design immune to naming and shaming. Also, it is hard to imagine that much if any reporting about certain actors’ activities could embarrass them in any meaningful way. Evidence pointing to their involvement would be challenged publicly using the same IWIO techniques, resulting in even more confusion and uncertainty. For example, by September 2016, the Obama administration had concluded that Russia was taking an active role in supporting Trump’s candidacy, but some White House officials reportedly believed that publicly naming Russia unilaterally and without bipartisan congressional backing just weeks before the election would make Obama vulnerable to charges that he was using intelligence for political purposes.
Efforts to promote greater media literacy and critical thinking in the consumption of media are always welcome. But such efforts presume a willingness and desire to engage in additional mental processing, and those who respond intuitively without reflection avoid precisely that additional mental processing. At best, efforts to promote greater media literacy will help those who are not yet too far gone. And promotion of media literacy and critical thinking is a long-term process.
More generally, cyber-enabled information warfare and influence campaigns today often are aimed at liberal democracies, where individuals have—and are supposed to have—considerable freedom as well as the legal right to choose their information sources. Approaches to countering IWIO will have to refrain from exercising government control over private-sector content provision (in the United States, for example, the First Amendment would constrain government controls on information, especially political information) and governments will have to tread carefully in incentivizing private-sector actors to take measures in this direction.
Traditional public information campaigns are not likely to be particularly effective. The use of cyber-enabled information warfare and influence operations by Russia, foreign terrorist groups and extreme political movements encourages and celebrates the public expression of raw emotion—anger, fear, anxiety—and thereby channels large and powerful destructive and delegitimizing forces against existing institutions such as government and responsible media.
With no obligation to be consistent in their messaging, these IWIO users can promulgate messages rapidly; traditional public information campaigns, by contrast, must maintain consistency. The desire for government-wide coordination is understandable—public information campaigns benefit from consistent messaging, and uncoordinated responses may be mutually inconsistent. But rapid response—especially important because responding to adversary IWIO operations is by definition reactive—is arguably incompatible with coordination through an entity as large as a national government. It may be necessary to consider the possibility that rapid government responses to adversary IWIO campaigns will have to be “grey” (that is, unacknowledged) rather than white.
As for detection of forged emails or “deepfake” videos, technology is available to ensure the integrity of email. But the technology that addresses that problem, digital signatures, is not widely deployed, nor are its affordances widely understood. Technologies for authoritative detection of visual or audio forgeries are still in their relative infancy—those that are available are difficult to use, and the case for forgeries is hard to make in an easily comprehensible way. Nor is it clear that the availability of authentication mechanisms or forgery detection would be useful—a great deal of psychological research indicates that once an individual is exposed to and absorbs erroneous or false information, corrective mechanisms (such as notices of error or retraction) often do not do much to dispel the impressions induced by the original exposure.
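For readers unfamiliar with the mechanism, the sketch below illustrates what a digital signature provides: a signed message verifies only if it has not been altered. It uses the widely available Python “cryptography” package for illustration; in practice, standards such as S/MIME or PGP supply this function for email.

```python
# Sketch of what a digital signature provides: a message verifies against its
# signature only if it has not been altered. Uses the "cryptography" package;
# the messages are invented examples.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

original = b"Meeting moved to 3 pm."
signature = private_key.sign(original)

public_key.verify(signature, original)   # genuine message: no exception raised
print("original message: signature valid")

forged = b"Wire the funds to the offshore account."
try:
    public_key.verify(signature, forged)  # altered message: verification fails
except InvalidSignature:
    print("altered message: signature check failed")
```

The technical barrier, as noted above, is less the cryptography itself than deployment and comprehension: signatures help only if senders routinely sign, recipients’ software routinely verifies and the public understands what a failed verification means.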
Apart from those mentioned immediately above, it seems to this author that the claims about better or possible ways to address IWIO are researchable. But a special research focus is warranted on remediating the effects of psychological biases. As argued earlier, although the psychological biases that afflict most people play a central role in the success of IWIO campaigns, the mechanisms by which such campaigns succeed remain uncertain in a number of the cases discussed above.
Furthermore, experience demonstrates that people are indeed able to rise above their cognitive and emotional biases in some circumstances (that is, debiasing has some value), but in practice helping people do so is often labor-intensive, involving a considerable amount of one-on-one engagement and interaction (Socratic dialogue would be an example of such engagement). Research that yields methods for reducing the labor-intensiveness of such interventions and for conducting them at scale would be quite useful. Also, many debiasing techniques show weak rather than strong effects. Research might focus on assembling packages of debiasing techniques whose combined effectiveness might be greater than that of any one technique used individually.
Comments on any of these approaches are welcome.