The Intelligence Community’s Role in Countering Malign Foreign Influence on Social Media
Social media present potential pitfalls that the intelligence community should seek to avoid.
In November 2017, Twitter leveraged a combination of algorithms and analysis to scrub its vast user base for fraudulent accounts run by an infamous Russian troll farm. Under increasing scrutiny in the aftermath of Russian online manipulation in the 2016 U.S. presidential election, the social media behemoth reported its findings to Congress.
Over subsequent months, however, researchers from Clemson University placed a large asterisk next to Twitter’s findings: Several of the accounts it had identified as inauthentic appeared to belong to genuine, unsuspecting U.S. citizens. The episode clearly illustrated a missed opportunity for collaboration—one that might have saved face for all involved and spared several Americans from being wrongfully sullied as agents of foreign influence. But even more so, it illustrated the stakes that all policymakers and researchers—including those within the intelligence community—must consider when combating online manipulation.
The social media platforms themselves are relatively new on the scene, but the types of intelligence challenges they raise have appeared time and again. Much has changed since December 1981, when President Reagan signed Executive Order 12333, the authority upon which the collection, analysis and dissemination of intelligence largely hinge. The universe of intelligence has since evolved, and updates to the executive order have refined the makeup of the intelligence community and bounded its role relative to domestic communications and U.S. persons. But perhaps no realm has expanded more dramatically since 1981 than that of publicly available information (PAI). At the time the executive order was issued, few could have predicted the explosion in PAI over the ensuing decades or its emergence as a key battlefront in great power competition.
From the Church and Pike Committees to the Patriot Act, from WikiLeaks to the Snowden affair, perhaps no issue has drawn more public scrutiny of the otherwise-secretive intelligence community than its role with respect to U.S. persons—nor has any issue elicited more transparency reforms. The correct balance between privacy and security was fodder for debate long before the formal establishment of U.S. intelligence-gathering agencies well over a half-century ago. Here, I aim not to settle this debate but to urge intelligence community practitioners, leaders and stakeholders to engage with it proactively to ensure that their tradecraft is enhanced by strategy, not dictated by scandal.
Decades’ worth of trial, error, controversy, oversight and exposure have followed the intelligence community’s claim on discrete modes of communication—from an enemy radio frequency to a terrorist’s email traffic—as legitimate targets for surveillance in the service of national security. As the internet became more ubiquitous, the intelligence community laid a similar claim to this sweeping domain. Social media, in turn, have come to serve as a means not only of two-way communication—like tagging a friend in a photo—but also of public personal and political expression—like posting one’s partisan leanings or campaign contributions—introducing an entirely separate and extensive range of equities. While parallels with previous efforts to counter extremist content online offer useful lessons to draw from, the push to counter foreign malign influence online remains separate and distinct in key ways, with its own unique set of challenges. For adversaries who seek to turn free expression and an open internet—cornerstones of the U.S. political economy—into vulnerabilities to be exploited, social media present an endless playground of opportunity to sow disinformation and amplify societal discord. But on the internet it is often difficult to distinguish between foreign and domestic activity.
Given such blurry lines, proactive introspection by the intelligence community is necessary not just to avoid political controversy—and to safeguard the sources and methods put at risk by crisis-induced, post-hoc examination—but also to preserve the intelligence community’s credibility. Simply put, social media present potential pitfalls for the intelligence community that it should seek to avoid.
The intelligence community must continually strive to secure a preeminent status as an objective and apolitical arbiter, particularly as the contours of a shared sense of reality among the American public are continuously eroded by what Rand Corp. researchers have dubbed “truth decay.” As Stanford University’s Amy Zegart and former CIA Deputy Director Michael Morell have asserted, the prime directive of the intelligence community in the coming era should be to “do no harm to [its] most valuable asset: [its] commitment to objectivity, no matter the policy or political consequences.” This certainly holds true when it comes to intelligence activity involving social media.
The reinforcement of that compact with the American public—not just to meet the minimum constitutional requirements but also to aim for the optimum standard of public credibility—must become a guiding intelligence community standard. The intelligence community must therefore examine how it counters foreign malign influence on social media through the prisms not only of oversight and law but also of relevance and durability for the times in which we live. Rather than yielding to an outdated, binary framing of privacy versus security, any prospective approach to combating foreign interference in online discourse must factor in a third element: legitimacy.
The Intelligence Community and Social Media
Consider, for example, the so-called Russian troll farm. Russia’s attacks on the 2016 and 2018 U.S. election cycles served as critical demonstrations that safeguarding the machinery of the voting process is only half the battle; securing public confidence in the vote’s integrity and legitimacy is the other.
Russian social media operations have since evolved away from promulgating fabricated narratives and toward amplifying extant societal and political fissures among the target demographic. “Creating something out of nothing is really hard,” notes disinformation expert Ben Nimmo. There is ample reason to assess that China, Iran and other adversarial actors—state and non-state alike—are following suit.
In response, the intelligence community has rightfully marshaled forces against foreign malign influence. But when it comes to securing both the machinery and the legitimacy of the democratic process, the same logic that prompted this mobilization must now also shape it. In the intelligence community’s national security role, capability and credibility are likewise mutually dependent—one reinforces the other. With regard to countering foreign malign influence online, there is a compelling need to prevent the political preferences expressed by U.S. citizens on social media from becoming the basis for collection and analysis itself.
This is particularly true in the era of big-data collection and analysis. Intelligence officers throughout the community undergo rigorous training to sustain a framework for the collection, retention and dissemination of foreign intelligence that upholds the First and Fourth Amendment rights of U.S. persons. That framework increasingly relies on machine learning to shoulder the burden in unprecedented ways. And social media (among other PAI) present a wide-open proving ground for testing and refining machine-driven analysis of interaction, amplification, distribution and sentiment.
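To make the abstraction concrete, consider what a rudimentary, machine-driven amplification check might look like. The sketch below is purely illustrative; the record format, time window and threshold are assumptions made for the example rather than any agency’s or platform’s actual method. It shows how coordination can be inferred from behavior alone, without reading the content of any post.

```python
# Illustrative sketch only: flag URLs pushed by many distinct accounts within
# a narrow time window, a behavior-based signal of possible coordinated
# amplification. Data, window and threshold are hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    # (account_id, shared_url, timestamp) -- toy records
    ("acct_a", "example.com/story1", datetime(2024, 1, 1, 12, 0, 5)),
    ("acct_b", "example.com/story1", datetime(2024, 1, 1, 12, 0, 9)),
    ("acct_c", "example.com/story1", datetime(2024, 1, 1, 12, 0, 14)),
    ("acct_d", "example.com/story2", datetime(2024, 1, 1, 15, 30, 0)),
]

def amplification_clusters(posts, window=timedelta(seconds=30), min_accounts=3):
    """Group posts by shared URL and flag URLs amplified by at least
    `min_accounts` distinct accounts within `window`."""
    by_url = defaultdict(list)
    for account, url, ts in posts:
        by_url[url].append((ts, account))
    flagged = {}
    for url, items in by_url.items():
        items.sort()
        accounts = {account for _, account in items}
        if len(accounts) >= min_accounts and items[-1][0] - items[0][0] <= window:
            flagged[url] = sorted(accounts)
    return flagged

print(amplification_clusters(posts))
# {'example.com/story1': ['acct_a', 'acct_b', 'acct_c']}
```

Of course, the same behavioral pattern can be produced by enthusiastic domestic users, which is precisely why such machine-generated signals require careful human validation.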
A patchwork of guidelines issued by the attorney general and the Department of Defense governs intelligence community elements’ approach to publicly available information under Executive Order 12333. Whether these guidelines are sufficient to address the challenges posed by the expanding universe of PAI remains an open question. Moreover, the values instilled in the intelligence community workforce are not easily hard-coded into algorithms. The social media domain is vast, vulnerable to foreign manipulation, and replete with legitimate domestic political expression—features that must be grappled with in concert, lest the intelligence community’s response prove ineffectual at best and politically biased at worst.
This underscores the need to consistently examine how big-data analytics of PAI might be leveraged, with an eye toward the public trust. As foreign malign actors work both to artificially amplify existing themes and to intersperse generated content into U.S. domestic social media space, the intelligence community must be able to defensibly articulate to overseers and the public how its efforts to monitor, analyze and forecast such interference distinguish protected U.S. speech from foreign-orchestrated expression. Whether current technology is up to that “enormous, if not impossible” task, as characterized by Sen. James Risch of the Senate Select Committee on Intelligence, remains deeply in question.
Moreover, while modern society is increasingly comfortable ceding some vestiges of privacy to maintain connectivity and self-expression, the expanded universe of PAI does not, in and of itself, signal broad public comfort with intelligence community collection of it. Misapprehension regarding intelligence community efforts in the social media space not only could spur distrust but also could have a chilling effect on the exercise of free speech itself. The U.S. Supreme Court opined in 1972 that any government-sponsored surveillance activity—irrespective of its underlying rationale—can have an effect on expression “aris[ing] merely from the individual’s knowledge that a governmental agency was engaged in certain activities or … concomitant fear that, armed with the fruits of those activities, the agency might in the future take some other and additional action detrimental to that individual.” Forty years later, David Omand (former head of the U.K. Government Communications Headquarters) and his colleagues at King’s College London wrote, “[T]here is a danger that a series of measures—each in themselves justifiable—together creep towards an undesirable end-point: a publicly unacceptable level of overall surveillance; the possession of a dangerous capability; and the overall damage to a medium that is of obvious intrinsic value beyond security …. To meet the challenge of legitimacy, the public must broadly understand and accept why, when, and with what restrictions [they are] taken.”
The Importance of Efficacy
Beyond the question of legitimacy, the intelligence community must also explore efficacy. The community’s methodologies must be rigorously bounded and defined as it seeks to root out foreign manipulation of online behavior. However, there remain doubts as to whether bulk-data collection and analysis can reliably “anonymize”—or excise the personally identifiable information of—U.S. persons. Researchers from Imperial College London and the Catholic University of Louvain in Belgium recently concluded that “heavily sampled anonymized datasets are unlikely to satisfy the modern standards … and seriously challenge the technical and legal adequacy of the de-identification release-and-forget model.” Meanwhile, the very overabundance of data requiring parsing and assessment by the intelligence community means it must prioritize those areas in which it has both competitive advantage and the exclusive capacity to deter or respond.
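The underlying problem is easy to demonstrate. The toy example below is an assumption-laden sketch, not the cited researchers’ method: it simply counts how many records share the same combination of quasi-identifiers after direct identifiers have been stripped. If any combination is unique, at least one person in the “anonymized” dataset remains effectively identifiable.

```python
# Illustrative sketch of re-identification risk: even with names removed,
# combinations of quasi-identifiers can single out individuals.
# Fields and records are hypothetical.
from collections import Counter

records = [
    # (zip_prefix, birth_year, gender) after direct identifiers are stripped
    ("200", 1985, "F"),
    ("200", 1985, "F"),
    ("331", 1972, "M"),
    ("985", 1990, "F"),  # unique combination -> effectively re-identifiable
]

def k_anonymity(records):
    """Size of the smallest group sharing identical quasi-identifiers.
    A value of 1 means at least one record is unique in the dataset."""
    return min(Counter(records).values())

print(k_anonymity(records))  # 1 -> the dataset is not even 2-anonymous
```

Richer attribute combinations only increase the share of unique records, which is the crux of the de-identification concern the researchers describe.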
Such due diligence is the bare minimum U.S. taxpayers expect from the intelligence community as responsible stewards. “Leaving aside … arguments about excessive intelligence collection being a threat to civil liberties and privacy,” the Rand Corp. posited in 2018, “collecting more information than the intelligence community enterprise can reasonably process may be a waste of precious resources and likely adds to the burden of analysts and analytic tools that are already overtaxed trying to process and make sense of the volume of data pouring in daily.” Whether automation is sophisticated enough to pluck needles from an ever-expanding haystack—and whether the resources required to do so are truly worth the investment—remains to be seen. As Omand et al. warned, the intelligence community may find itself becoming “much better at counting examples of online human behavior than critically explaining why they are and what it might mean.”
In the aftermath of Russia’s interference in the 2016 election, a veritable wellspring of academics, think tanks, nonprofits, journalists and foreign governments—and, to varying degrees, social media platforms themselves—dedicated resources and expertise toward countering manipulation and disinformation, resulting in comprehensive recommendations for legislators, policymakers, regulators and users alike. Amid the scrum, the intelligence community must ask not just what its role could be, but what it should be. How can its combined human and technological capital bring a unique value-add to this fight, while fostering trust and credibility in the eyes of the American people?
The intelligence community’s relationships with social media platforms will likely benefit from prominent signals that the government is addressing this question with an eye toward privacy and civil liberties. This may be especially true of platforms that might otherwise be reluctant to collaborate productively with the intelligence community. As a positive first step, a standing committee composed of a cross-section of senior intelligence community stakeholders could be established, with a specific charge to vet new advanced tools and tradecraft—including artificial intelligence—for civil liberties risks.
Principles for Moving Forward
Nearly a decade ago, Alexander Joel, the first civil liberties protection officer for the Office of the Director of National Intelligence, noted that it is “a daunting task to pose to lawyers, policymakers, and the rulemaking process to capture the essence of technology’s implications …. Core principles in our tried-and-tested rules [should be applied] to new changes in the technological landscape, and … those principles [should be used] to help us clarify and, where necessary, update our rules and develop new protections.” In other words, intelligence community stakeholders—both as intelligence practitioners and as public servants in a rapidly evolving technological domain—must not wait for guidance that will certainly be late in coming, if it comes at all. They must seize the initiative to conceptualize the proper intelligence community roles and responsibilities in this space, or risk, as experience suggests, having those roles imposed from outside. Toward that end, the community should adhere to three mutually reinforcing principles in countering foreign malign influence on social media:
First, the intelligence community should focus primarily on foreign actors. A range of nongovernmental entities have proved adept at charting pathways from PAI to foreign points of origin. However, no entity outside the intelligence community has the same strategic remit or tools to ascertain foreign adversary intent, capability and tradecraft under the authorities enumerated in Executive Order 12333. The intelligence community must not only play to that advantage and enhance it but also vigilantly avoid distraction from it. The universe of PAI is far less likely to provide strategic warning of a looming foreign influence campaign than it is of one already well underway.
Second, the intelligence community must remain content-neutral in its focus. Attributing foreign orchestration to any social media activity, particularly that of U.S. persons, carries significant risks, not least in the court of public opinion. Given that politically charged issues have become routine ammunition for adversary malign activity, the risks to the intelligence community likely outweigh whatever gains it may glean by using content (for example, narratives, hashtags or other sentiments) as the basis for collection or analysis. In this regard, independent civil society, industry and academic partners are likely better postured as executors of content-focused assessments and as public messengers regarding findings. Instead, an objective emphasis on the tactics, techniques and procedures (TTPs) of exclusively foreign actors would play to the intelligence community’s strengths, benefit its partnerships and protect its credibility.
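What would a content-neutral signal look like in practice? One hypothetical example, sketched below with assumed data and thresholds, is posting cadence: an account that publishes at near-identical intervals around the clock exhibits machine-like regularity regardless of what it says. The measure is illustrative, not an operational detection rule.

```python
# Illustrative sketch of a content-neutral signal: inter-post timing regularity.
# Timestamps and thresholds are hypothetical toy data.
from datetime import datetime, timedelta
from statistics import pstdev

def cadence_score(timestamps):
    """Standard deviation of gaps between consecutive posts, in seconds.
    Lower values indicate more machine-like regularity."""
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) if len(gaps) > 1 else float("inf")

start = datetime(2024, 1, 1)
regular = [start + timedelta(seconds=600 * i) for i in range(10)]  # every 10 minutes exactly
irregular = [start + timedelta(seconds=s) for s in (0, 45, 400, 2200, 2300, 5000)]

print(cadence_score(regular))    # 0.0 -> perfectly regular, bot-like cadence
print(cadence_score(irregular))  # large -> human-like variability
```

Notably, nothing in such a measure touches narratives, hashtags or sentiments; it is behavior, not speech, that is being assessed.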
Finally, the intelligence community should prioritize enabling industry, academic and civil-society partners. The intelligence community cannot be the only entity tasked with forewarning of, disrupting and defusing the impact of foreign-orchestrated social media operations. Nor can platforms compete in isolation with wide-scale manipulation by U.S. geopolitical adversaries. Stakeholders must cultivate and expand symbiotic relationships among themselves. To help partners play to their respective competitive advantages, the intelligence community should enable the systematic exchange of information both between itself and other stakeholders and among the stakeholders themselves. This exchange should have a bias toward sharing strategic, TTP-based insights—as opposed to insights based on specific accounts or content. This approach not only will prompt action while protecting sources but also will engender trust, tailor expectations, and mitigate misperceptions, bridging a long-standing gap between civil society and the so-called surveillance state.
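By way of illustration only, a TTP-level record shared with partners might carry behavioral patterns and coarse time ranges rather than usernames or post text. The schema below is purely hypothetical; it is not drawn from any existing threat-sharing standard or government format, and every field name is an assumption made for the sketch.

```python
# Purely hypothetical sketch of a TTP-level sharing record: behavioral
# patterns and coarse metadata instead of accounts or content.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TTPInsight:
    actor_set: str           # anonymized foreign actor designation
    technique: str           # behavioral pattern, not a narrative or hashtag
    observed_window: str     # coarse time range, not per-post timestamps
    confidence: str          # analytic confidence level
    recommended_checks: list = field(default_factory=list)

insight = TTPInsight(
    actor_set="FOREIGN-ACTOR-SET-01",
    technique="burst amplification of shared links by new accounts within 30-second windows",
    observed_window="2024-Q1",
    confidence="moderate",
    recommended_checks=["review clusters of recently created accounts sharing identical links"],
)

print(json.dumps(asdict(insight), indent=2))
```

The point of the contrast is that a record like this can prompt platform and civil-society action without exposing sources, specific accounts or protected speech.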
Threats evolve, events dictate, and technology advances exponentially. The intelligence community’s push to innovate and adapt to address foreign malign influence is both noble and necessary. Its ability to keep pace will depend on the values forged through the hard-learned lessons and hard-fought battles of the past. When it comes to the intelligence community’s role in monitoring and countering these threats in the social media space, principles must guide progress—lest progress undermine credibility.