I Wrote About Russian Election Interference. Then I Was Trolled Online.
A colleague and I recently published an op-ed in Svenska Dagbladet, one of Sweden’s leading daily newspapers, about Russia’s attempts to influence elections in democratic countries. Among the Russian tactics we described was the use of “troll factories” to distribute misinformation. So perhaps we shouldn’t have been surprised when we were attacked online, probably by Russian trolls, after our article posted.
Our op-ed argued that, as countries become aware of foreign influence tactics and seek to counter them, nefarious actors are likely to pursue new technologies to disseminate propaganda. Aggressors such as Russia might seek to leverage artificial intelligence and deep learning to generate false or doctored video content, for example. Mixing these fakes in with a stream of genuine material would make them not only hard to distinguish from authentic content but perhaps quite credible to most targeted consumers. As with fake news campaigns, the purpose of such operations would be to mislead the population, reduce trust in mainstream media, spawn conspiracy theories and increase tensions.
To most Swedes, our op-ed should not have read as overly controversial or far-fetched. The prevalence of Russian influence operations in the cyber environment has been extensively reported on by media as well as government agencies in Sweden and elsewhere. Over the first few days after publication, however, we received a flood of reactions to the article in the paper’s online comments section, through social media and by way of personal email. Some of the feedback was positive, mainly from colleagues in our field and readers interested to learn more. But the vast majority of comments were negative—and quite forcefully so. We found ourselves the targets of what was almost certainly a troll campaign.
Although we had not previously studied trolling from a specifically academic perspective, my co-author, Gabriel Cederberg, and I are both members of research projects at the Belfer Center at Harvard’s Kennedy School that study cyber conflict and the protection of democratic processes. We were well aware of the phenomenon—but had never been targets ourselves. Within the Swedish intelligence community, where I have my professional background, it is well known that journalists, bloggers and other commentators who write about Russia—especially those who are not aligned with the Kremlin narrative—are under close observation from across the Baltic Sea. As an example, the Russian Institute for Strategic Studies (RISS), formerly a part of the Foreign Intelligence Service (SVR), publishes reports on the “anti-Russian vector,” a survey of Western media outlets that allegedly describe Russia in a negative way. One of my colleagues from the Swedish Air Force, who blogs on regional defense and security policy issues, has had the dubious honor of making it onto the RISS top-10 list of “aggressive” anti-Russian authors in Sweden.
Besides being victims of online trolling, writers critical of Russia commonly report receiving contact requests from well-crafted fake accounts on social media platforms, including Facebook and LinkedIn. Similar “friending” attempts are frequently directed at military officers from accounts that have been linked to Russian surveillance operations. A recently published cable from the U.S. State Department said that Russia has focused “significant resources” on Sweden and suggested that Russia is behind efforts to “infiltrate Sweden with distorted and false information about NATO in the Swedish press, at think tank events, and on social media.” So our suspicion that such resources could also be employed to retaliate against critical writing on Russia is perhaps not far-fetched.
We are of course far from alone in being targeted by online aggressions bearing hallmarks of the Russian social media army. In a recent Hoover Institution article titled “My War With Russian Trolls,” professor Paul R. Gregory details five years of continuous attacks on his publications that employed the same techniques as those identified in indictments by Special Counsel Robert Mueller. Similar testimonies have been given by Swedish writers, including journalists and bloggers covering Russian military interventions and domestic politics, the possibility of Swedish membership in NATO, and Baltic Sea region security.
Why might our op-ed have attracted such a strong response from trolls? We suspect that one factor may have been the upcoming Swedish national elections, scheduled for Sept. 9, which could be causing Russian-aligned actors to keep a closer eye on Swedish media. Sweden’s gradually closer ties to NATO (something Russia has announced it perceives as a “red line”); an enhanced defense agreement with the United States and neighboring Finland; and significant increases in military spending—including a recently closed $1 billion Patriot missile deal—could be other reasons our finger-pointing touched a nerve.
We compiled the op-ed comments we received, analyzed and categorized the content, and then made a basic attempt at tracing the authors. Roughly half of the approximately 50 comments we studied stood out as noticeably similar. They were written in Swedish, in an aggressive tone, but they used words or phrasing in a way that suggested the writers were not native Swedish speakers. The messages were sent from anonymous social media accounts or through free email services. In some cases, a Swedish contact telephone number was provided—but these numbers led only to anonymous pre-paid lines, and our attempted calls went unanswered. As for the content of the messages, they attacked the op-ed but did not focus their criticism on the arguments we presented.
All this strongly suggests that at least some of the people behind these comments were trolls. Although we cannot be certain that this is true of all or even most of the writers in question, their posts used five tactics common to online trolling. The messages provide some insight into how a trolling campaign works—and the modus operandi of actors trying to attack statements they perceive as detrimental to their own cause.
The public commonly understands internet trolls as “haters,” or vocal and malicious actors who spread insults and adversarial messages through digital channels, targeting people or institutions representing opinions not in line with their own. However, a “troll” does not necessarily have an aggressive or antagonistic appearance. The objective of trolling is to get someone to unwittingly act in a certain way, often by putting the person off balance. This is commonly achieved through rhetorical tricks, apparent subject-matter expertise or distraction. A troll can thus instigate online hatred by triggering the masses to write aggressive comments, but the troll itself is seldom the person directly submitting those comments.
The most common theme of the replies to our op-ed followed the “whataboutism” approach. This approach uses a strategy of false moral equivalences, seeking to discredit the opponents’ position by charging them with hypocrisy without contesting their central argument. Examples in our case include assertions that countries other than Russia—specifically the United States and other Western nations—also carry out cyberattacks and that those attacks are worse than those potentially sponsored by Russia. Other whatabouts included remarks that the European Union and NATO have their own “hybrid war centers” (specifically pointing at the European Centre of Excellence for Countering Hybrid Threats in Helsinki, Finland); allegations of the “continuous and deeply immoral” U.S. funding of media outlets and civil-society groups that meddle in Russian affairs; and accusations that the West has its own dark history of engaging in influence operations and regime-change attempts around the world. Deemed practically a national Russian ideology, whataboutism is a traditional part of Soviet-era propaganda tactics that is still thriving in the digital age.
Another form of attack was “the straw man.” One social media post charged that our wish to “limit free speech” was what really motivated us to write, whereas another stressed that there is no “evidence that Russia has created fake videos of Swedish politicians.” These statements intentionally misconstrued the content of the op-ed and distorted our position. The goal of this type of attack is to falsely give the impression that the target advocates certain action, or that the target’s line of argumentation is false, by refuting arguments or assertions that the target never made.
Other respondents used the association fallacy method. In an attempt to imply guilt by association, our affiliation with a U.S. academic institution was pointed to as confirmation that our op-ed was “in reality an attempt at American left-wing liberal influence on the Swedish elections, exaggerating threats to democracy to scare people from seeking information outside of the increasingly propagandistic traditional media.” Other writers suggested that we were “clearly representing a liberal interventionist strain of U.S. foreign policy.” Another type of association fallacy is the “red herring,” a diversionary tactic aimed at leading readers toward a false conclusion. One comment, focusing on our mention of Russian “troll factories,” demanded information on employees, financiers and concrete results of the “disinformation campaigns.” By focusing on details that are in themselves interesting but irrelevant to the issue at hand, the troll creates a measure of distraction.
Finally, and perhaps most interestingly, the commenters used ad hominem attacks, or attempted to weaken our content by attacking our character, motive or credibility. In the op-ed, we referred to a publication released by the Belfer Center, the Cybersecurity Campaign Playbook, as a research outcome. An anonymous respondent contacted the editor of the publishing paper, questioning the legitimacy of this publication and demanding details of our contribution. The respondent likely never expected to prove a point (in fact, the message cited a different publication than the one we referred to), but the episode shows how a troll can undermine credibility, tie up resources and limit an opponent’s effectiveness.
So how do you interact with a troll? A common recommendation is to suspend interaction completely when you realize the person or account has motives other than dialogue. According to Pulitzer Prize-winning journalist Clarence Page, responding to a troll is a “rookie mistake,” and doing so publicly risks compromising your public image. Danah Boyd, principal researcher at Microsoft Research, has said that “in an attention economy, publicly critiquing people whose sole goal is to get massive attention does them more justice than harm.” However, ignoring a troll may not always be a viable course of action: for instance, if you are a civil servant or a member of a public affairs department required to respond to public communications, or if you are unsure whether the other party has malign intent. In such cases, an option could be to reply, but to do so briefly, refraining from emotional engagement and avoiding polemical confrontation. Because trolls commonly seek to provoke an emotional response, keeping cool and replying in a matter-of-fact way avoids “feeding” the troll.
A more aggressive, and perhaps more effective, way to respond to a troll is to return fire. By countering the troll’s assertions or questions with well-crafted questions, targets have the opportunity to set the discussion agenda. Why is this topic important to you? How have you come to your conclusion? The goal should be to force the troll into taking a defensive position, consuming energy by accounting for its view in detail. By adopting this approach, targets gain another benefit: They need not concern themselves with whether the account they are in correspondence with is actually a troll or someone interacting with them in good faith. Whereas trolls usually reply to questions with further provocative arguments, sincere counterparts commonly give honest answers in the hope of reaching an understanding, though they may initially be distraught or particularly emotionally engaged in an issue. When trolls realize that they are not receiving the desired response, or that the conversation requires an increasing amount of effort, they are more likely to give up.
In our case, we opted for a combination of the above-mentioned approaches. Whereas the bluntest comments were left without response, we replied to some in an attempt to discern whether the author had valid concerns. Through our engagement we deduced that a few of the responses could be removed from the “trolling” category. In a couple of other cases, such as those that used the ad hominem attacks, we concluded after additional communication, including follow-up questions, that the author was a troll.
Online trolling and other attempts to interfere with democratic processes through the digital space are not going away. As national elections approach in Sweden and midterm elections draw closer in the United States, Russian efforts to sow discord, drive wedges between Western allies and push a pro-Kremlin agenda are likely to ramp up. Although our specific experience may be mild compared with what others have endured, it is important to recognize that Russian influence operations sometimes labeled as “trolling” are part of a larger, ongoing hybrid warfare effort to reach strategic goals through the weaponization of online media. Finding ways to deal with trolls—at both the personal and the societal level—will be an important challenge for democracies. While awareness is a good first step, robust protection can probably be achieved only through a whole-of-nation approach, in which governments, political campaigns and tech companies actively work together.