
How Platforms Can Prevent Misinformation Like #dcblackout

Alex Engler
Monday, June 15, 2020, 2:52 PM

During protests in Washington, D.C., a conspiracy theory spread on Twitter that the federal government had cut off communications within and from the city. Twitter users could have been warned.

Protestors in D.C. march on May 30, 2020, in response to the death of George Floyd. (By: Victoria Pickering, https://flic.kr/p/2j7eHpq; CC BY-NC-ND 2.0, https://creativecommons.org/licenses/by-nc-nd/2.0/)

On June 1, citizens in the nation’s capital awoke to terrifying news after a night of protests. According to many Twitter users, late in the night, government security services had cut off communications and protestors had disappeared in the ensuing blackout. In the wake of federal law enforcement and National Guard troops deploying across the city to respond to protests over the death of George Floyd—scenes that included police using chemical agents against peaceful protestors in front of the White House and a helicopter hovering low over Washington, D.C., streets in a military maneuver—the alarming reports added to a sense of anxiety and dread.

There was just one problem: No blackout had taken place. The story wasn’t real.

The conspiracy theory was thoroughly debunked over the course of the day. It had spread on Twitter with the hashtag “#dcblackout”—paired with “#dcprotests,” which local protestors had been using to document the ongoing demonstrations against police brutality. This brought extensive attention to the #dcblackout claims, resulting in hundreds of thousands of tweets exposing millions of people to the false information. As early as 9 a.m. on June 1, it was not difficult to discern that many of the original tweets and retweets were from suspicious and bot-like accounts.

The accounts struck me as suspicious. But when I shared my skepticism on Twitter, many users insisted that the blackout was real. People had bought the ruse, and I watched as retweets of the suspicious posts jumped into the tens of thousands. NPR quickly reported on the spread of false information, but #dcblackout still raged out of control, leading to around 500,000 tweets in its first nine hours.

The #dcblackout incident offers a warning for the months to come. The 2020 election looms over a highly partisan political environment with weakened institutions and a shrinking number of journalists—all concerning factors for a truthful public discourse and healthy democracy. Years of investment by social media companies have led to the takedown of the networks of accounts that organized disinformation campaigns used in 2016, but propagandists can still hijack developing political conversations with ease.

Yet human psychology, which itself enables the spread of disinformation, can also be employed to defeat it. An emerging body of research suggests that asking people to be skeptical and exposing the strategies of manipulators can make them far more resilient to disinformation. To beat opportunistic disinformation, social media companies should harness the critical thinking of their users.

The #dcblackout Hashtag

The tweets that started #dcblackout did not come from a long-standing network of accounts. Early tweets came from new accounts with few followers and incomplete profiles. Many of the suspicious accounts pushed a screenshot of a #blackout tweet from a then-deleted account, sarahxo85267698. While the deletion of sarahxo85267698 may have been intended to disguise the dubious nature of that account, the supporting cast of bot-like accounts asserted that sarahxo was being suppressed by Twitter. They used their own cover-up as part of their censorship narrative.

The confusion continued. Skeptical voices, including my own, started pushing back against the bogus claims. As they did, a series of accounts—including hacked profiles of real people—began tweeting a poorly written statement arguing the #dcblackout was a hoax, using the emerging counter-hashtag #dcsafe. There were so many of these identical tweets that onlookers quickly noticed the accounts, leading to more confusion. Many people fell for this second wave, thinking that if bots wanted them to think #dcblackout was a hoax, then there must be some truth to it. The result was chaos.

All this adds up to something that looks a lot like an intentional disinformation campaign—though it’s hard to say for sure. It is not entirely clear if the original #dcblackout posts were genuine, but the choice to spread that theory seems to have been calculated. By hijacking the #dcprotests hashtag, the campaign placed the #dcblackout tweets where they would be found by D.C. residents checking in on new developments early in the morning on June 1. That behavior was easily predictable, especially after four days of ongoing protests: Demonstrations and arrests had continued late into the night on the previous days, leading many to check for updates first thing in the morning.

This created an opportunity for abusing what a recent Data & Society report calls a “data void.” Data voids are gaps in authoritative content, where innocent online searches result in users stumbling across problematic and manipulative content because there is nothing to outrank it in search results. One example, documented in 2018, was the phrase “black on white crimes.” Since this wording was used almost exclusively by white supremacist organizations, the results from this search were skewed toward white supremacist disinformation.

Breaking news can also create a data void. In this case, a search of “#dcprotests” on Twitter returned popular and recent tweets spreading alarm, with no guarantee of content quality or veracity. Soon the data gap was filled. Reporters noted that their phones never lost internet service, and analysts showed cellular connectivity levels were stable. However, these corrections did not reach nearly as many people as the original rumor—a result that is consistent with research on rumors on Twitter.

In the end, Twitter suspended hundreds of “spammy accounts” responsible for “coordinated activity” in spreading the hashtag. As to who was behind the accounts, that remains unclear. Any further probe would require access to Twitter’s data or an investigation by the platform itself.

A New Strategy for Influence Campaigns?

#dcblackout was a different kind of influence operation. Past campaigns have slowly and deliberately accumulated thousands of unsuspecting followers over years while subtly interspersing disinformation between memes and popular content. But #dcblackout created a burst of disinformation, taking advantage of sustained attention on a specific hashtag (#dcprotests) related to an impassioned political issue.

This is not a fluke. Rather, it reflects a changing environment for disinformation campaigns. These campaigns have to put content where they know people will see it. And they cannot rely on their own followers, because they don’t have any. The bot-like accounts used to disseminate #dcblackout had almost no followers. Many had been created that very morning.

Increased scrutiny by social media companies has forced this change in tactics. The largest social media platforms, especially Facebook and Twitter, are no longer oblivious to the propaganda networks on their platforms.

It was these networks that allowed the spread of disinformation to more than 100 million Americans in the run-up to the 2016 election. Starting in 2015, the Russian state-backed Internet Research Agency (IRA) operated hundreds of accounts, primarily across Instagram, Facebook, YouTube and Twitter. Through this cross-platform effort, the IRA was able to build an organic following, with more than 3 million followers on both Instagram and Facebook, by mixing popular content and harmless memes along with more nefarious political messaging. IRA-linked posts earned more than 70 million engagements on both Twitter and Facebook, and 185 million engagements on Instagram. While the IRA did use some paid advertising, the reach of its unpaid content was much greater, with accounts like @blackstagram_ often getting more than 10,000 likes per post in 2017. These networks were so effective because they were able to build followers over years and rely on the organic spread of material by that community.

But these expansive networks of accounts are now frequently taken down. For influence campaigns, this can undermine years of work to gather followers. A takedown deletes the suspicious accounts, which therefore lose their community of unwitting followers, upon which the disinformation campaign depends. Since takedowns are effective at undoing the patient work of building propaganda networks, it is unsurprising that disinformation campaigns need a new approach. That new approach appeared with #dcblackout, which targeted an intense political moment, giving propagandists a brief opportunity to engage wider audiences.

While takedowns are valuable, platforms usually suspend and ban accounts based on evidence of coordinated inauthentic behavior, not the content of their posts. There are a few exceptions: Facebook reduces the distribution of some of the small number of articles that it fact-checks, and many of the platforms are proactively managing misinformation about the coronavirus. However, most misinformation—false information of any kind, regardless of intent—goes unaddressed by Facebook. So, for example, Instagram makes no effort to reduce the enormous amount of pseudoscientific health advice and misleading lifestyle content that pervades the platform.

Instead, platforms focus on disinformation, which is defined as misinformation that is being intentionally spread in order to mislead people. But this creates an ongoing problem for social media companies. When disinformation campaigns amplify authentic, but misleading, messages and convince normal people to share the material, it enters a gray area of enforcement. The platforms may ban the bots, as Twitter did following #dcblackout, but they typically won’t restrict the content. This is why researchers and platforms need to look past the bot accounts and consider how to better engage the public about their role in spreading disinformation.

Why #dcblackout Is Dangerous

Perhaps even more so than in 2016, the domestic conditions are ripe for disinformation. Some research suggests that polarization, partisan media, weak public broadcasting, low trust in journalism, the presence of populist politics and high social media use can all undermine resilience to disinformation. This does not bode well for the United States. As of 2018, 68 percent of Americans get news from social media, though only 20 percent do so often. American politics, especially conservative media, has become far more partisan, which can make the truth seem less relevant. Trust in media is down to close to 40 percent. Newspaper journalism is taking a beating, too: Newsroom employment dropped by 51 percent between 2008 and 2019, and that staggering decrease does not include the thousands of journalists who have lost their jobs due to the coronavirus pandemic.

The vulnerabilities of individual human psychology are also hard to address. Disinformation efforts target impassioned political debates both because doing so exacerbates existing divisions and because those circumstances make false claims easier to spread. Such moments are characterized by motivated reasoning, in which critical thinking is undermined by the emotional desire to believe arguments that agree with preexisting conclusions. Further, a series of experiments show that people are less skeptical of claims when they are in group settings, including on social media.

Showing corrections to people exposed to misinformation is worth doing—the academic consensus now supports the idea that corrective fact-checking is effective. Unfortunately, even when people see corrections, some of the damage persists. The illusory truth effect suggests that repeated assertions become familiar and quickly enter listeners’ understanding of the world, even if they are exposed to countervailing facts. Repetition leads to recall: Once a person has heard the claim many times, it comes to mind quickly, giving it a false sense of validity.

This means that those who saw the #dcblackout tweets may be more amenable to the idea that the government can cut off communications and dispose of troublemakers. Generating this wariness of authorities also enables future disinformation, as belief in conspiracy theories is associated with distrust of authoritative and expert figures.

The immediate effect of the #dcblackout operation is not trivial. Eye-catching, but ultimately false, assertions that the government was abducting protestors can disillusion neutral observers and distract from the completely legitimate grievances aired by the protestors and the Black Lives Matter movement. However, it is more pernicious in the long term—disinformation doesn’t explode; it festers. It stokes conflict, reduces trust in institutions and undermines public knowledge.

What Can Platforms Do?

While the social media platforms are improving in their response to misinformation and disinformation, their current approaches cannot stop the opportunistic strategy that the #dcblackout campaign appears to have employed. That effort used new and hacked accounts, so there was nothing to take down in advance. It also amplified a new claim about an event that had supposedly just happened, meaning there was no realistic way to fact-check the tweets. Although the situation was ripe for disinformation, the existing tools Twitter relies on could not combat it effectively. However, the very predictability of that situation creates an opportunity for a different intervention: inoculation.

An emerging body of research demonstrates that putting people on guard for misinformation through so-called inoculation messages can make them more resilient to persuasion. In three experiments, when young adults were asked to act like fact checkers, they became less susceptible to disinformation and the illusory truth effect. This approach can work by guarding against misinformation in specific domains, such as by explaining the scientific consensus on climate change. It can also work more broadly, by explaining the misleading tactics used by propagandists. A group of researchers built a game called “Bad News” meant to expose users to common strategies of disinformation campaigns, such as impersonating authorities and appealing to emotional and polarizing opinions. A nonrandomized analysis of those users found that playing the game improved their ability to detect misleading headlines.

The circumstances in which disinformation campaigns thrive are well understood, and social media platforms have enormous historical records of when such campaigns have occurred. This makes it quite possible for the platforms to build effective predictive systems that identify when information operations are likely to try to spread disinformation. The #dcprotests discussion would have checked every box: It was fast-moving, high-attention and politically charged. It even involved the Black community and police brutality, a topic that has been the target of disinformation campaigns many times in the past.
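To make this concrete, here is a minimal sketch, in Python with scikit-learn, of how a platform might train a simple risk model on conversation-level features drawn from past incidents. The feature set, the toy training data and the labels are hypothetical placeholders invented for illustration; a real system would draw on far richer internal signals.

```python
# Hypothetical sketch: score an emerging conversation for disinformation risk.
# Feature names and training data are illustrative placeholders, not real platform data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row describes one historical conversation (e.g., a trending hashtag):
# [tweets_per_minute, share_of_accounts_under_30_days_old, mean_toxicity_score, political_topic_flag]
X_train = np.array([
    [120.0, 0.45, 0.70, 1],  # fast-moving, many new accounts, heated, political
    [15.0,  0.05, 0.20, 0],  # ordinary, low-risk conversation
    [200.0, 0.60, 0.80, 1],
    [40.0,  0.10, 0.30, 1],
    [8.0,   0.03, 0.15, 0],
])
# Label: 1 if a disinformation campaign was later confirmed in that conversation, else 0
y_train = np.array([1, 0, 1, 0, 0])

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Score a new, fast-moving political conversation as it emerges.
new_conversation = np.array([[150.0, 0.50, 0.75, 1]])
risk = model.predict_proba(new_conversation)[0, 1]
print(f"Estimated disinformation risk: {risk:.2f}")
```

The particular classifier matters less than the inputs, which are the kinds of signals (tweet velocity, account age, toxicity, topic) that platforms plausibly already track at scale.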

The ability to identify the circumstances that allowed #dcblackout to spread suggests how platforms might be able to prevent future, similar campaigns from gaining traction. Modern methods of natural language processing and network analysis can be used to detect when emerging online conversations are getting heated, toxic and partisan. They can also discover other factors associated with vulnerability to disinformation, like common topics or hashtags. By employing these tools, social media companies could create systems to automatically identify circumstances ripe for disinformation. They could then employ inoculation messages that would appear in a user’s feed and, for example, remind users to consider accuracy or ask them to act as fact checkers for their community. Messages might also explain common strategies of disinformation campaigns, so that users can be on guard against them. Even without knowing any specifics about potential disinformation to come, this can generate a more skeptical and prepared audience. A/B testing would even allow the digital platforms to evaluate which messages were most effective and to improve over time.
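As a rough illustration of the delivery and testing step, the sketch below assigns one of several candidate inoculation banners at random to users entering a high-risk conversation and logs each assignment so the variants can be compared later. The banner texts, the risk threshold and the logging format are all assumptions made for this example, not features of any platform’s actual system.

```python
# Hypothetical sketch: choose and log an inoculation banner for A/B testing.
# Banner texts, the risk threshold and the logging format are illustrative only.
import csv
import random
from typing import Optional

RISK_THRESHOLD = 0.7  # assumed cutoff applied to a risk score like the one sketched above

BANNERS = {
    "accuracy_prompt": "Before you share: is this claim accurate? Check the original source.",
    "fact_checker_prompt": "Act as a fact checker for your community and look for firsthand evidence.",
    "tactics_warning": "Manipulators often impersonate authorities and stoke outrage. Stay skeptical.",
}


def assign_banner(conversation_risk: float) -> Optional[str]:
    """Return a randomly chosen banner variant, or None if the conversation is low risk."""
    if conversation_risk < RISK_THRESHOLD:
        return None
    return random.choice(list(BANNERS))


def log_assignment(user_id: str, variant: str, path: str = "inoculation_log.csv") -> None:
    """Record who saw which variant so downstream sharing behavior can be compared."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([user_id, variant])


# Example: a user opens a conversation that the risk model scored at 0.82.
variant = assign_banner(conversation_risk=0.82)
if variant is not None:
    log_assignment(user_id="user_123", variant=variant)
    print(BANNERS[variant])
```

Comparing how often users in each group go on to share suspect content would then show which message works best and allow the platform to improve it over time.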

By implementing inoculation, social media companies may be able to substantially reduce the spread of disinformation in fast-moving and contentious political debates. The strategy is especially appealing as a way to fill the data voids created by emerging conversations, buying journalists and experts the time they need to produce more authoritative, evidence-based content.

This approach can meaningfully improve platform quality. Increasing the sharing of reliable information will strengthen online communities and even make users more loyal to a platform. It’s also not a particularly invasive strategy, since it requires only a banner at the top of a feed, and only for some conversations. Platforms do not even need to develop their own materials but can adapt inoculation messages from existing sources like Craig Silverman’s Verification Handbook, the News Literacy Project and Bad News.

In the run-up to the 2020 election, the disinformation landscape has shifted. Social media platforms are certainly more robust than they were in 2016, but human psychology and the speed of emerging stories offer a clear path for propagandists to undermine public knowledge. However, using targeted inoculation messages provides a promising solution. To best fight these opportunistic disinformation tactics, platforms need look only as far as their own users.


Alex Engler is a David M. Rubenstein Fellow at the Brookings Institution, where he studies the implications of artificial intelligence and emerging data technologies on society and governance. Engler also teaches classes on large-scale data science and visualization at Georgetown’s McCourt School of Public Policy, where he is an Adjunct Professor and Affiliated Scholar. Most recently faculty at the University of Chicago, he previously ran UChicago’s MS in Computational Analysis and Public Policy and designed the MS in Data Science and Public Policy at Georgetown University.
