
Sen. Cardin's Russia Report Wants to Hold Social Media Companies Accountable, but Its Recommendation Falls Short

Evelyn Douek
Friday, January 19, 2018, 9:00 AM

On Jan. 10, the Democrats on the Senate Foreign Relations Committee released a staff report documenting Russia’s “Asymmetric Assault on Democracy.” The document’s focus is, as Anne Applebaum has observed, the permanent, low-level distortion that Russian-inspired disinformation creates in all of the United States’ most important European allies.

The report also argues that social media companies need to be held accountable for their role in spreading disinformation. This post outlines the report’s findings on how Russia exploits social media to spread propaganda before turning to the report’s specific recommendations for addressing the problem. In general, the 200-page report is a comprehensive, detailed and impressive overview of Russian interference in European and U.S. democracies. Far less detailed are the background facts supporting, and, more importantly, the substance of, the recommendation that governments should hold social media firms accountable for disinformation and other malicious activity on their platforms. While it is clear that social media platforms are a crucial battleground of modern information warfare, the report gives little attention to their unique vulnerabilities or to the proper role of government in policing content on them. These are extremely difficult questions, and they deserve proper engagement before vague solutions are proposed. Instead, the four specific sub-recommendations highlight the tensions inherent in governments seeking greater content regulation online.

The role of social media

The report notes that “[s]ocial media platforms are a key conduit of disinformation that undermines democracies.” However, the bulk of the report focuses on the way Russia has used legacy media such as TV and radio (especially RT and Sputnik), alongside connections with political parties and think tanks, to disseminate disinformation. The report highlights that these long-standing techniques remain at the core of Russia’s disinformation toolbox and cannot be ignored.

However, alongside these more conventional techniques, new technologies have given Russia’s disinformation campaigns tools that amplify their range and reach. As documented in the report, these new tools include trolls (including the now-infamous St. Petersburg troll farm), bot networks, doxing, and increasingly powerful analytics that enhance the capacity to target specific groups with messages to which they might be especially vulnerable or receptive. The report mentions targeting analytics only in passing, and the other tools are mainly discussed in the context of their ability to amplify traditional propaganda techniques. In contrast to the technically sophisticated picture of Russian online information operations that is sometimes painted (such as at the congressional hearings with tech companies last year), the report describes a brute-force style of operation. “President Putin and the Russian government are not master strategists,” it says. “But a few notable qualities make the Russian Federation a considerable opponent: scale, persistence, and adaptability.”

The report thus makes clear that social media is an important plank of Russian strategy, yet it does not delve into the unique challenges presented by the online environment. For example, it does not reference the debates raging over the role of social media in exacerbating ideological bubbles and polarization, the role of platform algorithms in highlighting sensational and extremist (and often false) content, or the difficulties of preventing anonymity or identity misrepresentation online. This is unfortunate, because as the report itself notes, “the problem has to be identified and understood before it can be addressed.” While the focus of the report is obviously on Russian actions, the call to hold social media companies accountable needs to be made in a context that fully and accurately describes their role in creating the current online information environment.

The report also lacks detail when it comes to quantifying the spread of disinformation online. This is no doubt because investigations are ongoing, but the report also implies that it is because social media companies are not being sufficiently transparent. It notes at least two instances—in the U.K. and France—where social media companies provided information about the scale of malicious activity on their platforms that likely understated the true extent of the problem.

The report’s distrust of social media companies’ ability or willingness to counter information operations on their platforms is in tension, however, with its recommendation for remedying this vulnerability, which places much of the responsibility at the companies’ door.

Lack of clarity on the proper role of government

The recommendation to hold social media companies accountable for the spread of disinformation on their platforms is the ninth out of 10 key recommendations in the report. It stands out, though, because it is the only one that is not primarily directed at the government. The report’s four sub-recommendations on how to establish accountability evince varying degrees of comfort with governmental involvement.

The first sub-recommendation is that government should mandate transparency in political advertising, and the report is clearly comfortable with governmental involvement in this kind of regulation. In the U.S., this initiative is embodied in the proposed Honest Ads Act, and the report seems to envision similar initiatives in other countries.

The second sub-recommendation is that social media companies should audit the level of Russian-linked disinformation activity on their platforms during election periods. The role for government here is more passive: because the companies control the relevant information, it is they who should conduct the audits, while government should merely “increase pressure on and cooperation with” them. The report thus seems reluctant to endorse using coercive power to require such audits. Given that this information is critical to understanding the nature of the security threat, the contrast with the mandate recommended for transparency in political advertising is notable.

The role of government in the third sub-recommendation is even more diminished. Under this recommendation, government would merely participate in civil-society advisory councils that the report suggests social media companies should establish. The councils’ goal would be to provide input and warnings about emerging disinformation trends, which would in turn feed into a media-literacy curriculum, developed by philanthropic organizations, that could be offered to the public free of charge.

However, it is the final sub-recommendation that most starkly highlights the fundamental tensions facing governments that seek to address harmful speech on social media. It reads:

While accounting for freedom of speech concerns, social media companies should redouble efforts to prevent, detect, and delete [malicious inauthentic and/or automated] accounts, especially those that are primarily used to promote false news stories (emphasis added).

This recommendation raises more questions than it answers. How to account for freedom of speech concerns is the single most difficult issue in the disinformation quagmire. As the report itself notes, it is democracies’ robust commitment to free and open debate that makes them asymmetrically vulnerable to such information operations. Furthermore, the general exhortation to respect free speech concerns fails to identify what kind of free speech the report thinks the U.S. government should champion. America has historically shown a strong commitment to First Amendment exceptionalism, which requires much broader protections for speech than other jurisdictions provide. Is it this wider understanding of free speech that social media companies should take into account? Or the more circumscribed understandings of protected speech that prevail in the other jurisdictions in which they (and Russian propagandists) operate? The report itself notes the difficulties social media companies face in moderating content across different cultures and languages, citing a NATO report that found Twitter had been less effective at removing Russian-language disinformation than messages in English. The legal differences among jurisdictions only compound this difficulty.

The report notes these issues in passing in its discussion of Germany’s recently enacted Network Enforcement Act (NetzDG), which requires social media companies to remove obviously illegal content within 24 hours. It cites critics who say that the law places too much power in the hands of companies and poses a risk to free expression by incentivizing them to remove content simply to avoid fines. Yet the recommendation in the Cardin report similarly seeks to outsource responsibility for content moderation to the social media companies themselves, asking them to “redouble efforts” without further guidance.

By not acknowledging the complexity of this decision and its implications for free speech, the report, like the congressional subcommittee hearings in 2017, shows a lack of clarity about the proper role of government in tackling this problem. The simple proposal to hold social media companies to account belies the complexity of crafting a solution to online disinformation in free and open democracies.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
