Congress’ Grilling of Tech Companies in 2017 Foreshadows the Debates of 2018

Evelyn Douek
Thursday, January 11, 2018, 7:00 AM

The entrance sign at Facebook's headquarters in Menlo Park, California. (Photo: Wikipedia/LPS.1)

Three congressional committee hearings in late 2017, with representatives from Facebook, Google and Twitter, were high-profile examples of the tide of public opinion turning against the major tech companies. Much of the coverage at the time focused on sensational revelations, such as the disclosure of large-sounding figures for the number of people potentially exposed to content produced by Russian operatives, the platforms’ failure to detect interference despite red flags such as ads paid for in rubles, and the hard-to-forget “Buff Bernie [Sanders]” coloring book ads.

The two days of hearings, however, represented a sincere attempt by both lawmakers and tech companies to grapple with the amorphous problem of these companies’ role in a democracy and to define the narrative going forward. A detailed review of the transcripts reveals the multitude of complex problems that concern both sides, the tensions those problems have exposed, and the many but vaguely defined solutions being proposed.

As the United States prepares for the 2018 midterm elections, reports indicate that just this week the Russian troll farm responsible for most of the meddling in 2016 tripled its office space. It is therefore helpful to revisit the most extensive public accounting of the problems and potential solutions to anticipate how the issues, and the debates around them, will play out this year. The following is a brief summary of my longer account.

Key concerns

The headline problem of malicious actors using social media to interfere in democratic processes encompasses a number of thorny sub-issues. The freedom and openness that undergird a democracy are also what make it more vulnerable to these kinds of subversive activities. While there are concerns about private companies being primarily responsible for moderating public discourse, the undesirability of government interfering directly in political debate, and especially of government determining what constitutes “fake” content as opposed to valid opinion, limits the alternatives. One theme that repeatedly emerged from the exchanges was the need to come to grips with the global and ongoing nature of the problem.

Techniques used

Both members of Congress and the tech companies generally agreed that the malicious actors exhibited a relatively high level of sophistication in how they conducted their information operations. Techniques such as using fake accounts, deploying bot armies and sharing inflammatory, attention-grabbing content to sow division and undermine faith in democracy were discussed as evidence of these actors’ understanding of the unique opportunities and vulnerabilities of the online environment. No one sought to contest the narrative that the campaign was sophisticated and strategically focused on these goals. (Julia Ioffe, writing in The Atlantic, has by contrast called this picture the “stuff of legend in the United States” but one that few Russians would recognize.) Members of Congress repeatedly highlighted the large-sounding figures for posts and views of Russian-generated content, which the tech companies tried to emphasize was a tiny portion of overall content, while hastily and contritely agreeing that any amount was too much. What also became clear was that some of the lowest-tech strategies would be the hardest to counter: tech companies are no better placed than anyone else to detect and prevent the use of shell companies to disguise the geographic origin of money or content, for example.

Potential solutions

The discussion of solutions highlighted how opaque the social media ecosystem is. The tech companies assured Congress that they had incorporated the lessons of 2016 into their monitoring algorithms and broadened the range of signals they use to detect suspicious activity and content of foreign origin. Exactly how they had done this, or what it meant in practice, was largely left unexplained. Commitments to transparency going forward were plentiful but not clearly defined, and did not extend to explaining why the public should have faith in the tweaks made to the automated processes the companies use to police their platforms. There are legitimate reasons for this reticence, of course: the trolls are learning too, and do not need help adapting to the new environment. The hearings also highlighted the need to focus on effectively implementing old solutions to the aspects of the problem that are not unique to online spaces, such as transparency in political advertising and greater collaboration and information sharing with industry and law enforcement.

Grappling with the proper role of government

Common to most of these issues was the difficulty of defining the proper role of government in this arena. The companies pushed back on members’ frustration that the platforms had failed to grasp the full scope and nature of the problem, insisting that this broader view was properly the role of the congressional committees, which have a wider frame of reference, information from law enforcement and access to all the data the companies provided. At the same time, the companies resisted the idea that government needed to play a greater role going forward. Lawmakers, for their part, struggled to balance competing tensions: asking tech companies to take greater responsibility for the harm created by the exploitation of their platforms, worrying about the power this hands them over political discourse, and avoiding government involvement in any kind of censorship, particularly of political speech.

For a more detailed account of the problems, techniques and solutions surfacing from the hearings, read my summary of and excerpts from the transcripts of the hearings held on Oct. 31 and Nov. 1, 2017.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
