U.K. Committee Grills Big Tech on Fake News

Evelyn Douek
Tuesday, February 13, 2018, 2:00 PM

Published by The Lawfare Institute
in Cooperation With
Brookings

On Feb. 8, 11 U.K. Members of Parliament came to Washington to grill witnesses from U.S.-based technology companies as part of an inquiry into “fake news” by the Digital, Culture, Media and Sport Committee. Over four hours, the MPs questioned representatives from Google, YouTube, Facebook and Twitter on malicious content and activity on the tech companies’ platforms. The committee members also probed a broader range of topics, including the companies’ data-gathering and -retention practices, their transparency and their readiness to provide information to British authorities.

Transatlantic Differences in Approach

The committee’s fake news inquiry commenced in January 2017 and reconvened in September 2017 following its suspension during the general election last year. The inquiry was launched in response to reporting that propaganda and untruths had played an unprecedented role in the 2016 U.S. presidential election and resulting fears that such content was having a significant impact on democratic processes more generally. At the time, Committee Chair Damian Collins MP said that “[t]he growing phenomenon of fake news is a threat to democracy” and that major tech companies had a social responsibility to help address the issue.

Most of the concerns raised by British lawmakers were similar to those raised in the U.S. congressional committee hearings with the same companies last year. Both American lawmakers and the MPs focused on a lack of transparency in political advertising, floating the possibility of imposing stricter regulations. During the Feb. 8 hearing, the MPs repeatedly expressed frustration that the companies might have allowed foreign-paid advertising on their platforms during recent elections. When a representative for Facebook suggested that responsibility for this problem lay with the British electoral commission or the persons who had purchased the illegal advertisements, Collins expressed his incredulity (to audible laughter in the room):

It is extraordinary: if Facebook were a bank, and somebody was laundering money through it, the response to that would not be, “Well, that is a matter for the person who is laundering the money and for the authorities to stop them doing it. It is nothing to do with us. We are just a mere platform through which the laundering took place.” That bank would be closed down and people would face prosecution.

No U.S. lawmakers have come close to suggesting prosecution. But Senators Mark Warner and Amy Klobuchar have introduced a proposal for the Honest Ads Act, which would mandate much greater transparency and disclosure for online political advertising.

Beyond this, however, the approaches of lawmakers in the two jurisdictions are diverging. While the U.K. hearings last week attracted much less attention within the U.S. than the earlier congressional ones, the U.K. inquiry might actually be cause for greater concern on the part of the tech companies. During the U.S. hearings, it was clear that members of Congress were struggling to conceptualize the proper role of government in regulating forums for public discourse. The U.K. MPs were far less equivocal. Conservative MP Julian Knight, for example, referred to the recent German law that exposes companies to large fines if they fail to take down “manifestly unlawful content” (which includes hate speech, pornography and potentially fake news) sufficiently quickly. Suggesting that this law had caused a decline in the prominence of such content within Germany, Knight said to the companies’ representatives: “Surely this is strong evidence that the way in which Western democracies protect themselves is to regulate you.”

The difference in approach was at its starkest on the core issue of the day: “fake news.” U.S. lawmakers expressed concerns over stifling political debate, and the difficulty of finding any objective way of determining content’s truthfulness. Congress was much more accepting of what has become a standard line for the tech companies in these debates: that they do not want to, nor should they be asked to become, “arbiters of truth.” The British MPs instead saw this as the companies abdicating their responsibility. Rebecca Pow MP said she was “staggered” by the companies saying they did not have rules on truth:

To me, that gets to the nub of what we are all talking about today. As a platform, you are openly able to spread disinformation … What worries me is what this is doing to our children. Shouldn’t you take some responsibility for it? If you cannot and are not able to and your policing system is not up to it, surely some sort of regulation or body will have to be put in place to ensure that the next generation is safe.

Giles Watling MP invoked the Uncle Ben rule: “You have enormous power, and with enormous power comes great responsibility. You seem to want to duck that.”

The strongest pushback came from Twitter’s London-based Head of Public Policy and Philanthropy, Nick Pickles, who put the issue succinctly:

During an election campaign in the UK, political advertisements are exempt from the advertising rules, so that would be taking regulation of UK political advertising and giving it to American technology companies. In terms of the democratic process, that seems to me quite a robust step to take.

YouTube and Facebook similarly counselled caution. Juniper Downs, YouTube’s Global Head of Public Policy, said that there was robust, ongoing debate about the wisdom of heavy-handed regulation such as Germany’s. Monika Bickert, Facebook’s Head of Global Policy Management, similarly suggested that a cooperative, rather than coercive approach, was more likely to be productive and that regulation could have unintended consequences.

Not mentioned, but in the background of this discussion, was the controversy over apparently false claims made by the major campaigns in the Brexit referendum, which only serves to underline the problems of policing for truth in politics. The MPs did not seem to be thinking about this kind of false content in the hearings—but the implication of their arguments seems to be, for example, that Twitter should have removed content such as the Leave campaign’s claim that leaving the European Union would save Britain £350 million a week. Determining who and what is legitimate in the political sphere is inherently fraught. Asking technology companies to draw lines in such a politically contestable space would indeed be, as Pickles put it, “quite a robust step.”

MP Frustration with Companies’ Lack of Cooperation

Echoing similar concerns from Congress, throughout the hearings the MPs made clear that they did not think the tech companies had been taking these issues seriously enough.

When Facebook’s representatives said they had not seen anything to suggest there had been foreign interference in the Brexit referendum, Chairman Collins MP exclaimed “But you haven’t looked, have you? … You haven’t looked!” In response, Simon Milner, Facebook’s Policy Director, UK, Middle East and Africa, said that unlike in the U.S. election, Facebook had not been given any intelligence reports to suggest that there was any such interference. Collins MP gave this short shrift:

You were insinuating that there was a lack of intelligence in the U.K. that had existed in America, and that the absence of intelligence support from the U.K. meant that that work had not already been done. But in America, it was not intelligence reports from the Government that led to that work being done; it was pressure from Congress, which led to what they would see, I think, as the company doing the bare minimum.

Milner suggested that Facebook required some external cause for concern before taking on the responsibility of investigating. But this seemed at odds with the position the companies had taken before the U.S. congressional committees. In the U.S. hearings, the companies had emphasized that while malicious content comprised a very small proportion of overall content on their services, any amount was too much in their eyes and they were doing everything they could to stamp it out.

As some MPs pointed out, it is difficult to show evidence of interference without access to the companies’ data. As Ian Lucas MP put it, “You have everything. You have all the information. We have none of it, because you will not show it to us.”

While all of the tech companies suggested that their business incentives aligned with lawmakers’ public policy concerns, the MPs were skeptical of this too. In response to YouTube’s assertion that it dedicated significant resources to managing content on its platform, Chairman Collins MP called the company’s allocation of $10 million a year “a small sticking plaster over a gaping wound.” His comments to YouTube reflected the apparent consensus of the day:

We have heard the expression “top priority” a lot. If we judge the company based on what it does rather than what it says, the top priority is maximising advertising revenue from the platform, and a very small proportion of that is reinvested into dealing with some of the more harmful content. That is one of the reasons why we are here and why social concern about this is growing.

Or, as Paul Farrelly MP said to Facebook, “We all think you can do much, much better.”

The Writing is on the Facebook Wall

It remains unclear exactly why the hearing took place in Washington: BuzzFeed reports that the companies had offered to fly their representatives to London. The fact that the hearing was the first ever select committee session to be live-streamed from abroad may be a sign both of how seriously the U.K. is taking this issue and of how important it is that the inquiry be highly visible, so MPs can be seen to be taking it seriously.

This is merely the latest sign of an increasing appetite for regulation within the U.K., along with growing concern over the role of these companies in political debate. At the World Economic Forum in Davos at the end of January, Prime Minister Theresa May devoted a large portion of her speech to social media. She forecast “new rules and legislation” to deal with the loss of trust in social media companies, and reiterated her goal of making the U.K. “the safest place to be online.” Her frustration with the major platforms was clear:

These companies have some of the best brains in the world. They must focus their brightest and best on meeting these fundamental social responsibilities. … As governments, it is also right that we look at the legal liability that social media companies have for the content shared on their sites. The status quo is increasingly unsustainable as it becomes clear these platforms are no longer just passive hosts.

A week earlier, on Jan. 23, the U.K. government also announced plans for a dedicated “national security communications unit … tasked with combating disinformation by state actors and others.” As with May’s statements, or the recent Internet Safety Strategy Green Paper, the details on the unit so far have been sparse. But it seems that it’s only a matter of time: British regulators are coming for Big Tech.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.