
U.K.’s Online Safety Bill: Not That Safe, After All?

Edina Harbinja
Thursday, July 8, 2021, 1:36 PM

The U.K. government's long-awaited Online Safety Bill was published on May 12. What does it say?

The Palace of Westminster, which houses the Parliament of the United Kingdom. (Jorge Láscar, https://flic.kr/p/PZYa9t; CC BY-SA 2.0, https://creativecommons.org/licenses/by-sa/2.0/)


The U.K. government’s long-awaited Online Safety Bill was published on May 12, following a series of documents over the past few years that announced reforms in the area of online harms and the regulation of platforms. Its notable predecessors include the Internet Safety Strategy Green Paper and the Online Harms White Paper. In the bill as published, the term “harms” has been replaced by “safety” in the title, and the content largely reflects this change in focus.

With the introduction of the bill, the government expressed an ambitious plan to show “global leadership with our groundbreaking laws to usher in a new age of accountability for tech and bring fairness and accountability to the online world.” After Brexit, it was much easier for the U.K. to claim “regulatory agility” and commitment to innovation, outside Brussels and its slow bureaucratic legislative and regulatory processes. The U.K. can almost act as a “regulatory sandpit,” where mechanisms can be explored and tested more quickly, for better or for worse.

The bill was met with support from the child protection community, but with suspicion and warnings from digital rights and civil society organizations. Leading civil society organizations warned against it, noting that it introduces “state-backed censorship and monitoring on a scale never seen before in a liberal democracy,” as well as “collateral censorship, the creation of free speech martyrs, [and] the inspiration it would provide to authoritarian regimes.” Others argued that it is “trying to legislate the impossible—a safe Internet without strong encryption[.]” The bill needs to go through both houses of the U.K. Parliament, and it will certainly undergo some changes, but its main goals and key mechanisms, both concerning, are likely to remain in the final text.

The bill establishes a new regime for the regulated internet services within its scope. It has the following key aims: to address illegal and harmful content online (terrorist content, racist abuse and fraud, in particular) by imposing a duty of care concerning this content; to protect children from child sexual exploitation and abuse (CSEA) content; to protect users’ rights to freedom of expression and privacy; and to promote media literacy. The bill designates the Office of Communications (OFCOM), the U.K.’s current telecommunications and broadcast regulator, to oversee and enforce the new regime, and it requires OFCOM to prepare codes of practice to implement the duty-of-care mechanism.

Because this legislation is long—at 145 pages in draft form—and detailed, this post will focus on the bill’s scope, some key regulatory requirements, and concerns related to digital rights and censorship.

Scope

Services included in the scope of this law are “user-to-user services” (an awkward term to describe an internet service that enables user-generated content; for example, Facebook or Zoom) and “search services” (search engines such as Google). To fall within the scope of the law, a regulated service needs to have links with the U.K.: it must be capable of being used in the U.K., or there must be “reasonable grounds to believe there is a material risk of significant harm to individuals” in the U.K. from the content or the search results.

Schedule 1 of the bill specifies certain services and content excluded from the scope of this regime. This includes emails, SMS messages and MMS messages, but the exclusion applies only if the service or content represents “the only user-generated content enabled by the service,” so Facebook Messenger, for example, does not qualify and would be regulated. Also expressly excluded are comments and reviews on provider content, internal business services, paid-for advertisements and news publisher content (though the site needs to be a “recognised news publisher”), certain public body services, and “one-to-one live aural communications” (communications made in real time between users, though the exclusion applies only if the communications consist solely of voice or other sounds and do not include any written message, video or other visual images, so Zoom, for example, does not qualify and is within the bill’s scope). All these caveats mean that the list of exceptions is quite narrow. Moreover, the bill gives significant power to the secretary of state for digital, culture, media and sport to amend Schedule 1 and either add new services to the list of exemptions or remove some of those already exempt, based on an assessment of the risk of harm to individuals. This power gives the government minister a lot of discretion, which, if misused, could lead to policing private messaging and communications channels such as Messenger or Zoom. The rationale for giving that discretion to the minister is probably to address illegal content such as terrorist and CSEA content, but questions about the effect of the bill on encryption, security and privacy remain unanswered.

The scoping provisions of the bill have other problems, too. The draft hardly refers to “platforms” at all, even though the government uses that term to refer to Category 1 service providers, explained further below. Instead, it chooses to encompass all service providers, save the exempt ones mentioned above. This creates a conflation of terms: “platform” is the term commonly used by academics and lawmakers elsewhere; the European Commission, for instance, uses it in the proposal for the Digital Services Act. Additionally, this word choice creates confusion as to the importance of large and very large platforms for policing user content and making decisions about users’ digital rights. Once the bill is passed, the secretary of state will prepare delegated secondary legislation and OFCOM will introduce codes of practice to specify different levels of duty of care and liability, depending on risk assessments, the prevalence and persistence of serious illegal and harmful content on a service, the dissemination of this content, and the like. At the moment, the extent and reach of the duty of care remain unclear.

Duty of Care

The draft Online Safety Bill retains the underlying principle of a duty of care, introduced in the Online Harms White Paper in 2019. This is a duty derived from health and safety law and imposed on certain service providers to moderate user-generated content in a way that prevents users from being exposed to illegal and harmful content online. It has been criticized by many, including this author, for its inadequacy, its vague nature, rule-of-law concerns and its human rights impact, among other factors.

The bill divides service providers into four key categories when it comes to their duty-of-care obligations: (a) all providers of regulated user-to-user services, (b) providers of user-to-user services that are likely to be accessed by children, (c) providers of Category 1 services (providers with additional duties to protect certain types of speech), and (d) search engine providers. All categories share some common duties. These include the risk assessment of illegal content; duties concerning illegal content, primarily terrorist content, CSEA and other illegal content; duties regarding the rights to freedom of expression and privacy; duties about reporting and redress; and record-keeping and review duties.

“Category 1” is a new category of service providers that was absent from the Online Harms White Paper and earlier proposals. In addition to the above duties, these service providers will have two distinct types of duties: the duties to protect content of democratic importance, and the duties to protect journalistic content.

Content that is “of democratic importance” is broadly defined as content intended to contribute to democratic political debate in the U.K. Essentially, the bill imposes a duty not to remove this particular type of speech. The definition is very broad and overlaps with journalistic content, discussed below. Notwithstanding the numerous and continuing issues with content moderation, this duty will add a further burden in the already problematic area of private policing of free speech. A separate question is whether political speech should be distinguished in this way from other important forms of free speech.

In terms of journalistic content, the bill introduces a wide definition of “journalistic content” and acknowledges the importance of this type of content shared on platforms. Category 1 providers will be required to “make a dedicated and expedited complaints procedure available to a person who considers the content to be journalistic content.” The definition seems to cover user content “generated for the purpose of journalism,” as long as there is a U.K. link. The government’s press release noted that “Citizen journalists’ content will have the same protections as professional journalists’ content.” In practice, though, it will be difficult to ascertain whether a given user post should be deemed journalistic and a takedown challenged as such. The language in the bill also opens the door to confusion as to what content will be “of democratic importance” as opposed to “journalistic.”

Again, the scope of Category 1 is unclear at the moment. The bill suggests that the secretary of state will make regulations specifying the conditions for Category 1 services, based on the number of users and service functionalities, and will need to consult OFCOM in doing so. The government hinted in its press release that Category 1 will include large platforms and social media. This is a missed opportunity: the language in the bill is quite vague, in contrast to the European Union’s proposal for a Digital Services Act, with its reasonably clear definitions of online platforms and very large platforms (at least 45 million average monthly users in the union).

Interestingly, online scams and fraud made it into the scope of the published bill, despite having been excluded from the earlier documents published by the government, including its response to the online harms consultation from December 2020. Industry pressure to include fraud in the scope of the reform seems to have worked, as the government stated that measures to tackle user-generated fraud are covered by the bill. That said, fraud that is not user-generated and is conducted via advertising, emails or cloned websites, for example, would fall outside the bill’s scope.

The Regulator and Its Powers

OFCOM, the current electronic communications and broadcast regulator, will act as the online safety regulator to enforce the new law. It will be equipped with various enforcement powers, including fines of up to 18 million British pounds or 10 percent of a provider’s annual global revenue. A new power given to OFCOM is the “technology warning notice,” which the regulator can issue if it believes a provider is failing to remove illegal terrorism or CSEA content and that this content is prevalent and persistent on the service. If OFCOM is satisfied that the measure is proportionate, the service provider will be required to use “accredited technology” to identify terrorism or CSEA content present on the service and to “swiftly take down that content.” In practice, this could mean that service providers may be obliged to install filters, which may interfere with encryption and affect users’ privacy and free speech.

Therefore, it seems that the enforcement powers, such as enforcement notices, technology warning notices, business disruption measures, and senior managers’ criminal liability, give OFCOM quite a lot of teeth. Still, there are concerns around OFCOM’s regulatory capacities and suitability. Historically, OFCOM was designed to regulate entirely different industries, and it’s not clear whether it could perform all the tasks envisaged by the bill, in particular those related to human rights and free speech.

Interestingly, OFCOM will be required to set up an advisory committee on disinformation and misinformation (“fake news”), whose members will include “persons with expertise in the prevention and handling of disinformation and misinformation online.”

“Eligible entities” (presumably civil society groups, but this too is to be specified by the secretary of state later) can make a complaint to OFCOM about a service provider’s conduct that causes harm to users, affects free speech or privacy, or has other impacts.

Therefore, the bill creates a powerful online regulator, with potent enforcement mechanisms, which could have significant and lasting effects on businesses and digital rights. There is a real danger that OFCOM may not be able to undertake this role effectively, given all the other areas within its regulatory remit, plus its lack of human and technical capacity.

Red Tape That Threatens Digital Rights?

The bill retains the distinction between “illegal” and “legal, but harmful” content. This is problematic for two key reasons. First, the bill defines harm very vaguely: “The provider … has reasonable grounds to believe that the nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on a child (adult).” It is unclear what indirect harm includes, and it could capture a very wide category of content that affects human rights.

Second, the service provider will need to determine whether content is harmful (directly or indirectly) to children or adults, when the “service has reasonable grounds to believe that the nature of the content is such that there is a material risk of harm[.]” The standard used for this assessment is a risk of harm to an adult or child of “ordinary sensibilities.” This is a vague legal standard, which does not correspond to the well-established standard of a “reasonable person,” for instance, and leaves many open questions, such as whether those who are “easily offended” fall into this category.

The bill mentions human rights and digital rights only broadly and tangentially, vaguely mandating “duties about rights to freedom of expression and privacy” (Section 12). This section of the bill seems disjointed from the rest of the proposal and reads, in my view, almost as a necessary add-on. For example, the bill will protect individuals only against “unwarranted” privacy invasion, leaving open the question of what “warranted” invasions are.

The intermediary liability regime as established in the EU’s E-Commerce Directive (eCD), implemented in the U.K. during its membership in the EU, has been mostly retained in the European Commission’s Digital Services Act proposal from 2020. Intermediary liability and “safe harbor” considerations for platforms are absent from the U.K.’s bill, however. The government refers to Section 5 of the European Union (Withdrawal) Act 2018, stating that “there is no longer a legal obligation on the UK to legislate in line with the provisions of the eCD following the end of the transition period on 31 December 2020.” This means that the bill does not alter the existing intermediary liability structure, but the government also seems to be announcing future changes to this regime. This language is concerning as it fails to acknowledge issues of filtering, monitoring of user content and censorship. One of the biggest concerns is the likely reversal of the prohibition of general monitoring of users in the U.K., established in Article 15 of the E-Commerce Directive, which aimed at protecting users’ privacy. Worryingly, the bill’s enforcement measures outlined above directly encourage user monitoring and do not guarantee encryption.

The bill will impose significant red tape and a bureaucratic burden on service providers and OFCOM. Service providers will be required to make numerous judgment calls, including what counts as democratic, political and journalistic speech, and which content is harmful to users, directly or indirectly and to what extent. This will require time and resources that many service providers will neither have nor be willing to commit, as seen in examples of content moderation and data protection compliance over the past few years. The most direct consequence may be over-moderation by the big tech players, on the one hand, and a struggle by smaller companies to comply, on the other. It is, therefore, doubtful whether the bill can ever be implemented in practice.

The Online Safety Bill will first be scrutinized by a joint committee of members of Parliament before it is introduced into Parliament, and there is some hope that modifications to the bill will improve the text before it comes into force. However, concerns around key principles such as the duty of care, as well as the lack of human rights protections, are likely to remain.


Dr. Edina Harbinja is a senior lecturer in law at Aston University, Birmingham, UK. Her principal areas of research are related to the legal issues surrounding the Internet and emerging technologies.
