Australia’s New Social Media Law Is a Mess

Evelyn Douek
Wednesday, April 10, 2019, 8:28 AM

Facebook offices. (Source: Scott Beale / laughingsquid.com)

When Facebook CEO Mark Zuckerberg wrote on March 30 that the internet could use more regulation of “harmful content,” maybe he should have been more specific. Less than a week after Zuckerberg’s statement, Australia’s Parliament passed the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019, with no public or expert consultation. The act was passed in response to the horrific terrorist attack in Christchurch, New Zealand, on March 15. Australia’s attorney-general said it “will send a clear message that the Australian government expects the providers of online content and hosting services to take responsibility for the use of their platforms to share abhorrent violent material.”

The message is indeed clear: The law creates new offenses and liability, including imprisonment and huge fines, for platforms that fail to take down violent content quickly enough, such as the video of the Christchurch attack that was broadcast live on Facebook. However, the law has received widespread condemnation from internet rights organizations, the tech industry (both within Australia and abroad) and academics who study freedom of expression online. And the critics have a point. The legislation is riddled with ambiguities that make its legal effect and effectiveness uncertain.

What’s in the Legislation?

The centerpiece of the legislation is the creation of new criminal offenses for failing to “ensure the expeditious removal of” or “expeditiously cease hosting” a new category of content defined by the act, called “abhorrent violent material” (§ 474.34). Abhorrent violent material is material recording or streaming abhorrent violent conduct, which is exhaustively defined as engaging in a terrorist act, murder, attempted murder, torture, rape or kidnapping (§ 474.32). However, it is not clear how quickly a platform would need to act in order to comply with the law’s requirements, as the meaning of “expeditious” is not defined at all. The mental intent standard for the offense is “recklessness”—that is, people commit the offense if they are reckless as to their service being used to access or host abhorrent violent material. They do not need to have actual knowledge of specific content to violate the law. This is a higher standard than mere negligence, but it can be satisfied by awareness of a “substantial risk” of abhorrent violent material being available—a risk that, in the circumstances, it was unjustifiable to take (Criminal Code § 5.4).

Australia’s eSafety Commissioner—a pre-existing office responsible for promoting online safety—can, without observing any requirements of procedural fairness, issue a written notice to the service provider identifying content as “abhorrent violent material,” which creates a presumption in any later prosecution that the provider was “reckless” unless it can show otherwise (§ 474.35). It is not clear how these notices affect when the period for “expeditious” removal starts—it is possible that a company could be found to have failed to remove material “expeditiously” even if it removes the content before the eSafety Commissioner issues a notice.

The penalties for these offenses are high. An individual can be imprisoned for up to three years or fined AU$2.1 million (around $1.5 million). A corporation can be fined up to AU$10.5 million or 10 percent of its annual revenue for each offense. It is not clear whether each individual post containing abhorrent violent material might be considered a separate offense—for example, each repost of the Christchurch attack or the 800 different variants of it that were removed by Facebook in the hours following the incident—or how Australian prosecutors or courts might group or categorize such posts.
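
The scale of that uncertainty matters. The following back-of-the-envelope sketch is hypothetical: the AU$10.5 million cap and the 10 percent-of-revenue cap are the figures described above, while the revenue number and the idea of charging each post as a separate offense are assumptions used only to show how wide the range of possible outcomes is.

```python
# Hypothetical back-of-the-envelope calculation of corporate exposure under the act.
# The per-offense caps come from the figures described above; the revenue number
# and per-post offense counting are assumptions for illustration only.

FIXED_CAP_AUD = 10_500_000   # AU$10.5 million per offense
REVENUE_SHARE = 0.10         # 10 percent of annual revenue per offense


def exposure(annual_revenue_aud: float, offenses: int) -> tuple[float, float]:
    """Return (fixed-cap total, revenue-based total) if `offenses` posts were charged separately."""
    return FIXED_CAP_AUD * offenses, REVENUE_SHARE * annual_revenue_aud * offenses


# One charge versus 800 separately charged variants, for a hypothetical company
# with AU$1 billion in annual revenue:
for n in (1, 800):
    fixed, revenue_based = exposure(1_000_000_000, n)
    print(f"{n:>3} offense(s): AU${fixed:,.0f} under the fixed cap, "
          f"AU${revenue_based:,.0f} under the revenue cap")
```

Depending on how offenses are counted, the same conduct could produce exposure that differs by several orders of magnitude, which is part of why the counting question is not a technicality.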

There are exceptions for material necessary for monitoring compliance with the law, for research or for journalism in the public interest, or for material that relates to artistic work, among other things (§ 474.37). The act also does not apply to the extent that it infringes on Australia’s implied freedom of political communication (§ 474.38). Although Australia does not have a constitutional free speech right, the implied freedom protects political communication that is necessary for a functioning democracy.

The legislation also creates an offense if a person becomes aware that their service is being used to stream abhorrent violent conduct and the person does not refer details of the material to the Australian federal police within “a reasonable time” (§ 474.33). Unlike the take-down offense, this reporting requirement applies only to abhorrent violent conduct that is occurring in Australia. That is, social media companies and hosting services are deputized to report to law enforcement if they become aware of abhorrent violent conduct occurring in Australia that has been depicted on their service.

Having passed through both houses of parliament in less than 24 hours, the law was sent straight to the governor-general for assent (a formality) on April 5 and has come into force.

What Are the Main Criticisms?

This was an unusually short timeline for passage of such a significant law, allowing no time for consultation with experts, industry or civil society.

Criticism has been widespread. The U.N. special rapporteurs on counterterrorism and human rights and on freedom of expression have written to the government saying they had intended to provide comments on the proposed legislation, but the law was passed before they had an opportunity to do so. Their letter argues that ambiguities in the law, such as the definitions of a “terrorist act” and of “expeditiously,” endanger freedom of expression by incentivizing companies to err on the side of caution and take material down to avoid liability.

The defenses for journalistic purposes apply only to people “working in a professional capacity as a journalist” and not to ordinary citizens, who may broadcast material that is equally in the public interest. Everyday internet users have been vital in documenting war crimes in places from Syria to Myanmar, and such footage may fall within the act’s reach, because the definition of “abhorrent violent material” covers conduct engaged in either within or outside Australia. Much of this bystander material may not technically qualify, because the definition applies only to material produced by a person engaged in the relevant conduct and not by bystanders, but a social media company needs to decide “expeditiously” whether that is the case. The incentives created by the act are clear: Because of its vague standards and high penalties, rational service providers will err on the side of caution and take down more material than the law actually requires.

“Expeditiously” is not defined, but the minister’s second reading speech (that is, the speech introducing the bill, which Australian courts may use to construe the meaning of legislation) refers to the footage of the Christchurch attack streaming for 17 minutes without interruption, another 12 minutes passing before the first user report, and an hour and 10 minutes elapsing before the first attempts were made to take it down. This suggests that an “expeditious” takedown is measured in hours or even minutes, much less than the 24-hour period a recent German law gives social media companies to remove material that is “manifestly” unlawful. (The German law has itself been criticized for giving companies insufficient time to evaluate content before penalties attach.) A key difference, and perhaps why the Australian attorney-general called this law “most likely a world first,” is the creation of a new category of content targeted by the law and the explicit extension of jurisdiction to conduct that occurs anywhere in the world. The German legislation, by contrast, relies on pre-existing criminal definitions.

Australian tech executives have also expressed concern about the ambiguity of who exactly within a company can be prosecuted under the law, and the potential damaging effect this could have on the tech industry. Tech companies, for example, could be disincentivized to create jobs within Australia’s jurisdiction.

Mark Zuckerberg is unlikely to find himself in an Australian jail anytime soon, though. Even if the law applies to him personally (which is unclear; the attorney-general said it would depend on the circumstances and on whether an executive was “very deeply connected” with the platform), enforcement would be difficult if not impossible. An “international grand committee” of lawmakers from nine countries could not even get Zuckerberg to appear for hearings in the U.K. last year. As an Australian opposition member said during parliamentary debate, “[D]espite the government announcing that a key purpose of this legislation was to enable the jailing of executives of multinational social media giants who breach its provisions, the bill does not do this.”

What Effect Will It Have on Tech Platforms?

The attorney-general said, “Internet platforms have the means to prevent the spread of abhorrent violent material and will face criminal sanction if they do not work expeditiously to remove such material.” This reflects the ethos of the legislation, which can generally be summed up as a direction to “nerd harder” to stop the spread of violent material. But Facebook blog posts in the aftermath of the Christchurch attack highlight why removing the livestream was so difficult. The issue was not that Facebook judged the video to comply with its community standards, or that the company was ambivalent about taking the footage down; the problem was one of capability. It is not at all clear whether, as the attorney-general put it, platforms really do “have the means” to halt the spread of this material.

Consider what happened in response to the Christchurch video. First, Facebook’s systems did not proactively detect the video because, the company said, it lacks a large database of similar footage on which to train its automatic detection systems—which is a good thing. It is also challenging to distinguish such videos from visually similar but innocuous content like video games; flagging that content would create noise for the reviewers and first responders looking for alerts about genuinely ongoing incidents. When automatic detection fails, Facebook relies on user reports of violating content. But no user reported the video during the live broadcast—the first report came in 12 minutes after it ended.
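
To see why those constraints bite, it helps to picture the general shape of such a pipeline: an automated classifier escalates content only when its confidence clears a threshold, and the system otherwise waits on user reports. The sketch below is purely illustrative; the class, threshold and scores are invented assumptions, not a description of Facebook’s actual systems, but it captures why a first-of-its-kind livestream with no viewer reports can slip through.

```python
# Purely illustrative sketch of a detection pipeline of this general shape.
# The class, threshold and scores are hypothetical; nothing here describes
# Facebook's real systems.
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    classifier_score: float  # model confidence (0.0-1.0) that this shows real-world violence
    user_reports: int        # number of viewer reports received so far


# Set high so that visually similar but innocuous content (e.g., video games)
# does not flood reviewers and first responders with false alerts.
ESCALATION_THRESHOLD = 0.9


def needs_urgent_review(video: Video) -> bool:
    """Escalate if the classifier is confident, or if any user has reported it."""
    return video.classifier_score >= ESCALATION_THRESHOLD or video.user_reports > 0


# A novel first-person livestream may score low because the model has little
# comparable training data, and if no viewer reports it, nothing fires:
livestream = Video(video_id="example", classifier_score=0.35, user_reports=0)
print(needs_urgent_review(livestream))  # False until a report arrives
```

Raising the threshold reduces false alarms about video games but makes genuinely novel violent footage less likely to be caught automatically; lowering it has the opposite effect. Either way, the gap is filled by user reports, which in this case arrived only after the broadcast had ended.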

After that point, once the video had been identified, the challenge became preventing its spread. In the first 24 hours, Facebook removed or blocked uploads of more than 1.5 million videos of the attack, in over 800 visually distinct variants. The company said the video was harder to detect than other terrorist propaganda, which it has touted its success in removing, for several reasons: bad actors coordinated to reupload differently edited versions designed to defeat detection, mainstream media rebroadcast the footage, other websites and pages seeking attention recut and rerecorded it, and individuals sought it out at a higher rate, prompted by reporting on the video’s existence.
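
The difficulty of matching “visually distinct variants” is easy to underestimate. The sketch below is a deliberately simplified illustration, not how Facebook’s media matching actually works: an exact cryptographic hash of a file changes completely if even one byte changes, so any recut, re-encode or screen recording of a blocked video produces a fingerprint the blocklist has never seen. Real systems use perceptual fingerprints that tolerate some alteration, but sufficiently edited variants can still evade them.

```python
# Deliberately simplified illustration of why re-edited uploads evade exact
# fingerprint matching. The byte strings stand in for video files; real
# platforms use perceptual (similarity-based) matching, which is more robust
# but can still be defeated by heavily altered variants.
import hashlib


def fingerprint(data: bytes) -> str:
    """Exact fingerprint: any change to the input yields a completely new value."""
    return hashlib.sha256(data).hexdigest()


original_upload = b"original video bytes"
recut_upload = b"original video bytes with one extra frame"  # a trivially edited copy

blocklist = {fingerprint(original_upload)}

print(fingerprint(original_upload) in blocklist)  # True: exact copies are caught
print(fingerprint(recut_upload) in blocklist)     # False: a small edit slips through
```

In practice, each sufficiently altered variant has to be identified before it can be reliably matched at upload, which is part of why the cleanup ran to hundreds of variants and more than a million blocked uploads.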

To be sure, the tech companies have only themselves to blame for people overestimating the capabilities of their artificial intelligence tools, having long hyped them as a solution for most content moderation woes. That overestimation is exacerbated by a lack of transparency into how the various tools work. But it also means that when companies do provide insight into the limitations of their systems, that insight should not simply be ignored.

The attorney-general focused only on Facebook in parliamentary debates, describing the amount of time the video was available on that platform as if it were the only relevant example. But the act will apply to a much broader set of companies, not all of which will have the resources Facebook does to detect and remove content that users upload. Arguably, such laws—written with the major platforms in mind but applying more broadly—will entrench the companies that the law targets, because they are the only companies with sufficient resources to dedicate to complying with such burdensome laws.

More concerning, perhaps, is the fact that the law applies not only to platforms or content services but also to internet service providers (ISPs)—that is, those companies through which people access literally everything on the internet. One Australian professor has commented that the law potentially creates “an expectation for ISPs to apply deep packet inspection monitoring of everything that is said.” The Electronic Frontier Foundation has called indirect hosts of content such as ISPs “free speech’s weakest links” because they are often unable to remove individual posts and so, if facing liability, will remove entire websites or domains.

Why the Rush?

Australia is about to head into an election, which may help explain why the legislation was rushed through. (The opposition called the bill “clumsy and flawed” before turning around and supporting it.) The law also comes amid a broader conversation about Australia’s responsibility for the horror in Christchurch, given that the attacker was Australian.

Tech platforms have become a target of a great deal of criticism—much of it justified—during the ongoing techlash of the past few years. A law emphasizing their role in the Christchurch attack may be popular, but it will do little to address the societal causes of radicalization and racism that caused the event in the first place.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
