Platforms Are Testing Self-Regulation in New Zealand. It Needs a Lot of Work.

Curtis Barnes, Tom Barraclough, Allyn Robins
Friday, September 2, 2022, 8:01 AM

Key structural weaknesses will likely make this industry-led initiative ineffective in promoting true transparency and accountability among social media platforms.

Published by The Lawfare Institute
in Cooperation With
Brookings

On July 25, New Zealand adopted a new industry-led mechanism designed to provide guidance for social media platforms to enhance safety and mitigate online harm: the Aotearoa New Zealand Code of Practice for Online Safety and Harms. The code’s architects have repeatedly proclaimed it to be unique, both in their public announcements and in the text itself, because of its governance framework and breadth compared to other self-regulation efforts such as Australia’s Code of Practice on Disinformation and Misinformation and the EU’s Code of Conduct on Countering Illegal Hate Speech Online. While complaints under the code can be made only by residents of New Zealand, the code’s text clearly positions it within a broader international context. 

Already, New Zealand has an outsized role in global tech policy. Initiatives such as the anti-terrorist Christchurch Call, which is a commitment by governments and tech companies to eliminate violent extremist content online, in addition to Prime Minister Jacinda Ardern’s leadership on challenges posed by the online information ecosystem, have made the country a significant player in the conversation about social media and governance. As such, the code’s primary audience is as much international as domestic—meaning that it will serve as a model likely to be advanced in a number of jurisdictions around the world.

Over the past nine months, the code’s initial signatories—Meta (on behalf of Instagram and Facebook), Google (on behalf of YouTube), TikTok, Twitch, and Twitter—have been involved in drafting its provisions to varying degrees. Concurrently, legislative processes around the world have gathered momentum as lawmakers aim to substantially reform many aspects of social media services. The New Zealand government itself is in the midst of the Content Regulatory Review (CRR), an initiative intended to align some of the statutory obligations of internet media companies with those of their traditional media counterparts. Like the code, the CRR is mobilized beneath the banner of “preventing online harms.”

Companies signed to the code voluntarily agree to make “best efforts” toward the following set of commitments:

  • To “reduce the prevalence of harmful content online.”
  • To “empower users to have more control and make informed choices.”
  • To “enhance transparency of policies, processes and systems.”
  • To “support independent research and evaluation.”

These commitments alone provide some insight into the various signatories’ regulatory preferences. The code divides the commitments into 13 “outcomes” and further into 45 specific “measures” that the companies are expected to take. Signatories are obliged to present annual compliance reports for evaluation and publication by the organization appointed as the code’s administrator. This administrator will also create and operate a complaints process, allowing New Zealand residents to complain if they believe that a signatory is not meeting its commitments under the code. When a company signs onto the code’s framework, it identifies which of its products the code will apply to and can further choose to opt out of any measures that it may feel are not relevant to the products in question. 

It is a truism to observe that the code is a strategic maneuver by companies operating in jurisdictions that are in the process of reform. Skeptics have latched on to this point, going so far as to call the code “a Meta-led effort to subvert a New Zealand institution” and “a weak attempt to pre-empt regulation.” But such criticisms miss several critical points. First, there is little realistic prospect that mechanisms like the code will divert or even delay regulation. If the New Zealand government elects to discontinue its legislative agenda, the government itself is then accountable for that decision. Further, the code explicitly anticipates and defers to any future legislation (and such legislation may include the code’s adoption by the state under a co-regulatory model). Moreover, claims that the government alone has the legitimacy to create regulatory frameworks for the internet merit careful consideration. The mere fact that the code is self-regulatory (or even self-interested, for that matter) doesn’t necessarily mean that no good will come of it.

However, no good will come of the code if it is weak or ineffective—and unfortunately, this appears likely to be the case.

As it stands, three major uncertainties will determine the code’s overall impact: first, the governance arrangements across and between the code’s various key parties; second, the nature of the code’s complaints and reporting processes, which are yet to be established; and third, the approach and expertise of the code’s administrator. While these uncertainties make it difficult to comprehensively critique the code at this stage, they foreshadow weaknesses in several essential operational functions.

One could argue that by paying for the code’s development and future administration, tech company signatories have responded to civil society’s calls for social media companies to take financial responsibility for the issues they generate. But most civil society organizations in New Zealand will still lack sufficient funding to meaningfully engage with the mechanisms that the code will create. And organizations with the resources and expertise to engage critically with the code will incur even more strain on their already limited resources now that they are expected to do so. Moreover, there is little prospect that the country’s thinly stretched philanthropic sector will remedy this problem.

Like many of its legislative counterparts, much of the code’s operational substance is ambiguous or deferred to later processes, most of which will be developed between signatories and the code’s administrator. The administrator plays a pivotal role in all matters material to the code’s key functions, from transparency reporting to evaluating the signatories’ compliance. There is no avoiding the fact that the administrator is hand-picked and funded solely by the signatories to tell them how they’re doing.

This dynamic may constitute a conflict of interest that some advocates are likely to find intolerable—though the same is true of virtually every self-regulatory regime, and by now the risks are well understood. However, the code heightens such risks by requiring unquantifiable and subjective assessments of compliance—such as whether a signatory has made “best efforts” to “implement, enforce and/or maintain policies, processes, products and/or programs that seek to provide users with appropriate control over the content they see, the character of their feed and/or their community online.” It is exceptionally difficult to assess whether obligations like this have been met, and even harder for complainants to demonstrate that they have not. The code contains 45 of them.

Notably, the code clarifies that the administrator does not work directly for the signatories—although observers will nonetheless need to pay careful attention to the contractual relationship between signatories and administrator to avoid any terms that compromise the administrator’s independence. In similar self-regulatory initiatives, perceived or actual conflicts are mitigated through the establishment of structural arrangements that build public trust and confidence. Meta’s Oversight Board, for example, is funded by a separate endowment under an independent charter. Such structures are absent in the current version of the code.

The administrator does have a kind of security of tenure. Any performance review of the administrator can be scheduled only by the code’s other major governance pillar, the Oversight Committee. The details of this committee are yet to be revealed, though we note it will contain an indeterminate number of representatives from the signatory companies, in addition to other “agreed upon” stakeholders such as Māori cultural partners, civil society, and relevant academics and government officials. The process of stakeholder selection is also yet to be revealed by the administrator or the signatories. We intend to publish a longer list of questions that must be answered before the effectiveness of the code and the independence of its stakeholders—including the Oversight Committee—can be adequately assessed. Among them: How exactly will the Oversight Committee be composed? What arrangements will its architects put in place to mitigate dominance by signatory representatives on the committee? And will signatories agree to provide transparent and unconditional funding to allow less-resourced stakeholder groups to engage both productively and in good faith?

Under the code, unless the Oversight Committee intervenes or the administrator resigns, its term will “continue indefinitely.” As a result, the administrator is technically free to exercise the limited powers of the position as it sees appropriate. It is safe to assume that the manner and rigor in which the administrator exercises these powers will be a major factor in whether the current signatories continue to participate and, perhaps more importantly, whether new signatories can be attracted to join. In particular, the administrator will need to draw upon the body of principles in the code to justify its discretionary decision-making. Pragmatically, the code’s success also hinges heavily on the administrator’s diplomatic skills. If the administrator is perceived as too soft on the signatories, the code will garner no public legitimacy. But if the reverse is true, the assembly of signatories may stagnate or dissolve altogether—a tightrope that other self-regulatory and multi-stakeholder initiatives are very familiar with at this point.

The administrator’s powers to sanction are strictly limited, broadly equivalent to public statements of “I’m not mad, just disappointed.” (These statements can be issued only where there is a “proper basis to do so,” once “reasonable notice” has been issued to the relevant signatory and once a “consultation” has taken place.) Additionally, if the administrator and Oversight Committee agree to do so, they can remove a signatory for repeated breaches of the code. While this will hardly inspire regulators to shoulder arms and retreat, it’s a power worth accounting for. Almost all the commercial advantages of participating in the code lie in improved public and government relations, and these advantages won’t be limited to operations in New Zealand. While platform internet companies are no strangers to public criticism, reprimand for failing to meet the obligations of a self-regulatory mechanism—especially one as mild as this code—would only provide more evidence that self-regulation has failed to rectify public distrust, probably precipitating even greater regulatory intervention than presently anticipated. Worse still, it might attract interventions targeted directly at the errant signatory, rather than the platforms as a collective.

But the complaints process that could lead to such reprimands is yet to be defined—and the obligations established by the code contain so much leeway that it is difficult to imagine a situation in which the signatories could be pinned down for breaching their commitments. In its current form, the code arguably allows signatories to satisfy its reporting obligations merely through comprehensive completion and submission of a template report, regardless of what that report actually says. Viewed in this way, the administrator’s “assessment” responsibility may in effect amount to little more than periodically checking its email inbox to see whether a report has come in and whether the form has been fully filled out.

If this seems like an uncharitable interpretation of the code’s assessment responsibilities, consider that the administrator has no explicit powers to ask for more information in order to determine whether the information included in a signatory’s report proves that they have actually met their obligations. For example, the administrator has little power to investigate the true efficacy of any of the measures the signatories claim to be taking, most of which will be things the companies are already doing anyway. All the administrator can do is take the companies at their word—one of many matters making it unclear what standard is required to make a successful complaint under the code.

It’s possible that the “proper basis” clause of the administrator’s power to criticize offers a backdoor to an implied power to request more information. But the fact that the administrator’s powers to gather information have to be implied is suboptimal for a code that is explicitly founded on the idea that accountability and trust can be generated through voluntary transparency. 

Per the code, the administrator must consult with a signatory before publicly criticizing it. The core of this consultation is whether there is a “proper basis” for issuing public criticism under the code. This measure is arguably necessary to preserve natural justice and to maintain the working relationship, and it also creates an opportunity for the administrator to offer interpretations and criticisms of the claims made by the signatory in their report. For example, the administrator could cross-reference a signatory’s report with other information not contained within the report itself. Equally, this consultation is an opportunity for signatories to share additional context that hasn’t been provided publicly in order to help explain where a mistake may have been made. Of course, a signatory may refuse or be unable to share such information. But such a refusal will become a relevant factor in assessing whether there is a “proper basis” for public criticism. And still, even where signatories do choose to reveal more information, this will probably occur in confidence with the administrator, meaning that the engagement will fall short of a truly transparent accountability mechanism.

Either way, the administrator must possess the skill and subject matter expertise necessary to meaningfully assess the substance of the signatories’ reports in the absence of further information. To have any chance of doing this successfully, an administrator will need a demonstrable understanding of contemporary internet governance matters, including the technical capabilities and limitations of automated content moderation, as well as how human rights frameworks are applied to free expression, privacy, and the other rights that must be protected.

Whether the chosen administrator possesses such skill and expertise is doubtful. Originally, the intended administrator for the code was an organization called Netsafe, a nongovernmental organization that holds a statutory appointment under harmful digital communications legislation to mediate online safety disputes. Having co-drafted the code with the signatories, Netsafe subsequently stepped back from the administrator role to avoid the appearance of a conflict of interest. In lieu, the signatories have chosen the New Zealand Tech Alliance (or NZ Tech) to fill this crucial role. NZ Tech is a membership-based organization focused principally on promoting the success of the nation’s digital technology industry. Its work entails hosting events, developing community, sharing best practices, connecting businesses to commercial opportunities, and lobbying the government for solutions to problems like the critical lack of skilled information technology professionals within New Zealand. 

It’s fair to assume that NZ Tech has the capacity to carry out some core functions of the administrator’s job, such as recruiting additional signatories, organizing events, and maintaining the code’s public presence generally. But it’s less clear that NZ Tech has the necessary experience to carry out the other core business of the administrator—that is, establishing and facilitating an effective public complaints mechanism, administering the annual compliance reporting process, assessing the signatories’ transparency reports to determine whether they have met their obligations, gauging whether there is a “proper basis” for rebuking a signatory, and executing public sanctions in a rigorous way without sabotaging working relationships. These are the most important responsibilities of the administrator’s role.

As things stand, the appointment of NZ Tech to serve as administrator undermines the credibility of the code. It’s doubtful that the organization possesses the personnel and processes necessary to demonstrate that it meets the appointment criteria stipulated by the code itself. It has little to no publicly available record of navigating the fraught complexities of online safety policy trade-offs, nor a demonstrable understanding of the technical and legal challenges that define content moderation. While it employs lawyers and policy professionals, there is currently no way of knowing whether the specialization of these individuals is appropriate for the role. There is little evidence documenting its experience as an arbitrator or mediator of industry assessments or human rights due diligence. From the information available to date, there is little reason to believe that NZ Tech is equipped to hold the signatories accountable—even to the relatively low bar that the code presents. The situation is complicated further by the fact that NZ Tech’s membership includes companies related to the signatories, such as Google and Amazon Web Services (linked to Twitch via its parent company, Amazon). Moreover, other NZ Tech members may also apply to join the code in future.

In the time since it was officially named as the code’s administrator, NZ Tech has done little to resolve the uncertainty. It has yet to release any public statement on who will be involved in the administrator role or how it intends to perform its duties. Even after its launch in late July, the code still identified the administrator as “XXXXX.”

As the code stands, once per year the signatories are obliged to write down—in a template format—a handful of policies and practices that they consider to be measures for the reduction of online harms. While these must be assessed, neither the administrator nor the Oversight Committee possesses the formal power to look beyond the information included in the compliance report itself. This almost certainly conflicts with the code’s purported focus on the “architecture of systems, policies, processes, products and tools established to reduce the spread of potentially harmful content”—all of which would require detailed information disclosures far in excess of what the proposed transparency report is likely to accommodate. 

There remains a possibility that these aspects of the administrator’s role could be performed by an entity other than NZ Tech. Rather unhelpfully, the code states that the “administrative” functions of the administrator may be delegated or outsourced to a secretariat. But it’s not clear whether this includes the core assessment functions, and it’s fair to question the wisdom of the appointment if such extensive delegations prove necessary. Alternatively, NZ Tech may plan to rapidly acquire new staff with appropriate skill sets. Either of these options, however, must occur prior to Oct. 23, if capabilities are to be in place before the signatories submit their initial reports outlining the “current state of their efforts to address Online Safety and harms.”

It’s also possible that the key to understanding the administrator appointment lies with the envisioned final form of the Oversight Committee. The committee has a formal supporting role in many of the core responsibilities around compliance, assessment, and review of the code. A committee with relevant expertise would go some way to overcoming weaknesses in the code’s governance structures and any shortcomings of the administrator. Nevertheless, the committee has no greater powers to request or access information than the administrator does, so it will face the same barriers to assessing the signatories’ reports in a meaningful way. Moreover, by filling the Oversight Committee with employees from the signatory companies and requiring them to engage in detailed assessments of their own and each other’s business activities, the code generates complex antitrust risks. It’s unclear whether the code’s creators have considered the extent to which these risks will pose a barrier to transparency, or how that barrier will be managed.

So what’s next for the code? It’s fair to marry criticism with a degree of patience. All regulatory initiatives require a measure of time-under-tension for their worth to become apparent, and the code importantly includes explicit mechanisms for reevaluation and reform. Nevertheless, it is impossible to ignore that all critical operational matters have been punted down the road, such as the arrangement of the Oversight Committee, the requirements of the complaints process, and how independent third-party reviewers may be equipped to bring some spine to the assessment process.

If the code is to be a meaningful accountability mechanism, the signatories and the administrator must urgently direct their attention toward several critical areas. In particular, it is hard to imagine what evidence would demonstrate that companies are not making “best efforts” to meet the broadly worded commitments, how anyone would obtain that evidence, or who would resource the time and effort required to collate it. If such issues aren’t resolved, then the code will likely fail. It is possible in principle to imagine a robust and successful self-regulatory system acting as one cog within a larger regulatory machine—something with real accountability structures, strong transparency provisions, and meaningful engagement with wider stakeholders. As of now, that is not this code.


Curtis Barnes is a director at Brainbox, a consultancy and think tank based in New Zealand. He is an editor of Springer’s Journal of AI and Ethics, and holds the degree of Master of Laws with Distinction from the University of Otago School of Law.
Tom Barraclough is a director at Brainbox, a consultancy and think tank based in New Zealand. His current research interests include platform regulation, disinformation, and digital legal systems. He holds degrees in politics and law with honours from the University of Otago.
Allyn Robins is a senior consultant at Brainbox, a consultancy and think tank based in New Zealand. He previously served as an intelligence analyst within New Zealand’s Department of the Prime Minister and Cabinet, where he specialized in space and emerging technology issues. He holds a Master’s degree in Physics and Bachelor’s degrees in Philosophy and Theatre.