
Don’t Assume China’s AI Regulations Are Just a Power Play

Micah Musser
Monday, October 3, 2022, 8:16 AM

Commentators have framed new regulations on AI systems in China as part of an effort to micromanage algorithms. But this framing overlooks other possible rationales and glosses over constraints inherent in regulating emerging technologies.


Published by The Lawfare Institute in Cooperation with Brookings

In March, new regulations took effect in China requiring companies that deploy recommendation algorithms to file details about those algorithms with the Cyberspace Administration of China (CAC). In August, the CAC published summaries of 30 recommendation algorithms used by some of China’s largest tech companies. The release sparked a round of flawed commentary on China’s unprecedented attempt to regulate some types of artificial intelligence (AI), commentary that has largely framed the regulation’s goals in maximalist terms without acknowledging its other possible functions.

In particular, much of this commentary mischaracterizes the actual impact of the regulation. Media outlets from Bloomberg News to the BBC and CNBC have all speculated that, despite the superficial nature of the recently publicized information, the regulation may have compelled companies to share with the government sensitive proprietary information about their algorithms, such as their source code, “business secrets,” or “inner workings.” This coverage represents a widespread assumption that the core function of the new regulations is to provide a government pretext for vacuuming up detailed technical information from tech companies, including companies like ByteDance, which owns TikTok. 

This assumption seems unwarranted. One review of the portals through which companies are required to file their algorithms suggests that many key pieces of requested information are optional, answered with multiple-choice questions, or described in 500 characters or less. This is hardly enough information for the government to access the “secret sauce” behind these recommendation algorithms. While the Chinese government may demand access to companies’ closely guarded algorithms in the future—and would have the authority to do so under its 2015 National Security Law—the implementation of the current AI regulation does not appear to push toward that goal. 

A different line of commentary implicitly assumes that the goal of the new regulations isn’t information collection but, rather, to enable direct government management of the algorithms themselves. Writing for the Wall Street Journal, Karen Hao juxtaposes the sweeping requirements of the regulations with complaints from ByteDance engineers that CAC regulators struggled to understand even basic technical information about their systems. Hao interprets the broadness of the regulations as a sign that China is eager to “police algorithms directly” and suggests that both the technical complexity of recommendation systems and the limitations of the CAC’s staff have made it difficult for China to follow through on its ambitions.

But direct management of algorithms and information collection are far from the only possible functions of the algorithmic governance regulations. An overreliance on these two theses misses the variety of purposes that laws can serve.

Legal scholar Richard McAdams, for instance, has argued that law often plays an expressive function by communicating information and by making certain forms of coordination easier. It is not hard to imagine that one goal of the Chinese algorithmic regulations may have been to signal to an international audience that China takes the harms caused by new and emerging technologies seriously—and that it was quicker to take action to rein in AI systems than any other country. 

Even more important than international signaling is the expressive role that the regulation may be intended to play domestically. In recent years, the Chinese government has cracked down sharply on tech companies for a variety of perceived abuses, causing the top six tech companies to lose over $1.1 trillion in market value. Tech companies now have strong reasons to fear being swept up in the next government crackdown while being uncertain about which practices the central government is most likely to punish. A regulation like the CAC’s rules on recommendation systems helps solve this problem by communicating information about the central government’s priorities, which can shift company behavior even without any meaningful enforcement actions. The new regulation may also be intended to signal the central government’s sensitivity toward Chinese public opinion, considering that 72 percent of young people in China want the tech industry to be subject to tighter regulation. And, given the size and scope of the Chinese bureaucracy, it may even play a role in coordinating behavior within government by communicating top-level priorities to provincial and local regulators. 

There is an additional function that the regulation may be serving: the broadness of the regulatory text may not be a sign that the Chinese government has sweeping ambitions to crack down on the tech sector but, rather, that it is attempting to create a large discretionary ambit for itself. Regulators—knowing full well that they lack expert technical knowledge and that the underlying technology is fast-moving—may be unwilling to embed specific requirements for tech companies into legal language, instead preferring to rely on vague generalities. This need not signal an intention to prosecute every violation of the regulation; the government may instead be angling for legal text that is broad enough to permit it to prosecute the clearest cases of abuse. By avoiding specificity, broad regulatory language allows the government to choose the time and place of oversight without stumbling over its own words during enforcement down the road.

Consider that this is not so different from the approach the European Union has taken toward tech regulation. The current draft of the EU Artificial Intelligence Act, for instance, requires datasets used in training high-risk AI models to be “relevant, representative, free of errors and complete.” Because work at the cutting edge of AI often requires using datasets far too large to meaningfully ensure the accuracy of every datapoint, this requirement gives regulators an extremely wide authority to prosecute nearly any tech company developing high-risk AI systems. 

And yet it is unlikely that EU regulators will fine the majority of AI companies for violating this requirement. Instead, they will likely use it as a legal hook to prosecute serious oversights by companies that result in clear cases of harm—and to keep the rest of the market on its toes. A similar trend has been observed with the EU’s enforcement of the General Data Protection Regulation (GDPR), a law with which full compliance is arguably impossible and which, read literally, makes many machine learning applications illegal. While a meaningful number of fines have been issued under the GDPR, the total number is far lower than the number of companies that are (presumptively) in violation of some aspect of the law.

None of this is to say that this model of regulation is a good one, or that it ought to be emulated by the United States. There are major downsides to attempting to pass comprehensive technology regulations along these lines. Vague laws can significantly raise uncertainty and compliance costs, create risks of abuse from government regulators, and increase the likelihood that different jurisdictions may interpret the law differently. But there is a reasonable argument to be made that when the subject of the law is as fast-developing as AI, “future-proof” legislation may need to be vague enough to accommodate substantial change in technical details, with significant discretion falling to regulators and judges to interpret implementation. If a country decides that comprehensive technology regulation is needed, it would be difficult to write it in a way that doesn’t suffer from vagueness issues similar to those of GDPR, the EU AI Act, or China’s new regulation on recommendation systems—although future iterations might become more tailored as regulators become more familiar with the technical details. 

It is true that the CAC’s regulation on recommendation algorithms is very broad and that it does require the disclosure of some technical information to the central government. But the lack of detail in the information publicized (and, apparently, requested) by the CAC should undermine the assumption that the central government intended to use the legislation to vacuum up technical information from companies or to micromanage algorithms directly. More importantly, some level of vagueness is arguably a necessary component of any comprehensive tech regulation. Rather than assuming that China’s attempts at algorithmic governance are power grabs, it would be useful for commentators to analyze the constraints inherent in regulating fast-developing technologies and the other functions that these regulations may be intended to serve.


Micah Musser is a former research analyst at Georgetown University’s Center for Security and Emerging Technology (CSET), where he worked on the CyberAI Project, and an incoming 1L at New York University School of Law.
