A Digital Regulator Must Be Empowered to Address AI Issues
As the demand to cast a regulatory net over digital industries has grown in recent years, one call in particular has emerged from the noise: support for the creation of a digital regulator to do the job.
Many have forcefully championed this idea. In 2019, Harold Feld, Senior Vice President of Public Knowledge, urged Congress to pass a Digital Platform Act to deal with digital competition and content moderation issues. A report from the Shorenstein Center at the Harvard Kennedy School advocated for a Digital Platform Agency to promote competition and enforce a digital duty of care. (The Shorenstein report was authored by an impressive group of seasoned former government officials, including Tom Wheeler, former Chairman of the Federal Communications Commission under President Obama; Phil Verveer, whose long career in government service includes positions as Deputy Assistant Secretary of State for International Communications and Information Policy and Senior Counselor to the Chairman of the FCC; and Gene Kimmelman, one of the nation’s leading consumer protection advocates, who recently stepped down as chief counsel for the U.S. Department of Justice’s Antitrust Division.) Last year, Sen. Michael Bennet introduced legislation to establish a federal commission to oversee digital platforms, and he has indicated that he will reintroduce the bill this year.
My own contribution to these discussions is contained in my forthcoming book from Brookings Press, entitled Regulating Digital Industries: How Public Oversight Can Encourage Competition, Protect Privacy, and Ensure Free Speech. In it, I argue that generalist regulators and courts are ill-equipped to deal with the policy challenges in competition, privacy, and content moderation, as these challenges arise in unique form in digital industries. Just as the Federal Communications Commission regulates broadcasting, cable operators, and telephone companies, the designated digital regulator would supervise social media companies, search engines, electronic commerce marketplaces, the mobile app infrastructure, and the blizzard of companies involved in ad tech. Its subject matter jurisdiction would include the promotion of digital competition, the protection of online privacy, and measures to counteract disorder and abuse in the online information environment.
OpenAI’s ChatGPT has taken the U.S. by storm this year, prompting increased calls for regulation of artificial intelligence (AI) in general and of AI-generated content in particular. Given that ChatGPT garnered 100 million users in its first two months, making it the fastest-growing consumer application in history, this attention from policymakers is a welcome development.
In this post, I begin to disentangle the issues of digital regulation and the regulation of artificial intelligence. While there is considerable overlap between the two, a critical distinction must be made to ensure that any proposed regulatory institutions are properly focused on the jobs they would be created to perform: A digital agency should regulate AI as it appears in the industries under its jurisdiction, much as any sectoral regulator would. It should not, however, be in charge of regulating AI as such, or of regulating providers of general-purpose AI systems, despite proposals from some who argue that such authority is a needed response to the challenges of increasingly powerful and risk-ridden AI capabilities.
The Regulation of Artificial Intelligence by Specific Agencies
In April 2023, the U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) released its request for comment on a proposed accountability framework for artificial intelligence, including AI systems that generate content. The key motivating idea is that uses of AI systems can be trustworthy and safe only if they are accountable, and that transparent assessments and audits are essential elements of accountability.
The NTIA’s accountability framework marks another effort by the Biden administration to bolster oversight of artificial intelligence. In advance of a May 4, 2023, meeting on AI issues between Vice President Kamala Harris and the chief executives of Google, Microsoft, OpenAI, and Anthropic, the White House released a summary of its AI initiatives. The National Institute of Standards and Technology put out an AI Risk Management Framework in January 2023, and in October 2022, the White House Office of Science and Technology Policy published a blueprint for an AI Bill of Rights.
In many ways, the Biden administration’s approach is in keeping with that of prior U.S. administrations. The Trump administration delegated AI responsibility to specific regulatory agencies, with general instructions not to overregulate, an approach that extended the Obama administration’s treatment of AI regulation. The general consensus of the U.S. approach is summed up in the Obama administration’s statement (p. 17) that “broad regulation of AI research or practice would be inadvisable.” Under this logic, the government should instead focus on the new risks created by applications of AI and “should start by considering whether the existing regulations already adequately address the risk, or whether they need to be adapted to the addition of AI.”
As a result, U.S. administrative agencies have been busy focusing on how to adapt their regulatory systems to the introduction of AI. In a recent update, the White House’s Office of Science and Technology Policy outlined AI initiatives at agencies across the government, including the Department of Labor, the Equal Employment Opportunity Commission, the Federal Trade Commission, and the Consumer Financial Protection Bureau.
These agencies typically do not regulate AI systems as such or providers of general-purpose AI systems. At most, they indirectly regulate vendors of AI systems that are specifically designed to assist end user companies in activities that are subject to regulatory control. For instance, New York City prohibits employers and employment agencies from using an automated employment decision tool unless it has been audited for bias by an independent auditor. Because the city can oversee rules around hiring decisions, it can indirectly regulate AI systems by imposing restrictions on employers.
This flurry of regulatory activity has gone almost unnoticed in the breathless discussion of the supposedly unregulated ChatGPT. It was welcome indeed, then, when FTC Commissioner Alvaro Bedoya blasted the current misperception that AI is unregulated in an April speech before the International Association of Privacy Professionals. In his remarks, Bedoya noted that unfair and deceptive trade practices laws apply to AI, as do laws relating to discrimination in credit granting, employment, and housing. Tort and product liability laws apply as well. As Bedoya affirmed, there is no “AI carve-out” to current law. And while there may need to be supplementary statutory protections, it is a fallacy to pretend that AI as used in today’s business world is free of law and regulation.
Alondra Nelson of the Biden administration’s Office of Science and Technology Policy has also argued that laws apply to AI in particular “use cases.” She insists, for example, that “generative A.I. use in an employment tool is going to have to abide by labor law and civil rights law with regards to employment.”
Further, the chair of the Federal Trade Commission (FTC), Lina Khan, has been forceful in making the same point, sending a message to anyone seeking to use AI to evade current law. At a recent congressional committee hearing, Khan reinforced the bipartisan view of this administration (and previous ones) that AI is already subject to regulation. During the hearing, Khan warned malicious actors intent on using the newest versions of AI to turbocharge their scams that any such AI-based wrongdoing “should put them on the hook for FTC action.” On April 25, speaking at a press conference with a group of federal regulatory agencies, she declared, “There is no AI exception to the laws on the books.” “Although these tools are novel,” she repeated in a May 3 New York Times opinion piece, “they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market.”
Regulating General-Purpose AI
While it is well settled that current law applies to specific uses of AI technology, there has also been a lively debate about regulating general-purpose AI, that is, systems that can be used in a wide range of specific economic and social contexts. As people have witnessed the capabilities of AI systems like ChatGPT, this debate has grown increasingly urgent. After all, this marks the start of a new era, in which an individual can, within seconds, generate potentially dangerous content that is usable in virtually any context.
Civil society scholars and AI experts have urged European regulators to cover general-purpose AI in the EU’s Artificial Intelligence Act (AI Act). According to some reports, industry heavyweights are urging European regulators to refrain from labeling general-purpose AI as inherently risky, in order to avoid subjecting such systems to the AI Act’s procedural requirements for risk assessment and certification. This exemption is needed, they argue, because general-purpose AI systems such as ChatGPT are hardly being used for risky activities.
The issue is coming to a head as the EU works to finalize the AI Act. On April 17, an influential group of EU parliamentarians released a letter supporting the inclusion of general-purpose AI in the category of risky AI systems. An all-but-final compromise on the AI Act (announced on April 28) appears to impose special obligations on a subset of general-purpose AI systems, called foundation models, which are trained on broad data at scale, designed for generality of output, and adaptable to a wide range of distinctive tasks. ChatGPT belongs to this class of more strictly regulated AI systems. The EU also appears to be on the verge of requiring companies that train AI systems, regardless of their intended use, to disclose the copyrighted material used in the training data.
Some commentators, such as Brookings scholar Anton Korinek, have argued that a key element in regulating AI is the designation of a specific agency to focus on issues that arise with general-purpose AI, that is, issues that cut across the jurisdictions of the specialized agencies. The same AI experts who want regulation of general-purpose AI typically endorse this call for a separate AI agency. They note that the “UK has already established an Office for Artificial Intelligence and the EU is currently legislating for an AI Board.” Notably, however, the EU’s AI Board appears to be entirely advisory, while the AI agency that Korinek and others have in mind would have its own enforcement powers.
A new crosscutting agency would be needed, the thinking goes, to oversee the transparency, risk assessment, audit, and mitigation requirements appropriate for general-purpose AI systems. Such a regulatory role would demand extraordinary technical skill in machine learning, but not necessarily specialized expertise in specific businesses or policy areas. The designated AI agency would serve as a reservoir of technical knowledge that could be shared with business or policy regulators at other specialized agencies, thus seeking to ensure a consistent technical approach across agencies.
The NTIA’s latest initiative might signal a change in the U.S.’s current sector-specific approach, as the procedural requirements for accountability that it is considering could very well apply to providers of general-purpose AI as well as to specific AI applications.
Moreover, Senator Chuck Schumer recently announced a legislative initiative to regulate AI in general and generative AI in particular. Early indications are that Schumer’s new framework will focus on transparency, perhaps along the lines of the assessments and audits described in the NTIA request for comment. This initiative might well result in the designation of an AI agency equipped to handle the demands of overseeing and regulating general-purpose AI.
The Role of the Digital Regulator in AI
Regardless of how that debate turns out, the role of a digital regulator is not to regulate providers of general-purpose AI services. Rather, it is to govern the core digital industries whose failures to protect competition, privacy, and free speech have been the focus of so much concern in the last decade. The digital regulator would not be the designated AI agency if the U.S. were to move in that direction. The digital agency should, however, be fully authorized to prevent the companies in the specific digital industries under its jurisdiction from using AI systems in ways that threaten competition, invade privacy, or encourage information disorder on their systems. These industries would be explicitly detailed in the enabling statute and could include social media, search, electronic marketplaces, mobile app infrastructure, and ad tech companies. Several examples illustrate this role.
For one, it has long been known that algorithmic pricing models can raise questions of illegal price fixing and price discrimination. Artificial intelligence has the potential to vastly improve the sophistication and accuracy of algorithmic pricing models, thereby exacerbating these risks. If firms effectively collude on pricing through algorithmic pricing systems, they cannot claim an exemption from the antitrust laws on the grounds that they used innovative, cutting-edge AI tools to deprive consumers of the benefits of price competition. In her New York Times opinion piece, FTC Chair Khan properly reminded firms that the FTC is “well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition.”
When these algorithmic pricing issues arise in electronic commerce marketplaces or other core digital industries, as they often do, they would fall under the jurisdiction of the digital regulator assigned the task of promoting competition in digital industries.
AI in marketing and advertising is often used to “segment customers, message them at optimal times and personalize campaigns.” The use of AI models to enhance ad targeting, make product recommendations, or set prices might very well implicate the privacy interests of the users of social media platforms, search engines, and electronic marketplaces. These uses would fall under the jurisdiction of the digital regulator.
The training data used to generate these AI ad models might also be drawn from online contexts in digital industries. The authority accorded to a digital regulator to require a company to demonstrate a legal basis for extracting training data from online contexts creates a potentially powerful weapon to force compliance with digital privacy rules. The Italian data protection authority illustrated that power when it barred ChatGPT from the Italian market and issued a series of requirements that ChatGPT owner OpenAI would have to satisfy to regain access. In late April 2023, OpenAI said ChatGPT was returning to Italy, having addressed the issues raised by the privacy regulator.
Already, ChatGPT plugins for Twitter and other social media platforms are readily available. With this comes the imminent risk that ChatGPT could be used to spread misinformation. Moreover, AI content generation technology has made the production of deepfakes exponentially less expensive and more easily available. As illustrated by the recent political ad from the Republican National Committee, which used (and disclosed its use of) AI to generate fictitious images of war, financial crisis, and riots that it suggested would follow President Joe Biden’s reelection, such developments present new challenges to the integrity of political discourse and the democratic process in the United States.
A digital regulator charged with ensuring that social media companies assess the systemic risks associated with their operations would have to react to this development. Specifically, the digital regulator would be responsible for ensuring that social media companies include in their risk assessments an evaluation of the dangers of letting AI-generated content flourish on their systems. The agency would also be authorized to demand that social media companies construct a mitigation strategy to deal with these new and less familiar risks and make their containment approach available to the public.
These examples are purely illustrative, but they make the point that the digital regulator would be like the other specialized agencies, dealing with AI in the areas under its jurisdiction. The digital regulator would not be an economy-wide agency charged with seeking out the use of AI wherever it appears and endeavoring to ensure that it operates in a safe and trustworthy manner.
No one can pretend that AI regulation will be easy, or that policymakers will get it right on their first attempt. But that is no excuse to refrain from taking action. Making it clear that AI is regulated under current law is a first step. When Congress legislates in the areas of promoting digital competition, protecting privacy, and ensuring free speech online, it should make clear that AI-enabled services are not exempt from these new laws and that a digital agency authorized to enforce them can reach the new technologies.