AI Regulation’s Champions Can Seize Common Ground—or Be Swept Aside

Zachary Arnold, Helen Toner
Tuesday, August 27, 2024, 11:00 AM
The feud between AI “doomers” and “ethicists” holds AI governance back. Advancing shared policy interests could shift the tide. 


As AI whistleblowers speak out, safety advocates depart leading machine learning companies, and tech execs warn about AI-enabled superweapons, it’s easy to forget how quickly existential fears over artificial intelligence have become respectable. Of course, killer AI has long been a Hollywood trope—but for decades, only a handful of bloggers, academics, and autodidacts took the idea seriously. In the era of AlphaFold and ChatGPT, though, visions of all-powerful, uncontrollable AI no longer seem as far-fetched. Today, the so-called AI “doomers” get TED talks and New Yorker profiles, and their ranks are swelling with leading pundits, scholars, and billionaires. Their concerns are now firmly in the mainstream, taken seriously from billion-dollar boardrooms to the senior ranks of the federal government.

But not without controversy. Other prominent AI watchers and practitioners stress concrete problems that automation and machine learning are already causing, from the often-miserable working conditions of data laborers to the widespread use of AI for surveillance. Rather than dwelling on speculative catastrophes, many urge action on these immediate problems. These advocates often describe themselves as focused on AI ethics, fairness, and similar concerns; for lack of a better term, call them the “ethicists.”

These two camps each have some influence in Washington, and they fundamentally agree with each other (and the public) that AI poses serious risks and should be regulated. But so far, feuding between the doomers and the ethicists has precluded a united front. The ethicists reject existential AI fears as intellectually shoddy “end-of-days hype”—or, worse, a cynical “sleight-of-hand trick” played by tech companies to distract from the harms their products are already causing. For their part, many doomers downplay or ignore harms from existing AI as trivial compared with the cataclysmic scenarios they fear. Political fissures heighten the tension; the labor, racial, and environmental harms that ethicists emphasize echo contemporary left-wing concerns, while many in the other camp align more with techno-libertarians or D.C. national security hawks.

Rising Headwinds

As many doomers and ethicists focus fire on each other, Big Tech companies and Silicon Valley investors are gearing up to oppose practical controls on AI, however motivated. Warning that safety regulation will stifle innovation and cede leadership to China, they downplay the present harms AI is causing as well as more speculative concerns. One Big Tech executive argued last year that “we shouldn’t regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios,” while a venture capitalist “manifesto” popular in Silicon Valley lumps “existential risk,” “tech ethics,” and “social responsibility” together under a single heading: “The Enemy.”

Arguments like these are quickly gaining ground in Washington. Tech companies may have been caught off-guard at first by the sudden surge of lawmaker interest in AI after ChatGPT’s release, but at this point they and their lobbyists have arguably pulled ahead of regulation’s advocates. Dozens of bills to govern artificial intelligence have been introduced in Congress but have largely failed to progress—suggesting that members may be more eager to look active on AI than to actually legislate. Senate Majority Leader Chuck Schumer’s “AI Insight Forums” culminated in a summary document with few concrete recommendations beyond a call for greater investment in AI technology (the one governmental action that is perennially popular with industry). Meanwhile, his GOP counterparts are embracing AI laissez-faire.

Meaningfully regulating AI over industry objections was always going to be a tall order, but by training their sights on each other, AI doomers and ethicists are helping clear the field for tech lobbyists. 

It doesn’t have to be this way. Experience teaches that unlikely allies can wield outsize influence in Washington, and fortunately, AI doomers and ethicists have more common ground than they may think. From our research into diverse AI safety and governance issues and conversations with advocates across the spectrum of concern, we believe several AI policies can satisfy both doomers and ethicists on their own terms—and help draw them together for greater impact. None of these policies will necessarily be easy to achieve, and they don’t amount to a comprehensive agenda for governing AI and its challenges. But in our view, each has the important advantage of potentially appealing to advocates with diverse perspectives on AI risk and harm. Figuring out how to make them reality will take careful thought and collaborative work—making it all the more important to get started soon.

Starting Points for Cooperation

To begin, both sides have a clear interest in building AI-related expertise and governance capacity in the public sector. Regulating any AI risks, existential or otherwise, requires regulators with the knowledge and resources to keep up with the tech and avoid being outrun or manipulated by industry. But even the lead federal agencies implementing the October 2023 AI executive order are typically underfunded (if not literally crumbling) and short on AI-savvy staff—to say nothing of the individual congressional offices, state and local regulators, and others with a critical role to play in AI governance. Raising public-sector salaries, properly funding critical agencies and functions, and streamlining the federal hiring slog, among other measures, are challenging but necessary steps toward meaningful AI governance of any sort.

Another shared interest lies in implementing policy measures to help assess AI-related dangers of any sort, whether present-day or speculative. More investment in AI measurement science is especially critical. The techniques currently used to measure AI systems’ risks and capabilities would be considered rudimentary by the standards of any other scientific field: available metrics are unstandardized and manipulable, making it easy for marketing teams to cherry-pick a few impressive data points as needed. When detailed risk assessments of AI systems (nearly always carried out by the systems’ own developers) are published, they rely heavily on ad hoc testing by “red teamers,” who seek to poke and prod AI systems into doing something undesirable but do not produce reproducible metrics. Few reproducible, scientifically principled methods exist today to compare AI systems with one another or to measure their competence or danger. Emerging government efforts to develop better metrics are a welcome start, but they will need sustained focus and significantly more resources if they are to make a dent in the problem.

As AI measurement science develops, it will be critical to ensure that the right tests are run, and run right. To that end, both AI doomers and ethicists have advocated for building credible mechanisms and institutions for third-party AI auditing. AI companies shouldn’t be “grading their own homework.” Independent third-party auditors should examine companies’ products and processes to assess capabilities, risks, and risk management practices—just as is already done in accounting, aviation, product safety, and many other sectors. Meaningful third-party oversight of the growing AI sector will require a robust, rigorous ecosystem of auditors and clear incentives for AI companies to hire them. One practical step would be for the U.S. government to require that AI systems procured by federal agencies (or at least some subset of them) receive certification from third-party AI auditors. Another would be to establish an AI equivalent of the Public Company Accounting Oversight Board, a body that provides government oversight of firms offering financial auditing services to public companies. This approach could harness the creativity and flexibility of private-sector auditing firms while ensuring their audits are rigorous.

In parallel, fostering greater transparency and disclosure in AI will illustrate opportunities to act on present harm and indicate potential “failure modes” for future, more powerful AI systems (which many believe will be simply scaled-up versions of current technologies, or at least share common features with existing AI models). We can start by systematically tracking AI-related incidents. Volunteers are already compiling some public reports of AI-related incidents. But thoroughly monitoring the emerging landscape of AI harm will take more than civil society. AI needs what already exists in aviation, cybersecurity, and other tech domains: an institutionalized, public incident-tracking system backed by government resources and investigatory powers. If the recent history of cyber disclosure requirements is any indication, any effort to develop such a system will face serious opposition—making this one area where advocacy by the full spectrum of AI safety advocates may be especially impactful.

Beyond incident tracking, transparency and disclosure requirements for high-stakes AI would help mitigate both present and future AI risks. Right now, companies building and selling AI have far more information about their systems than the policymakers trying to figure out how to govern them. Many researchers have suggested categories of information that, if shared, would help rectify this imbalance, such as training data, internal testing results, and organizational risk management practices. Requiring companies developing AI systems that affect people’s rights or safety (including the “frontier models” being built by Google, OpenAI, Meta, Anthropic, and others) to disclose this information could go a long way toward giving governments the information and early warnings they need to make sensible policy.

Finally, anyone interested in present or future AI risks should push for a clear allocation of AI liability. If no one is clearly accountable when technology goes wrong, no one has a strong incentive to make the tech safe in the first place. Unclear liability for harm has long undermined progress in cybersecurity and now threatens the same for AI. Both doomers and ethicists already seem to agree in broad strokes: Developers of unreasonably risky or harmful AI should be liable for the damage they cause. In the cyber context, after decades of delay by multiple administrations, the White House recently endorsed a similar principle, setting up a difficult but necessary process of debate, refinement, and (maybe eventually) implementation. AI safety advocates of all stripes should push policymakers to launch the same liability-defining process for AI, building on experience in cyber as well as thoughtful proposals emerging at the state level and elsewhere.

For advocates of regulation of any sort, the clock is ticking. As opposition to meaningful AI regulation swells, a coalition of advocates with concerns across the spectrum of AI risk could be a powerful counterweight. Advancing policies that satisfy AI doomers as well as AI ethicists—each for their own reasons—could help bridge stubborn divides and reduce both present and future dangers from AI.


Zachary Arnold is an attorney and the analytic lead for the Emerging Technology Observatory initiative at Georgetown University’s Center for Security and Emerging Technology (CSET).
Helen Toner is the director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology (CSET).
