Prioritizing International AI Research, Not Regulations
Published by The Lawfare Institute
How best to govern artificial intelligence (AI) remains an open question. Many of the proposed answers have focused on regulation. Scholars, labs, and policymakers, for instance, have been debating the merits of the EU AI Act, SB 1047 in California, and similar regulatory efforts. Comparatively little attention has been paid to the role of research in identifying and mitigating the risks posed by AI. Analysis of prior efforts to govern emerging technologies reveals that sustained, iterative, and independent research is essential to understanding nascent fields and, in turn, developing appropriate regulations. Automobile safety standards, for instance, are developed through crash tests that expose weak points in the latest designs. A similar research-first approach to AI could ensure that related policy debates are grounded in the technical realities of AI rather than abstract political arguments.
This paper examines the research-regulation cycle in more detail. It also argues that an international research initiative focused on AI would serve two critical purposes: first, facilitating the consolidation of resources and expertise required to conduct high-quality AI research; and, second, producing independent and transparent research that can shape responsive regulations around the world.
An analysis of two research entities—CERN, which conducts particle physics research, and the Intergovernmental Panel on Climate Change (IPCC), which produces consensus reports on current climate science—structures the paper's assessment of what an international AI research initiative could look like. This investigation makes clear that the IPCC is the more feasible model to emulate in the AI context given current geopolitical difficulties. That said, both entities provide important lessons that should inform any effort to conduct international AI research.
For an in-depth discussion of the paper, listen to this Lawfare Daily podcast episode.