Congress Cybersecurity & Tech

The Road to Regulating AI: Analysis of the AI Working Group’s Roadmap

Kevin Frazier
Friday, May 24, 2024, 9:40 AM

 The AI Working Group, led by Majority Leader Schumer, recently released its AI policy roadmap. Its focus on innovation and lack of details have drawn criticism.

The US Capitol building in Washington, DC (Mark Fischer, https://www.flickr.com/photos/fischerfotos/9161482061; CC BY-SA 2.0 DEED, https://creativecommons.org/licenses/by-sa/2.0/)

Published by The Lawfare Institute
in Cooperation With
Brookings

On May 15, the Bipartisan Senate AI Working Group released a long-awaited report titled “Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate.” The roadmap intended to give relevant congressional committees some initial considerations as they go about regulating AI, but critics decried the report’s lack of specific calls to action and legislative proposals. 

In response, Majority Leader Chuck Schumer (D-N.Y.) countered that the roadmap was never intended to produce legislation but rather to empower and inform committees. He anticipates that committees may benefit from the roadmap and “come out with bipartisan legislation that we can pass ... before the end of the year.”

As Schumer’s defense illustrates, responses to the roadmap have been mixed. Stakeholder groups concerned about the risks posed by AI, such as Public Citizen, the ACLU, and Public Knowledge, have questioned the roadmap’s focus on innovation over safety. Others, including IBM Chairman and CEO Arvind Krishna, have applauded it as a helpful guide to a more consistent and stable regulatory approach. 

The roadmap covers the following topics: Supporting U.S. Innovation in AI; AI and the Workforce; High Impact Uses of AI; Elections and Democracy; Privacy and Liability; Transparency, Explainability, Intellectual Property, and Copyright; Safeguarding Against AI Risks; and National Security. 

The summary below does not take a stance on the roadmap’s contents but rather aims to highlight its key provisions. 

Background

The emergence of artificial intelligence (AI) as a transformative force in society has garnered significant attention from local, state, federal, and international lawmakers. Early in the 118th Congress, the Bipartisan Senate AI Working Group formed in response to broad awareness of AI’s potential to revolutionize various sectors—including science, medicine, agriculture, and beyond. This group was founded not only to tap into the benefits of AI but also to address the associated risks, such as workforce displacement, national security challenges, and existential threats. The AI Working Group’s primary objective is to assist and inform the traditional lawmaking process—a process oriented around congressional committees that often lack the flexibility to address the cross-jurisdictional nature of AI. 

The AI Working Group, which includes Sens. Schumer, Mike Rounds (R-S.D), Martin Heinrich (D-N.M.), and Todd Young (R-Ind.), initiated its efforts with a series of educational briefings, resulting in the first-ever all-senators classified briefing on AI. These sessions underscored the broad bipartisan interest in AI and highlighted the need for comprehensive policy discussions. Recognizing the complexity and far-reaching implications of AI, the group then hosted nine bipartisan AI Insight Forums. These forums covered a wide range of topics, including U.S. innovation in AI, the impact of AI on the workforce, sensitive sectors such as health care, elections and democracy, privacy and liability, transparency and intellectual property, society’s long-term well-being, and national security.

By bringing together leading experts and fostering a dialogue with the Senate, the group aimed to develop a common understanding of the complex policy choices surrounding AI. However, some senators, including Sen. Elizabeth Warren (D-Mass.), questioned whether the forums were sufficiently inclusive, transparent, and open, and whether the sessions appear to have served their intended function—narrowing the regulatory focus of lawmakers and galvanizing support for action sooner than later.

The Road Ahead

Building on the insights gained from these initiatives, the AI Working Group developed an AI policy roadmap to guide bipartisan efforts in the Senate. This roadmap strives to identify areas of consensus for consideration during the 118th Congress and beyond. In other words, the roadmap’s authors acknowledge that it is not an exhaustive list of policy proposals. Instead, the roadmap marks the beginning of a lengthy legislative process. 

Another high-level theme is the group’s dedication to harnessing AI’s full potential while minimizing its risks. As Schumer remarked recently, innovation remains the group’s North Star. The roadmap aims to keep the United States at the forefront of AI innovation and to increase the odds that all Americans benefit from the opportunities created by this technology. The core aspects of the roadmap, discussed below, explore how best to accelerate the development and deployment of AI in a manner that does not disrupt the nation’s economic, political, and social stability. 

Supporting U.S. Innovation in AI

The AI Working Group frequently and explicitly stresses the importance of robust federal investments in AI to maintain U.S. leadership in this critical field. The group encourages the executive branch and the Senate Appropriations Committee to assess how to accelerate and augment federal investments in AI, aiming to reach the spending levels proposed by the National Security Commission on Artificial Intelligence (NSCAI). Specifically, the NSCAI recommends at least $32 billion per year for non-defense AI innovation. 

These funds would “strengthen basic and applied research, expand fellowship programs, support research infrastructure, and cover several agencies,” with a focus on supporting research by a proposed National Technology Foundation, the Department of Energy (DOE), the National Science Foundation (NSF), the National Institutes of Health, the National Institute of Standards and Technology (NIST), and the National Aeronautics and Space Administration. This cross-government AI research and development (R&D) effort would also encompass an all-of-government “AI-ready data” initiative and focus on responsible innovation, including fundamental and applied sciences like biotechnology, advanced computing, robotics, and materials science. Additionally, such research should address the transparency, explainability, privacy, interoperability, and security of AI. To get these efforts going sooner than later, the roadmap includes a call for emergency appropriations to make up the difference between current spending levels and the recommended amounts.

The roadmap also highlights the need to fully fund outstanding initiatives from the CHIPS and Science Act, particularly those related to AI. These include the NSF Directorate for Technology, Innovation, and Partnerships, Department of Commerce Regional Technology and Innovation Hubs, DOE National Labs, and NSF Education and Workforce Programs. The group supports funding for semiconductor R&D specific to AI chip design and manufacturing. These investments may go a long way toward establishing American leadership in AI and developing new techniques for domestic semiconductor fabrication. This proposal is of particular importance in light of the Biden administration’s recent decision to raise tariffs on Chinese goods, including semiconductors.

Other critical funding areas include the National AI Research Resource (NAIRR), AI Grand Challenge programs, AI efforts at NIST, and the Bureau of Industry and Security’s IT infrastructure and personnel capabilities. The roadmap also emphasizes the importance of developing policies and R&D activities at the intersection of AI and robotics, supporting smart cities and intelligent transportation system technologies, and funding local election assistance for AI readiness and cybersecurity. 

The latter priority has been the subject of increasing conversation on the Hill as well as in the states. Sen. Amy Klobuchar (D-Minn.), for instance, is spearheading an effort to prevent harms caused by deceptive AI during elections. California already has a ban on certain uses of AI prior to an election. Other states intend to follow the Golden State’s lead.

AI and the Workforce

The AI Working Group acknowledges the widespread concern about AI’s potential impact on jobs. Despite some researchers concluding that advances in AI will only lead to gradual job loss, the group stresses the need for ongoing evaluation of AI’s impact on the workforce. The roadmap encourages the development of legislation focused on training, retraining, and upskilling workers in light of a more turbulent and unpredictable labor market. This includes incentives for businesses to integrate new technologies and reskilled employees into the workplace and for workers to attend retraining programs.

The group also highlights the importance of consulting stakeholders—including innovators, employers, civil society, unions, and workforce representatives—during the development and deployment of AI systems. This inclusive approach, per the roadmap, would provide for the consideration of diverse perspectives and mitigate potential negative impacts on workers. Given concerns about the lack of inclusion and transparency during the AI Insight Forums, this point deserves particular attention. The roadmap did not specify how to make AI development more participatory and open. 

The group also supports legislative efforts to improve the U.S. immigration system for high-skilled STEM workers, thereby fostering AI advancements across society. This, too, echoes related programs undertaken by the administration. As summarized by the director of the White House Office of Science and Technology Policy (OSTP), Arati Prabhakar, the administration has placed “a lot of emphasis on bringing in talent from everywhere.” A massive executive order issued by the administration in 2023 evidences President Biden’s desire to recruit and retain AI talent. The order empowers the OSTP to assist agencies in bypassing the traditional causes for delay in bringing in new talent. Congressional action could take aim at other barriers to luring AI experts to America’s shores.

High-Impact Uses of AI

The roadmap also hits on the opportunities and challenges brought on by “high-impact uses” of AI. What constitutes a high-impact use is unclear. The roadmap implies that such uses involve matters related to consumer protection and civil rights, such as health care and criminal justice. The group points out that existing laws pertaining to those fields must be applied to AI systems and their developers, deployers, and users to avoid the perpetuation of disparate outcomes and, more generally, to protect consumer well-being. The group encourages congressional committees to proactively identify gaps in current law, develop language to address these gaps, and give regulators the authority and information required to enforce new provisions.

Beyond generally exploring the role of Congress in updating the regulatory ecosystem to safeguard the public from AI’s risks, the roadmap hints at the need for AI labs to take extra precautions around the deployment of AI in sensitive areas. In such cases, the group calls for greater transparency, explainability, and testing. 

The health care sector in particular received substantial attention from the group. The roadmap advocates for legislation that supports and sustains the use of AI to spark new health care innovations while requiring measures to protect patients. This includes consumer protection, fraud prevention, and the promotion of accurate and representative data. By way of example, the group supports existing NIH plans to develop and improve AI technologies, particularly in data governance. NIH researchers have already managed to achieve scientific breakthroughs with AI—the National Cancer Institute recently announced that researchers had created an AI tool that allows doctors to more precisely match cancer drugs to patients.

Elections and Democracy

As previously noted, AI’s impact on elections and democracy is a significant concern of state and federal officials. The Justice Department recently warned that there has already been “a dangerous increase in violent threats against public servants,” amplified by AI, that may affect election workers. Related threats spurred South Korea to take extensive measures to mitigate electoral interference from AI. The group seems keen to ensure the United States does the same.

Building on President Biden’s October 2023 executive order on AI, the group encourages the development of effective watermarking and digital content provenance technologies for AI-generated election content. The roadmap acknowledges that the implementation of robust protections against AI-generated disinformation must not infringe on First Amendment rights.

Additional safeguards against misuse of AI during elections would complement the valuable work of the Election Assistance Commission and the Cybersecurity and Infrastructure Security Agency in developing tools and resources to protect elections. The group encourages states to utilize these tools and highlights the importance of voter education about content provenance to mitigate AI-generated disinformation.

Privacy and Liability

The rapid evolution of AI technology presents challenges in assigning legal liability for AI-related harms. The roadmap includes a nudge for congressional committees to consider additional standards or clarify existing standards to hold AI developers and deployers accountable if their products or actions cause harm. The group also supports efforts to hold end users accountable for harmful actions facilitated by AI. 

How this call to action informs ongoing debates around a nationwide comprehensive privacy law remains to be seen. A race to regulate privacy, especially data privacy, has kicked off in D.C. The Federal Trade Commission is expected to release a commercial surveillance rule in the coming weeks. Meanwhile, the American Privacy Rights Act proposed by Sen. Maria Cantwell (D-Wash.) has garnered substantial attention from lawmakers and stakeholders. 

Transparency, Explainability, Intellectual Property, and Copyright

The group welcomes the development of legislation to establish public-facing transparency requirements for AI systems. The roadmap urges the creation of best practices for automation levels, ensuring a human-in-the-loop approach for high-impact tasks where necessary. Additionally, it flags the importance of transparency in training data, particularly any use of sensitive personal data or content protected by copyright. 

How labs source training data has been among the most contentious topics to date. Researchers remain unsure whether synthetic data is a viable alternative to training the next generation of AI models. The roadmap may signal that lawmakers intend to err on the side of protecting data creators (that is, the public) rather than easing access to data for labs.

Safeguarding Against AI Risks

Given the potential long-term risks associated with AI, the group insists on detailed testing and evaluation to understand and mitigate potential harms. The group supports the development of a capabilities-focused risk-based approach, standardizing risk testing and evaluation methodologies. The roadmap encourages relevant congressional committees to explore frameworks for pre-deployment evaluation of AI models and the establishment of an AI-focused “Information Sharing and Analysis Center.”

The group also advocates for legislation to advance research and development efforts addressing AI risks. In particular, the roadmap highlights the need for continuous research and collaboration to address both short-term and long-term AI risks. If Congress heeds this advice, then federal resources may support proposed state risk research programs, such as California Sen. Scott Wiener’s bill to create “CalCompute,” “a public cloud computing cluster that will allow startups, researchers, and community groups to participate in the development of large-scale AI systems.”

National Security

Schumer and other group members have not shied away from the fact that the group’s focus on innovation is related to its desire to outpace AI initiatives by China and other adversaries. The national security ramifications of AI development likely informed the group’s emphasis on the U.S. maintaining a competitive edge in AI capabilities. The roadmap encourages robust investments in AI for national security, including advancements in cyber capabilities and the integration of AI into defense operations. Another key area for investment is personnel—the roadmap makes the case for the development of career pathways and training programs for AI within the Department of Defense and the intelligence community, ensuring a strong digital workforce.

More broadly, the roadmap acknowledges the potential threats posed by general-purpose AI systems achieving artificial general intelligence (AGI), generally regarded as the point at which AI reaches “human-level learning, perception and cognitive flexibility.” The group welcomes more congressional attention on defining AGI, assessing its likelihood and risks, and developing an appropriate policy framework. The group takes a broad conception of national security—exploring potential uses of AI to manage and mitigate space debris as well as to stem climate change while also taking measures to limit the energy-intensive nature of AI development. 

The roadmap also discusses the importance of advancements in AI for biotechnology and the need to defend against AI-enhanced bioweapons. The group nudges lawmakers to consider recommendations from the National Security Commission on Emerging Biotechnology and the NSCAI in this domain. 

In an email to Lawfare, Melissa Hopkins, health security policy advisor at the Johns Hopkins Center for Health Security, applauded the roadmap’s focus on tailoring regulation of biosecurity risks posed by AI to “the specific capabilities of certain types of systems” rather than “focusing on proxies for capabilities such as compute power or operational capacity[.]” Hopkins expects that the former approach “will be important for enabling the vast amount of innovation in AI and biotechnology to continue in America unencumbered.” 

Hopkins did spot a missed opportunity, though. She would have liked the roadmap “to encourage Congress to consider clarifying liability for developers and deployers of models with certain biological capabilities.” The current uncertainty regarding liability may be siffling innovation, according to Hopkins. If Congress adopted a liability scheme with some safe harbors for responsible AI developers, then innovation may occur at a faster clip. 

Additionally, the roadmap flags the need for proactive management of export controls for critical technologies, including AI, to ensure national security. This consideration of national security risks earned praise from Hopkins, who said she was “really encouraged” by the inclusion of export controls.

Initial Responses

Tech Policy Press, a nonprofit that covers challenges to democracies globally, conducted a helpful, albeit incomplete survey of initial responses to the roadmap. Their analysis included favorable remarks from industry stakeholders, such as IBM Chairman and CEO Arvind Krishna, as well as “disappointment and sadness” from Suresh Venkatasubramanian, director of the Center for Tech Responsibility. 

Where Krishna seemed keen to rally behind congressional adoption of the roadmap, “civil society reactions were almost uniformly negative,” according to Gabby Miller and Justin Hendrix of Tech Policy Press. Cody Venzke, senior policy counsel at the ACLU, was among those in the disappointed camp. He faulted the group for paying inadequate attention to AI’s threats with respect to civil rights and civil liberties. Similarly, Chris Lewis of Public Knowledge concluded that the roadmap offered insufficient analysis of competition in the AI marketplace, developing robust oversight of AI, and protecting fair use and, by extension, creativity online.

While the details of the responses vary, a common sentiment seems to be that the roadmap was woefully light on details, somewhat of a surprise given that it was the product of so much time and energy.

* * *

The AI Working Group’s roadmap is a crucial step toward regulating AI, addressing its profound impacts across various sectors. By prioritizing innovation while mitigating risks, the roadmap provides a detailed (and, to some, controversial) framework for federal investments, workforce adaptation, and AI risk mitigation. Despite potential challenges in legislative adoption, the group’s comprehensive analysis of AI’s implications underscores the urgency of bipartisan efforts. As AI continues to influence national security, privacy, and civil rights, the roadmap highlights the necessity for continuous research, transparent governance, and strategic policymaking to ensure that AI benefits all Americans while maintaining the United States’s leadership in this transformative technology.


Kevin Frazier is an Assistant Professor at St. Thomas University College of Law. He is writing for Lawfare as a Tarbell Fellow.

Subscribe to Lawfare