Unpacking the Final Report of the Bipartisan House Task Force on AI
The report provides guidance, recommendations, and policy proposals focused on the global race for leadership in AI innovation.

On Dec. 17, 2024, the Bipartisan House Task Force on Artificial Intelligence (AI), led by Chair Jay Obernolte (R-Calif.) and Co-Chair Ted Lieu (D-Calif.), released its 273-page Report on Artificial Intelligence. The report fulfills the task force’s purpose to “detail[] the regulatory standards and congressional actions needed to both protect consumers and foster continued investment and innovation in AI.” Whereas the Senate roadmap, a predecessor document, is more of a starting point for further discussion, the House report aims to serve as a foundation for AI-related policy in the 119th Congress.
As the chair and co-chair stated in their letter introducing the report, the task force members recognize “AI has tremendous potential to transform society and our economy for the better and address complex national challenges” and aim for “this report to inform future congressional policymaking.”
Delivery of the report marked the end of the task force’s work. The efforts that led to the report’s creation, and the reactions to it upon delivery, provide important context for how its content was developed and how it may be implemented in the 119th Congress.
Background and Reactions
In February 2024, Rep. Don Beyer (D-Va.), one of the task force members, recognized AI as “a bright spot of good faith cooperation between the parties” and said the task force sought to “explore how Congress can ensure America continues to lead the world in AI innovation” via guardrails to keep the nation secure from “current and emerging threats.” The group consisted of 24 members—12 from each party, appointed by Speaker Mike Johnson (R-La.) and Minority Leader Hakeem Jeffries (D-N.Y.)—incorporating, per Co-Chair Lieu, a “wide spectrum of political views” that represent individuals, industries, and agencies likely to be impacted by AI.
The task force aimed to incorporate a significant amount of public feedback, both as a group and through individual members, such as Rep. Ami Bera (D-Calif.), who issued a request for information to healthcare organizations. The report is organized around seven primary principles, from which 66 key findings and 85 recommendations are drawn across 15 sections, each of which is covered below. Industry groups and practitioners have expressed support for the report and its development, noting in particular its breadth and depth.
The report goes beyond the Bipartisan Senate AI Working Group Policy Roadmap, which was released on May 15, 2024. While the Senate roadmap “aimed to help lay the foundation” for action, the report aims to be “an essential tool for crafting AI policy.” As we noted previously in Lawfare, critics of the Senate roadmap cited the “lack of specific calls to action and legislative proposals.” The report, meanwhile, is intended to serve as “a foundation both to ensure that America leads in AI innovation and to ensure that we have appropriate guardrails to protect Americans.” Recognizing the speed with which developments in AI are taking place, the report’s authors indicated it “is only the first step.”
It is unclear whether the report will shape congressional action on AI given the incoming administration’s insistence on a light-touch AI regulatory regime. The 2024 Republican platform called for the “repeal” of President Biden’s Executive Order 14110, replacing it with policy “rooted in Free Speech and Human Flourishing.” Similarly, as vice president-elect, JD Vance indicated “real concern” that “overregulation or some preemptive overregulation” would reduce competition and make it harder to innovate. President Trump’s choice of David Sacks as “White House A.I. & Crypto Czar” supports Vance’s approach, as Sacks has historically advocated for less regulation. Additionally, Trump’s selection of Sriram Krishnan as senior policy adviser for AI at the White House Office of Science and Technology Policy indicates that attention will likely be given to the role of immigration in the context of AI-related higher education and employment.
Contents of the Report
The task force breaks down the report into the following sections: Government Use; Federal Preemption of State Law; Data Privacy; National Security; Research, Development, and Standards; Civil Rights and Civil Liberties; Education and Workforce; Intellectual Property; Content Authenticity; Open and Closed Systems; Energy Usage and Data Centers; Small Business; Agriculture; Healthcare; and Financial Services. The report contains six appendices: AI Task Force Members; AI Task Force Events; Key Government Policies; Areas for Future Exploration; Overview of AI Technology; and Definitional Challenges of AI.
Government Use
The report’s first substantive section recognizes that although there is no “single source of comprehensive guidance” on responsible government use of AI, 20 of 23 government agencies surveyed by the U.S. Government Accountability Office reported using AI. This section calls on Congress to ensure that future government AI use protects “the public’s privacy, security, civil rights, and civil liberties.” The report advocates requiring agencies to disclose their data and metadata, software, model development, deployment, and uses. Where agencies need to protect privacy, security, proprietary data, or national security, the report suggests they disclose their data collection practices and methodology rather than the underlying data.
The report identifies leading roles for several government agencies: the U.S. Office of Management and Budget (OMB), the U.S. National Institute of Standards and Technology (NIST), and the U.S. Cybersecurity and Infrastructure Security Agency. The report suggests that OMB issue policies while NIST continues to provide technical support to agencies as needed.
The authors note AI’s potential to help modernize the federal government’s information technology (IT) systems, as 80 percent of federal IT spending goes toward maintaining outdated systems, and recommend the federal government develop capabilities to use AI defensively against cyber threats.
According to the report, agencies will need to employ more experts and should use resources like the U.S. Office of Personnel Management’s list of general technical competencies to identify skills to promote in their respective workforces. The report recommends implementing new hiring methods to bring AI talent into the federal government and to develop the capabilities of the existing workforce.
Federal Preemption of State Law
The report suggests Congress could regulate AI to “preempt” state law, as the doctrine of federal preemption allows congressional legislation to override state law. The report offers possible routes for Congress to preempt state AI laws, such as requiring uniform AI regulations across states, prohibiting state action on AI for a duration of time, or implementing floors or ceilings on AI standards, among other forms of preemption.
The authors recognize that state law can be more customizable and experimental than national legislation and recommend Congress commission a study to assess possible federal and state laws regulating AI across different sectors.
Data Privacy
The report acknowledges that AI models excel with larger, more diverse datasets but notes that use of such data often raises privacy concerns. The report observes that many models scrape internet data in violation of terms of use and that companies even scrape their own customers’ data, citing Google’s alleged scraping of Google Docs and Gmail data to train its AI tools and Meta’s and X’s changes to their privacy policies “to allow for training AI models on the platforms’ data.”
The report discusses how AI misuse can generate physical, economic, emotional, and reputational harms, as well as discrimination and reduced individual autonomy. The authors note that although there are federal laws regulating specific topics, like child privacy and health records, there is no comprehensive federal data privacy or security law—instead, a confusing “patchwork” of state regulations imposes uncertainty on regulated parties, disproportionately burdening small businesses.
The report offers two brief recommendations for legislators: promote data access “in privacy-enhanced ways” and pursue privacy legislation that is “generally applicable and technology-neutral.”
National Security
This section begins with the premise that AI is a dual-use technology: AI can both protect and harm national security. It goes on to describe the Defense Department’s current experiments with AI in logistics, operations, and autonomous vehicles—unilaterally and with foreign security partners. The report recommends prioritizing congressional oversight of AI activities affecting national security and supporting workforce AI training at the Defense Department.
The report also warns that adversaries are already adopting and militarizing AI and devotes substantial discussion to the People’s Republic of China. The authors report that China has outpaced the United States in AI patent applications, journal publications, and journal citations. According to the authors, the Chinese Communist Party’s lower bar for operationalizing AI as a weapon “raises grave concerns.”
The report emphasizes that the U.S. military must be able to use AI technologies worldwide, even in “contested” and “denied” environments, in both wartime and peacetime.
Research, Development, and Standards
The report defines AI research and development (R&D) as the creation of processes “for learning from data, representing knowledge, and performing reasoning to build computer systems.” With adversaries like China outpacing the United States, the report argues, Congress must promote federal R&D, support AI evaluation, and standardize AI efforts.
While private industry leads American R&D, the report notes the federal government is a leading source of AI research funding. The U.S. government invested $2.9 billion in AI R&D in 2023. Congress must keep supporting AI research, continuing its decades-long efforts, the report says. An open AI research ecosystem is essential, and university researchers must collaborate with commercial producers to achieve AI’s full potential.
The report recommends increasing technology transfers between universities and the market, promoting public-private partnerships, supporting small business R&D, and promoting research and standardization of AI evaluation while maintaining U.S. leadership in global AI norms.
Civil Rights and Civil Liberties
AI can introduce different types of bias, harming civil rights and civil liberties. To address bias stemming from “unrepresentative data, skewed data, and incomplete data,” Congress must develop AI policy focused on reducing detrimental impacts on Americans’ civil rights and liberties. The report offers five recommendations:
- Developers should keep humans in significant decision-making loops.
- Agencies must understand how AI produces discriminatory outcomes and protect against these.
- Sectoral regulators must have the resources and knowledge needed to mitigate AI-produced risks in their fields.
- AI technologies should offer transparency to the users they affect.
- AI producers should develop standards to mitigate their technologies’ biased decision-making.
Education and Workforce
The report notes that U.S. students under age 18 lag behind those in other advanced economies in math and science, and there is an expanding gap in the academic skills needed to develop and deploy AI. National test scores show that during the pandemic, math outcomes “regressed approximately 20 years.” There is also a disparity in STEM degrees awarded to minorities and women relative to other groups, which could in part “reinforce existing structural, economic, social, and demographic disparities.” The federal government should support STEM and AI skill development by endorsing curricula in AI research, machine learning, and data science and by empowering schools to teach AI literacy.
The report also supports new employment pathways and opportunities for AI-related jobs. It predicts that generative AI will augment, rather than replace, most jobs. Within the next three years, an estimated 40 percent of the U.S. workforce will need to develop new skills to successfully implement AI.
Intellectual Property
The report explicitly hesitates to make legislative recommendations regarding AI’s impact on intellectual property (IP), as there is a great deal of pending litigation on these issues. It notes, however, that generative AI has a particularly large impact on the creative community because it is difficult for creators to know whether and how their copyrighted works are being used. These challenges are exacerbated by the global nature of IP policy, which may encourage AI developers to move their operations to jurisdictions with lower technical standards and therefore lower operating and regulatory costs.
Although it avoids making legislative recommendations, the report suggests clarifying how IP laws, regulations, and agency activities intersect with AI. Additionally, it recommends combating the “growing harm of AI-created deepfakes” through several strategies, such as allowing individuals to protect their “identity-based rights” and creating national IP protections.
Content Authenticity
The report notes that synthetic content—material “altered or created using any audio, video, image creation, or editing tool”—presents threats and opportunities. It states that regulation must balance First Amendment rights with threats like nonconsensual intimate images, child sexual abuse material, fraud, copyright infringement, political interference, and media mistrust. Instead of proposing a single all-encompassing solution, the report notes that the response must be multifaceted, relying on public media literacy and technical solutions. Overcoming the challenges posed by synthetic content will require participation by media creators, media consumers, and content distributors. However, amid public alarm, policymakers must remember to focus on demonstrable, not speculative, harms of synthetic content.
Open and Closed Systems
The report assesses the benefits and harms of “open” AI systems—models whose code is publicly available. A key issue in “open” systems is the disclosure of model weights—numerical parameters that dictate model behavior. Anyone possessing the weights can operate the model. The report notes that the disclosure of these parameters fosters innovation and research but carries risks such as misuse by malicious actors, accelerated AI development by adversaries, and exacerbation of existing harms (such as AI-generated child sexual abuse material). Ultimately, the report stops short of concluding that restrictions on open-weight models are appropriate but leaves room for future restrictions.
Energy Usage and Data Centers
The report emphasizes concerns about AI models’ energy consumption, with more effective models requiring substantial resources for model training and operation. For example, the amount of electricity used to train the GPT-4 model is roughly equivalent to the electricity needed to power 4,800 homes for a year. The report explains that AI growth generates high carbon emissions, strains power grids, and introduces novel supply chain concerns (such as semiconductor supply chain disruptions). However, the report stresses that AI can also be part of the solution—optimizing power plant design and grid utilization, finding new energy sources, advancing nuclear energy, furthering carbon capture research, and improving design efficiency for fuel cells.
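To give a sense of how such comparisons are derived, here is a rough back-of-the-envelope calculation (our illustration; the specific inputs are commonly cited estimates, not figures drawn from the report itself): published estimates put GPT-4’s training run at roughly 50 gigawatt-hours of electricity, and the average U.S. household consumes about 10,500 kilowatt-hours per year, so

\[ \frac{50{,}000{,}000\ \text{kWh}}{10{,}500\ \text{kWh per household per year}} \approx 4{,}760\ \text{household-years}, \]

which is consistent with the report’s comparison to powering roughly 4,800 homes for a year.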
The report notes that data centers—facilities containing computing, networking, and storage hardware for AI models—require large quantities of electricity and water, but they also create economic growth. It cites statistics from the Data Center Coalition that in 2022 the data center industry created over 560,000 jobs directly and 4.2 million jobs overall in the United States.
Small Business
The report concludes that small businesses are eager to adopt AI but face distinct disadvantages and challenges. One challenge is limited AI literacy, which may also exacerbate socioeconomic divides. Further, because of large startup costs, small AI firms struggle to compete with large AI companies, although open-source models can counter some costs. Finally, federal compliance requirements are likely to disproportionately affect small businesses. The report recommends supporting small business AI literacy, investigating resource challenges, and easing compliance burdens for small businesses.
Agriculture
The report outlines the ways AI can enhance agricultural production, reduce environmental damage, and improve commodities trading.
The report summarizes three ways AI can enhance agriculture: precision agriculture, which utilizes technologies to increase operational efficiency; water technology, which forecasts water supply; and land change analysis, which monitors land patterns such as soil mapping and wildlife habitats. However, the report finds that high costs, technical literacy demands, and connectivity issues hinder adoption. It observes that greater AI integration could further mechanize agriculture, increase efficiency within specialty crop production by aiding crop assessments, and modernize U.S. Department of Agriculture operations. Furthermore, the task force report details how AI is being used to detect ongoing wildfires earlier, predict future fires, and aid post-fire analysis.
The report also discusses the role AI plays in commodities trading. The Commodity Futures Trading Commission uses AI to detect anomalies and is exploring its use in compliance enforcement. In derivatives markets, AI is used for trading, pricing, firm compliance, and risk management.
Healthcare
The report emphasizes AI’s transformative potential in healthcare. Applications include the use of AI to streamline operations, improve diagnostics, and enhance research. According to the report, AI can reduce administrative burdens, speed up drug development, and improve patient care efficiency. Nevertheless, challenges such as inconsistent medical data standards and lack of interoperability hinder progress. The report recommends fostering safe, transparent, and effective AI practices; encouraging risk management; supporting healthcare-related AI research; and incentivizing adoption while ensuring privacy, security, and equitable healthcare.
Financial Services
The report observes that the financial industry has long used AI. The industry’s uses have included loan processing, fraud detection, and algorithmic trading. The report finds that generative AI can improve compliance, efficiency, and inclusion for underbanked communities but raises concerns about bias, data security, and systemic risks. Furthermore, smaller financial firms face adoption barriers compared to larger, better-resourced institutions. The report recommends fostering responsible AI use, strengthening regulator expertise, and maintaining consumer protections, urging stakeholder collaboration in these efforts.
Looking to the Future
The report demonstrates that AI is already prevalent in many sectors and may transform others. Based on the report, this section outlines what is likely to happen with AI regulation in the coming year or two. The report suggests that Congress is likely to take the following steps concerning AI development and regulation:
- Technical training: The report emphasizes the positive impact AI can have on various fields while noting that a major barrier to this progress is a lack of technical knowledge. The public should watch for public- and private-sector educational initiatives geared toward these fields.
- Jobs: The report notes the potential for creating technically sophisticated and nontechnical AI-related jobs but notes that hiring needs remain unmet, partly because many companies have sought workers with four-year technically oriented degrees, who are in short supply. Besides incentivizing companies to hire more candidates without four-year degrees, Congress may continue previous efforts like the CHIPS and Science Act to fund more STEM education.
- Energy: The report links AI with energy development. Energy-associated issues such as climate change, energy independence, and job creation are of concern to both political parties, so Congress may make more funding available for AI-related energy projects.
- Data privacy and civil rights: The confusing patchwork of privacy protections burdens companies key to AI development. Uniform data-privacy regulations would bring economic advantages, foster AI development, and alleviate bipartisan concerns about data scraping, about civil rights violations arising from the lack of data privacy, and about individuals’ loss of autonomy over their own data.
- National security: Congress will continue to focus on AI’s dual-use potential. Legislators will emphasize possible threats from technology transfers to entities based in adversarial territories, as well as the building of defensive and offensive AI capabilities.
On Jan. 14, nearly a month after the report’s release, President Biden announced Executive Order 14141, which responds to many of the concerns and opportunities detailed in the report. The executive order aims to create more domestic data centers and foster U.S. AI innovation while balancing national security (including the cyber, supply-chain, and physical security of AI infrastructure), economic competitiveness (helping the U.S. gain an advantage in the global AI market and creating U.S. jobs), and clean energy. The order prioritizes the growth of clean energy technologies as key to powering AI development, calling for investment in geothermal, solar, wind, and nuclear energy. It is unclear what will happen to the order under the Trump administration. President Trump has yet to comment on Executive Order 14141. He has advocated for a more laissez-faire approach and overturned Biden’s 2023 Executive Order 14110, which created regulations aimed at promoting “responsible” AI use. However, Executive Order 14141 has a different aim than the rescinded order, focusing more on AI growth and aligning with much of the bipartisan report, and so it might be retained by the new administration.