New White House AI Policies Introduce Government by AI

While the Biden administration introduced government with artificial intelligence (AI), the Trump administration aspires to government by AI. The former encouraged agencies to explore AI use cases and experiment with integrating AI tools into services and systems while adhering to a number of procedural safeguards. The latter retains some of those safeguards but establishes a default of using AI to streamline and improve government services. Two new policies issued by the Office of Management and Budget (OMB) will usher in this era of government-by-AI. OMB Memo M-25-21 calls for accelerating federal use of AI. OMB Memo M-25-22 directs agencies to use an “efficient” acquisition process. The policies build on the administration’s Jan. 23 executive order, which centered “AI dominance” as the primary aim of President Trump’s AI strategy.
Implementation of these policies may have a major influence on the public’s willingness to accept AI as a core part of government activity. Whether that influence is positive or negative depends on whether agencies adhere to the safeguards and transparency requirements outlined by the OMB memos while also leveraging AI to meaningfully improve government services and effectiveness. Current uses of AI by the administration—namely, by the Department of Government Efficiency (DOGE) team—may jeopardize this public trust, clashing with the aims of the OMB memos. DOGE has deployed AI systems with seemingly minimal oversight and in sensitive contexts. For example, Reuters recently reported that DOGE developed AI tools to monitor the communications of staffers within at least one federal agency to identify conversations critical of the President and his agenda. Elon Musk has even floated replacing government workers with AI. Effective government by AI—as articulated by the OMB memos—precludes such reckless, opaque uses of AI.
As things stand, just 17 percent of the public expects AI to have a positive effect on the United States over the next two decades. Nearly 60 percent worry that the government will regulate AI inadequately, whereas just 21 percent think it will overreach and quash AI innovation. If the administration rushes to work AI into sensitive decision-making contexts without earning the public’s trust, it may hinder future efforts to adopt AI in government processes. The shift to government by AI will not only redefine federal operations but also help determine whether AI serves as a democratizing force or concentrates power further from citizen oversight—making the implementation of these policies a critical battleground for the future of American governance.
The Era of Government By AI
The two policies are underpinned by a belief that the federal government ought to lead and encourage AI adoption. This belief goes beyond the idea that the government should merely avoid regulation that hinders private-sector development and use of AI; it turns the government into an active participant in the diffusion of AI into daily life, including key services and systems. Lynne Parker, the Principal Deputy Director of the White House Office of Science and Technology Policy, celebrated the policies as a manifestation of the administration's goal to "encourage and promote AI innovation and global leadership," which she asserted "starts with utilizing these emerging technologies within the Federal Government."
Memo #1: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust
The first memo directs agencies to "provide improved services to the public [via the best of American AI], while maintaining strong safeguards for civil rights, civil liberties, and privacy." More broadly, it instructs agencies to think through how increased use of AI can "promote human flourishing, economic competitiveness and national security." It also subjects agency use of AI to an appendix's worth of implementation requirements. Nevertheless, a survey of its core components helps make clear why this policy paves the way for government by AI.
First, agencies now have the task of "lessening the burden of bureaucratic restrictions" and "building effective policies" for deploying AI. This aggressive, deregulatory mandate signals a reorientation of AI from a complementary governance tool into one at the core of governance tasks. The goals for increased agency use of AI—innovation, governance, and public trust (in that order)—also reflect this new focus.
Second, the memo instructs each agency’s Chief AI Officer (CAIO)—a role originally created by the Biden administration—to assume a pro-AI adoption mentality. CAIOs bear responsibility for "promoting AI innovation, adoption, and governance." They must also track the agency's progress toward creating an AI-ready workforce and assist other agencies with their own AI adoption plans. This latter responsibility may include sharing the data and code the agency used in creating and deploying an AI system with other agencies.
Third, agencies must adjust their procurement processes with an eye toward AI adoption. For example, agencies should consider data a "critical asset" in future contracts for AI products and services. In practice, this looks like agencies "taking steps to ensure their contracts retain sufficient rights to Federal Government data and retain any improvements to that data, including the continued design, development, testing, and operating of AI." Another proactive measure to facilitate government by AI turns on the memo’s provisions related to vendor lock-in. Pursuant to the memo, agencies should avoid contractual terms that may render them beholden to a particular AI company. Such an outcome might prevent agencies from accessing the latest and best AI tools. Vendor flexibility may also foster a more competitive AI ecosystem compared to one in which a few AI companies secure a large fraction of agency work for years on end. The memo’s instruction that agencies give preference to interoperable AI systems in their procurement processes will likewise facilitate competition among vendors by easing agencies’ ability to switch providers. As noted below, the second memo adds to this emphasis on structuring procurement around a robust, competitive U.S. AI ecosystem.
Fourth, the memo imposes few procedural barriers that would stand in the way of government by AI. The procedural hurdles it does impose include transparency measures, employee training, and minimum risk management practices. Transparency measures include biannual release of the agency's AI compliance plan, which will outline how the agency has adhered to the terms of the memo. Agencies must also maintain and update a publicly accessible AI use case inventory. This inventory must call attention to "high-impact" agency uses of AI, such as when the AI output "serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety." Notably, some degree of human oversight (a proverbial human in the loop) does not automatically exempt an AI use case from falling into this bucket. Certain AI use cases automatically qualify as high impact, such as the use of AI in "safety-critical functions of critical infrastructure" and "in healthcare contexts."
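To make the inventory requirement concrete, a single record might capture the fields the memo emphasizes: the use case, whether it qualifies as high impact, and why. The sketch below is purely illustrative; the memo does not prescribe any format, and every field name, value, and agency here is an assumption.

```python
from dataclasses import dataclass

@dataclass
class AIUseCaseRecord:
    """Hypothetical entry in a public AI use case inventory (illustrative only)."""
    agency: str
    use_case: str
    purpose: str
    high_impact: bool           # True when output is a "principal basis" for rights- or safety-affecting decisions
    high_impact_rationale: str  # why the use case does (or does not) qualify
    human_in_the_loop: bool     # note: human oversight alone does not exempt a use case

record = AIUseCaseRecord(
    agency="Department of Example",  # hypothetical agency
    use_case="Benefits eligibility triage",
    purpose="Prioritize the order in which applications receive human review",
    high_impact=True,
    high_impact_rationale="Output serves as a principal basis for decisions affecting benefits",
    human_in_the_loop=True,  # still high impact despite human review
)
print(record.high_impact)
```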
To maintain an "AI-ready" workforce, agencies face three obligations: hosting regular trainings, such as those offered by OMB, and additional trainings as necessary to "increase practical, hands-on expertise with AI technologies"; recruiting and retaining individuals with the skills necessary to use AI systems in diverse contexts; and continuously monitoring the extent to which the agency's workforce lacks the skills required to make full use of AI.
With respect to the minimum risk management practices, these apply specifically to high-impact AI use cases and generally do not apply to AI use by the Intelligence Community. Pilot AI programs also lie outside the reach of these practices—such programs have limited reach, occur within a finite time period, and, "when possible," allow individuals interacting with the AI in question to opt out of pilot participation. Use cases within the ambit of the practices must go through the following checks (loosely illustrated in the sketch after this list):
- pre-deployment testing,
- an AI impact assessment,
- ongoing monitoring for potential adverse impacts,
- provision of training for the AI operators,
- creation of oversight and accountability systems to assess the AI system in practice,
- availability of consistent remedies and appeals (specifically, agencies must give individuals affected by an AI-made decision access to human review and a chance to appeal negative impacts, "when appropriate"), and
- establishment of a means for feedback from end users and the public.
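One way to picture how these checks could operate in practice is as a deployment gate that blocks a high-impact use case until each practice is either completed or covered by a CAIO-approved waiver. The sketch below is a hypothetical illustration of that logic, not anything the memo prescribes; all practice names are assumptions.

```python
# Hypothetical pre-deployment gate for high-impact AI use cases (illustrative only).
REQUIRED_PRACTICES = [
    "pre_deployment_testing",
    "ai_impact_assessment",
    "ongoing_monitoring",
    "operator_training",
    "oversight_and_accountability",
    "remedies_and_appeals",          # human review and appeal of negative impacts
    "end_user_and_public_feedback",
]

def deployment_gate(completed: set[str], waivers: set[str]) -> bool:
    """Permit deployment only if every practice is completed or waived by the CAIO."""
    missing = [p for p in REQUIRED_PRACTICES if p not in completed | waivers]
    for practice in missing:
        print(f"Blocked: {practice!r} is neither completed nor waived")
    return not missing

# Example: one practice carries a CAIO-approved waiver; the rest are completed.
can_deploy = deployment_gate(
    completed={"pre_deployment_testing", "ai_impact_assessment", "ongoing_monitoring",
               "operator_training", "oversight_and_accountability", "remedies_and_appeals"},
    waivers={"end_user_and_public_feedback"},
)
```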
While this list sounds long, it’s worth stressing that these practices cover only “high-impact” AI use cases and that agencies may receive a yearlong, renewable waiver from one or more of the requirements upon approval from the agency's CAIO. The extent to which these safeguards further public trust and mitigate AI risks will likely hinge on how frequently agencies receive waivers.
Memo #2: Driving Efficient Acquisition of Artificial Intelligence in Government
If the first memo signaled a green light for agencies to expand AI use, then the second removes the traffic cones from the road ahead. That said, there’s a high degree of overlap between the two memos when it comes to procurement best practices.
The second memo, at a high level, instructs agencies to retool their acquisition practices to better accommodate the speedy and substantive adoption of AI. Three themes underpin this memo’s approach: fostering a competitive AI marketplace, tracking performance to protect taxpayer dollars, and encouraging broad collaboration across acquisition teams. Each theme signals a commitment to infusing AI into the fabric of federal operations—not as an add-on, but as a foundational capability.
First, the memo places clear guardrails on how agencies should structure AI contracts to maximize competition and minimize dependence on a single vendor. Agencies must prioritize interoperability, data portability, and modularity in their solicitations. That means baking in requirements that enable models and data to transfer easily between systems and vendors, avoiding technical lock-in that could frustrate innovation and increase long-term costs. The guidance also cautions against opaque or exclusionary licensing schemes that might hinder vendor transitions or restrict government use of its own data and code. In short, agencies must treat government data and contracting leverage as strategic assets in building a more open, flexible AI ecosystem.
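As a rough illustration of what interoperability, data portability, and modularity can mean at the technical level, a solicitation might require every vendor's system to implement a common export interface so that data and models can move to a successor provider. The interface below is a hypothetical sketch of that idea; none of its names come from the memo.

```python
from typing import Protocol

class PortableAISystem(Protocol):
    """Hypothetical vendor-neutral interface a solicitation might require (illustrative only)."""

    def export_data(self, destination: str) -> None:
        """Write all government-owned data in an open, documented format (e.g., CSV or JSON)."""
        ...

    def export_model_artifacts(self, destination: str) -> None:
        """Write model weights and configuration in a portable format a successor vendor can load."""
        ...

    def describe_interfaces(self) -> dict[str, str]:
        """Return open API specifications for each endpoint the system exposes."""
        ...
```

A contract structured around an interface like this would let an agency score vendors on portability up front rather than discovering lock-in at transition time.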
Second, the memo sets expectations for how agencies should manage the performance and risks of AI tools throughout the acquisition lifecycle. Before an AI contract is awarded, agencies must convene a cross-functional team—potentially including privacy, legal, cybersecurity, and civil liberties experts—to identify foreseeable risks. These teams are responsible not only for flagging issues before purchase, but for ensuring that vendors disclose how their tools will be evaluated, monitored, and updated over time.
Third, agencies are expected to play a more active role in cultivating AI contracting expertise across government. Within 200 days of the memo's issuance, the General Services Administration must launch a repository of AI contracting tools and best practices. Agencies will be expected to contribute language, templates, and resources to this shared hub, including cost benchmarks, performance-based incentive strategies, and model terms for safeguarding data rights.
The memo also nods to broader geopolitical concerns by instructing agencies to “maximize the use of American-made AI,” a provision that signals both economic and national security priorities in line with the administration's “America First, America Only” AI mentality. This emphasis effectively urges agencies to treat U.S.-developed AI tools as default options—especially in contexts that implicate sensitive data or critical infrastructure.
Taken together, the memo formalizes a new procurement posture. Agencies must move faster, think more collaboratively, and act more strategically. But they must also be more discerning—about data rights, vendor accountability, and the long-term adaptability of the systems they acquire. The extent to which agencies embrace these dual mandates—speed and scrutiny—will shape whether government by AI advances the public interest or just adds new complexity to old bureaucratic habits.
The Significance of Government By AI
The administration's policies set the foundation for a generational shift in how government operates. Whether that foundation supports public trust and practical innovation—or ends up reinforcing cynicism and inefficiency—will depend on how faithfully agencies live up to both the spirit and the letter of these memos.
Looking ahead, these policies will likely unfold along one of three trajectories. In the first and most optimistic scenario, agencies will embrace both the innovation mandate and the safeguards, resulting in AI systems that meaningfully improve service delivery while maintaining transparency and accountability. Citizens might experience faster benefit determinations, more personalized government interactions, and fewer bureaucratic hurdles. Such outcomes could gradually shift the deeply skeptical public perception toward cautious acceptance.
In a second, more concerning scenario, the emphasis on speed and American AI dominance could overshadow the safeguards. Agencies might rush deployment, overuse the waiver provisions for high-impact systems, and prioritize efficiency metrics over impact assessments. The DOGE team's reported use of AI for monitoring staff communications suggests this path is already being pursued in some corners of government. Should this approach predominate, public trust will likely erode further, potentially triggering a backlash that hampers legitimate AI adoption across all levels of government.
A third possibility is a bifurcated implementation, where some agencies develop thoughtful, transparent AI applications while others deploy systems with minimal oversight. This uneven approach would create pockets of both innovation and distrust, leaving the public confused about whether government by AI represents progress or peril.
What ultimately emerges will depend on three key factors: the degree to which agencies prioritize meaningful transparency over mere compliance; whether procurement flexibilities truly enable diverse vendor participation rather than concentrating power in a few well-connected firms; and how the administration balances its dual commitments to AI dominance and responsible use when these goals inevitably conflict. For citizens and technologists alike, the stakes could not be higher.