
Lawfare Daily: Aram Gavoor on the Trump Administration's AI Pivot: Trading Safeguards for Stargate

Kevin Frazier, Aram A. Gavoor, Jen Patja
Friday, January 24, 2025, 8:00 AM
How is the Trump administration approaching AI policy?

Published by The Lawfare Institute in Cooperation With Brookings

Aram Gavoor, Associate Dean for Academic Affairs at GW Law, joins Kevin Frazier, a Tarbell Fellow at Lawfare, to summarize and analyze the Trump administration’s initial moves to pivot the nation’s AI policy toward relentless innovation. The duo discuss the significance of Trump rescinding the Biden administration’s 2023 executive order on AI as well as the recently announced Stargate Project.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Intro]

Aram Gavoor: This Stargate Project touches on a number of other priorities of the administration, including several executive orders that seem to operate in harmony with it, as well as a broader vision for American supremacy and national security.

Kevin Frazier: It’s The Lawfare Podcast. I'm Kevin Frazier, a Tarbell Fellow at Lawfare, joined by Aram Gavoor, Associate Dean for Academic Affairs at GW Law.

Aram Gavoor: Taking the partisanship out of it: Trump is returning to the baseline that he had at the end of his first term, a baseline that the Biden administration did not disrupt, just built upon and shunted in a different direction.

Kevin Frazier: Today we're talking about the rapid moves taken by the Trump administration to pivot to an AI policy grounded in innovation. President Trump has already rescinded the Biden administration's key executive order and launched a bold proposal to double down on domestic AI innovation. Aram helps me analyze what this all means and what may be next.

[Main podcast]

So depending on who you ask, on January 21st, 2025, President Trump either announced the next Apollo missions or the next Manhattan Project. Or, if you've been on X these days, you may take an opposite view—one held by one Elon Musk—that perhaps the recently announced Stargate Project is instead something that really isn't going to amount to much because we just don't have the funds there.

So this Stargate Project is a collaboration between OpenAI, SoftBank, Oracle, and MGX, among others, aiming to invest up to $500 billion—yes, half a trillion dollars—over four years to build AI infrastructure in the U.S., starting with an immediate $100 billion deployment.

OpenAI's blog post on the project argues that, quote, this project will not only support the re-industrialization of the United States, but also provide a strategic capability to protect the national security of America and its allies, end quote. Sam Altman, standing alongside President Trump during the announcement, said, quote, I think this will be the most important project of this era, end quote.

Aram, we have so much to discuss just in the first days of the Trump administration with respect to AI policy, but this project obviously has been getting a ton of headlines, a lot of attention. What are we looking at here? What is the significance? Can you give us a little bit more detail about what's going on on the ground?

Aram Gavoor: I think a lot of the facts are emerging. But one thing that's very clear is that President Trump's vision of working with industry in a collaborative way, focusing on investment and facilitation, is front and center here.

Obviously, this Stargate Project touches on a number of other priorities of the administration, including several executive orders that seem to operate in harmony with it, as well as a broader vision for American supremacy and national security.

Kevin Frazier: And for our listeners who may have seen this announcement, may have seen President Trump, for example, standing alongside many CEOs—just to be clear, there is no explicit federal participation in the Stargate Project. This is purely a private endeavor.

Aram Gavoor: Well, again, I think the facts are still emerging, so we don't know yet, but undoubtedly it's federally supported in the sense that if it's announced within a day or so of the president being sworn into office, and he's in the Roosevelt Room, because that's what it looked like at the White House, there's going to be some significant coordination.

So I think the federal support is currently taking place in secondary formats, and I would be unsurprised if direct federal support, in the form of reallocated funds, etc., were to materialize, because there are some executive orders that indeed cover those types of things, freeing up some money for purposes yet to be determined.

Kevin Frazier: And obviously, this is going to address a huge need when it comes to AI development and the nation's AI goals. As noted by McKinsey, for example, global demand for data center capacity may triple by 2030, growing between 19 percent and 27 percent annually.

So the scale of this project seems to be aligned with the scale we need for that sort of AI dominance as forecasted by President Trump and others. What is your sense of the actual capacity of this project? Are you optimistic and bullish about it achieving, quote, the re-industrialization of the United States? Or is there a little bit of room or argument for some pessimism? Or what are some of those initial signs you're seeing?

Aram Gavoor: Well, whenever these big projects are rolled out, they're certainly rolled out with intended goals. Those goals can be modified over the years. But I am bullish that a Stargate Project is indeed going to launch, and that it's definitely going to have tens of billions of dollars behind it, if not the full sum that's been described.

And really, I think the facts speak pretty loudly, and what I see are at least two executive orders that tend to align with this infrastructure project on tech. One is Declaring a National Energy Emergency. That's a day one executive order in which the president declared a national energy emergency due to the U.S.'s inadequate energy supply and infrastructure.

It also involves the potential expediting of energy infrastructure projects, leveraging resources under laws like the Defense Production Act, and also calls for consultations under the ESA, the Endangered Species Act, to facilitate energy supply, all coordinated to enhance the nation's energy security. So if you're talking about massive AI infrastructure, that, as you correctly identified, is predominantly data centers and clustered computing, which have massive energy demands.

And then the other executive order that I wanted to identify, also from day one, is called Unleashing American Energy. This establishes a policy of fossil fuel exploration on federal lands. I think this one also freezes Inflation Reduction Act grants. Something like 86 or 87 percent of the total grants under the IRA of 2022 have been made, but there's about $18 billion that hasn't been committed. And that's based on President Biden's own words before leaving office last week, if I recall.

So what you're really seeing is that this AI project, Stargate, synergizes with a number of different policies of the Trump administration. And this is just day three—we're not even at full day three of the term; that's two hours from now. And undoubtedly we're going to see more of this coming up, especially if the president decides to engage in an affirmative executive order on AI.

And keep in mind, in the day one rescission—a revocation of EOs—Executive Order 14110, which is the big Biden AI EO, was revoked. However, 14141, an EO that President Biden put out last week, has not yet been revoked, but likely will be—that's my guess. So there's a lot developing here.

Kevin Frazier: Lots developing indeed. And I think before we move on to some of those initial rescissions of Biden era EOs and some of those EOs that haven't been touched yet, I do just want to hit on some of the initial commentary that's been interesting around the Stargate Project.

Justin Amash, a former congressman, had a hot take: he couldn't imagine anything scarier than a collaboration between the federal government and a handful of companies all agreeing on some technological vision for the country.

Others, though, have embraced this as a really monumental pivot in AI development in the U.S., because through the Stargate Project startups and researchers will potentially have far more access to the critical resources they need to experiment and build out their own AI products. And so even though OpenAI is the key lead on this project, it may actually embolden and increase AI competition.

Do you get a sense of which of those seems more likely or what sorts of key indicators are you going to be keeping an eye on as the Stargate Project continues to unfold?

Aram Gavoor: So I definitely think we're at a crossroads as to whether Stargate is as big as it's described to be. And I just gave you a variety of different data inputs from actual policy suggesting that it is potentially going to achieve its stated outcome.

So I think the big variables to look at first are: will this collaboration involve, on the compute side, compute availability for researchers? Will it also allow for small businesses to participate? As to the models themselves, will they be on an open source platform, or will there be a preference for open source as a prerequisite to utilize the large infrastructure clustering?

And then there's also the government contracts perspective, which is the direct government use of AI. And I do think the concept of frontier models—although the term might not survive in the text of Biden executive orders, presidential memoranda, or national security memoranda—is a generally valid idea that's existed for quite a long time. I could see that continuing.

And some of this will also come down to whether, in a public procurement setting, the government is going to focus on really the big players and the highest capabilities, which of course I think it needs to in one respect. But then also whether it's going to take any steps to try to mitigate the risk of what's known as lock-in in the public procurement world, in which a vendor enters into an agreement with the government that is utilized and structured in such a way that it's very difficult for competition in the future.

So are these sole source procurements? Are these going to be open procurements with RFQs, etc.? And also, to what degree will there be small business set asides explicitly built in or provided as plus factors for the procurements themselves?

Kevin Frazier: Well, certainly we'll have a lot to watch there as the Stargate Project continues to evolve and we'll see whether or not they actually have the money. Elon Musk may have to eat his words or not. All to come forth in due time.

So, regarding the rescission of Executive Order 14110. Folks may know this simply as the Biden AI EO. It came out in October of 2023 and established comprehensive guidelines for AI development, with a focus on preventing AI-enabled threats to civil liberties and national security. Generally, it was framed as making sure that ethical considerations were at the forefront of AI development and as promoting responsible AI development, to use some of the buzzwords of this policy space.

President Trump rescinded that executive order on day one, and I was wondering if you could walk through some of the immediate impacts. You've been involved in thinking about that 2023 EO, and there were a lot of projects that required explicit actions by federal agencies, like creating chief AI officers. What's going to happen to those AI officers? They've been established. Are we seeing changes in how agencies are going to operate? What are some of the other consequences you see from this rescission?

Aram Gavoor: So, based on this day one rescission—again, I want to introduce a broader sense of context. Part of it is just spending the time to read everything and then seeing what negative space is also involved and what the consequences of other rescissions and coordinated policies are.

So I think you hit the main one, which is the rescission of 14110. But then, to offer a little bit more triangulation, I would also think it's crucial to identify Executive Order 14091, which is Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government—which is implicated in 14110 and also referenced in the OMB implementation of 14110—as a big feature.

And then the third part of the triangulation would be the broader Trump HR policies and pauses. So in the context of gender equity, racial equity, DEI, those are all policies that the president on day one described as arguably unlawful, inconsistent with American interests, discriminatory, and immediately not part of the administration's policy, to the point where there have already been some terminations of employees, career employees, in the federal government. And it's conceivable that that might continue within these domains.

So AI as structured in the Biden administration—14110 and, predating that, the Blueprint for an AI Bill of Rights in 2022 and a number of the other structures—all of that is being pulled out root and stem. So to the extent that AI officers exist in the agencies as a technicality, I could see that the rescission of 14110, which required them, might also undo those positions.

But still, those positions currently exist. Like, there's a designation that exists. Some people might actually be employed in that sole role. So there are going to be implementation consequences for which those things will be analyzed.

I would be utterly unsurprised if the general proposition of a chief AI officer does not remain intact. Or, again in context, look at the DOGE order, which took the U.S. Digital Service (housed in the EOP, a 2014 Obama era creation), renamed it DOGE, and gave it the mission of government efficiency, predominantly in the context of computing. And then if you also look at one of the hiring freeze EOs actually referencing DOGE as a tasking, I could see those AI officers emerging as, or even being dual hatted as, DOGE officers within the agencies, as opposed to AI officers with a pivot toward equity.

Kevin Frazier: And so I think the implementation of this rescission is going to be fascinating to pay attention to, because another weird quirk of all of this is that the U.S. AI Safety Institute, or AISI—one of several AISIs that have been stood up around the world; for example, the UK has one, South Korea has one, I believe France has one—was created in the wake of the EO, but wasn't explicitly outlined in EO 14110.

And so a lot of folks have been debating what this means for the U.S. AISI, because the AISI is supposed to be the entity that, for example, performed some safety evaluations on frontier models developed by OpenAI, Anthropic, and others. Do you get a sense that some of these evaluative tasks performed by AISI are going to fall by the wayside? Or maybe we'll see an alternative means of checking these models before deployment? Or is that generally inconsistent with the broader Trump vision of reducing barriers so that these frontier models can be deployed and adopted as quickly as possible?

Aram Gavoor: So I'm of three minds on this. First, it's a gray area, so you know, to be determined.

Second, as a technicality, if 14110 was yanked—which it was, rescinded day one—its associated National Security Memorandum 25, which is premised on 14110, was presumably also yanked, including, by the way, NSC staff: holdover staff, career staff, detailees predominantly from the agencies, carried over from the 46th to the 47th presidential administration. Many of those people have been sent back to their agencies. I could see that the undergirding for the Safety Institute has all but evaporated.

Now, the general proposition of the Safety Institute as a public-private intercessor, a partnership validator, might continue, obviously with different mission criteria, right? Because a lot of the equity and fairness issues, at least as the Biden administration described them, have also been undone. I think the structure of the Safety Institute might continue with different mission criteria, and probably with different employees, because the current lead of the Safety Institute is pretty clearly someone who is a leader in what the Biden administration described as the AI safety and rights-implicating world.

And then the third mind I have is that it might just be wiped out. It might get caught up in the view that that's not our vision; it's too focused on things that we didn't originally design; we liked where things were headed with 13960, which is the Trump EO that actually wasn't rescinded by the Biden administration.

And we might want to just build something new, or it might get caught up in DOGE. Or look at the hiring freezes: for employees who are about to be hired, if they're not in national security or immigration, those are hiring freezes that implicate them. And then, given that the institute is a relatively new phenomenon, there are people who were hired within the past one or two years. There was an OPM memorandum by, I think, Acting Director Ezell to all the agencies saying, hey, it is possible—and a reminder—that those who have not had one or two years in civilian service have not yet achieved tenure and are therefore easier to separate from employment without the MSPB protections.

So, again, looking at it all in context of the three minds, my guess is that it's either going to be reduced and modified or just utterly taken away. But that's something that we really will need to wait and see. It could be like a next week thing. Honestly, we don't know.

Kevin Frazier: It could be. News is arriving quickly these days.

And I think—just again, speculating here—if you take seriously the broader vision of a focus on innovation, as espoused by the GOP platform and as President Trump has advocated, it does seem somewhat counterintuitive to suggest that the AISI in its current form, especially with a focus on perhaps slowing down the deployment of models, would align with Trump policy. But we'll see which of these three minds is correct in due time.

One thing that is interesting, looking back at the Biden administration—already doing some historical analysis here—is that we saw early in the Biden administration, as you mentioned, a greater focus on that sort of responsible AI innovation, as outlined by the 2022 AI Bill of Rights and the 2023 EO.

Yet by the end of the Biden administration, signified by the National Security Memo on AI and then a couple of last-second EOs, there seems to have been a pivot toward a more innovation-minded approach to AI, and in particular winning the AI race against China. And one of those EOs still in place was a last-second executive order on AI infrastructure. That executive order called for the construction and operation of, quote, frontier AI infrastructure, including data centers and clean energy resources.

So is this one of the EOs that you think might survive the first few days of the Trump administration? Or is that focus on clean energy perhaps going to be a death knell, and we've already seen President Trump kind of lean into alternative approaches to building out AI infrastructure?

Aram Gavoor: I think EO 14141 is what you're describing—it just hit the Federal Register today. Typically it takes five or six days for an EO to get to the Register, after sitting in the reading room for a day or two. Part of that EO is directly inconsistent with later Trump administration policy. So that's “Unleashing American Energy,” “Declaring a National Energy Emergency,” the end of the, quote, Green New Deal.

So I would be unsurprised if 14141 were completely rescinded. But at the same time, I think some of the ideas in 14141 are long-standing good ideas, and you're seeing the application of them: Stargate is AI infrastructure; it's the AI infrastructure project. So I think you could see it return in a modified vision that's consistent with President Trump's governing mandate.

A couple of other points that your good questions implicated. First, the way I look at this, zooming way out: there's plenty of coverage, what I call outrage-level news coverage, of lots of things the Trump administration is doing in the first couple of days.

Another way to look at it, taking the partisanship out of it, is that Trump is returning to the baseline that he had at the end of his first term. It's a baseline that the Biden administration did not disrupt, just built upon and shunted in a different direction.

Remember, 13960 is still a good EO, and it was embraced by 14110 and a lot of the other Biden era EOs. We're just returning to that, which is actually a build-out of Obama era policy—the way I view it, the non-politicized version of AI, where you're not using AI as a mechanism to achieve other social ends, you're not using AI as a function to try to regulate general computing, et cetera, or to achieve clean energy goals.

So I think the benefit for purposes of the national approach to AI is that there is going to be a shot in the arm of innovation because of a consequential deregulatory policy—because I view all of this as just executive order based regulation. Those types of policies, as well as the potential pulling back of specific agency enforcement, audit, and oversight functions, which all of the Trump acting officials and newly designated independent agency chairs are undoubtedly going to modify, are going to give a lot more space. So a higher percentage of the investment dollar goes to actual innovation as opposed to compliance.

And then, on top of that, because the primary strategic competitor of the United States on AI, the PRC, has a national approach, there is some level of need, and I think cognizance of it. That's what you saw with the October national security memorandum under the Biden administration that you were talking about: this shift toward a national plan, national security, this type of clustering with support from the U.S. government, and Stargate. And this is just the first of probably multiple forthcoming policies intended to really sharpen the edge of the U.S. advantage and to maintain that lead.

And the goal is not even just to maintain the lead; it's to grow the lead and then have associated secondary technologies and secondary benefits flow from that: increased energy independence, increased capability for chip manufacturing and the actual manufacture of advanced electronics in the United States. All of that, as well as trade policy, which undoubtedly is going to have to keep in mind raw materials needs as well as some of the developed and manufactured goods that are just not easy to get in the United States but do not directly pose a national security risk.

Kevin Frazier: Yeah, I mean, in addition to uncertainty about what EOs we may see develop and what further executive action may be taken, of course, another major variable here is what action Congress takes, just looking down Pennsylvania Avenue.

We know that states like Connecticut and Texas—the list goes on—are actively considering their own AI legislation that may create the sort of regulatory patchwork that folks have contested and argued over in the privacy context, the health care context, and other areas. And I wonder if you're picking up on a vibe in D.C. as to whether we may see some action on the Hill sooner rather than later to try to preempt that sort of patchwork approach from blunting the administration's goals for a national innovation approach to AI.

Aram Gavoor: So, two key points on this. One is: yeah, one of Congress's first main legislative actions this year is to pass an immigration detention bill, the Laken Riley Act, which, if you've been paying attention, was all but impossible for the past six or seven years. Even in the first term, under 45, in 2017 and 2018, when the Republican Party actually had a larger majority in the House, none of that was possible legislatively.

So that's happening week one, and that's much more controversial historically than AI funding or authorizations. I see a total willingness in Congress to move forward on that, because, I mean, the president just had a meeting on tax reform, too, which undoubtedly will also help these AI companies and the industry, right? Less taxation, more dollars allocated to innovation.

With regard to the question of AI federalism and preemption as a function of regulation: this happens every time you have a cross-party presidential transition. All of your top people from the last administration are going to go somewhere. Some of them are going into industry, some of them are going into think tanks, and some of them are going to state governments. And they will undoubtedly advance their vision of AI in some of the blue states and essentially be fed by, and co-develop some of that policy with, some of the think tanks.

So I think there is going to be a potentially agonistic relationship between the federal approach, sometimes alongside industry, and some levels of state regulation.

So we've seen some pushback. There was a big general AI, almost general computing, regulation that California got close to passing but didn't pass. And I think that's going to be the battleground for the coming years. But at the same time, states like California don't want to lose more of their economy to Texas, and I think there's a growing willingness to realize, oh, perhaps we should be just a touch more business friendly so we can retain what we've got.

Kevin Frazier: And in case folks didn't catch the immediate details of the Stargate Project, that is going to start in Texas, right?

Aram Gavoor: Starts in Texas, right. Four or five years ago, California would naturally have been the place to go for these things. And it's a huge loss, I think, to the state. Undoubtedly the governor's office, Gavin Newsom, is paying attention to that, and they're trying to reevaluate: how do we maintain our competitiveness as the eighth largest economy in the world in the form of our state? How do we keep this tech sector here?

Kevin Frazier: Well, you know, they say everything's bigger in Texas, including the data centers. That may be the new refrain.

But thinking about this triangulation of really interesting actors involved in the Trump ecosystem: Elon Musk actually came out in support of that SB 1047 bill in California; Sam Altman and OpenAI, as demonstrated by their participation in the Stargate Project, obviously have their own view of policy; and President Trump has been known to have a variety of views on different topics from time to time.

And so keeping track of all of these different actors and what strings are being pulled is going to be a tough one, especially as we continue to see progress in China that perhaps exceeds expectations. For example, DeepSeek, based in China, recently launched an open source model with impressive capabilities.

So monitoring the domestic situation and the international situation will be a tall order for us all, but we're glad to have your insights. And before we let you go, I think it would be important to hear what other news stories, what other developments you may be keeping an eye on in the next days, hours, minutes that our listeners may want to clue into as well.

Aram Gavoor: So there are two that we didn't directly cover. One is actual technical advances. You just described a PRC technical advance, but I'm sure we were all tracking, in May of last year, Sandia National Laboratories, a Department of Energy lab, developing a liquid coolant that resulted in a 70 percent energy cut for purposes of advanced computing. And then the most recent supercomputer cluster, the best of them all, was released in the past month. So it's a big move, counter-move game of chess. That's one.

And then second, I have a very high level of certainty that there's going to be affirmative AI policymaking in the form of executive orders by the president. Looking at the veneer of all of the day one material, there are so many things, even at a high degree of nuance.

So, for example, if you look at the TikTok executive order, section three directs the attorney general to interpret a law in a particular way—you know, very muscular presidential Article II action. Many of the invocations of authority are pure Article II power: Appointments Clause power, consequential removal power, foreign relations power, Take Care Clause power, Commander in Chief power.

I think it is undoubted that there will be some level of affirmative AI policymaking in the form of presidential memoranda and EOs. And there's also, you know, a czar for that now too, in addition to the Office of Science and Technology Policy, Mr. Kratsios.

So those would be the things that I would be paying very close attention to, and I would be unsurprised if we found ourselves talking about this in this exact format sometime sooner rather than later.

Kevin Frazier: I doubt this will be the last time, but for now, we'll have to leave it there. Thank you again for coming on.

Aram Gavoor: My pleasure. Thanks so much.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare Material Supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org.

The podcast is edited by Jen Patja, and your audio engineer this episode was Cara Shillen of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Contributing Editor at Lawfare.
Aram A. Gavoor is a professorial lecturer in law at The George Washington University Law School, where he teaches national security law, constitutional law, federal courts, and administrative law courses.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
