Lawfare Daily: Tim Fist and Arnab Datta on the Race to Build AI Infrastructure in America

Published by The Lawfare Institute in Cooperation With Brookings
Tim Fist, Director of Emerging Technology Policy at the Institute for Future Progress, and Arnab Datta, Director of Infrastructure Policy at IFP and Managing Director of Policy Implementation at Employ America, join Kevin Frazier, a Contributing Editor at Lawfare and adjunct professor at Delaware Law, to dive into the weeds of their thorough report on building America’s AI infrastructure. The duo extensively studied the gulf between the stated goals of America’s AI leaders and the practical hurdles to realizing those ambitious aims.
Check out the entire report series here: Compute in America.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Please note that the following transcript was auto-generated and may contain errors.
Transcript
[Intro]
Tim Fist: If you want to lead in AI, how much electricity do you actually need? We looked at this from two perspectives. So first, the amount of computation that the most advanced models need, which is roughly quintupling each year, a factor of five each year. And then second, the announced plans of the AI industry.
Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, a contributing editor at Lawfare and adjunct professor at Delaware Law, joined by Tim Fist, director of Emerging Technology Policy at the Institute for Future Progress, and Arnab Datta, director of Infrastructure Policy at IFP and managing director of Policy Implementation at Employ America.
Arnab Datta: Just building out the nuclear supply chain could take a decade if you count the workforce as well. And so that's something we really need to grapple with when we think about this actually penciling out into project deployment.
Kevin Frazier: Today we're talking about the legal, political, and social barriers to building out America's AI infrastructure.
[Main podcast]
Today, we're tackling one of the most pressing issues at the intersection of national security, emerging technology, and policy: how to build the infrastructure needed to power the AI revolution.
The saying goes, if you build it, they will come. But in the AI context, we're instead wondering whether we can build it at all. AI isn't just about software, it's about physical infrastructure. Training the next generation of AI models requires data centers on a scale we've never seen before, demanding gigawatts of power and massive policy coordination.
But the U.S. is facing steep challenges: bureaucratic bottlenecks, energy grid constraints, geopolitical competition, and security risks, oh my! Can we rise to the occasion or will countries like China and the UAE outpace us?
Thankfully, today we've got two experts who just published excellent essays on this topic: “Compute in America: A Policy Playbook” and “How to Build an AI Data Center.” So with all that teed up, Tim, let's start at the ground floor here and get some basic definitions out of the way.
You all really dive into the weeds of AI data centers, clusters, and accelerators. Let's just get those definitions on the table so that we're all on the same page. So can you start with what's an AI data center and then move into those other topics?
Tim Fist: Yeah, sure thing. Thanks, Kevin.
So as many people will be aware, if you want to lead both in building the most cutting-edge AI systems and in deploying them widely throughout the economy and critical infrastructure, you need a bunch of specialized AI chips. These are sometimes called AI accelerators, but AI chips is the most commonly used term. These chips are typically designed by American companies—think Nvidia with their GPUs and Google with their TPUs, Tensor Processing Units.
And these chips, they're installed in these big data centers, which are big buildings that provide centralized power, cooling, and internet connectivity. And because the number of chips that you need to train a cutting edge model is growing exponentially, we now see companies move to using multiple data centers to train a single model.
And so we use the term cluster to refer to the total set of networked chips that you use to train a model, where a cluster can be chips connected within just a single data center, or it can span a bunch of data centers that are all networked together. So that's the rough taxonomy that we're working with.
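For readers who want to make that taxonomy concrete, here is a minimal illustrative sketch in Python. The class names are ours, and the numbers are only the ballpark figures Tim cites later in the episode (eight chips per server, roughly 10 kilowatts each), not vendor specifications.

```python
from dataclasses import dataclass

# A minimal sketch of the taxonomy described here: chips sit in servers,
# servers fill data centers, and a training cluster is the networked set
# of chips, which may span one or many data centers.

@dataclass
class Server:
    chips: int = 8            # one server holds ~8 AI chips (GPUs/TPUs)
    power_kw: float = 10.0    # ~10 kW per server, per the episode's estimate

@dataclass
class DataCenter:
    servers: int              # thousands to tens of thousands per building

    def power_mw(self, server: Server = Server()) -> float:
        return self.servers * server.power_kw / 1000

@dataclass
class Cluster:
    """All networked chips used to train one model (one or many buildings)."""
    data_centers: list[DataCenter]

    def total_chips(self, server: Server = Server()) -> int:
        return sum(dc.servers for dc in self.data_centers) * server.chips

    def total_power_gw(self) -> float:
        return sum(dc.power_mw() for dc in self.data_centers) / 1000

# Example: a hypothetical two-building cluster of 20,000 servers
# (160,000 chips) draws roughly 0.2 GW of IT load, before cooling
# and other overhead.
cluster = Cluster([DataCenter(10_000), DataCenter(10_000)])
print(cluster.total_chips(), f"{cluster.total_power_gw():.2f} GW")
```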
Kevin Frazier: Excellent. And we've seen headlines from the White House and from OpenAI about the need to massively build out AI infrastructure. What's really valuable about y'all's report is that you give us a sense that even though there's tons of money lining up to invest in these different infrastructure projects, there's a real need to make sure that we actually have the operational and policy capacity to realize those goals.
So one stat just to point out for the listeners here—you all report that globally, the power required by AI data centers could grow by more than 130 gigawatts by 2030, whereas American power generation is forecasted to grow only by 30 gigawatts. And much of that's going to be unavailable or unusable for AI data centers.
So Arnab, can you give us a quick sense of why our energy portfolio is lagging so far behind our aspirations for data centers? What's our current makeup and some of those chief bottlenecks that you all identified in your report?
Arnab Datta: I'd probably center on four broad categories of barriers: economic, legal, political and societal, and environmental.
And so the biggest challenge I would start with is really on the economic front, which is that energy infrastructure, particularly at gigawatt scale, is just very costly and very risky. And if you're not procuring from the grid—we're seeing a lot of AI compute companies moving off the grid—they have to build their own energy sources, and that's very capital intensive.
And even if you take natural gas specifically, which is a proven technology—really the most proven for rapid deployment—there are still hurdles. We've got a supply chain that's strained: GE Vernova, one of the biggest manufacturers of gas turbines, is already sold out through 2030.
There's regulatory uncertainty, and there's potential for stranded-asset risk if you can't get interconnected and eventually sell power to the grid after an AI data center's useful life is completed. So even for gas, we're managing a lot of risks and uncertainty.
And then for next generation energy technologies—things like small modular nuclear reactors and enhanced geothermal—there's something we've described in the report as the costs associated with uncertainty. These are not proven technologies; they haven't been deployed at scale.
And there are a lot of costs that are hard to quantify. They're not risks in the same sense as macroeconomic risks, which you can model and assign probabilities to. There's a lot of uncertainty that becomes part of that decision.
And so even though we see headlines about a lot of investment pouring in, it's not really pouring into project deployment upfront. A lot of that investment coming from compute companies is through power purchase agreements. The challenge with that—and I can get into this more later—is that no one's really owning that uncertainty, so it doesn't net out to a project.
On the legal side, anything that has a federal nexus runs into a gauntlet of procedural laws, like the National Environmental Policy Act, also known as NEPA, as well as local regulatory barriers. And we have an interconnection queue that's essentially a wait time to get projects connected to the grid. So that can be really challenging.
Relatedly, politically and societally, there's been a lot of pushback against data centers being built out because of what we would call typical NIMBY complaints. But there are also very tangible ratepayer effects: these are potentially sources that'll add costs to a lot of folks and a lot of utilities, and someone has to bear that.
And finally, the environmental one I mentioned: if we don't find alternative means to develop this infrastructure rapidly, either through natural gas or through some of these next generation technologies, we could end up just keeping a bunch of coal plants online, something that President Trump has signaled an intent to do. And the environmental harm from keeping coal plants online is pretty significant.
So I would classify the barriers in those terms.
Kevin Frazier: And I know I'm tooting your horn for you, but I can't recommend this report highly enough to any policymaker or anyone interested in the space, because you all do such an excellent job of putting hard numbers on these questions.
So just for readers who haven't dived into the report yet: the minimum construction time for a new natural gas plant, you all pin at about three years. For next generation geothermal, four years. For those small modular reactors, six years. So just looking at the build time to get all of this new energy infrastructure going, if we're going to realize these bold AI aspirations—to quote the Trump EO on AI, to realize American global dominance in AI—we've got to get building fast. And you all do an excellent job of pointing that out.
And Tim, I want to understand better why these data centers are so energy intensive, and why, from an economic standpoint, it's so important for these data centers to be able to run 24/7. So can you walk us through the economics of needing to get a return on investment on these chips in a very finite amount of time?
Tim Fist: Yeah. In general, a single AI chip requires a fair amount of electricity. A single server, which contains eight chips—this is the basic building block of a data center—draws around 10 kilowatts. So this is a pretty hefty amount of electricity: powered across the course of a year, it's pretty much equivalent to a single American home. And you have thousands to tens of thousands of these in a data center.
And though these chips require a lot of electricity, it turns out the main cost in actually building and operating a data center is the chips themselves—the cost of acquiring the hardware rather than the operational cost of electricity. And because chips keep getting better over time, with Nvidia now releasing new chips every single year, the lifetime of a chip is fairly limited before you need to replace it with the next generation.
We estimate that at around four years. So if you net that out and look at the total cost of ownership—the hardware, which you need to replace every four years or so, versus the operational cost of electricity—the hardware absolutely dominates. It's 80 percent plus, whereas electricity is more like 10 percent. What all that means is that to get the best return on your investment, you want to power your data centers, your clusters, with 24/7 power.
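A rough worked example helps here. The sketch below is ours, not from the report: the $250,000 server price and $0.08/kWh power price are illustrative assumptions, while the 10 kW draw and four-year hardware life are the figures Tim cites. It shows why hardware amortization dominates and why idle hours are so expensive.

```python
# Back-of-the-envelope total cost of ownership for one 8-GPU server.
# Assumed inputs (illustrative, not from the episode): $250k per server
# and $0.08/kWh industrial power. The 10 kW draw and 4-year life are
# the episode's ballpark figures.

SERVER_COST_USD = 250_000
POWER_KW = 10.0
PRICE_PER_KWH = 0.08
LIFETIME_YEARS = 4
HOURS_PER_YEAR = 8_760

def annual_costs(utilization: float) -> tuple[float, float]:
    """Return (hardware amortization, electricity cost) per year."""
    hardware = SERVER_COST_USD / LIFETIME_YEARS  # paid whether or not it runs
    electricity = POWER_KW * HOURS_PER_YEAR * utilization * PRICE_PER_KWH
    return hardware, electricity

for util in (1.0, 0.5):
    hw, elec = annual_costs(util)
    total = hw + elec
    print(f"utilization {util:.0%}: hardware {hw/total:.0%} of cost, "
          f"electricity {elec/total:.0%}, "
          f"cost per utilized hour ${total / (HOURS_PER_YEAR * util):.2f}")
```

Under these assumptions, hardware is roughly 90 percent of annual cost at full utilization, and dropping to 50 percent utilization nearly doubles the cost per useful compute-hour—which is the economic logic behind wanting 24/7 power.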
And there's this question of, if you want to lead in AI, how much electricity do you actually need? We looked at this from two perspectives. First, the amount of computation that the most advanced models need, which is roughly quintupling each year—a factor of five each year. And second, the announced plans of the AI industry—think projects like Stargate, announced by OpenAI and a few others. And basically, both these trends point to needing at least one gigawatt for a single training cluster by 2027, and then at least five gigawatts by 2030.
But you've got to keep in mind that's just a single cluster for a single company. We also need to look at this at the ecosystem level. And at the ecosystem level, you have many companies that want both chips and energy, and they want to use them both for training models and for deploying them to many thousands to millions of users.
And as you said, we estimated that globally, the total size of this ecosystem could grow by over 100 gigawatts over the next five years. Others have made more aggressive estimates there; it could possibly be much more. And our basic argument is that you want to capture as much of this growth in the United States as you can, especially on the training side.
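To see how those per-cluster anchors compound, here is a small sketch—again ours, using only the 1 gigawatt (2027) and 5 gigawatt (2030) figures from the episode—that backs out the implied annual growth rate in per-cluster power.

```python
# Implied annual growth in per-cluster power, from the episode's figures:
# at least 1 GW in 2027 and at least 5 GW in 2030.
base_year, base_gw = 2027, 1.0
target_year, target_gw = 2030, 5.0

growth = (target_gw / base_gw) ** (1 / (target_year - base_year))
print(f"implied power growth: ~{growth:.2f}x per year")  # ~1.71x

# Note this is well below the ~5x/year growth in training compute cited
# in the episode; the gap is plausibly absorbed by hardware efficiency
# gains and other factors, since each chip generation computes more per watt.
for year in range(base_year, target_year + 1):
    print(year, f"{base_gw * growth ** (year - base_year):.1f} GW per cluster")
```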
Kevin Frazier: To stick on this focus on the actual operations of AI for a second, you also dive into the importance of the balance between training a model and inference—that sort of prompting of a model by an end user. Can you break down how that dynamic may play out in shaping our energy needs and where data centers may end up being located, based on a preference for training advanced models versus just enabling as many people as possible to use them to the fullest extent possible?
Tim Fist: Yeah. This is the distinction between, as you said, training—where you're developing the model itself, so you've got a huge data set, you're training the model on that data set, and you're creating this advanced system that you can then deploy—and inference, where once you deploy the model, you need to think about how to serve it to a bunch of users at the same time.
The nice thing about training, from an energy builder's perspective, is that it's kind of location agnostic: as long as you have the power available, you can basically build these training data centers anywhere. There are obviously logistical considerations around things like cooling water and getting assets to the site, but there are far fewer constraints on where you actually place it.
Compare this to inference, where, because you want low-latency connections to users—you want users to be able to interface with your system fairly quickly—you want to place these close to existing high-bandwidth telecommunications infrastructure. So there's a bit more of a locational requirement around inference.
Combine this with the general fact that training a model requires a whole bunch more compute and a whole bunch more energy than running it does, and you can think of training infrastructure as generally highly centralized but location agnostic, while inference infrastructure doesn't need to be as centralized but has a few more hard requirements on where you can put it.
Kevin Frazier: Excellent. So we know that these companies, despite all of the barriers we've already discussed, seemingly have out-of-this-world plans for realizing bold projects by the close of the decade. Microsoft is aiming for a 5 gigawatt AI supercomputer by 2028. OpenAI, obviously, is moving forward with the Stargate Project.
Can you give us a sense of how these projects are developing right now? What are we seeing from OpenAI with respect to the Stargate Project? Are we already seeing reports, for example, of things not moving as quickly as hoped? Are they seeing headwinds with respect to things like finding where they can actually build different data centers and finding the contractors necessary to build them? What sorts of barriers are already appearing for these bold endeavors?
Tim Fist: Yeah, so I think it's useful to differentiate what companies are actually building now from the plans they've announced—keeping in mind that these companies are generally very secretive about which sites they're bidding for and where they're trying to connect to transmission infrastructure, because there is very much a race dynamic between companies in finding these regions and outbidding each other to try and get there.
And so if we look, for example, at the Stargate Project, the general understanding from public reporting is that there's the part of the project that is basically under construction now—the initial $100 billion of investment—and then there's the bigger part, going up to the total $500 billion that was announced, which to my understanding hasn't yet materialized into specific physical locations that are currently being built out.
And so the part that's materializing now seems to be the site that OpenAI is building out in collaboration with Oracle and Microsoft in Abilene, Texas. This is currently a 100,000 GPU cluster. And based on reporting, it looks like they are powering this with existing grid capacity at that site.
We generally see a lot of behind-the-meter buildout across the industry as well. Lots of companies are installing natural gas turbines on site—often these are trailer-mounted, modular units that you can bring on site to make things move a lot quicker.
Companies are also signing agreements to use nuclear power. So you might have seen the news that Microsoft is keeping the Three Mile Island nuclear plant open and buying a large fraction of its electricity to power data centers co-located with that site in the future. AWS—Amazon Web Services—is doing a similar thing, buying some fraction of a nuclear power plant's output. In general, though, the amount that you can get from the existing grid and from keeping nuclear plants or even coal plants online is fairly limited.
Overall, you're not going to get into that 100 gigawatt kind of range just by thinking about existing capacity. So the big question facing all of these firms is: how do we build new generation, and how do we build it really quickly?
Kevin Frazier: Arnab, can you flesh out a little bit more why this move to the next generation of energy sources is going to be a really tough wall for these companies to get over?
Arnab Datta: Yeah, absolutely. So as Tim alluded to, there are only so many shuttered nuclear plants that we can bring back online. After Three Mile Island and, I think, Palisades in Michigan—after a number of these, you kind of run out.
You need to build new sources and new generation, whether that's traditional nuclear—like the AP1000s at the Vogtle plant, which recently came online and finished construction after about 13 years—or next-gen technology, like the enhanced geothermal systems the company Fervo is developing, or small modular reactors.
And one of the challenges you see—and this is a place where the headlines don't always match what's underlying them—is that every other day you see huge investments in energy infrastructure from compute companies. And what those investments predominantly take the form of is power purchase agreements, as I mentioned earlier.
To get a project deployed, a PPA is part of the puzzle, but you also need debt and you need equity. And the challenge with PPAs is that when you get into financing an energy project, particularly next-gen—I mentioned all that uncertainty—someone needs to own that cost. Debt investors like banks are typically reluctant to own that; they tend to shy away from risky investments. And equity has its own limitations.
And so a power purchase agreement doesn't necessarily crowd in capital, because everyone is reluctant to own that uncertainty—whether it's associated with the risk of litigation (something we call the litigation doom loop, which projects can get into when they have a federal nexus and get wrapped up in deep litigation) or with supply chain challenges. Just building out the nuclear supply chain could take a decade if you count the workforce as well.
And so I think that's something we really need to grapple with when we think about this actually penciling out into project deployment.
Kevin Frazier: One of the things you all explore as well is the technological uncertainty behind these new generation sources of energy. So geothermal—everyone's super hyped on it, and everyone's battling to make their state the next hub of geothermal. We also have wind, and we have solar continuing to expand.
When we're thinking about some of that uncertainty, can you give us an evaluation of where it's arising from? Is it technological—will SMRs work, will geothermal work—versus regulatory hurdles? What's the breakdown there? Is NEPA the source of uncertainty, in that we just don't know if we're going to be litigating for 10 years, or is it that we just don't know if geothermal is actually going to pan out to the extent we're hoping?
Arnab Datta: Yeah, I think for most of these, the technological uncertainty is real. They would probably fall at different TRLs—technology readiness levels—but they'd all be at the more immature end of that range. We just haven't had a demonstration at gigawatt scale, or a cluster of small modular reactors deployed in the U.S. We haven't had a utility-scale, next-gen enhanced geothermal project deployed here.
One we should mention as well is solar plus storage—batteries. If you can get storage to a place where it's viable as basically the equivalent of firm power, where you can store it for over 24 hours, that could really dramatically change all of this. But we're still not there.
One thing we tried to dig into in the report as well is that even if, let's say, you got technological readiness for these next generation sources, there's a whole ecosystem that has to develop to get to commercial scale. And nuclear is a really tough one: we essentially don't have a nuclear industry in this country. We need a supply chain. If you take companies like X-energy, they're developing their fuel plant in addition to their actual production plants.
And so I think it's important to focus on the fact that for geothermal, we do have a supply chain that's pretty ready to go, because it relies on the shale oil supply chain, which can be turned toward developing geothermal energy. So to varying degrees, we have an ability to scale up, but all of these are going to be pretty challenging.
Kevin Frazier: Frequent listeners to the pod and those who are really in the weeds on AI may be thinking, hey, didn't the Biden administration issue an executive order on AI infrastructure that was making federal land available for the buildout of these very projects? Why is this an issue? Didn't we clear a lot of the barriers to this buildout?
So can you all help us understand the shortcomings of that EO with respect to AI infrastructure, assuming that the Trump administration ends up adopting some similar EO to try to make it as easy as possible to build out this AI infrastructure? What components were missing from that EO, in your opinion, that hopefully the Trump administration will learn from if it is going to act on this AI dominance agenda?
Tim Fist: Yeah, I think it's worth stepping back a second and just sort of going over what our core recommendations are and then kind of comparing them to what the Biden admin has already done.
So in general—we've talked about a bit of this already—there are really three key problems that we're trying to address with the policy playbook around how to build gigawatt-scale data centers in the United States within just a few years.
The first is, obviously, that we need more generation capacity. So we need to make it easier and faster to build both new power plants and transmission infrastructure. And as we talked about, permitting—especially for new transmission infrastructure like power lines—is a nightmare. In 2013, we built around 4,000 new miles of transmission lines; now it's more like 500 miles per year. And it takes on average 10 years to build one of these lines, mostly due to holdups in permitting.
The second big problem is that supply chains for key components have really long lead times. We've mentioned this on the gas turbine side, but there are other components with huge lead times too, one being electrical transformers, which have a lead time of at least one to two years. And you need these to build out grid-scale electricity infrastructure.
Then third, the big problem that we want to solve is what we describe as a market failure around security. The basics here are that American companies are building models that they say, within just a few years, could be used to essentially reshape the global balance of economic and military power.
So think AI systems that can autonomously carry out massive cyber attacks, or automate large functions of scientific R&D, or serve as substitute remote workers for many jobs. If this is true, then we really need to be protecting these systems against theft by bad guys. And a lot of the security issues are at the level of the data center.
But the problem is that the kind of bad guys we're talking about are extremely capable, nation-state-grade hacking groups from places like China, and protecting against that level of actor is both pretty hard and pretty expensive. And if a company invests in this to the level required to defend against this class of threat actor, it risks falling behind everyone else who isn't doing this.
And so companies care about security, and they want to be investing in these things. But if it's a choice between being the most secure company in America, able to protect your AI models and systems, and being six months behind everyone else, they're all going to choose being ahead. That's not a hard decision to make.
And so the core thing that we recommend, and that features very prominently in the Biden infrastructure executive order, is basically combining two things. The first is pretty ambitious investments in cutting permitting times for building new infrastructure, and in enabling early-stage financing for next generation energy technologies—which addresses the class of problems around uncertainty that Arnab was talking about before.
Our proposal is to do this through what we call special compute zones. These are regions of the country where there's existing or promising electricity infrastructure: geothermal resources, where you have the potential to do a lot of drilling and access a lot of geothermal energy close to the earth's surface; coal sites, where you have existing transmission infrastructure that you can plug new behind-the-meter generation into quite easily; and federal lands, where the government has authority to really speed things up if it wants to.
The second is then tying this assistance to strong security requirements. Because the fast permitting and the energy access are so valuable to companies, you can tie security requirements to them, and in doing so you turn security into a sensible commercial decision rather than something that puts you at a disadvantage relative to your competitors.
So we think that tying these two things together is essentially a great way to solve this market failure around security. And in January, the Biden administration put out this AI infrastructure EO, which focuses on both of these things. The main thing we're trying to get at is that our recommendations go further. The two main differences are, one, allowing the buildout to happen with natural gas, which is going to be required to hit timelines in the short term.
The Biden EO has pretty strict requirements around using clean energy for all of the electricity generation that you're building out to support this. This is just a trade-off: if you want to actually build this stuff in two years, as the Biden EO lays out—they've got these really ambitious timelines, like two years to build this gigawatt-scale infrastructure—the reality is that you just need to use natural gas.
And if the alternative to gas is keeping coal sites online, then gas is just a much more sensible option, both from a long-term economic perspective and from an environmental perspective.
The second big difference is that we recommend using the Defense Production Act both to speed up permitting on federal lands and to resolve supply chain issues. The Biden EO focuses a lot on making federal lands available for both energy and data center buildouts. The problem with doing this on federal lands is that NEPA basically automatically applies, so you also need a solution for how you make NEPA not a problem anymore. And the EO doesn't really have an ambitious plan for solving that.
I can talk more about the specifics of using the DPA here, but let me first say why we think this is a sensible use of it, because the DPA is often a controversial tool.
Just as a little bit of background—I'm sure most of your audience knows this—the DPA grants the president pretty broad authority to intervene in the economy and supply chains to ensure the supply of technologies that are deemed essential to national defense.
And our claim is that AI definitely fits within this scope. We see powerful AI systems being increasingly adopted by the U.S. and Chinese militaries across areas like sensing, surveillance, command and control, and autonomous weapons.
And many of the big companies who are building out these huge data centers are now directly contracting with the U.S. government on these kinds of things—including companies like Palantir and OpenAI themselves. Now Google has opened itself up to working on military applications.
So we think using the DPA both to speed up permitting and to resolve supply chain issues—by taking existing orders for things like transformers and gas turbines and, where they relate to data center infrastructure, moving them to the top of the queue—makes a lot of sense.
Kevin Frazier: And I really appreciate that you all take this sort of—to borrow a phrase from PETA—feeding-two-birds-with-one-scone approach, where you're saying, hey, if you want this energy, great, but you're also going to need to focus on these security aspects. Making that a two-in-one combination makes so much sense from a sort of America First, AI dominance perspective. You don't want to go to all the work of building out new energy infrastructure only to have bad actors tapping into all of the AI developments that occur.
So Arnab, thinking about this sort of end run around NIMBYs: NIMBYs traditionally were focused on hyper-local concerns. If our special compute zones, as you all propose, are on federal land—well, suddenly NIMBYs maybe aren't actually in your backyard, because they don't have a backyard on those federal lands.
But then we still have, as Tim pointed out, these NEPA questions. And you all study closely what sorts of exceptions can apply under NEPA. How have we seen those used previously, and why might those exceptions be available here? Or what needs to be done with NEPA to clear that barrier?
Arnab Datta: Yeah, so I think a couple things. There's a DPA-specific aspect of this, and then there's a generalized aspect of it.
From a DPA-specific perspective: in our report, we basically advocate for the president to invoke Title I and Title III of the DPA. Title I is basically the prioritization authority that Tim mentioned, and Title III is a broad financial assistance authority. And when it comes to either of those, to the extent that NEPA would be invoked—I don't think it would be invoked for Title I, but for Title III, there are a number of national security exemptions.
Basically, for NEPA, we have the emergency provisions, which allow agencies to proceed with urgent defense-related projects in consultation with the Council on Environmental Quality—which, I should mention, is wrapped up in some litigation of its own right now, so that could affect all of this. But historically, that's one lever that's available.
There are also robust protections for classified and sensitive actions, and there's an Endangered Species Act national security exemption. And I think most assertive here is that the Defense Production Act's Title III lending authority specifically can be used "without regard" to limitations in existing law.
And "without regard" clauses have historically been upheld by the courts, particularly in the national security context. This is one of those areas where—if you take Justice Gorsuch's nondelegation reasoning in Gundy—the president is delegated an enormous amount of authority by Congress, but it's also an area where he already typically has authority, seen as within the purview of managing national defense and security. And so I think that authority would be upheld.
From a more general perspective, we do advocate for this all-of-the-above approach, and longer term, we think there are a number of regulatory actions on NEPA that can be taken to accelerate some of these next-gen energy sources.
And one specific example I would point to that we examined—and I think it can be analogized to agencies writ large—is that there are a number of ways NEPA impacts the government's ability to deploy capital quickly, and not in ways that really make sense.
So the example I would use is the Loan Programs Office, which was given an enormous amount of funding under the Inflation Reduction Act. It can lend or make loan guarantees for next-gen energy projects. But let's take nuclear energy, for example.
The NEPA approval and permitting process for a nuclear project is tied to approval by the Nuclear Regulatory Commission, so you can't get NEPA approval without NRC approval. In the abstract, there's some sense to that. But what it means is that no funding is unlocked until you have NRC approval, which can be a very long process.
And companies have to incur a lot of costs even before that process is done. They need to start procuring all the things I mentioned earlier to get a supply chain ready—that has to happen years in advance. They have to do testing on the site, soil testing, things like that. These are things that don't have any environmental impact, but LPO can't provide any funding ahead of time, even though these are eligible expenses for LPO to cover.
And so we advocated for the Department of Energy to adopt a categorical exclusion from NEPA, which would basically unlock a certain portion of the loan or loan guarantee so it can be disbursed ahead of time, so that companies don't need to pull from their own cash, their own equity, to pay for these procurement contracts, for example. What that would do is let LPO own a bit of that cost of uncertainty I mentioned earlier, ahead of time.
So that's one example, and it's feasible. Agencies like the Department of the Interior have categorical exclusions for routine financial transactions, but companies don't when an agency is giving funding to them. So it's kind of a perverse situation where the agency would be less burdened if it were just paying for this directly and developing the project itself than the private company is. And I think that's a challenge we need to overcome.
Kevin Frazier: Before I let you all go to, hopefully, start writing yet another great, insightful report: one question that you raise is the possibility that AI companies may look abroad for cheaper power and to take advantage of massive subsidies, for example, from Gulf nations.
How real is that threat of AI infrastructure moving abroad? And why does that matter from a national security perspective and from an AI dominance perspective, if AI dominance is indeed the approach that the government continues to pursue?
Arnab Datta: Yeah, as you mentioned, we see this wave of investment globally at the moment into AI data centers. This is often led by U.S. companies like Microsoft and BlackRock, but also foreign companies like MGX, a big Emirati investment firm, as well as Scala, a Brazilian data center company.
And a key region is the Gulf, where you have both plentiful energy and these huge sovereign wealth funds, which are deploying capital like crazy to support AI data center building projects globally. Just last week, we saw a $50 billion planned investment from MGX, this Emirati firm, into data centers in Europe.
This is also happening on the China side; we see a bunch of investment happening there too. In 2023, they began work on what they call their national computing network—a centralized network of compute spanning the entire country. They're also doing massive subsidies for chip production, which have been described as basically a blank check.
Early last year, there was $50 billion going into China's domestic chip industry from the government. That's the size of the CHIPS Act in just a single year. And then more recently this year, soon after Stargate, we saw a $150 billion announcement to invest in infrastructure.
And obviously, China has over the last 20 years been able to bring on new energy generation much faster than the U.S.—about 20 times the pace since 2000. And over a period of five years, from 2014 to 2019, they were able to bring 25 gigawatts of new nuclear energy online, which is absolutely crazy.
And this is great for local champions like DeepSeek, which has been in the news a lot recently, and whose CEO is on the record as saying that access to AI compute is their main blocker as a company. So what all this means is that you do see this race globally to build out bigger and better data centers, and companies and money are flowing to the places that can build the fastest.
I will say one blocker on this happening in many countries is the recent AI diffusion rule that came out of the Department of Commerce. For those who aren't familiar, this is basically an export control regulation that carves up the world into three tiers of countries.
You have, basically, the U.S. and its allies, who get fairly unfettered access to AI chips and the ability to build data centers. You then have the bad guys—the arms-embargoed countries, so China and a handful of others—who currently don't get access to chips; they're completely export controlled. And then you have this middle tier, Tier Two, which is every other country that isn't in those two groups. For those, there's actually a cap on the number of AI chips you can get, both on a per-country and a per-company basis.
And so if the Trump administration keeps this regulation in place, this is likely to be a blocker on the total scale of data centers that can be built out overseas. I will say that the caps it currently implements aren't really an impediment to current plans over the next few years: you're still looking at hundreds of thousands of GPUs per company being built in a single country, and that's much bigger than the actual clusters that are online today. I think these kinds of restraints are going to really kick in in a few years, once companies want to build out clusters at more the million-GPU scale.
But I think those are the main considerations there. You've got the export control side, which is attempting to keep as much of this in the United States as we can, but also a bunch of money flowing from investment companies, as well as domestic investment in China. And China doesn't really care about U.S. export controls and is trying to get around them as quickly as it can, both by smuggling chips and by producing them domestically.
So, yeah, those are the kind of global dynamics here. And there's a real risk that the U.S. loses its lead at the frontier of AI development simply by not being able to build as quickly as China can domestically.
Kevin Frazier: Well, that is quite the conclusion to end on, knowing that we've got this crazy race that's going to unfold over the next couple of years. Can't wait to have you all back to get a sense of how the pacing is changing and whether or not the U.S. is indeed able to keep up.
So thank you Tim and Arnab for coming on. We'll have to leave it there.
Tim Fist: Thanks very much.
Arnab Datta: Thanks, Kevin.
Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter at our website, lawfaremedia.org. You'll also get access to special events and other content available only to our supporters.
Please rate and review us wherever you get your podcasts. Look for our other podcasts, including Rational Security, Allies, The Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. Check out our written work at lawfaremedia.org.
The podcast is edited by Jen Patja. Our theme song is from Alibi Music. As always, thank you for listening.