Cybersecurity & Tech

Lawfare Daily: Chris Miller and Marshall Kosloff on the Abundance Agenda’s Implications for National Security

Kevin Frazier, Alan Z. Rozenshtein, Christopher Miller, Marshall Kosloff, Jen Patja
Tuesday, February 11, 2025, 8:01 AM
Discussing AI supply chains.

Published by The Lawfare Institute
in Cooperation With
Brookings

Chris Miller, a professor at the Fletcher School at Tufts University and Nonresident Senior Fellow at the American Enterprise Institute, and Marshall Kosloff, Senior Fellow at the Niskanen Center and co-host of the Realignment Podcast, join Kevin Frazier, a Contributing Editor at Lawfare and adjunct professor at Delaware Law, and Alan Rozenshtein, Senior Editor at Lawfare and associate professor of law at the University of Minnesota, to discuss AI, supply chains, and the Abundance Agenda.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

A transcript of this podcast is provided below. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Intro]

Chris Miller: Well, I would say if you zoom out from the past two years, you've never before seen tech and software companies spending this much money on capital expenditure. So there's been a huge change in the last couple of years, which Stargate is part of. And so we shouldn't, I don't think, say this is nothing new because it is really a major change relative to what the tech sector used to do.

Kevin Frazier: It's the Lawfare podcast. I'm Kevin Frazier, a contributing editor at Lawfare and an adjunct professor at Delaware Law with my colleague, Alan Rozenshtein, senior editor at Lawfare and associate professor at the University of Minnesota School of Law.

Marshall Kosloff: That's the part of the story we should really focus on here. The transition to the physical and the real is going to be bumpy. It's going to be difficult. But I think that's the investment side of the thing, not just is Microsoft taking some of its war chest and spending money on its own programs. I think what really matters is that the outside capital is coming in, and that's a real indicator of where things are heading.

Kevin Frazier: Today we're talking about America's national security from the perspective of its industrial capacity and infrastructure. From chips to armaments, Chris Miller, a professor at the Fletcher School at Tufts University and nonresident senior fellow at the American Enterprise Institute, and Marshall Kosloff, senior fellow at the Niskanen Center and co-host of the Realignment Podcast, help us break it all down.

[Main podcast]

Today's episode is all about the infrastructure of power in the 21st century—semiconductors, AI, and warfighting capacity. The U.S. is in a high stakes race to secure its technological and industrial base. Can we build what we need fast enough, or will strategic dependencies on Taiwan, on foreign AI infrastructure, on fragile supply chains hold us back?

We're going to discuss TSMC's buildout in Arizona, the DeepSeek "Sputnik moment"—depending on who you ask—and what it reveals about the limits of U.S. export controls, and more generally, the abundance agenda, a push to supercharge American production in the defense sector and beyond. Let's get into it.

Chris, we'll start with you. You frame semiconductors as the new oil in your recent book, “Chip War.” Given China's increasing pressure on Taiwan, how prepared would you say the US is for a supply chain crisis if conflict indeed erupts in the region?

Chris Miller: I think the unfortunate reality is that we're horrifically unprepared for that type of crisis. Today, Taiwan is at the absolute epicenter of the world chip industry, as well as the broader electronics and computing industry. Across the supply chains that produce electronics and computing, you, you can only get around Taiwan with great difficulty.

And today, when it comes to the most advanced types of computing systems, like the servers that make artificial intelligence possible, an extraordinary share of the key components, including almost 99 percent of the GPU chips, which are the key chips for AI, are produced today in Taiwan. So we are extraordinarily exposed to Taiwan's own security.

Alan Rozenshtein: What about the recent move of TSMC, the main Taiwanese chip manufacturer, to sort of move some of its production to the U.S., right? So you have this big factory in Arizona, it seems to be going okay—but you tell me. How is that going? And is that a potential way forward for dealing with this, this supply chain concern?

Chris Miller: I think it's an important move, but it's also key to put it in context. It's a medium-size factory as the chip industry goes. It's one plant among many that TSMC operates, most of the rest of which are in Taiwan. And so it's far, far, far from the scale of capacity that would be needed to give the U.S. all of the chips that it needed if it lost access to Taiwan.

So is it a step forward? Yes, no doubt about that, but it's just the first step of many steps that I think need to be taken to build the type of supply chain resilience that would really insulate you from the shock of a crisis in the Taiwan Straits.

Alan Rozenshtein: Chris, a follow up question on the TSMC point. So, you know, one thing that I always wondered about when TSMC first agreed to build this plant in Arizona is what incentive it had, and specifically what incentive the Taiwanese government had in allowing, you know, its main manufacturer to build this stuff in the U.S., given that presumably the less tied the U.S. is to things physically happening in Taiwan, the less it might be incentivized to, for example, go to Taiwan's defense in the case of a sort of shooting war with China over this.

So is your point that this is a medium-size factory to suggest that we're never going to get to a point where enough of this is moved to the United States, either because of technical reasons or because Taiwan just won't allow it, so as to really decouple the U.S. from its Taiwan risk?

Chris Miller: I think you're right that the Taiwanese look at TSMC's geographic diversification as a, as a challenge, as a risk to themselves, because they believe that the more the rest of the world depends on Taiwan, the more likely the rest of the world is to help secure Taiwan. And that is one of the reasons why there's been some reticence in Taiwan about TSMC's overseas investment.

I think also the company itself had never operated large cutting-edge plants outside of Taiwan before, and so just from a purely operational perspective this was a new challenge for TSMC, one that they would not have undertaken had they not gotten a fair amount of pressure from the U.S. government to take this type of step.

Kevin Frazier: Marshall, I was hoping you could discuss a little bit more the extent to which this analogy of semiconductors being the new oil is a relevant north star for some of our national security concerns and domestic policy concerns.

Marshall Kosloff: During the 2010s, there was this period where everyone basically said data is the new oil. That became sort of the hypey phrase to really describe it with. And what makes semiconductors—chips, of course—a much more accurate use of the oil comparison is the fact that chips are just at the center of America's domestic and foreign policy interests right now. Everything from the cars we drive to the literal computers we have; there are military applications, there are civilian applications.

So this should really be at the center of the way we conceptualize the degree to which our country is resilient enough when it comes to its chips capacity. Because I think something that became rather confused over the past two years is the actual objective here.

So is the objective that we entirely reshore all of the United States' chipmaking capacity and make things like they were in the 60s or 70s again? When actually what we were trying to do, and are actively trying to do now, is just have enough of a domestic capacity so that, in the event of a Taiwan crisis or other geopolitical disruptions, we have enough of a resilient base of our own.

Kevin Frazier: Marshall, earlier Alan was talking about the fact that things seem to be going pretty okay in Arizona with respect to the TSMC build out. This is kind of a litmus test for the extent to which we can build big projects.

For anyone who's been paying attention to the TSMC build out, as Chris mentioned, even though it's a medium sized factory, it is ginormous if you just look at it on a geographic footprint. It’s changing the demography of areas of Arizona. My uncle lives in the middle of nowhere, Arizona. He's saying that his pickleball league has entirely changed and he's getting beat all the time. He's pretty upset about it, but more generally, what can we learn from how things are going in Arizona right now?

Is this a success? Should we be inspired that we can emulate this? Or is this giving us signs that we have a lot of room for improvement when it comes to building out this critical infrastructure?

Marshall Kosloff: So this is a multi-year project. There wasn't a world where we would wake up in 2023 or 2024 after the passage of the CHIPS and Science Act and just have semiconductor chip fabs across the country.

I think it's important to understand that while we focus on the national security impetus for reshoring domestic chip manufacturing, we should also focus on the fact that the CHIPS and Science Act, and specifically what TSMC is doing in Arizona, are also a political project.

The Biden administration, when they came in, very much saw the chip reshoring project as a part of this broader effort to sort of turn the United States away from policies that really began in the 1980s—this idea that we would give the market control over different outcomes. We see this relating to free trade policy and NAFTA in the 1990s, then the debates over globalization going into the 2010s.

So from the Biden administration's perspective, this isn't just about the question of can we reshore chips; it's about this question of can a government-centric industrial policy push outcomes in a direction they otherwise wouldn't go?

And I think the thing that's very interesting, when you assess this based on the idea that this is a political project as well as a techno-industrial one, is you recognize that this revealed a lot of difficulties within the American political system. So once the CHIPS and Science Act was actually passed and they began construction and we're assessing the success of these projects, we should understand—we'll get into this conversation later when we talk about the abundance agenda—that this has become intertwined with a debate over the degree to which the government tries to do too much.

So, for example, mandates about the amount of union participation involved in industrial projects, or whether there would be requirements about on-site daycare. There have been a bunch of different sort of he said, she saids in terms of whether or not the delays have been due to sort of sociocultural decisions around unions and childcare.

But it really gets to the fact that a big problem for American liberalism today has been this idea that we're trying to do too many things at once. So I think we should look at this through that perspective as well, too.

Alan Rozenshtein: So we'll get to the kind of broader abundance agenda questions later, but I want to stay on the sort of specific question of chip manufacturing in the United States and the CHIPS Act and all that. So let me actually go to you, Chris. Zooming out from the TSMC project in Arizona, where do you think American chip manufacturing is right now and in the next, you know, three to five years?

And I'm actually also really curious to get your views on Intel's role in all of this. I mean, I still have the Intel jingle—I'm a child of the nineties, so it will live rent-free in my brain for the rest of my life. And yet Intel's fall from grace has really been spectacular over the last couple of years.

You know, Intel's CEO, or now former CEO, Pat Gelsinger, who was brought in sort of as someone to reinvigorate the engineering side of the house, had this really ambitious plan; it wasn't working, or at least wasn't working fast enough for Wall Street, so he was forcibly retired as far as we can tell. And Intel just really seems on the ropes, which is a problem, of course—Intel is the closest we have to a national champion. So riff on this whole mess.

Chris Miller: You know, I think you're right that Intel's challenges of the last now five or seven years are reflective of some of the broader challenges—not just in the U.S. chip industry, but U.S. manufacturing in general.

And there's been some fascinating work done on the overlap in board membership between GE, Boeing, and Intel, three of the great decline-and-fall stories of American manufacturing in the last couple of decades. I think that does speak to some of the shared challenges.

One of which is that U.S. firms that are publicly traded have to be responsive to quarterly earnings. And although our efficient capital markets are in general a very good thing, I think when it comes to manufacturing firms that have to invest a lot of money in R&D and new product development, there is a real impact here that was visible in GE, visible in Boeing, and visible in Intel—underinvesting over a sustained period of time, declining to take technological risks.

And I think we see Intel struggling to rectify the effects of that type of short-term, financially focused decision making. And now the company has been suffering from a talent exodus. It's been suffering from a loss of customers, and it's been unclear right now whether the company can be turned around. And if it can't be turned around, it's a huge problem for the United States, because it's still the largest chip maker in the United States and the largest maker of advanced chips in the United States at scale. TSMC, of course, is a small player there; Samsung is a small player. Intel still has the most capacity when it comes to advanced chips.

And that's why I think the U.S. turned both to incentivizing U.S. firms through the CHIPS Act, but also incentivizing foreign firms like TSMC, like Samsung, because there was a perception that if you only relied on U.S. firms, you were relying on a small segment of companies, many of which had developed some pretty deep problems over the past decade.

Kevin Frazier: Turning to you, Marshall, I'm curious. We've talked about some prior national champions—Boeing, Intel, some of the stalwarts of our key defense sector and key industries writ large—and some of the crises they're experiencing.

We saw a new, perhaps, bout of a crisis of confidence with the release of DeepSeek's R1 model just about a week ago or so, where trillions of dollars were lost in valuations. Folks freaking out about the ability of our key AI leaders to be able to compete on a global level and whether we had accurately put our eggs in the right basket with respect to investing in some of these large compute efforts and, more generally, in AI infrastructure.

Are you freaking out? You've got a lot of other things going on in life right now, but is this keeping you up at night? Did we bet on the wrong horse, the wrong strategy, or do we need to really see DeepSeek as a moment of revelation to pivot away to a different approach?

Marshall Kosloff: So if I worked in venture capital or at Microsoft, I'd probably be freaking out and not able to sleep right now. And I think the thing that we should note at the start though is that the DeepSeek debut conversation is very early on. So over the next few months, there's going to be all sorts of tests. We're going to really sort of, you know, take the tire to the road here.

But the reason why I think folks in the United States should be freaking out about what is happening right now is it appears as if the Chinese have been able to build at least an American AI equivalent for much cheaper. If we're looking at the hundreds of billions of dollars that have been and will continue to be invested into American AI companies and AI efforts, we just see the Chinese version being so much cheaper. That really brings into question this broader effort that's actually being undertaken here.

And most importantly, a narrative that we've actually had here, because I think if we look to the tech investors and we look at the AI researchers, they've been very certain about one specific fact, which is despite all the different ways that the U.S. is uncompetitive with China, AI leadership was supposed to be the one thing we had. So the DeepSeek model's debut has really brought that into question.

The other thing that it brings into question, though—and this is incredibly important. You're probably about to ask about this, but I think it's relevant to hit here—is it's brought to mind the possible failure of the U.S. export policy limits toward chips when it came to China. So a few years ago, as part of the Biden administration's effort to, in many ways, continue the Trump administration's Cold War 2, great power competition—pick your euphemism—approach to China, significant export limits were placed on chips that could be used to power Chinese AI.

Two trends are really relevant here. So one, it's been widely reported that there's just a large amount of chip smuggling going on. This isn't saying NVIDIA is doing this, isn't saying that a specific Chinese company is doing it, but there just actually exists a market for getting good chips, far beyond the export controls, into China.

The second part that's also relevant here, though, is that—and this is where there's a difficulty in writing the actual laws and in the actual application of the law—there were limits placed on the quality, the strength, of the actual chips that could go to China. And NVIDIA actually developed a set of chips that were below the actual standards set by law, but it turns out were more than good enough to get DeepSeek where it got today. Now, those exemptions were recently closed, but there was a one-to-two-year period where you did have this loophole that was exploited.

So it's really brought into question, A, once again, the state of American AI leadership, and B, the effectiveness of our export control policy, which was very much treated as a big win and taken as a given success.

Alan Rozenshtein: All right, so there's a ton there, and I want to unpack it.

So it seems like there are three sort of separate questions here, or at least three that the DeepSeek issue raises. Sort of the first is this question of how far behind China is in the AI race? The second is, what is the relative importance of compute versus clever architecture? And then the third is this export control question, which Marshall, you touched on, but I think there's probably also more to say.

I want to start with a kind of basic question about how close China is to America's heels. So I understand if DeepSeek forces us to maybe change our timeframes and say, okay, well, we thought China was, you know, two years behind us; maybe they're actually 12 months behind us. Maybe they're nine months behind us.

But why is that such a dramatic change, if in fact that is a change? Because, you know, China's a very large country—like, it's over a billion people, they're very smart, they spend a lot of time focusing on STEM. Did anyone think that somehow we were going to be able to outsmart China forever? And relatedly, if the timeline has changed from us being two years ahead to, let's say, only 12 months ahead, is that such a big change in terms of thinking about what counts as a true strategic advantage in AI?

I mean, I, I'm just personally surprised if this is our Sputnik moment, because if so, it seems like we have been criminally underrating Chinese technical prowess, which should have been obvious for the last 20 years.

Marshall Kosloff: So I think the term Sputnik moment is actually the best one to use here because this is a political reality. Sputnik was not actually a true game changer. It wasn't as if Sputnik was launched and then the next day the Soviets had a missile advantage and nuclear weapons were being rained down on American cities. No, what really mattered was—

Alan Rozenshtein: It was just a metal softball that went beep, beep, beep in orbit, basically.

Marshall Kosloff: Exactly. So, what really mattered in the Sputnik moment story was the narrative.

So, what I'd really like to speak about here is just that if you spend time in venture capital, if you spend time in D.C. policymaking circles, the narrative really just was, okay, when it comes to defense preparedness, when it comes to the fact that the Chinese navy is significantly larger than the United States Navy, when it comes to our ability to manufacture critical technologies, we are in a really deleterious position vis a vis China.

But at the end of the day, we still have AI. There still is this thing that we have that we could build a lot on. So, that not being true, I think is a, is a huge, like really substantial narrative reset when it comes to the D.C. policy conversation.

Two, the other thing that really matters here is it really brings to question the actual model that we've relied on for AI success here. We do have these big tech companies, we do have these partners, we do have access to capital. This is why, you know, the Stargate project really matters. Not all of the $500 billion has been raised, but $100 billion is ready to go. They're going to raise more. That was our model. We have these companies, we have this capital, we have this lead. Therefore, within this very specific category, we could treat AI as this X factor when it comes to great power competition.

So that just going away, when no one expected that, is going to play a huge role in shaping, frankly, the next several years of policy conversation. And it's not quite clear to anyone, other than the cope-adjacent point that it's early days and we don't actually know the long-term implications of DeepSeek. That's just where we sit right now.

Kevin Frazier: Prior to getting Chris's take on this, I do just want to stress Marshall's point there that this is so early days with respect to the DeepSeek conversation. We're still trying to find out just how far ahead they supposedly are on this clever architecture. There's debate right now about the extent to which OpenAI is kind of like, yeah, I see your game, I can do that too—I've got that in my repertoire.

So we just don't know entirely if this was as monumental of a defeat. But to Marshall's point, I think it is just a huge shot across the bow in terms of changing that narrative. It doesn't really matter if OpenAI comes out and publishes some arXiv deep-cut research paper that says, oh, we knew this all along—the Sputnik moment has already passed.

So Chris, can you give us some more technical insights into how this may warrant a sort of shift in paradigm and policy?

Chris Miller: What I'm most surprised by is the extent to which D.C. and Wall Street was surprised by DeepSeek because in the AI world, DeepSeek was a matter of regular conversation throughout the last third of 2024, which is when they started releasing some of the papers that undergirded their reasoning model, which is the one that captured the public attention.

And so I was so surprised when DeepSeek hit the headlines that I looked back through my email correspondence, searched DeepSeek, and found over a hundred email conversations about DeepSeek dating back to mid-2024, which is evidence that in AI circles it was not a surprise. It was known to be one of the leading Chinese labs. Best estimates are that they have tens of thousands of NVIDIA GPUs, speaking to some of the loopholes that Marshall mentioned, but nevertheless showing how reliant they are on advanced chips from the U.S.

And so the puzzle is actually why this was a surprise. I'm not fully bought into the thesis that the release of their newest model was intended to correspond with Trump's inauguration—although I'm open to that being an explanation. But it does seem like the public concern, the fact we're talking about a Sputnik moment, does not align at all with the realities of what the AI community was discussing.

In terms of the technicals, I think what we know is the headline view that they trained their latest model with a tiny amount of compute is just not true. They're one of the better-resourced companies in China when it comes to access to chips. And we've seen a number of AI leaders in the U.S., like Anthropic's CEO, say that actually what DeepSeek has done is basically what we did for our last model.

Two, in terms of the cost declines that DeepSeek has demonstrated, this fits exactly the trend of cost declines that we saw in 2023 and 2024. There's been an order-of-magnitude decline roughly every six months in AI since late 2023. And so yes, DeepSeek is offering lower costs, but it's offering lower costs in a way that was already part of an ongoing cost war. And it's worth noting that the cost war was probably started, I think you could argue, in late 2023 by a French company, Mistral, which first led the price cuts for high-quality models.

So again, nothing really brand new here in what DeepSeek is doing in terms of offering cheaper AI. So it might be a Sputnik moment for D.C., but in some ways, I think it illustrates that D.C. needs to follow the AI conversation more closely, because in Silicon Valley, DeepSeek was regularly talked about late last year, and I don't think anyone in Silicon Valley sees DeepSeek as anything other than what is to be expected given prior trends.
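To make the cost-decline trend Chris describes concrete, here is a minimal back-of-the-envelope sketch in Python. The 10x-every-six-months rate is taken from his comment; the starting price per million tokens is a purely illustrative assumption, not a real quote.

```python
# Back-of-the-envelope sketch of the cost trend described above:
# roughly an order-of-magnitude (10x) decline every six months since late 2023.
# The starting price per million tokens is illustrative, not a real quote.

start_price = 30.0           # hypothetical $ per million tokens in late 2023
decline_per_half_year = 10   # 10x cheaper every six months, per the conversation

for half_years in range(5):  # late 2023 through late 2025
    price = start_price / (decline_per_half_year ** half_years)
    print(f"after {6 * half_years:2d} months: ~${price:,.4f} per million tokens")
```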

Alan Rozenshtein: I wanted to talk about the compute question, because, you know, as Kevin mentioned, one of the results of DeepSeek releasing its models is, you know, the loss of a trillion dollars of valuation on the stock market, because everyone is selling off NVIDIA, worried that this shows that we won't need compute.

And I, I will admit, I was quite puzzled by this, given that one way of viewing DeepSeek is admittedly that maybe compute is overrated. But another is to just remember there's this thing called the Jevons Paradox, which is well known by economists, which is the idea that as things get cheaper, you don't necessarily spend less money on them, because if they get cheap enough, you just use more of them, and the aggregate of that can be that you end up spending way more, even as the price goes down.

And compute seems like a perfect example of this. I mean, we're at the very, very beginning of integrating AI into everything. So even if AI gets a thousand times cheaper, that'll just mean that we integrate it into a thousand times more stuff. And at the end of the day, won't we end up just using more compute? To me, this seems like an obvious possibility, and yet the market sold off God knows how much NVIDIA stock, so what am I missing?
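As a rough illustration of the Jevons Paradox arithmetic Alan is gesturing at, here is a minimal Python sketch. Every number in it is hypothetical, chosen only to show how aggregate spending can rise even as the unit price collapses.

```python
# Illustrative Jevons Paradox arithmetic: unit price falls 1000x,
# but usage grows even faster because AI gets embedded in far more applications,
# so aggregate spending on compute ends up higher. All figures are hypothetical.

old_price_per_query = 0.01      # $ per AI query (assumed)
old_queries_per_year = 1e11     # assumed annual usage at the old price

new_price_per_query = old_price_per_query / 1000    # "a thousand times cheaper"
new_queries_per_year = old_queries_per_year * 5000  # demand grows faster than price falls

old_spend = old_price_per_query * old_queries_per_year
new_spend = new_price_per_query * new_queries_per_year
print(f"old annual spend: ${old_spend:,.0f}")  # $1,000,000,000
print(f"new annual spend: ${new_spend:,.0f}")  # $5,000,000,000 despite 1000x cheaper queries
```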

Chris Miller: I think that framework is right. And I think that's probably why the day after NVIDIA sold off by 17%, it was back up retracing half of its losses.

I think the other key dynamic that is not just about DeepSeek, but it's about what OpenAI and Anthropic and others have been releasing over the past couple of months is that there's been a major shift towards getting your improvements from AI, not on the training, but on the inferencing, which is why you're letting models run for longer when you ask them questions.

And this is what DeepSeek does, but it's, it's part of the general trend here, and that is especially compute intensive. And one of the things we've learned with reasoning models is that the longer you let models think about a question, the better answer you get, which means more and more compute. And so, so yes, the Jevons Paradox is right, I think.

But there's also a separate dynamic, which is that all of these new reasoning models—DeepSeek included—suggest you're going to need more compute on the inference side as well, which is another demand driver. And it is where compute infrastructure really matters, because I think it's almost certainly the case that in the U.S. we don't have the scale of infrastructure that we'll need to begin deploying AI, especially over time, in all the applications that I think you're correctly predicting we'll want it in.

Kevin Frazier: And one of those other immediate responses to DeepSeek was seeing folks on the Hill—namely, Senator Hawley introduced a bill basically calling for significantly ramping up our attempts to keep AI expertise and AI infrastructure right here in the U.S. or in the hands of our allies.

Marshall, have you seen a shift in the conversation with respect to export controls? Some folks were saying, see, it doesn't work, leakage is going to happen, let's abandon that, let's just open source everything. It's time to just compete on an open court.

Or do you think we are going to see a sort of doubling down on the thinking that was behind Biden's export controls and then the diffusion rule—is that going to be the policy that the Trump administration leans into?

Marshall Kosloff: Yeah, I'd say that's a good way to phrase it because as I noted, specifically with the loopholes that were exploited for the first two years of the export controls, those loopholes have been closed. So something has already been course corrected for. There was just a cost in terms of what had already happened. And then two, the smuggling is a law enforcement issue.

So I would say that the consensus is—basically, the interesting pushback, probably from the right and the center, has been that too much weight was given to the export controls, and they were treated as a success at a narrative level.

This is the broader critique of the Biden administration's industrial policy efforts. Wow, we passed the CHIPS Act. Wow, the IRA is investing unprecedented amounts in clean technology. There's, there's a gap between the headline and the actual result. And obviously there's a lot of Republican skepticism of the IRA specifically with the focus on clean energy, EVs, et cetera. And there's going to be a lot of debate about what happens there moving forward.

But I think with the export controls specifically, now that the reality of smuggling has been identified and, as I mentioned, the loopholes have been closed, I think there's going to be a continuance of the policy without the same amount of, let's say, emphasis on this being the end-all, be-all—or not the end-all, be-all, but this really responsive policy we're pursuing.

Chris Miller: I think I agree that there's going to be a renewed focus on actually enforcing the rules on the books. And this is something that I think the Biden administration underperformed on, partly because of just a lack of focus, but I think also partly because of an unwillingness to push other countries to help enforce the rules. And I'm interested to watch whether the Trump administration tries to use leverage against third countries to get them to play a bigger role in helping enforce some of these rules.

Kevin Frazier: It has been fascinating to do a side-by-side comparison of the amount of resources going to our enforcers with respect to the export controls and just the size of the market. The market's going to win, right? The amount of enforcement resources you need just with respect to the United States is huge, and we're not investing that much. If you look to our allies—even in the top tier of the diffusion rule or the second tier of the diffusion rule—again, the investment in enforcement just isn't there.

And we've seen this, and Marshall's commented on this in a number of podcasts, so I'll be sure to cite you, Marshall, here: the fact is we focus so much on just the what, right? You passed, you instituted an export control? And then you kind of just forget about it, and we don't actually think about implementation. And hopefully DeepSeek does generate a much more attentive focus to what it means to actually enforce these provisions.

Alan Rozenshtein: All right, let's, because we haven't gotten sci-fi enough, let's talk about Stargate. Chris, let's first start, if you could just give a general overview of what Stargate actually is and maybe how it compares to the current steady state in AI investment and infrastructure buildup.

Chris Miller: So Stargate was announced in the first week of the new Trump administration by OpenAI and SoftBank, the venture capital fund, as well as Oracle and MGX, which is a big investment group out of Abu Dhabi aiming to spend a total of $500 billion, they say, on AI data centers over the next couple of years in the United States.

So it's a huge sum of money, although it maps onto huge sums of money that are being spent by other tech firms. Just to put it in comparison, Microsoft is spending $80 billion this year on data centers, Meta $65 billion. So it's part of a broader trend of just massive expenditure on data centers by America's AI and tech leaders.

Alan Rozenshtein: So just how big a deal is this? Because, you know, obviously, Stargate and Sam Altman and OpenAI and Donald Trump—who does not actually appear to be involved at all, but got a bunch of credit from Sam Altman in the rollout of Stargate, presumably for the good PR—they seem to want to say that this is an absolutely transformational change. And yet, Chris, as you just pointed out, relative to existing and planned investment, it doesn't seem that large.

So are we, are we all paying sort of too much attention to this? Or is there something new about this approach, either in its magnitude, or even if it's not, you know, that large compared to everything else, maybe it's a new model for how infrastructure investment might go forward.

Chris Miller: Well, I would say if you zoom out from the past two years, you've never before seen tech and software companies spending this much money on capital expenditure. So there's been a huge change in the last couple of years, which Stargate is part of.

And so we shouldn't, I don't think, say this is nothing new, because it really is a major change relative to what the tech sector used to do. The tech sector used to be capital-light, and now they're among the biggest spenders on facilities and on equipment in the world. And so Stargate is leading that charge alongside the other big tech firms. And so I think it is important.

And it also, I think, represents that OpenAI, which is in a lot of ways the leading AI firm in the U.S., is going off on its own to build infrastructure, because of course, for a long time, it was in partnership with Microsoft. And now it appears to be doing this largely independently. And so that's also a big shift in the context of who the major AI players in the U.S. are.

Marshall Kosloff: And I just want to follow up on that, because I think the key thing is we've used the word spend, but when it comes to Stargate and why it's a big deal in terms of the broader narrative of Silicon Valley, what really matters here is the investment side of things—and we should apply the lessons we've learned so far on this podcast.

We can't just use the headline here and just say the $500 billion number for the next three to four years, like they need to raise hundreds of billions more to get there. But the point is hypothetically there are investors, capital, players who would put money into serious technical infrastructure. And if we're looking at the Silicon Valley story, 1990s, 2000s, 2010s, this was not a story of infrastructure to the same degree.

So I think the real focus on the hard, on the literal, and then on the accompanying real-world problems that come with it, as we discussed during the TSMC part of the conversation—that's the part of the story we should really focus on here. The transition to the physical and the real is going to be bumpy. It's going to be difficult, but I think that's the investment side of the thing.

Not just, is Microsoft taking some of its war chest and spending money on its own programs; I think what really matters is that the outside capital is coming in, and that's a real indicator of where things are heading.

Alan Rozenshtein: I want to focus for a second on this move from tech companies largely, you know, being invested in, I guess, labor and IP and software, and now really investing in physical infrastructure. Because of course, that is a dramatic change in how these companies operate. That's a dramatic change in how nimble they are going forward.

I mean, it is not a trivial thing to sink hundreds of billions—at this point, maybe a trillion dollars, if you sort of look over the last couple of years and project forward—into a bunch of stuff. And in the past, that has not always gone super well, right?

So there was obviously the giant fiber build out in the late nineties, which at the time was a bit of a disaster; a lot of those companies went bankrupt because they vastly overestimated how much demand there was going to be for this. Now it's fine, right? I'm sitting here enjoying my one gigabit fiber internet, but that took quite a long time.

So, you know, one question we might ask is how much exposure this puts these companies in over the next couple of years, because they have made these incredible outlays that they have to recoup, or else presumably the markets will punish them. And also, relatedly, how does this kind of infrastructure investment compare to the durability of previous types of infrastructure investment?

And so what I mean by this is, like, if you spend $100 billion on a highway, you've built a highway, and sure, you have to maintain that highway, but the highway is useful for a long time; it's not like five years later someone invents a totally new highway and you have to rip out the old one and put in a new one. I'm not an expert on how fiber works, but at the end of the day, it's kind of just glass, and so presumably once you've buried the fiber, it's pretty good fiber for some time; presumably it gets obsolete like everything else.

But that's of course not how these data centers and especially these cutting-edge GPUs work, right? I mean, Moore's law might not be literally shrinking everything by half every 18 months—at some point you get into sort of atomic-level limits—but there's no question that the chips are getting meaningfully better and faster on a year-by-year basis. So, you know, you sink a hundred billion dollars into a data center; is it even that useful a year from now?

So how do you deal with sort of the combination, to sort of sum up my questions, of sinking a massive amount of investment into stuff and that stuff potentially becoming quite obsolete much faster than previous rounds of kind of this scale of investment that we've seen—forget in tech history–just in American economic history?

Chris Miller: Yeah, I think that is a huge, a huge challenge. If you think about data center investment, obviously the power infrastructure, the building itself, that lasts for a while. But the chips inside are generally depreciated over five or so years, which means that after half a decade, they're seen to be more or less worthless. And the implication of that is that you've got to make a lot of money off of those chips in the five years in which you see them having value.

And the good news is that big U.S. tech firms are extraordinarily good at making money—they're the most profitable companies in the history of the world. But the challenge they face is that now, as they invest more, that just gets more difficult. And so that's why we are indeed making a huge society-wide bet on AI, putting a lot of money behind it, money that has to deliver an economic return that justifies the scale of capital being deployed.
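A minimal sketch of the payback math Chris is describing, assuming straight-line depreciation over five years; the capital outlay and operating margin figures are illustrative assumptions, not numbers from the conversation.

```python
# Rough payback arithmetic for an AI data center whose chips are written off
# over ~5 years, as described above. All dollar figures are illustrative assumptions.

capex = 100e9            # hypothetical $100B sunk into a data center buildout
useful_life_years = 5    # chips are treated as roughly worthless after ~5 years

annual_depreciation = capex / useful_life_years
print(f"annual depreciation to cover: ${annual_depreciation / 1e9:.0f}B")

# Revenue needed per year just to cover the hardware write-off,
# assuming a hypothetical 30% operating margin on AI services.
operating_margin = 0.30
required_annual_revenue = annual_depreciation / operating_margin
print(f"revenue needed per year at a 30% margin: ${required_annual_revenue / 1e9:.0f}B")
```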

Marshall Kosloff: I just want to pick up on something you said here, Alan, because Chris is, you know, I think the expert here when it comes to the specific technical aspects of this project. But I'm really fascinated, as someone who covers politics and policy and the intersection thereof, by the limits of the metaphors we're using and how they could lead us in improper directions.

So once again, like, I love your point about how there's a gap between publicly funded and supported infrastructure like a highway and the actual reality of compute and the data centers we're building out there. And if we're actually looking at the sort of communications that go to D.C., the pitch for these policies, the picture of why these things are in the national interest—there's just a metaphorical gap between day-to-day actual public sector infrastructure and what we're actually dealing with here. So profit does actually matter. There is obsolescence on a short-, medium-, and even long-term timeline here.

So I think we really need—part of the lesson from Chris's point about D.C. being so surprised by DeepSeek when the evidence was very much in his inbox, and a bunch of other people's inboxes as well—is that D.C., I think, really needs to not just sort of outsource the narrative and metaphorical thinking, and actually think seriously about the ways that this is and isn't the same thing.

And also, this gets even harder because the AI conversation is now a conversation about national security. When we were having the conversation about fiber deployment in the 1990s, no one was saying, if we're going to compete in this new globalized—well, I guess people were probably saying that, but that was a private sector conversation, right?

There wasn't the idea that in order to survive geopolitically, the U.S. needed to deploy fiber. And if the fiber deployment didn't work, or it took a couple years for the deliverables to actually be there, there wasn't a national security set of stakes there. Now that that isn't true in this case, I think there are a lot of warning signs we should be aware of.

Kevin Frazier: And speaking of warning signs, Marshall, if I had to assemble a dream team for my trivia team at any bar, you would certainly be drafted, because I think you have encyclopedic knowledge of, in particular, the World War II era and the buildup that we were able to create to win that war—the arsenal of democracy; you've got a podcast on that very topic.

Now, news of the government doing anything in an expeditious fashion, on time, under budget—you don't see it. Maybe that's a good Onion article, but it's certainly not something you read in the Times or the WSJ. And what I'm getting at here is the fact that we have so much money that's being talked about—$500 billion for the Stargate project.

But what does it actually mean for us to see that money get spent as intended, on the timeline that's intended? And how does your general thinking about the abundance agenda kind of explain whether or not that's possible, or what narrative we should be setting so that it does become possible?

Marshall Kosloff: Yeah. And real quick, before I answer that question, you've referenced a riff that I'm actually really happy with, because it speaks to the reality that tech and policy, when it comes to doing big things, are pretty intersected.

So my critique of the way D.C. treated Biden-era industrial policy is they would say, look, we're spending tens of billions of dollars on the CHIPS Act, we're spending hundreds of billions of dollars on clean energy investments, we are going to build EV charging stations throughout the country. And basically that headline was treated as the accomplishment. The accomplishment was passing legislation.

But if you look at Silicon Valley—like, let's say we all get together and we form a tech company out of this, we raise a hundred million dollars, we have a great Series A, then B rounds. Great. No one's getting, you know—there aren't any negative consequences; the raise in and of itself is not the accomplishment. The accomplishment is, do you take your Series A to get to a Series B? Does your Series B not lead to a down round during your Series C?

Silicon Valley is very comfortable speaking in that way. D.C. is not comfortable speaking in that way, because we've had this era where D.C. was not interested in questions of—and this is, you know, you've referenced the buzzy term abundance agenda; the other buzzy term, within a certain nerdy corner of the policy architecture, the other phrase that really matters here, is state capacity.

So the state capacity movement, policy area, et cetera, is focused on to what degree the American state—federal, state, or local—has the ability to actually accomplish the task that it says it's going to do. So if you look back at the 2010s, state capacity would say, okay, great, we passed Obamacare; to what degree can we actually launch a website that will be successful? That was very much the state capacity story of that specific moment.

But now, as we're sitting in the 2020s, there is this bipartisan consensus that, for national security reasons, the previous era's consensus—that we don't need to have the American state intervening in areas of technology and industry—no longer holds; the idea is that innovation and the market just won't solve these problems on their own.

This actually requires that government develop a new ability to actually measure outcomes and to understand that the accomplishment, once again, is not the legislation, is not the executive order. It's: okay, now that we said we're going to do X, how many EV charging stations were actually built? What actual timeline are we operating under? So I think that's the key thing that we should understand here.

And I think the abundance agenda and state capacity, I tie them together, even though they're very separate movements, because once again, the abundance agenda is basically saying, hey, if we look at American society, we see scarcity everywhere. So we have a scarcity in data centers. We have a scarcity in compute. We have a scarcity in housing, energy, et cetera, et cetera, et cetera.

You could basically find every single critique we could offer of America outside of the culture wars and say, oh wow, there's a scarcity problem there. The abundance agenda is focused on identifying those areas of scarcity and articulating why actually creating more plenty in those spaces would lead us to having a better situation domestically and internationally.

State capacity though, says, okay, it's great, Marshall, that you said we need to have more housing—what ability does the government have through its legislation or efforts to actually increase housing? Is that actually effective? And these very much have to be tied together.

Kevin Frazier: And I really like this abundance framing with respect to AI. Just to get on my soapbox for a second—folks who listen to Rational Security will have heard this spiel, so I apologize to those listeners—but it's just not percolating in my opinion into the larger AI discourse in the way it needs to be.

And in particular, making sure that access to AI—and by access to AI, I mean both having the ability to pay for access to leading models and also the know-how, the knowledge diffusion of actual AI literacy—is not being talked about by anyone.

To the extent I've heard about AI literacy programs, I clicked into it and it's like a business program. It's an enterprise approach trying to get the largest companies to make their employees more AI savvy. I'm talking about making sure actual Americans know how to use AI in productive contexts.

So there's just so much to talk about with respect to scarcity. We don't have enough time, unfortunately, to dive into all of those manifold areas, so we're going to have to have both of you back. That's just the way it is. But I think we're going to have to leave it there. It's been an absolute pleasure having you all on. Thank you again for joining.

Marshall Kosloff: Thank you.

Chris Miller: Thanks for having us.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to Jan. 6. Check out our written work at lawfaremedia.org.

The podcast is edited by Jen Patja, and your audio engineer this episode was Cara Shillenn of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Contributing Editor at Lawfare.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland. He also speaks and consults on technology policy matters.
Chris Miller is a professor at the Fletcher School at Tufts University and Nonresident Senior Fellow at the American Enterprise Institute.
Marshall Kosloff is a Senior Fellow at the Niskanen Center and co-host of the Realignment Podcast.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
