Lawfare Daily: Janet Egan and Lennart Heim on the AI Diffusion Rule

Published by The Lawfare Institute in Cooperation With Brookings
Janet Egan, Senior Fellow at the Center for a New American Security (CNAS), and Lennart Heim, an AI researcher at RAND, join Kevin Frazier, a Tarbell Fellow at Lawfare, to analyze the interim final rule on AI diffusion announced by the Bureau of Industry and Security on January 13, 2025. This fourth-quarter effort by the Biden administration to shape AI policy may have major ramifications for the global race for AI dominance.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/
Please note that the transcript was auto-generated and may contain errors.
Transcript
[Intro]
Lennart Heim: In addition, it also controls certain AI model weights, notably not public model weights, and only model weights trained with more than 10 to the power of 26 training FLOP. This means, no, there's no publicly existing model right now. And then, I think why it gets a lot of press, it now covers the whole world.
Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, a Tarbell Fellow at Lawfare.
Janet Egan: I think what the U.S. administration, the current one, is hedging on and what the future one might also place bets on is that this particular supply chain for these advanced AI chips is so narrow and is constructed of such targeted investment and intellectual property and expertise that it's very hard to diverge from.
Kevin Frazier: Today, I'm joined by Janet Egan, Senior Fellow at the Center for a New American Security, and Lennart Heim, an AI researcher at RAND, to analyze the interim final rule on AI diffusion announced by the Bureau of Industry and Security on January 13th, 2025.
This near-last effort by the Biden administration to shape AI policy may have major ramifications for the global race for AI dominance.
[Main Podcast]
Going boldly into the night, in mid-January, the Biden administration released the AI Diffusion Rule, an interim final rule. This rule represents the most comprehensive effort yet to regulate the export of advanced AI technologies and associated semiconductor chips, deemed critical for U.S. security and global competitiveness, to nations like China.
The AI diffusion rule aims to tighten controls over the export of cutting-edge chips and algorithms that underpin AI development, responding to concerns about the potential misuse of AI in military and surveillance applications by some of our adversaries. The Biden administration views these measures as vital to protecting national security while also fostering domestic innovation. However, critics argue that the policy could backfire by accelerating China's technological self-reliance and fragmenting global AI collaboration.
There's a lot going on and a ton to unpack in that slight introduction. So, Lennart, we're recording on January 15th. The interim final rule was released just a few days ago. You've already produced an article somehow analyzing it and helpful graphics about it. So can you explain some of its key provisions and why we're regarding this as a new step forward in understanding export controls under the Biden administration?
Lennart Heim: Absolutely. Yeah. Thanks so much for having us, Kevin. I mean, for what it's worth, I put in like a long night putting all of this together and paid a lot of attention to creating good figures, because independent of how much you write, those are generally the things which get shared around, so it's important to get them right.
I think for people who've worked on export controls before, it's, it's easier to digest such a drawn-out document than it is for beginners, right? These new rules don't stand in isolation. This is, to some degree, like another consecutive move, which I think many of us actually expected.
So, to put it into history here: Trump initially started doing export controls on these machines which produce AI chips, right, so they're not going to China anymore. We had Huawei and other entities being added to the entity list. And then with the Biden administration in 2022, we had these first export controls on AI chips in, yeah, October 2022, revised October 2023.
You might see a pattern here. Then a couple of months later in December, we saw another update to these export controls. This was, I think, the 2nd of December, building on the export controls, particularly toward China, the key rival. And now we have this diffusion rule out there, which builds on top of this, right? So when we read certain export control numbers there, we know what they mean. It's like, oh, this number, that's an advanced AI chip.
So to summarize what this rule basically does differently in contrast to the ones before: we still control AI chips—data center AI chips, to be precise, like these leading chips, which cost you like 20,000 dollars plus—and it in addition also controls certain AI model weights, notably not public model weights and only model weights trained with more than 10 to the power of 26 training FLOP. This means, no, there's no publicly existing model right now. And then, I think why it gets a lot of press, it now covers the whole world.
And that kind of sounds crazy to cover the whole world and export control the whole world, but it's just how export controls work. You say there's a worldwide license required for this, and then you talk about the 100 pages of all the different exemptions here. Right. And then we started creating like different exemptions for different country groups, basically, which, yeah, we can go into more detail, but it's like, that's the gist of it, right?
Like it covers more countries, like it builds on top of the existing export controls and now we have like AI model weights on top of it.
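For a rough sense of scale on the 10-to-the-26 training-FLOP threshold Heim describes, here is a minimal back-of-the-envelope sketch. It relies on the common approximation that dense training compute is roughly 6 × parameters × training tokens; the model size and token count below are illustrative assumptions, not figures from the rule or from any real model.

```python
# Back-of-the-envelope sketch of the 10^26 training-FLOP threshold.
# Assumes the common "~6 FLOP per parameter per training token" approximation;
# the model size and token count below are hypothetical, for illustration only.

THRESHOLD_FLOP = 1e26

def approx_training_flop(n_params: float, n_tokens: float) -> float:
    """Rough dense-training compute estimate: ~6 * N * D."""
    return 6 * n_params * n_tokens

# Hypothetical run: a 1-trillion-parameter model trained on 17 trillion tokens.
flop = approx_training_flop(n_params=1e12, n_tokens=1.7e13)
print(f"{flop:.2e} FLOP, above threshold: {flop > THRESHOLD_FLOP}")
# -> 1.02e+26 FLOP, above threshold: True
```

On that rough approximation, a training run has to reach something like the trillion-parameter, tens-of-trillions-of-tokens range before the model-weight control would even apply, which is consistent with Heim's point that no publicly known model is covered yet.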
Kevin Frazier: And Janet, could you help us understand a little bit more about how these countries have been broken up under the interim final rule here and what's significant about that decision to create these three clusters? What insights can we glean about how the Biden administration's thinking about the geopolitical race to lead in AI?
Janet Egan: Yeah, absolutely. So the one that's no surprise is the third group, which is essentially the geostrategic competitors. So we've got China, Russia, embargoed nations. And these are, these are countries that have already been controlled.
And then you've got the Tier One group, which is U.S. trusted partners—so, you know, close allies, like the Five Eyes, a lot of NATO, Japan. Other countries that meet the U.S.'s criteria for good export control domestically and have geostrategic alignment with what the U.S. is doing.
But the real game changer here is the middle group, which some are calling Tier Two. So these are the rest of the world, countries that didn't previously used to be controlled. And I think I'd like to say that this is a big step change in approach to export controls. Generally it's been, you know, small yard, high fence. Think really strategically about which specific countries you want to be controlling and which ones you really need to put controls on. But now we're looking at all of the world being categorized into these three tiers.
That's not to say that these countries can't get access. And I think that really goes to the point Lennart made: it's all about the exemptions to the rule. So in some ways overnight it's going to streamline and fast track a lot of approvals and exports of chips, but to do so, so transparently and explicitly, and have controls that are being put out there for the general public to see, is something that is quite a step change.
Kevin Frazier: And I think what's shocking, if you are a curious listener and you want to go see who's in our friends group, for lack of a better phrase, who's in our frenemies tier, and who's definitely on the bad actor list—it's a shocking number of countries in that second tier who may, for example, be a part of NATO, but not be included in that Tier One group where we're far more willing to allow chips to end up.
So Lennart, perhaps you can give us a better understanding of why we wanted to really cabin the handful of countries that made it into that Tier One group, and why we left even some of our most trusted allies down in Tier Two. What is it about them that we have some concerns, and what exemptions are in place to ensure that if you find yourself in Tier Two, you're not closed out of AI development on the whole?
Lennart Heim: Yeah. I mean, I can't speak to how Tier Two eventually was picked, but I think Janet was already alluding to it: it depends also on what your domestic export controls are and which security guarantees we have here. So when we talk a little bit about the motivation here, one key problem one had previously with the previously existing export controls was, well, first, chip smuggling. You just, like, you know, import chips to your neighboring country, and then they end up going over the border. This is, I mean, famously known for any type of export controls we have here.
And then secondly, well, computing power is like a pretty neat thing. You don't need it in your room to access it. So I can just build a data center in my neighboring country and just use the computing power from there. And those are the kinds of things you then want to have like a close eye on. Like, well, can we really be sure, if we export a bunch of chips to this neighboring country, that first of all, these chips are not being diverted? And then secondly, even if you build a data center there, who eventually is using the computing power, right? That's the thing which matters here. And this is where the rule eventually helps, right? So we have like these key nations within Tier One and like the rest of the world in Tier Two.
But I think it's also important to understand here, not all the countries in Tier Two are new to export controls. There were previously a bunch of countries in the Middle East and Central Asia who actually had a license requirement. And I think the most prominent discussion here was around Saudi Arabia and the UAE, who wanted access to more AI chips. And since October 2023, they needed to apply for such a license. And again, previously, for every single chip and every single shipment, it's a single license; you need to ask for it.
And I think the most prominent case on this in the news was actually the G42 and Microsoft deal, right? Where like Microsoft and G42 want to build more data centers there, and they kind of, well, maybe let's just say they kind of begged the government to green light this and just like, let's go ahead and do it. And the U.S. was like, well, like, yeah, how do we feel about this? Right.
And this goes back to April. Like you can see this as this triggering moment where you're like, well, you know, we've got these controls. Now they're asking: do we now make a one-time decision every single time? Like, we feel this way about the UAE, this way about Saudi Arabia?
And this framework actually gives you, like, actually a more transparent picture, right? Because notably before, you could attach any conditions to this license and nobody knew about it. They were just like privately negotiated. Now, at least these things are in the rule. You know what it takes, which requirements you're asked for, and you don't need it for every single shipment, because we got these various exemption ways, so you can just now build these data centers—be it in the UAE, be it in Saudi Arabia, and, like, again, as you were saying, rest of the world.
Kevin Frazier: Janet, can you expand a little bit on some of those exemptions and why they're so important? And add some color to Lennart’s brilliant remarks.
Janet Egan: I think I just wanted to back up what Lennart said about transparency, because for me the most striking thing here is that the U.S. has openly come out to say what is the rationale for their strategic decisions about who gets chips. And to put this in context, national security decisions are usually made in a black box.
So we see this in foreign investment screening with CFIUS where, you know, applications go in, things happen in confidential systems and behind government closed doors, and then decisions come out that are made in the national interest or for national security reasons. But industry doesn't normally get to see under the hood of what's going on.
And here, yes, there's like strong outcry from industry. You've got international countries that are affronted. You've got others who are just very vehemently against the rules. We've seen commentary come out of the European Commission that it's breaking up their single market approach. But what's interesting here is it's actually somewhat a continuation of the approach.
So as Lennart was saying, the G42 deal, Gina Raimondo was intricately involved in the setting of those conditions of allowing chips to go there, as has widely been reported. And this can't be done every time on a case by case basis. So actually having something laid out so publicly, I think, is a really big step change in transparency as to how the government does business.
Kevin Frazier: And Janet, we know that come, I don't know, seven days from now or five days from now, Gina Raimondo won't be the head of commerce. President Biden won't be in the White House. Instead, we'll have President Trump, a new head of the Department of Commerce, a new head of BIS, on and on and on.
So what was the rationale for the Biden administration to announce this interim final rule, knowing full well that the Trump administration was coming in in a few days? And we don't know necessarily how the Trump administration is going to respond, or do we? Do we have an inkling? What's your read of the situation? Will this interim rule become a final one?
Janet Egan: Look, it's a great question. I think the first thing I'd say is that reading through the 168 pages of the rule, it's clear that this is the culmination of months and months of work by experts in this space. And I can imagine coming up to the end of a tenure, you don't want to see all that work or that careful thinking and engagement going to waste. So I can imagine that is one of the drivers of getting this rule out before the change in administration.
But from the Trump administration perspective, I think it's really interesting. So this is an interim final rule. If the Trump administration does nothing, it passes and becomes a rule. And I think that's really interesting in how that sets up the new administration to discuss this rule with industry and international partners.
So one thing I'll say also is that it really aligns with some of Trump's stated agenda of America First, maintaining U.S. leadership in AI, maintaining U.S. leadership in tech, and being much harder on export controls.
But it also positions the Trump administration well when it comes to the discussion with industry and international partners, because it's anchored the conversation in a potential way forward. And so rather than industry or private sector partners or international partners saying, hey, this is a bad idea, we shouldn't do it, or this is a bad idea, do nothing, the onus is on them to actually come forward and say, here's what you should do instead.
And I think that it really positions a new administration well to have something to anchor on that they don't hold any ownership of and don't have to take the flak for if they don't like it, but it really pushes the envelope forward in what's expected of industry to cooperate on national security goals.
Kevin Frazier: Lennart, what's your read of the situation?
Lennart Heim: Yeah, building on top of this, I mean, the people who were passing this rule, they knew the term is coming to an end, right? So, of course, they're trying to talk to people, trying to sell them: here's what we're thinking, here's the rationale.
And I think what Janet was alluding to, and the issue I'd share here, is that having export controls is a bipartisan issue—notably Trump started them. And this to a large extent solves these problems with actually the CCP and others getting access to these computing chips, right? So there are many ways you can solve smuggling, remote compute access, and all the other problems. This is one way of going about it, right? And of course there are other ways of going about it, but I think it's quite known to everyone that covers export controls that something needed to be done here, we need like a proper framework here, and this was such an attempt.
Kevin Frazier: And just to play devil's advocate, I'm going to, well, I'm actually going to be even worse than that and ask you all to play devil's advocate. We know that some members of the GOP, for example, Senator Cruz, haven't been exactly on board with this new interim final rule. Senator Cruz, for example, predicted that this may be the end of U.S. dominance in AI.
So what's the counter argument to this approach? Both of you sound relatively supportive of this development, but what's the counter response? What's the worst case scenario that some folks are holding out as something we should be concerned with? Janet, I'll start with you.
Janet Egan: Yeah. And I'm quite sympathetic to some aspects of the view that this really erodes trust in the U.S. as a country that plays by free open market rules and it destroys trust in the U.S. tech ecosystem. And I think there's some real risks and sensitivities that come from this as well, that I'll cover in a moment.
But I think what the U.S. administration – the current one – is hedging on and what the future one might also place bets on is that this particular supply chain for these advanced AI chips is so narrow and is constructed of such targeted investment and intellectual property and expertise that it's very hard to diverge from.
So we've seen a lot of discussion internationally about sovereign AI and wanting to get greater control over your supply chains and remove dependence on any one nation—China, U.S., others. But at the end of the day, there's not always the willingness to pay to make that happen.
An example we see this in is critical minerals. The U.S. and its partners have long called out the problematic nature of depending on China for critical minerals. Yet at the end of the day, there's been eight, nine years of discussion; there's been some policy action; yet we haven't diversified that supply chain significantly.
So I think it comes back to the same for AI advanced chips. It's such a narrow supply chain with many chokepoints throughout. We don't think that there's going to be any other country that's willing to invest the amount of time and expertise and money to make that happen. So the diminishing trust in U.S. tech when it comes to the AI supply chain is probably not going to be that problematic, but I think it will have an impact on tech supply chains overall.
And I think that's where the U.S. needs to do a lot more proactive action to show that it is willing to maintain that same engagement in the normal rules-based trading order and that it's willing to promote trusted independence of its tech sector. There's also the big China issue, but I think Lennart's well placed to speak to that.
Kevin Frazier: The big China issue. Lennart, spill the beans. What's the, what's this big China issue?
Lennart Heim: I wish I could cover the whole big China issue, but just really hammering home the point Janet was just making: the promotion aspect here is important, right? You don't want to be just, like, hammering down on the rest of the world.
But I think the most prominent criticism—like what I hear critics saying—is mostly just, well, if we cannot export freely into Tier Two, which is like the majority of countries in the world, somebody else is going to fill the void, right? And then we have the problem that actually it's China filling the void, selling Chinese systems, Chinese data centers, like the Chinese tech stack.
And I think that's the critical thing to look out for, right? So the balance here is the key to get it right. The question is, did they get the balance right? And will this balance be updated over time? There is an answer right now, and I can speak to how I currently see the ecosystem. Somebody should look at it again half a year from now, a year from now, and every single year, and see what's going on.
If you look at the ecosystem right now, I would just ask people to check the stock of NVIDIA. It's going well, right? To put it this way. We have just like pure dominance of the U.S. in AI. Like all leading AI companies are U.S. companies. All leading AI chips are U.S. chips produced by U.S. tools, right? And this is also the underlying factor for semiconductor export controls. Just like the U.S. can control this with its allies, notably all semiconductor allies are part of this Tier One group.
So the question is, does China produce AI chips? Yes, they do produce AI chips. Notably Huawei is producing the leading AI chips. The best AI chip they currently have in the market is the Huawei Ascend 910B. The thing is just, are they exporting them, and how competitive is this chip actually? And this chip is at least, like, four years behind the leading thing NVIDIA does. NVIDIA is coming out with the next one, so it will be like an even bigger gap there.
But this is just pure hardware specs. What also matters is the ecosystem here, right? AMD, great hardware company, their market share doesn't look as good as NVIDIA's, right? And the software moat is like a big part of the thing here. So Huawei, first of all, worse hardware, worse software for these kinds of things.
And the other thing which matters here: well, if you want to export something around the world, how much can you produce of them? You need to produce millions, hundreds of thousands of chips to get them out the door. And these are all factors where the Chinese ecosystem right now is not really strong.
And if you look at the empirical evidence here, if you just check out most of the leading AI models out there—again, caveat here, of course, we don't have all AI models in such a database, but it's a beautiful database by Epoch—I found two models in there which were trained with Huawei Ascends. Out of 800 models, 250 of these models have known hardware, predominantly Google's TPUs and NVIDIA's hardware.
And then also, a colleague of mine, Konstantin, in forthcoming work took a look at a bunch of data centers around the world: which data centers are using Chinese AI chips? We didn't find that many. So I would just ask everyone, like, yes, please tell us if you find Chinese chips being exported around the world, then building data centers, then offering such an ecosystem. We don't see this evidence yet. The U.S. is clearly leading on this, right. So therefore I expect, for most countries and companies, it will be a fairly easy choice: who do you want to partner with?
And most notably, it's not like you can maintain a foot in both camps. If you want to be part of this ecosystem, the U.S. government is kind of asking you, it's like, hey, you better come with us, because part of these license agreements, sometimes, is that you need to cut ties to Tier Three countries, companies, and, like, don't do shady stuff with them, right. And that's an important consideration.
And they explicitly say in the rule, ideally, you should get a government to government agreement, right? So the U.S. government can negotiate with the hosting country, like some conditions here. And this is what we've seen as an existing agreement between the UAE and the U.S. with—I think the exact condition is probably not known, but I would expect there is like a big ask for like some tech independence from the PRC.
Kevin Frazier: Yeah, really interesting tactic to say, we're in a position of strength right now with respect to the supply chain. So we're going to put you in between a rock and a hard export control and say pick a side. And given the lay of the land right now, picking the U.S. seems to be—as you pointed out, Lennart—a pretty good pick. Janet, is there, are there some additional nuances you want to give light to?
Janet Egan: I think Lennart did a really good job of covering off the status quo today of where China's at compared to the U.S. But looking forward, the other thing I want to highlight is that China is already pulling out all stops to try and catch up to the U.S. in AI. We don't think this rule is going—at least I don't think this rule is going to make a substantial difference to that, because there's already so much investment going in from the Chinese government, so much covert activity happening to try and get ahead. And so I think the security rules that are part of these export controls are a really big part of this. How do you make sure that the value of IP is protected when these chips are exported?
Kevin Frazier: And Janet, if you could also give us a forecast a little bit about China's AI capacity. This is obviously—as Lennart earlier alluded to, as you teed him up—the big China issue, right? And one aspect of that big issue is just how advanced are Chinese models. And we recently saw DeepSeek release a fairly sophisticated open source model.
So how would those sorts of models be treated under and affected by this interim final rule? Do you think that this is going to quash the success of the Chinese in the open source space? What's the forecast there? And Janet, I'll put you on the spot for that question.
Janet Egan: Look, it's a great question, and I think everyone was very impressed when DeepSeek's Version 3 model came out in December. I saw stats that it was trained using around 2,000 NVIDIA H800 chips for about U.S. $6 million, which is just nothing compared to what's going into the U.S. AI models.
I think that the diffusion rule won't impact so much—I think China's already incentivized to do as much as it can with its own capability with open source. And the diffusion rule just seeks to help prevent chips from making their way to China when they're controlled chips.
But I think something that stands out to me has been that it looks like China's getting really creative with model architecture and other approaches in light of not having access to the most advanced chips. They've had to be creative.
So to date, scaling up compute has been the proven way of rapidly increasing the capability of your models. Now, DeepSeek and other Chinese firms have had to find other ways. But now they've proven the concept. I think there's a likelihood that we're going to see greater investment in algorithmic efficiencies and algorithmic improvements in the U.S. system too, given that we've seen that that can also deliver more uplift.
At the end of the day, you're still going to have the greatest amount of compute, the most advanced computing, and hopefully now even more advances in model architecture happening in the U.S.
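As a rough check on the DeepSeek-V3 figures Egan cites above ("around 2,000 NVIDIA H800 chips for about U.S. $6 million"), here is a minimal cost sketch. The cluster size, training duration, and per-GPU-hour rental rate below are assumptions for illustration, not numbers from the rule or from the conversation.

```python
# Rough cost sketch for the "~2,000 H800s, ~$6 million" figure cited above.
# Cluster size, run length, and rental rate are illustrative assumptions.

gpus = 2_048                  # assumed cluster size ("around 2,000 H800 chips")
training_days = 57            # assumed run length, roughly two months
usd_per_gpu_hour = 2.0        # assumed H800 rental rate

gpu_hours = gpus * training_days * 24
cost_usd = gpu_hours * usd_per_gpu_hour

print(f"{gpu_hours:,.0f} GPU-hours, ~${cost_usd / 1e6:.1f}M")
# -> 2,801,664 GPU-hours, ~$5.6M
```

On those assumptions the total lands around $5.6 million, the same ballpark as the roughly $6 million figure, and in line with Egan's point that it is small compared to what is going into the largest U.S. training runs.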
Kevin Frazier: Really interesting. So, Lennart, you've identified kind of three tiers or three ingredients of AI innovation, of AI progress, right? Compute, data, algorithms.
Is the factor of compute just so important that maybe we, we can rest a little easy with respect to open source models in China? Even if Janet's prediction of improvements in data cleaning, data annotation, all those steps and improvements in algorithms continue. Should we rest on our laurels? Is this the end of the big China issue? Why or why not?
Lennart Heim: Yeah, I wish binaries would work most of the time, right? Unfortunately not in policy. And yeah, for the AI triad you were alluding to, I wish I'd been the one who developed it, but shout out to Ben Buchanan here, who came up with this great concept. I just picked one of them and tried to figure out what the hell is going on here.
Let's start with the practical argument. We've got data, we've got compute, we've got algorithms. Which levers can you push as the U.S. government? Compute is like one you can push, right? It's, to some degree, a blunt tool, but turns out it's the only physical product on the list of these three ingredients here, right? So that's the thing you can actually control here.
Again, with the downside—is it actually a physical product? Because you can also access it remotely. Gets a bit tricky here, but it's been obvious since the beginning that compute is one input you can control. Is it the most important input? No, there are three inputs. The same as when you bake a cake: you need all three of them. Right. And sometimes we can mix them a little bit.
And, as Janet was saying, we got compute efficiency improvements. And that's just the history you've seen over all of computing over all the decades, basically, right: you get more bang for the same amount of compute over time. That's what we've just been seeing here. And DeepSeek got a lot of bang for less compute. It's pretty impressive, right? But it could still be within the trend—we'd actually need to crunch the numbers here—it could just be within the trend.
Just again, compute, like all of these things work in exponentials—we spend exponentially more compute, but these models get also exponentially more efficient and guess what? All of these things just add up and that's what we're seeing here.
So, do compute controls still work? Now, it'd be naive to think that's the only thing you can do and it just works, right? Like, it would work if you literally take electricity away from people and you don't have any compute in your country. Sure, that'd be one way of doing AI governance. That strikes me as a little bit too blunt here.
What we mostly do here is just put a constraint on how much computing power you have available for these kinds of things. And we just know, at least for pre-training, from scaling laws—which means bigger models trained on more data, therefore requiring more compute—they perform better. That's been the case.
Now many people are claiming this is hitting a wall. I would say to be determined if it's actually hitting a wall, but what we've also seen more recently is like this test time compute idea, right? So we'll just let the model think for longer. We produce more tokens, like a really long chain of thought there. It turns out to perform better. That's why everybody like is excited about OpenAI o1 and o3. They exactly do this.
How do you produce tokens? It's compute again. You need a lot of computing power. And to give an idea there, there's this famous ARC-AGI Prize where they're trying to like solve different kinds of riddles. And to actually get like the highest score, they spend more than $20,000 on compute, $20,000 just like for a single, like for a single task for solving it. And it's like quite a lot, right? This means a lot of GPUs need to be crunching for a lot of time.
So you have this double impact now. Yeah, to some degree, like, well, you cannot train as big of a model, at least not that many of them; then, right, you have maybe fewer companies, maybe you can only train one of the models, whereas the ones with more compute can train 10 of these models.
And then you have like limits and deployment here. And again, if AI delivers its big economic promise, then we first of all need to use it more, right? I think it's happening more, but like if everybody starts using it, if like we constantly have our AI agent running and whatever kinds of thing it's doing, they all run on compute. And like, again, if it delivers, this will just continue going up. This runs on hardware and you need this hardware for these kinds of things.
So, there you will still have these compute constraints. It's a little bit distinct from these diffusion rules again, because the diffusion rules govern AI chips. They don't govern who's accessing models. They don't govern who's accessing compute remotely—or at least only a tiny bit, right. So you can still access all AI capabilities all around the world. This is mostly about, just like, where are the chips? Who holds ultimate authority over these chips?
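To make the pre-training scaling-law intuition Heim sketches (bigger models, more data, more compute, better performance) slightly more concrete, here is a minimal illustration. It uses a Chinchilla-style functional form, L(N, D) = E + A/N^α + B/D^β, with placeholder coefficients chosen for illustration rather than fitted values from any paper, and the same rough 6 × N × D compute approximation as in the earlier sketch.

```python
# Illustrative (not fitted) Chinchilla-style scaling curve: loss falls smoothly
# as parameters (N) and tokens (D) grow, while training compute grows as ~6*N*D.
# All coefficients and the example (N, D) pairs are placeholder assumptions.

def predicted_loss(n: float, d: float,
                   E: float = 1.7, A: float = 400.0, B: float = 400.0,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    return E + A / n**alpha + B / d**beta

def approx_training_flop(n: float, d: float) -> float:
    return 6 * n * d

for n, d in [(1e9, 2e10), (7e10, 1.4e12), (1e12, 1.7e13)]:
    print(f"N={n:.0e}, D={d:.0e}: ~{approx_training_flop(n, d):.1e} FLOP, "
          f"predicted loss ~{predicted_loss(n, d):.2f}")
# Predicted loss drops from ~2.6 to ~1.8 as compute rises from ~1.2e20 to ~1.0e26 FLOP.
```

Test-time compute, which Heim also mentions, is the separate observation that spending more inference compute per query (longer chains of thought, more samples) can also buy capability, so chip access matters on the deployment side as well as the training side.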
Kevin Frazier: Janet, you mentioned earlier something that I wanted to dive into more, our perhaps unhappy friends in Tier Two. You mentioned the EU isn't thrilled that some of its members are in Tier One and some of its members are in Tier Two. Other folks have expressed a lack of support for the diffusion rule.
What's been the response among the international community, with a focus on those Tier Two members, about this diffusion rule? How mad are the folks who are knocking on the door of Tier One?
Janet Egan: Great question. I think there's two aspects to this. There's the perception and the diplomacy aspect, and then there's the practical aspect.
And let's take the second one first. Practically, this unlocks a lot more trade in the near term and doesn't impact the vast majority of companies or countries in the immediate term. So from a practical perspective, suddenly you can get up to 1,700 of the most advanced GPUs—even to Tier Two countries—without having a license.
It's more on the diplomatic side that there's been a lot of outcry. I think because it's seen to be a diplomatic affront to be less trusted or seen to be less than. And moving forward to sort of the Trump administration's approach, we've seen, you know, historically more of a deals based orientation from the previous Trump administration. And I wouldn't be surprised if membership between sort of Tier One and Tier Two groupings can move depending on country to country agreements and bilateral conversations.
So I think the third tier of countries that are completely restricted, we won't see that much movement there. But between the other two, I think it'll depend on bilateral discussions that incorporate broader trade agenda issues and also look at what those countries are willing to do to step up their own domestic controls and oversight measures to give the U.S. some assurance that there won't be chip and technology diversion to China.
Kevin Frazier: Lennart, anything stand out to you among the reaction from members of the international community or industry? You mentioned NVIDIA stock is doing well, but NVIDIA obviously would love it if the Chinese market was as open as possible. So how are some of these key industry stakeholders responding?
Lennart Heim: Yeah, maybe let me start with the international response.
Like, I kind of hate to be the guy, but just the numbers matter here. To some degree, it sounds like we're controlling all of the world, but here are the exemptions, here are the numbers. And I would claim for the majority of the world, just like, everything continues as usual. It's all fine, guys. You never planned to build a cluster with a hundred thousand chips, where you'd need to spend billions to do it.
Right. And again, you do kind of, you know, undermine the right of doing it.
And again, I get it gives bad vibes or something, but practically speaking, when we look at computing power—AI computing power around the world—most of it sits in the U.S., most of it is majority owned by the U.S. big hyperscalers. Famously, there's a book called “Cloud Empires,” because what do they do? They deploy around the world, right? It's not the case that most governments within Europe, like, use their own sovereign cloud—to use what exactly? Microsoft Office. Microsoft Outlook. It's again, just speaking practically here: they're already dominating. They are already deploying all around the world on these kinds of things.
And U.S. providers, Tier One providers, they have the easiest way to deploy around—all around the world. Right. And this is, to some degree, a key mechanism you want to do here, which is clearly in U.S. interest, right? I have more trust in Microsoft building a data center in the UAE than a sovereign producer there; they would have more leeway and they can do this more easily. That's clearly in the U.S. interest in that case, right?
Of course, you undermine some rights. You don't, say, get chips anymore; you only get the computing power. And maybe there are more strings attached to these kinds of things. Because again, you can do more there.
Speaking to industry pushback. I find it hard to currently read into this, right? There's always been industry pushback on export control since the beginning. It was just never publicly such a big thing. I, I have a couple of hypotheses. Why is it a big thing? First of all, well, the rule leaked at some point. So people saw the rule and—
Kevin Frazier: Never a good thing when your AI diffusion rule leaks, right? That's just, that shouldn't happen.
Lennart Heim: It's already diffusing. And I guess this just led people to read it and be like, well, now we can come out, we can say something. But I think the most important factor here is actually there's a change of administration.
You got a shot at changing things, right? Would you have screamed at the October 2022 export controls loudly when you knew these people were going to be in charge for another two years? Now you've basically got a shot. It's this new incoming admin. You're trying to prime them and tell them, here's what we actually think about these kinds of things.
And you also see companies, for example, Microsoft, at least coming out neutrally. Brad Smith said he feels confident he can fully comply with this. Right, I'd be saying, like, Brad, this is a fantastic deal. The U.S. government just told you to go around the world and build your data centers everywhere. You're the chosen one. You know, you got your other friends, you got AWS and Google Cloud, they can also do so.
So a criticism you can make is, actually, for new entrants, for smaller providers, this will be harder to fulfill, because we got these security requirements, right? If I want to run my cloud business and assume I got the cash to build 50,000 chips and more, I need to fulfill these security requirements, which are not trivial, right?
But for the big providers, they all already fulfill them. They might do a little bit more for the model security requirements, and AI developers need to do it, right?
So, yeah, so much about the industry pushback and the criticism there. And I think we see it now. And I think initially the debate was really one-sided, and I think now we have at least more think tanks, more nonprofit people coming out there, and, yeah, let's see what eventually lands.
Kevin Frazier: Janet, we have Lennart taking a very empirical, pragmatic approach to the future of this rule and, more generally, export controls. What are some of the questions that are top of mind for you, or things you're going to be keeping an eye on in the future, in the next weeks and months as we go through this rapid administration change?
Janet Egan: There's two that come to mind immediately. The first one is, and to push back a little bit against the pragmatic, practical approach Lennart puts forward.
Kevin Frazier: Push, push away. Go for it.
Janet Egan: The perception matters. The perception matters. You have other sovereign countries being told by the U.S., we get to determine the future of AI and this is how we'll do it. Countries do that, but they normally do it in private when they have control over a supply chain. And I think this matters and I think something I'm interested in is how the Trump administration can talk to close allies and partners and Tier Two members to get them more on board and to get them to understand that there's shared interests at play here. And there's room to take this more in an international sense.
The second thing that comes to mind for me is enforcement, enforcement, enforcement. So it's no secret that BIS has been reported to be chronically underfunded and already they're having difficulty keeping up with their ever-expanding mission of export control administration, compliance, and enforcement. We've just broadened the logistics and the remit of that organization immensely, and it doesn't make any sense to do this if it's not supported by new technologies or processes or resourcing that can do that.
So a question for me is to see whether this becomes a candidate for the Department of Government Efficiency to take a closer look, and to—rather than using more of the old traditional, this is how BIS has always done things approach—to think about what are we trying to achieve here? And what's the most practical and effective way of going about that? So there are two things I'll be watching in the new administration.
Kevin Frazier: Elon, Vivek, you've got an assignment here at DOGE, we need your eyes on the BIS. Lennart, what are you keeping an eye on?
Lennart Heim: I mean, yeah, let's use AI to run AI policy or something. Maybe, maybe that's how it works. Maybe that's exactly how it goes wrong. TBD, right?
What am I—yeah. I think, Janet, like what she was saying with the Tier Two—and I'm also making this criticism in my papers—like, maybe Tier One is a bit too rigid, and it would be nice, it would be nice if you say, like, hey guys, here are the conditions where you can join the club of the cool kids, you know?
So just having that be more clear, as opposed to what I think the diffusion rule right now leads to, which is: there you are and you're going to stay there. And that's historically how export controls worked, and we know we're entering a way more dynamic regime here, and this new dynamic regime is being done by export controls, right?
And, again, highlighting the key conviction here—the diffusion framework is done by export controls, right. And this just speaks also to the tools which the government has available to do these kinds of things. And, funnily enough, I'd be claiming BIS is our AI agency out there. Not the one we want, but it's the one we got, and we could expand way beyond there. Right. There are many things which need to be fixed there in many ways. So BIS can be improved on this one.
Another thing which is important here, because it's an export control framework, it does not touch on domestic issues. If I transfer a chip within the U.S., it's not an export. So, these conditions do not apply here.
So, model weight security, security of data centers is critical for a bunch of AI stuff, right. So we now say, if you build data centers anywhere else outside the U.S., you need to meet the conditions, but if you build in the U.S., oh, whoopsie there, we, we don't have the mechanism for this one here, right.
So, as a minimum, we should keep the standard high. I think it's fair to say that most data centers fulfill similar requirements, but I think for the model weight security requirements, we don't currently have them in the U.S., to my understanding. And companies would be welcome to correct me on this, with ideally public statements.
This also needs to be done domestically, because most models are being trained here, are being deployed here. So it's the most important place to actually keep them secure. And, and when we talk about, for example, the threat model here, that somebody steals the model weights—surely a threat model is they walk into the data center and they pick the right hard drive.
And if they steal it, again, it would probably be a bit harder than how I just described it. But the most common way stuff gets stolen online, it's not by somebody walking in somewhere—it's just digital access means. Like, let's just call it the normal way of hacking. And U.S. providers are still vulnerable to these kinds of things. And there's a lot which needs to be done here.
Kevin Frazier: All right. Well, we certainly have a lot of topics to cover looking prospectively, so I look forward to bringing you all back on at some point. Maybe we'll also be talking to Elon and Vivek at that point to understand their investigation of the BIS, but for now we'll have to leave it there. Thank you all again for coming on.
Lennart Heim: Thanks so much.
Janet Egan: Thank you.
Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.
Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org.
The podcast is edited by Jen Patja, and your audio engineer this episode was Cara Shillen of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.