
Lawfare Daily: Kevin Xu on the State of the AI Arms Race Between the U.S. and China

Kevin Frazier, Kevin Xu, Jen Patja
Monday, December 9, 2024, 8:00 AM
What are China's AI ambitions? 

Published by The Lawfare Institute
in Cooperation With
Brookings

Kevin Xu, founder of Interconnected Capital and author of the Interconnected newsletter, joins Kevin Frazier, Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin and a Tarbell Fellow at Lawfare, to analyze China’s AI ambitions, its current AI capacities, and the likely effect of updated export controls on the nation’s AI efforts. The two pay particular attention to the different AI development strategies being deployed by the U.S. and China and how those differences reflect the AI priorities of the respective nations.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Ask Us Anything Ad]

Anna Hickey: Every news alert in 2024 seemed to bring new questions, but fear not, because Lawfare has answers. It's time for our annual Ask Us Anything podcast, an opportunity for you to ask Lawfare this year's most burning questions. You can submit your question by leaving a voicemail at (202) 743-5831, or by sending a recording of yourself asking your question to askusanythinglawfare@gmail.com.

[Intro]

Kevin Xu: The newest round of export controls is very much focused on the next layer of hardware capability that needs to be restricted, from the U.S. perspective. And this is both the actual kind of selling of the end product, the memory, but also a lot of the equipment that would go into possibly allowing China to produce it themselves.

Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, senior research fellow in the Constitutional Studies Program at the University of Texas at Austin and a Tarbell fellow at Lawfare, joined by Kevin Xu, author and founder of the Interconnected Newsletter.

Kevin Xu: There has been, probably to your point, Kevin, an overemphasis on keeping China from having the stuff that we have, as opposed to spending more time thinking about investing in our own industrial base to build more of the stuff that we need and want here on our own soil.

Kevin Frazier: Today, we're talking about the state of the purported AI arms race between China and the U.S. following the recent announcement of updated export controls.

[Main Podcast]

There's a lot of myths and truths out there about China's AI ambitions and its AI capacity. So I think before we dive into some updates about what we're seeing the Army say, what we're seeing the Air Force say about their concerns about China's use of AI, can you give us a sense of your understanding of China's current AI capacity?

Let's save our conversation about China's AI goals for a second. Let's focus first on what are its actual capacities when we think about the sophistication of its models, vis-a-vis, let's say, kind of, our leading frontier models here in the U.S., when we're thinking about OpenAI or Anthropic or something like that?

Kevin Xu: So I think it's helpful for folks to understand a little bit of the historical context of AI in general, right, between the U.S. and China. There is the pre-ChatGPT, pre-generative AI era, when AI was already very much a hot topic. And then, you know, we are now in the post-ChatGPT, post-gen AI world. So I'll kind of separate those two things real quick, right?

So pre-ChatGPT, China had already been advancing in what we now call traditional AI, or traditional machine learning, in a very meaningful way in terms of using big data to do predictive analysis for all sorts of, you know, consumer products, whether it's fintech, whether it's e-commerce. And that has very much seeped into its kind of manufacturing and industrialization base as well, when it comes to robotics, when it comes to, you know, supply chain management.

And these are all things that, one, generate a lot of data. And two, the lifeblood of any AI capability, or capacity, as you call it, still very much relies on both the quantity and the quality of data, which, again, pre-ChatGPT, China already had a lot of, and so did the U.S., right? We weren't necessarily behind in a lot of ways. It's just that these two superpowers had been coming close to neck and neck when it comes to traditional AI applications, if you will, pre-ChatGPT.

Now we fast forward to post-ChatGPT. That really, I think, was a big shot in the arm for everybody who works in technology, because for people who don't follow the GPT or the OpenAI journey, it came out of nowhere, essentially, right? And actually a lot of that, I think, has to do with COVID as well. The most meaningful GPT model for me personally, having, you know, worked in technology for a little while now, was actually GPT-3, which was launched in the middle of 2020, in the middle of COVID, when, you know, everyone was still confused about whether the world was going to end or not. But GPT-3 came out with already very capable coding abilities.

And a lot of programmers and, you know, nerds like myself were already looking at the auto-complete ability of GPT-3, a model which actually morphed into the first meaningful AI application, which is GitHub Copilot, the coding assistant product that was launched, actually, technically a few months before ChatGPT came into the world, right?

And I think when ChatGPT came on the scene, a lot of technologists in China were also caught by surprise. As much as they were doing traditional, you know, AI, quote unquote, they were completely kind of flabbergasted by the ability of generative AI to do this. So there was this huge rush, like kind of a gold rush, of VC funding, of state-backed venture funding, into a bunch of AI model companies, both startups, which we can talk about a few of, but also within the large technology companies in China, like Alibaba, you know, Tencent, Baidu, what have you. All of their internal AI divisions have started making, you know, AI models, right?

There's this term called the Hundred Model War in China in the last couple of years, where everybody and their grandma is making an AI model. Everybody knows it's not sustainable, but everyone's rushing towards this next moment of technology inflection. Which, again, has a lot of mirror images in Silicon Valley as well. There are all kinds of AI model companies that were funded in the last couple of years, and a lot of them are being kind of absorbed.

So, you know, when you talk about AI capacity, I think there are a lot of weaknesses to the Chinese system when it comes to making generative AI applications in particular, which we can dive into, but there are also a few advantages that China can use to its own liking if it wants to kind of advance past the U.S. But right now my personal estimation is that China, in general, is still like a year to maybe a year and a half behind state-of-the-art models in the U.S., but that gap is kind of always shifting depending on which day you ask the question.

Kevin Frazier: And I think it's important for folks to know that Stanford recently released a report suggesting that the U.S. was indeed ahead of China with respect to AI capacity. China was number two, the UK was number three. And they said, you know, it's still a competitive space with respect to all of these different nations.

And I think, as you point out, Kevin, one myth I'd love to tackle right off the bat is that you hear some people speculate that the only thing going on in China from an AI perspective is stealing Meta's Llama model, and they're not even stealing it because it's open source. But there's this narrative out there that China is just relying on Meta's Llama model, which is open source, as opposed to something like Claude, which is offered by Anthropic and is a closed model. Can you help listeners break down that myth, or perhaps there is some truth to it? How much is China relying right now on the U.S. for making progress on AI?

Kevin Xu: I will call that a half myth. The part of it that is true is that Meta has released probably the most capable model in the open world, right? Though technically we call them open-weights models, not open-source models. And I know that's a bit of a semantic difference. But at least for a lot of people in the open-source world like myself, whatever Meta has opened is not what we consider open source, because there are three kind of major components to any model.

So, you have the weights themselves, which are basically kind of the probability estimates mapping any given input to an output in the model. And then you have the actual data that went into the training of the model. And then you also have the code, the mathematical functions that make up the architecture of the model, right? And so far, most of the, quote unquote, open models have only opened the weights, not the architecture slash code, nor the training data, the input data used to train the model. And those two things are arguably way more important than the end result of the weights themselves.
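To make that three-part distinction concrete, here is a minimal PyTorch sketch, using a hypothetical toy model rather than any real release, of the three artifacts being described:

import torch
import torch.nn as nn

# 1. The code / architecture: the mathematical functions that make up the model.
class TinyLM(nn.Module):
    def __init__(self, vocab_size=256, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        return self.head(self.embed(tokens))

# 2. The training data: the corpus the weights are learned from. Frontier
#    "open" releases almost never publish this part. (Toy stand-in here.)
data = torch.randint(0, 256, (4, 16))

# 3. The weights: the learned parameters, i.e., the probability estimates
#    mapping inputs to outputs. A file like this is the one artifact an
#    "open-weights" release actually ships.
model = TinyLM()
torch.save(model.state_dict(), "weights.pt")

# Anyone holding only the weights file can run the model just by
# reconstructing the architecture; the training data stays private either way.
clone = TinyLM()
clone.load_state_dict(torch.load("weights.pt"))

In other words, of the three components, an open-weights release publishes only the last one, which is the crux of the open-weights versus open-source distinction.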

Now, with that said, I do think a lot of Chinese companies have been very quick to deconstruct Llama every time a new version has been opened, right? And this notion of stealing is a bit of, sort of, patting ourselves on the back from the U.S. perspective, thinking, one, it's worth stealing, and two, the act is stealing, when the information is completely open.

And that's very much a value norm within the open-source community globally: when you open source something, it's there for the taking, right? There's no, that's my homework you're stealing to ace your tests using my open-source code, so to speak. Everybody is just taking whatever is open to iterate on, to build upon, to use for your own good if you want to, but also to build on top of each other. That is kind of the most healthy, positive way of understanding open source as a technology-building process, or development model, if you will, right?

So that's what I think is sort of the half-myth part. And the other half, and this is something that's a bit more newsy perhaps, is that over Thanksgiving weekend, or over Thanksgiving week rather, we have seen two Chinese AI companies come up with reasoning models. This is not matching Llama anymore. This is matching OpenAI's o1 thinking model, right? This is sort of the next stage of AI model advancement, if you will. They have matched, or come close to matching, the capability of the o1 model. And o1 is not open source; it's very much close-hold, a black box.

Yet, given whatever we know about o1, two Chinese companies have been able to kind of come close to replicating its capability. One is DeepSeek, which is actually a hedge fund that is doing AI model building. The other one is Alibaba. And they're actually both planning on open sourcing, or open-weighting, their o1 equivalents. So they are even going a step further, as far as giving the best stuff they have away for free into the open, versus kind of the state-of-the-art model development at OpenAI or Anthropic or Meta or what have you.

So the dynamic is changing literally week to week. And it's very fascinating to watch this competition or race or however you want to frame it.

Kevin Frazier: What's startling too is seeing the discrepancy in the approaches used by China and the U.S. with respect to leaning more into open source models, which we're seeing in China, versus the U.S. seemingly wanting to champion and embrace a closed approach.

So if we look at the National Security Memo, for example, that was issued by the Biden administration a couple months ago, there was a big emphasis on making sure the various defense agencies were capturing and helping cultivate the latest models offered by OpenAI or Anthropic and helping encourage that innovation from firms that have traditionally been producing closed models. And a big rationale has been, we want to make sure no adversary, in particular, China, can access that same technology.

What's the rationale behind China taking the alternate approach, really trying to champion, as you pointed out, grandmas fighting grandmas over the latest model, seeing all of these hundred different models fight one another for dominance in the open-source space? Why is China leaning into that approach, given its broad ambitions, which we can talk about in a second?

Kevin Xu: So historically, China has had a pretty long relationship with open source, dating all the way back to the early 2000s. A lot of its companies embraced open source at that time already, very much as a way to kind of catch up quickly, right, given its kind of lagging, you know, position vis-a-vis U.S. technology in general. You had some of the largest companies, like Alibaba and others, literally building their infrastructure stack on open-source technology at the time, to wean themselves off of paying for the proprietary equivalents from U.S. technology vendors. And I will say at that point, it was very much this classic kind of taking-and-building-to-catch-up relationship when it comes to open source.

Now, fast forward to probably around, I would say, four years ago: open source as a term has become a fairly key component in China's kind of technology slash industrial policy overall. One document that was interesting, from about three or four years ago, came from the Ministry of Industry and Information Technology, MIIT, which is one of the kind of governing bodies in China that regulates much of this technology industry. It fused the term open source into one of its policy documents, and one of the goals, or pieces of guidance, it gave to the industry is that it hopes China will be able to produce two to three open-source projects of global recognition by the year 2025, right? Which is, you know, around the corner. And this was, again, all before, you know, ChatGPT, generative AI, et cetera.

So open source as a paradigm has been embraced not only as a way to catch up, but perhaps, and this is me speculating, also as a way to project China's technology power. Almost in a weird soft-power sort of way: we're very okay with producing and making technology in the open, giving it away to the world, letting everybody and anybody who wants to use a database or an AI model or some random data engineering, you know, library that you need to build your cloud infrastructure, use an open-source package that just happened to be made by a Chinese, you know, organization or company when it started, right? So that has been a positioning for a while.

And in the AI context, more recently, I think when the foreign minister of China visited the UN during UNGA, they were talking about kind of AI policy and what AI means for China's foreign policy. I think open source was also mentioned as a component of how China wants to be a, quote unquote, leader in the AI conversation in the world and promote AI's adoption, particularly in the Global South, right?

So it's a very interesting contrast, as you mentioned, Kevin, between the way open source has been used as a piece of foreign policy, or domestic industrial policy versus how the U.S. is currently viewing it, which is a bit more kind of close hold.

Kevin Frazier: And we're talking on December 2nd, which is the day that the Bureau of Industry and Security announced its updated export controls here in the U.S., trying to diminish Chinese access to key components in developing and further researching AI models. And a lot of that hinges on the idea that China is following a similar strategic approach as the U.S. with respect to developing and advancing its AI capacities.

So we're seeing attempts to diminish China's access to certain chips, and to enlarge the number of blacklisted companies that can't receive certain chips from various U.S. manufacturers and U.S. allies.

Now that these export controls are being updated and expanded, do you see them as being very efficacious with respect to trying to slow China's development of AI?

Kevin Xu: So obviously, Kevin, as you mentioned, we're recording on the day when the export controls, or the new round of them, dropped, right? So all of us who follow this topic closely are literally still going through the, you know, hundreds of pages of legalese and conditions and exceptions as we speak. So…

Kevin Frazier:  Kevin, you know, I'm a professor, so I have to give tough homework assignments. I'm sorry. I cold call people and you're getting cold called, you know, it's just, it's happening.

Kevin Xu: So to respond to your cold-call question, based on what I've seen so far, I think both the previous round, which was in October 2023, and this current round that we're, you know, talking about right now have very much been focused on limiting the hardware, and the ability to produce the kind of hardware, that China needs to advance its generative AI capacity or capability vis-a-vis the U.S.

And one of the most notable additions in today's, or the most recent, update is high-bandwidth memory, right? I think anyone who follows the hardware conversation around AI knows that high-bandwidth memory, in addition to very powerful GPUs from NVIDIA or AMD, is a key ingredient in producing the kind of computation capability needed to train larger and larger models. The assumption still is that as the computation scales, as the size of the model scales, so will the capability of the model, proportionally.
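As a point of reference, the scaling assumption being described here is usually stated as an empirical power law. A common form, following Kaplan et al. (2020), relates a model's test loss L to its training compute C, where the constant C_c and the exponent are fitted values rather than anything fundamental:

L(C) \approx \left( \frac{C_c}{C} \right)^{\alpha_C}, \qquad \alpha_C \approx 0.05

Lower loss roughly tracks higher capability, so more compute keeps helping, but with diminishing returns, which is part of why the debate mentioned next remains unsettled.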

This is becoming a bit of a hotly discussed topic in the AI world right now, so we won't get into who's right and who's wrong, because frankly, nobody knows. But I do think that the newest round of export controls is very much focused on the next layer of hardware capability that needs to be restricted, from the U.S. perspective.

And this is both the actual kind of selling of the end product, the memory, but also a lot of the equipment that would go into possibly allowing China to produce it themselves. And we're getting into a lot of interesting territory as well, because we've seen Chinese chip fabs being able to, say, produce a 7-nanometer-class chip, despite the assumption that the previous round of export controls was more than enough to keep them from being able to do so, right?

And if you dig into the details of this, like, lithography equipment, you also find that older-generation lithography equipment, which China was still able to either procure or keep from, you know, before the export controls, can be used in kind of creative multi-patterning ways to still carve out more advanced or smaller chips, but with poor yield.

Right? So the baseline of the conversation is: they can make it; it's just a matter of how much they can make within the time frame they need to be able to scale up this capability. And now we're going to come up with another round of sanctions that will keep them from being able to do that in the future, right? And I think all this is also based on the assumption that the transformer model, right, the "T" in GPT, is the answer to AGI.

And if it is, then I do think, and I have said this publicly before, that our current kind of set of U.S. export control measures has been tough enough, and I think to your question, efficacious enough to keep China, kind of, reasonably behind, depending on how you define reasonable, whether it's a time frame or, you know, versioning or whatever, so that we have this, like, maybe sizable, maybe not comfortable, maybe sometimes uncomfortable lead to keep going with this AI race, right?

And I don't think the assumption should ever be that with these export controls, China will just actually lie down flat and give up. I don't think that's ever been the goal of these policies. Maybe some people had hoped that would happen. That's certainly not happening, given everything that's happening with Huawei and whatnot. But I think that is where we're going. So that is kind of the current understanding of the export controls.

And I will say one last point on this, you know, late-breaking topic, which is that I do think the kind of global alliance on export controls is falling apart a little bit. We all know that there's a lot of important equipment being made in Japan, in the Netherlands, in other parts of Europe, that is very much in the chip-making supply chain. And previous rounds of export controls had decent enough traction with allied governments that we were all going to stand in the same line, so to speak, when it comes to executing them.

Because if, you know, the U.S. is the only one doing this, it really doesn't matter, right? And right now, based on reporting so far, it hasn't looked like the Japanese government or the Dutch government is going to kind of follow step-by-step with what the U.S. government has released so far. Maybe that will change in the coming weeks. But if that alliance does not remain as lockstep as it used to be, then I do think the export controls, as they're intended, will become less effective over time.

Kevin Frazier: And Kevin, just to really zero in on that: would you say that part of the reason we may see diminished willingness to step in line with the U.S.-based export controls is just the fact that China is a huge market? And if you can tap into that market, that's a lot of dollars on the table. If you're producing, let's say in Japan or in South Korea, if you want to expand your market right next door, it's hard to follow the party line set by the U.S. Is that a good understanding or are there other factors?

Kevin Xu: I think the profit motive and the size of the market is one part of the equation. I think the other part of the equation is the national security justification for all these export controls overall, where the U.S. has kind of placed on itself the burden of proof, if you will, as the party that has to really tell the rest of the world how important these sets of restrictions are to their own security.

Like, how does, how could this benefit Japan's national security? How could this benefit, you know, Europe's national or, you know, continental security or what have you? I think that has been probably a little bit of a tougher conversation.

And if you kind of project forward into a Trump administration, where the definition of international alliance is very different, shall we say, from the Biden administration, then no matter how longstanding a relationship we've had with a country, or how well it has been able to follow along so far in terms of export controls, I think this is one of the pieces of the puzzle, or perhaps one of the chips on the bargaining table, that these countries will want to keep in hand when it comes to negotiating tariffs or negotiating some other sorts of things with the Trump administration, before just kind of following along in lockstep with the United States, whether they sign up to the national security argument or not, all while being lobbied by their own companies.

And some of those are their own national champions. These are not some itty-bitty little, you know, small business associations lobbying them. These are the most important technology companies in their respective countries, whether it's Tokyo Electron, whether it's ASML. They're very powerful, and they ought to be a very powerful voice in the way any government makes its policy. And of course, their motive, their profit-making motive, is very much geared towards being able to maintain at least some level of predictable access to China, which is a huge market.

I don't think these companies are completely kind of ignoring national security, you know, implications. I don't think they're exactly that naive. But it's gotta be a balancing act, as opposed to, every year, the stuff I just R&D'd five years ago that I was going to ship off to sell, all of a sudden I can't sell to, you know, a quarter of my market. And that is a very difficult way to do business, even under the best circumstances.

Kevin Frazier: Well, you don't have to get an MBA to agree with that statement. All right, that was a tough one. And I think you'll be pleased to know, from a cold-calling perspective, you definitely get a check plus, which is the five stars on my grade sheet.

Kevin Xu: That's something I never got in law school. So I appreciate that.

Kevin Frazier: Oh gosh. Yeah. You had all high honors. We know that. All HHs here.

So I'm keen to know on that national security angle, another key narrative is that China is racing ahead to militarize AI and to integrate its AI into military operations, into weapons systems. We see, for example, concerns raised by the secretary of the air force that AI is going to be a core component of China's potential invasion of Taiwan, as soon as 2027. We see a lot of national security experts raising concerns about the need for the U.S. to lead in AI because China is going to race ahead.

Recently, I've read about the fact that China seemingly isn't embracing the need for a human in the loop with respect to the use of AI in a lot of these weapons systems, thereby allowing them potentially to use AI in riskier situations or with greater speed.

So this whole narrative about China's emphasis on AI being a core component of its national security goals, where are we on the truth-myth scale there? Are we dealing with another half truth or are our pants on fire? What are your thoughts?

Kevin Xu: So first of all, I want to caveat by saying that I'm not a military expert. I spend more time doing investing, so I'm much more of the business tech guy, if you will. But of course, the military conversation is looming very large in the AI safety part of AI conversation.

One thing I do want to note is that during the very last meeting between Biden and Xi, on the sidelines of the APEC summit in Lima, the two leaders committed to keeping a human in the loop when it comes to launching nuclear weapons. So there's that. This is a very, I think, important but kind of underreported commitment that, frankly, on the one hand, feels very duh, common sense. Like, of course you should have a human in the loop before launching a nuclear weapon, right? Yet we are in this very interesting AI world where that may not be the case.

So, to put perhaps some existential fear aside for people who are listening, I do think both the United States and China and any well-meaning state actor with nuclear weapons will, or should, sign on to some sort of commitment that AI will not be the only component that decides the launching of a nuclear weapon. A human will be in the loop, right? So that's number one.

I think, number two, a lot of the AI conversation when it comes to militarization has been wrapped up in drones. And of course that is another application of AI, and it has been well underway, again, pre-ChatGPT, right? So this is not a generative AI kind of sui generis conversation, as far as I'm concerned.

China has, of course, also been leading in kind of the commercial, civilian drone industry with companies like DJI and a bunch of others, whereas the U.S. has been largely, actually, a consumer of these products, as opposed to also a maker of these products, right? Which is kind of bananas, if you think about it from our own national security perspective. And I know that a lot of people are waking up to this, and we have our own drone companies now, but they're still relying on Chinese components to make the drones. So there's a whole lot of supply chain that you need to untangle to completely kind of domesticate, or homegrow, your drone industry if it's used for the purpose of military applications, right?

And if you separate those two things, the only sort of real AI conversation in the context of the military that I've heard is, one, of course, the offensive side, right? Use drones to fight the wars that we would have usually sent humans to fight. Thus, fewer humans die, at least our own humans, but our enemy's humans could still die from the drones that we use. So, very dark, but you know, that is the AI conversation when it comes to offense.

The other side of the AI conversation, which is frankly a lot more benign but gets wrapped up into all this, is just a, hey, now we have all these really cool, you know, large language models, foundation models, that we can perhaps apply to better manage the supply chain and operational back-office processes of the Department of Defense, of the Army, of the Navy, or the equivalent on the Chinese side.

So the procurement and the operational side gets a little bit more efficient. That is just, you know, optimizing your back office, right? So that's not, in my mind, inherently military, but it's serving an organization that does military things. So I think before we kind of get too wrapped up by the militarization of AI, whether it's in China or the U.S., we should separate the end use cases first, and then think about what are the safety consequences, whether it's for a country or globally speaking that we need to be aware of.

Kevin Frazier: Speaking with your business tech guy hat on, I'm wondering if you were the AI czar, which we may soon have, according to recent news reports under the Trump administration.

If you were the AI czar, knowing that the transformer approach to AI development may not be the end-all, be-all with respect to winning the proverbial AI race, do you think that the sort of widespread embrace of competition we're seemingly seeing in China, trying to incentivize as many different companies as possible to develop as many different approaches to AI, is maybe the better long-term strategy with respect to achieving AGI? Or do you think having key identified national champions, let's say OpenAI or Anthropic, and really investing as much as possible in those companies, is a better tactic? Wearing your business investment and kind of tech acceleration hat.

Kevin Xu: I think if I were an AI czar for a day here in the United States, you know, with the pro-innovation hat on, it's an open question, right, whether transformer is the end-all, be-all model, even though there's a lot of capital being devoted to this direction already.

And I think, one, I would be very lightweight on the set of regulations that I would put on our best companies, whether it's Google, OpenAI, Microsoft, you know, the startups, what have you. And that's one of our key advantages, quite frankly, vis-a-vis China, where China's best companies, doesn't matter how innovative, doesn't matter how capable, doesn't matter how well funded they are, are living under a much more heavy-handed governing system overall, on all sorts of things.

You know, even for the most benign thing, releasing the next version of a new chatbot with a new model I just came up with that has, like, 30 percent incremental performance gains, but it's now a different version, I now need to go through the regulatory system to get some kind of approval, or at least registration, at least some kind of hurdle, right, to even release this product to the consumer, to get some feedback, right?

And under that environment, ChatGPT would have never been released in the first place. ChatGPT could only happen in the way that it did because it lived in the United States, or it started in the United States. It couldn't even have done what it did in Europe, right? Or other sort of more like-minded regimes, if you will. So that's kind of point number one. Because if the transformer is not the end answer, I would like, you know, U.S. companies to figure out what the next thing is, the thing that is not the transformer, first, right? And see if that will move the needle forward.

Now, that being said, I do think AI safety is a legitimate concern. And one template that I would, as the AI czar, kind of replicate or scale out is, I think, what Anthropic is doing with our nuclear, kind of, energy commission here in the United States, which is that every time Anthropic comes up with a model, they're going to work first with these very important, key regulatory agencies that hold the keys to these world-destroying weapons or capabilities, to have a very close red-teaming relationship.

You know, let these experts test out the new model first to make sure that it isn't inadvertently helping create a nuclear weapon, which is what everyone's fearing, right? You can just get an instance of Claude and now you can create a nuclear weapon, which is kind of silly to begin with, but that's what people are afraid of. Then let's remove that risk once and for all, with the right experts within the government to do so.

And I would suggest, frankly, that every country do that for its own sake before releasing the next model, making sure all these kind of existential, if low-probability, risks are removed first, before we release it to the consumer to do more positive, more productive, more interesting things with AI. That's sort of the way I would think about it.

Kevin Frazier: Pretty strong pitch for AI czar, Kevin. I gotta say, that wasn't bad. So when we're thinking about some of the other myths or key things you think more folks should know and understand about China's AI capacity and ambitions, are there any outstanding myths or truths that you'd really want folks to know about?

Kevin Xu: I think one thing I would mention, well, maybe two things, right? One is just that Chinese open source, as a force, is becoming more and more recognized around the world. You know, the U.S. used to dominate open-source contribution. When I was working at GitHub and working with open-source foundations, the U.S. always dominated the chart when it comes to the amount of contribution. And we still do, but China has been catching up very quickly. It is, by most measures, number two in the world; I think Germany and the UK kind of battle it out for number three, depending on the project, right?

And that is, I think, a net good thing, for China to actually produce and give more, and not just take more. And I think there is a good way to encourage that without wrapping it into too many military or kind of national security, geopolitically related conversations, because for the most part, code is just code: a very utilitarian tool to build things, right? And China is becoming much more of a contributor, not just a taker, of that.

And the second thing I would mention, and this is more of an observation than an opinion: there is actually no AI institute or AI safety institute in China, as far as I know. You know, we are standing one up in the U.S., and, you know, Japan, Singapore, South Korea, a lot of countries are standing up these AISIs, right? We had a summit of sorts of these institutes in San Francisco not long ago. China was not present, as far as I could tell.

And for whatever reason, China as a country actually does not have a singular body that, quote unquote, does AI safety, and that's a very interesting phenomenon, a way to kind of peer into what the Chinese leadership actually thinks about the danger part of AI versus the productive part of AI. There's a lot of turf battling within different government agencies right now to be the owner of a Chinese AI safety institute, but that's still, like, a TBD kind of development. And I think that's very instructive, in terms of suggesting that perhaps, for whatever reason, the Chinese government doesn't think AI safety is nearly as high a priority, among all the priorities it has to tackle, as we think it is over here, or as we think it should be over there.

And that's something that we should think about when it comes to the overall global conversation of AI safety. It's not to say China doesn't care about AI safety, but as far as the positioning is concerned, it's still something that they're trying to figure out what they should do domestically, and how they want to project those views to the rest of the world.

Kevin Frazier: One final question with respect to the ambitions of both China and the U.S. I think there are some concerns, and count me among those who have this concern, that the perpetuation of the idea that we are engaged in an AI arms race might distract us from other potential AI use cases, right? We've seen that the creation of an arms race, like in the Cold War, can lead to important advances in technology that later have positive downstream consumer impacts. You know, that's how we got the microwave. That's how we got the computer, arguably. You can point to all these positives as a result of competition in innovation.

But do you think there's any merit to the idea that by perpetuating this contest between two superpowers, we may experience a delay in the realization of more productive AI use cases or societally beneficial AI use cases?

Kevin Xu: I think it's mostly positive, from my perspective. I think whether you take the race, or the competition, literally slash seriously or not, it has, in the United States, first of all, really revamped the conversation around energy sustainability and security, right? Like, nuclear power is making a comeback. There's a lot more embrace of nuclear technology overall for civilian use cases, and that has very much been powered by the need for the AI infrastructure build-out, because of the power hungriness of the GPUs that need to go into it. And that could probably also be a good catalyst to upgrade our smart grid overall, our entire energy infrastructure. So I think, net, that's probably a good thing.

Now, when it comes to distraction, and I don't know if I can tag this to the AI competition per se, but I do think maybe an over-obsessiveness with keeping China away from getting our chip capabilities, whether it's equipment or software or, you know, the actual product, has gotten a little too much attention. We're talking about export controls today.

But you know what also happened today is the resignation, or, some actually rumor to say, the push-out, of Pat Gelsinger, who was the CEO of Intel, right? So he resigned over the Thanksgiving holiday as well. And that is just, like, more salt on the wound of, really, a national champion in the United States that has really not gotten its stuff together and is continuing to flounder, in my opinion.

And if we don't build up that capacity to just make our own chips, and to use that as a way to boost other parts of our industrial capacity when it comes to making electric vehicles or batteries or all sorts of other stuff, then I think there has probably been, to your point, Kevin, an overemphasis on keeping China from having the stuff that we have, as opposed to spending more time thinking about investing in our own industrial base to build more of the stuff that we need and want here on our own soil.

Kevin Frazier: Wow, well, you've certainly given us a lot of things to keep track of. I think the implementation of this latest round of export controls will be fascinating to watch. Who may be appointed as AI czar, of course, will give us a big sign about where AI policy may be headed under the next administration. And of course, keeping our eyes on China and its own developments.

A job for many people, including you, Kevin. So get ready for being cold called once again, at some point, but for now, we're going to have to leave it there and have you on down the road.

Kevin Xu: Thank you so much for having me.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Noam Osband of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.


Kevin Frazier is an Assistant Professor at St. Thomas University College of Law and Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin. He is writing for Lawfare as a Tarbell Fellow.
Kevin Xu is the author of Interconnected, a bilingual newsletter exploring the intersections of tech, business, money, geopolitics, and U.S.-China relations.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
