Lawfare Daily: What China Thinks of Military AI with Sam Bresnick
Published by The Lawfare Institute in Cooperation With Brookings
Many Pentagon officials and U.S. lawmakers likely lie awake at night wondering what Chinese leaders think about the use of artificial intelligence in war.
On today’s episode, Sam Bresnick, a Research Fellow at Georgetown’s Center for Security and Emerging Technology, joins Lawfare Managing Editor Tyler McBrien to begin to answer that very question and discuss his new report, “China’s Military AI Roadblocks: PRC Perspectives on Technological Challenges to Intelligentized Warfare.”
They discuss how Sam found and analyzed dozens of Chinese-language journal articles about AI and warfare, Beijing’s hopes for these new and emerging technologies, and what, in turn, keeps Chinese defense officials up at night as well.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://
Click the button below to view a transcript of this podcast. Please note that the transcript was auto-generated and may contain errors.
Transcript
[Introduction]
Sam Bresnick: We worry that China is catching up to us. In China, they worry that they are still behind us and that the gap may be widening. However, over the medium to long term, they hope that they can use these AI technologies to leapfrog the United States and come out on top militarily.
Tyler McBrien: It's the Lawfare Podcast, I'm Tyler McBrien, Managing Editor of Lawfare, with Sam Bresnick, a research fellow at Georgetown's Center for Security and Emerging Technology.
Sam Bresnick: There are also a lot of concerns about China's ability to develop and deploy AI enabled military systems because of difficulties developing individual parts of a whole AI kill chain.
Tyler McBrien: Today we're talking about Sam's new report, “China’s Military AI Roadblocks,” and what dozens of Chinese language journal articles reveal about how the People's Republic of China thinks about AI in warfare.
[Main Podcast]
So Sam, you just published this report: “China's Military AI Roadblocks.” But before we get into the meat of some of the findings and the methodology and how you did all of this, I want to explore a bit of the report's origins: why did you undertake this project, and what kind of questions were you trying to answer?
Sam Bresnick: Yeah, so the reason I undertook this report was because I was hearing two very interesting and, what I thought to be, under-substantiated claims about China's military AI capabilities. The first was that China is catching up to the United States, or even pulling ahead of the United States, in its ability to develop and deploy AI-enabled military capabilities.
And the second was that China doesn't really care about AI related risks as much as we do. And I would ask the people I met with, whether they're in the government, whether they're in the private sector: why exactly do you believe these two things? And I was never quite satisfied with the answers I got.
And so I thought it would be interesting, instead of speculating about these things, to go and read what Chinese defense experts are writing about these two questions themselves. What do Chinese military experts think about China's military AI capabilities? And what are they saying about China's take on AI-related risks?
And from that, I essentially decided to spend about six months reading through a very large set of journal articles, Chinese language journal articles, basically on the future of AI and warfare.
Tyler McBrien: That's a very admirable instinct to go straight to the source. But I'm also curious what answers you first got. What was some of this conventional wisdom surrounding Chinese military AI capabilities that didn't quite ring true, or that you wanted to pick apart, so to speak?
Sam Bresnick: So on the risk side, I think there's a lot of concern in the U.S. that because China feels like it needs to catch up in AI and military AI, they're willing to cut some corners on, say, testing and evaluation, or take more risks as they roll these things out, because they feel they need to catch up in order to be economically competitive and competitive in the military sphere. Essentially, what I couldn't get was any legitimate citation of official government documents or really authoritative scholarship that backed that up.
And then it's similar on the military AI side, right? You hear a lot of people in the U.S. --- let's call them the defense AI startup ecosystem --- saying, essentially, China is catching up and we should be very concerned that they will soon surpass us unless the Pentagon starts spending a lot more money and resources on AI.
And they might be right to a certain extent that the Pentagon needs to spend more on AI. But I, again, wasn't totally buying their assertions, because a lot of it was based on the fact that the Chinese military and government were spending a lot of money on AI, which is not the same as making a huge amount of progress.
And of course, there is a degree to which they are making progress. We see that. I wanted to get a better sense of: are they really closing this gap? Are they really jumping ahead of us? And given that's the million dollar question in the Pentagon, at least, this report tries to get at it from a certain angle, where I'm not trying to answer that full question, right?

Is China ahead of us, or really catching up with us? I'm trying to get a sense of what Chinese defense experts themselves think of that question. Does that make sense?
Tyler McBrien: Yeah, definitely. And you mentioned a second ago that the Chinese military, the Chinese government, has been spending quite a bit on AI technology.
I know that this report builds on past work that you've done and that Georgetown has done on this purchasing. So before we go any further, could you set the scene for what you knew coming into the project about Chinese military spending on AI technologies?
Sam Bresnick: Yeah, so those reports were published actually before I joined CSET and were written by CSET colleagues. What they found, essentially, was that China is spending a lot of money on military AI applications that the United States is also interested in, whether that's autonomous vehicles, decision support tools, or predictive maintenance and logistics; they're definitely interested in a lot of technologies similar to ones the Pentagon is interested in developing itself. And it's very hard, again, to put a dollar value on how much they're spending.

CSET has also done some work trying to look at China's total AI spending, because a few years ago a figure of $100 billion a year was being bandied about in Washington, again without citations. And we did some work and found that it was much closer, I think, to $10 billion. Again, there's a lot of hyping of the China threat, so to speak, going on in DC, especially related to AI.
And so this report, again, is interested in right-sizing that threat and trying to get a sense of: okay, yes, we understand China is a threat to some degree, but what can we say? How can we right-size that threat?
Tyler McBrien: So you have this idea, you want to get a sense of what Chinese defense experts are thinking. How do you go about finding these papers? Are they all publicly available? And I'm also curious how these journal articles may differ from a traditional American or Western European understanding of how a journal article is pitched, accepted, edited, published. What does that look like in the Chinese context?
Sam Bresnick: So that's a good question. I'm not an expert in the publishing standards and processes that go on in China. What I can tell you is that the majority of the journal articles I reviewed for this project were published in journals that are managed by the PLA or by players in the Chinese military industrial complex, whether that is CASC or CASIC, the big aerospace state-owned enterprises that do a lot of defense contracting; AVIC is another state-owned enterprise that manages a couple of these journals. And then there are military-affiliated universities, including the Seven Sons, which a lot of people focus on now in terms of looking at PLA research. I can't exactly speak to the differences in the publishing standards in those journals, but I can tell you that they are managed by the PLA or PLA-affiliated universities and big, important state-owned enterprises or other players in the defense industrial base.
Now, the way I got to these journals: the majority of them are very difficult to find on the internet. The way I accessed them was through CNKI, the China National Knowledge Infrastructure, which has millions and millions of Chinese-language papers published over the last 20 or 30 years.
And CSET has a subscription to a service that makes CNKI available. And so I was able to do a paired keyword search, looking for combinations of keywords in journal article abstracts and titles, that gave me a sample of about 250 papers published between January 1, 2020 and December 31, 2022.
And so after I got that sample of 250, I read all the titles and abstracts, narrowed down my sample to about 130 articles, and then from there read all of them and kept a detailed log covering the technological issues the authors noted China may be facing, their take on risks associated with military AI, and also, a little bit, the future technologies they want to develop going forward that could potentially obviate some of the issues they're facing now.
Tyler McBrien: Yeah, it sounds like half the battle was just amassing this corpus, which sounds like quite the undertaking. But as you were reading, what were the things that were immediately coming to the surface for you? You mention in the report that there are quite a few hopes but also quite a few fears, I think, emanating from these articles.
But I want to start with the hopes. You called some of these, quote, "exuberant expectations." So what do Chinese defense experts hope for from further investment in, and development and application of, military AI?
Sam Bresnick: Yeah, so the defense experts whose papers I read are generally very optimistic about the future of AI, whether that is the sort of medium- or longer-term application of AI in military contexts.
And they believe that if China is able to develop these technologies faster than the United States, China could leapfrog, or what they often refer to as "overtaking on the curve," which is a little bit of an awkward translation, but it's essentially this idea that if they beat the U.S. to the application of these technologies, China could overtake the United States militarily, right? And that these technologies would give them a big advantage.

Despite those hopes, right now there's a little bit of mirror imaging going on, because, as I mentioned before, there's a lot of concern in the United States that China is overtaking us or catching up to us in military AI. And what I can tell you from reading all of these articles is that the Chinese have the exact same concern, or an inverse concern, right? That China remains behind the U.S. in the development and application of AI-enabled military systems. Some of them even note that the gap between China and the U.S. is growing. However, over the medium to long term, there's a lot of hope that these technologies will allow China to realize a lot of military capabilities that it cannot right now. And we can get into those a little bit later, but I'll just foreground this with this sort of inverse concern: in the United States, we worry that China is catching up to us. In China, they worry that they are still behind us and that the gap may be widening.
However, over the medium to long term, they hope that they can use these AI technologies to leapfrog the United States and come out on top militarily.
Tyler McBrien: Now, I'm really glad you brought up this mirror imaging that you mentioned, because as I was reading some of the hopes and fears that came out of your research, I couldn't help but think, wow, a lot of this sounds quite familiar if you've been following the military AI debate in the U.S. as well. So it's such a great point, but I want to pick up on something you just said: what are some of these specific, as specific as you can be without getting too far into the weeds, military capabilities that could be enhanced or enabled by AI technology?
Sam Bresnick: The most well-known and famous application of AI in military contexts is this idea of lethal autonomous weapon systems, right?

It's what people call LAWS. And these can be autonomous vehicles, whether in the air, on the ground, on the surface of the sea, or undersea. But the applications actually go way beyond that. And a lot of the technologies that I cover in my report have to do with less well-known but also really important applications of AI in military contexts.
And these can be the gathering of intelligence, surveillance, and reconnaissance data, right? Where you're using AI to go through huge amounts of data and find patterns in various data sets that could inform targeting or other decisions. There's this decision support angle, whether you're using that data to suggest tactical moves in real time on the battlefield or to simulate future wars.
There are AI applications for missile targeting and defense. There's AI-enabled cyber warfare and cyber defense. And I could go on and on, but the last three I'll mention are, first, simulations and training: there's an idea that you can use AI to better train soldiers and officers to operate in certain contexts.
There's AI for logistics, right? To make your supply chains and your resupplying and refueling operations more efficient. And then there's predictive maintenance, which is actually a really important one, where militaries, China and the United States especially, I think, want to use AI to figure out when parts are on the verge of breaking or nearly broken, so that you can avoid breakdowns and replace those parts, which can save you money and time. All told, AI has a whole lot of uses in military contexts. And I should add, it's really just getting started. There are probably many more that will come up in the next few years.
Tyler McBrien: So the range of applications you just gave, I think, really run the gamut in terms of feasibility and time horizon.
So I think by way of foregrounding more discussion of that, I want to know your view of the state of play of military AI in China right now. What are current capabilities as far as we can tell, and current limitations? What's the current AI landscape in China, in the PLA?
Sam Bresnick: So to get at this, I'll talk a little bit about the thrust of China's recent military modernization.
And this is, according to high-level Chinese officials, including President Xi Jinping, occurring in three separate but often overlapping phases. These are known as mechanization, informatization, and intelligentization. Mechanization is the incorporation of advanced machinery, vehicles, and equipment into the PLA.
And this was supposed to have basically finished by 2020. Then the PLA was supposed to have moved on to focusing on informatization, which is integrating information systems, networks, and data into all aspects of military operations, spanning command and control, ISR, cyber, et cetera.
And then from informatization we're supposed to see a jump toward intelligentization, which is the incorporation of AI and related emerging technologies into military capabilities. As of now, they want to continue informatizing while intelligentizing.
And so AI comes in a lot of important areas. And one of them is in this military concept China has developed called systems destruction warfare. And this posits that prevailing in future wars will require a country, or China, I should say, to target an opponent's critical systems, such as communications, logistics, command and control systems, even aircraft carriers.
And the goal here is to incapacitate an adversary's ability to wage war rather than engage in attritional warfare, targeting each and every enemy platform, which is more akin to what we're seeing in Ukraine, for example. And so this systems destruction warfare concept is really central to how China thinks about fighting future wars.
And AI is crucial to this idea, because they want to use AI to identify weaknesses in adversary systems that they can then target. And they'll do this using another concept called multi-domain precision warfare, which, similar to the U.S. JADC2 concept, aims to interlink command and control, communications, computers, and ISR to coordinate firepower and expose the weaknesses in these systems. One other thing I should mention: China wants to use AI for cognitive warfare as well. That means influencing or manipulating how adversary populations or government and military officials think and act.
Large language model-fueled propaganda, mis- and disinformation, deepfakes, and other psyops would be an important part of such a campaign. My paper doesn't really go into that, but that is an important part of China's ideas for future AI warfare, and we could even say they might be using it now.
Tyler McBrien: And I think this is actually a good opportunity to transition to some of the fears. Everything you just mentioned, the Chinese are well aware, the Americans are well aware, could be used against an adversary, or an adversary could use it against you. So what fears and anxieties surfaced from some of these readings?
There's quite a robust conversation, at least in the United States, that I've been following about the potential dangers of military applications of AI. They range from the apocalyptic Skynet killer-robot scenario to more mundane, human misunderstandings that could occur.
So what were you interpreting from the journal articles?
Sam Bresnick: Yeah. So I split these concerns into two buckets, and I can talk about both of them. The first were concerns about AI risks stemming from the AI systems themselves being insufficiently explainable, controllable, reliable, or predictable, and maybe having biases.
And the Chinese scholars would often talk about these problems. These concepts are closely connected to what we call trustworthy or responsible AI, right? You want to be able to trust that an AI system is going to act or behave in the way it is designed to. Given the current state of the technology, a lot of the experts noted real concerns about the trustworthiness of AI systems, which would essentially create four or five different issues for these systems' applications in military contexts.
And so the first is that it's difficult to guarantee service member trust in AI systems when the technologies themselves are potentially insufficiently trustworthy. The second is that it's difficult to manage the risks of miscalculation and escalation if these systems are unreliable or even uncontrollable, right? Because if they act in a way that you can't control, if they do something you don't want them to do, you can have a serious accident that leads to escalation that could be either conventional or nuclear.
And the third: they have concerns about maintaining the security and integrity of AI-enabled military systems.

These systems can be hacked; they can be manipulated by adversaries. And then, moving on from the concerns about risks, there are also a lot of concerns about China's ability to develop and deploy AI-enabled military systems because of difficulties developing individual parts of a whole AI kill chain.
And the six that I focus on in this report are data, sensors, standards, testing and evaluation, networks --- cybersecurity of computer networks --- and then communications technologies.
And so, just starting with data: data is really important for artificial intelligence because the algorithms train on data, and you need to have a certain amount of it. You need to be able to manage it and organize it. And from what I read in these papers, a lot of these Chinese experts are worried that, essentially because China has not fought a war in 40 years, they have limited combat data on which to train AI systems. The PLA remains reliant on drills to generate data, which lack the scale and intensity of real battles, and that makes it difficult for AI systems to grasp the complexities of real future battles, let's say.
Then there are concerns about managing and organizing that data. A couple of papers noted that sometimes PLA data is not digitized but written on paper, and that it can be inaccurately labeled because of this. And then there's poor circulation of data among different PLA branches or units, which I think is an important point, because if one PLA unit has really good computer vision data from drones but another doesn't, and there's no sharing of that data, it can lead to bottlenecks in the development of these AI-enabled military systems.
Because of these data issues, or what I'll call training data issues, several of the defense experts whose papers I read noted concerns that these systems may be poorly generalizable or only useful in narrow applications. And then there are concerns about analyzing data in real time, right?
Even though a lot of these papers were published before the October 7, 2022 export controls on semiconductors, there is a lot of concern about China's ability to marshal enough compute to analyze the amount of data they would need to compete with the U.S. and make use of these AI systems, alongside difficulties integrating information from a range of sensors, including satellites and undersea sensors.
That sort of encapsulates the data picture.
Tyler McBrien: I want to return to one of my earlier questions, as you've given us such a good sense of the expectations, the hopes, some of these limitations, fears around trustworthiness, and then also technical limitations. Again, a lot of it does ring familiar.
So pardon the leading question, but just how different is the thinking among the experts you surveyed through their journal articles from that of U.S. experts, insofar as you follow them? What are the big differences that jumped out at you?
Sam Bresnick: I think in China there is a lot of concern because, essentially, they are a little bit newer to this world than the United States, and because the U.S. has been fighting wars essentially constantly since, as a lot of Chinese sources mention, the start of the war on terror, right? There's a lot of concern in China that the U.S. has a military data advantage, that the U.S. has superior sensors --- because we've been working on these issues, with drones, for instance, and satellites, for years and years, and have a head start --- and then, really importantly, and I think less observed, there are the PLA's concerns, or these Chinese defense experts' concerns, about standards and testing and evaluation, right?
So I'll get into this pretty briefly. You need military standards because you have a lot of different players in China's defense industrial complex developing military systems, whether AI-enabled or not. And if you don't have sufficient standards, it makes it more difficult to achieve inter-system interoperability, meaning for systems developed by different players in the defense industrial base to communicate and work with each other.
And so the defense experts whose journal articles I read noted a lot of concerns about developing sufficient military standards for what they call intelligentized military systems. This is something that pops up in American writings as well, and I think is a little bit similar, but my sense is that China is having a lot of difficulty with this right now. The same goes for testing and evaluation practices, right? You need sufficient testing and evaluation to make sure your AI-enabled military system is going to work the way it is intended to. And so the U.S. military spends a lot of time figuring out how to test and evaluate these systems.
And from what I could tell from these journal articles, in China they're really in the beginning stages of trying to figure out how to do this. They're somewhat new to it. They claim to lack standardized testing and evaluation procedures; they're worried about the high costs of testing; they're worried about the tests themselves causing safety issues. And then, again, there's this idea of the narrow applicability of some systems because of the difficulty of testing for a range of possible situations.
I would put those four areas, data, sensors, standards, and testing and evaluation, as very much related to issues the United States mentions. But it seems to me from this survey of Chinese arguments that China is really facing a lot of issues in these four areas that could complicate its application of AI-enabled military systems.
Tyler McBrien: So now that the report is out in the world, who did you have in mind for your audience? In addition, maybe, to some of the people you mentioned at the top of the podcast who are repeating some conventional wisdom without necessarily citing any sources, and whose assumptions your findings may fly in the face of.

So in addition to maybe that crowd, who do you really want to read this report, and what do you hope they would take away from it? And I will also caveat by saying --- I know you're a researcher, and so I'm asking a bit of an advocacy-type question --- but certainly you had an audience in mind for this.
Sam Bresnick: Yeah. So it's a bit of a cop out to say the U.S. government. What I would say is that the Department of Defense obviously does a huge amount of work trying to understand China's military AI capabilities and appetite for risk. And I'm hoping that people in the DOD who work on these issues will read this and use it as a potential data point for really understanding that there are voices within the Chinese system that are concerned about AI risks.
And that, again, China is not a complete military AI power quite yet, it would appear, right? And then the other audience would be the National Security Council, because those are the people running these AI risk dialogues with the Chinese. And again, there's been some coverage of concerns stemming from those engagements that China does not really take AI risks so seriously. And I would say that is believable for these dialogues, for sure, because maybe China doesn't want to tie its hands, or wants to keep innovating to try to catch up.
But what this report shows is that within the Chinese system, there are people talking about AI risks. Potentially this paper could help the National Security Council and officials involved in these discussions ground some of those concerns, and act as a jumping-off point for future editions of that dialogue.
Tyler McBrien: Yeah, I wanted to pick up on that, because I was heartened to see that the report ends on somewhat of a hopeful note: you mention that there is an opportunity for the two sides to find common ground and use this to establish some confidence-building measures. Was that an editorial decision of just not wanting to end on a doom-and-gloom note? Or is there some truth in that hopefulness, that there are these shared anxieties --- there is this mirror imaging going on --- which could lead to productive dialogue to mitigate the associated risks? Is this hopefulness a true feeling?
Sam Bresnick: That's a really good question. I don't know if I'm hopeful so much as I think we need to try, because these issues are extremely important. And there's been a lot of coverage of China's willingness to take more risks in different security areas, whether that's in the South China Sea or in space. There is a sense that, because China is rising and because it feels hemmed in by the United States, it wants to increase the risk profile of its activities to try to deter what it sees as encroachment by the United States.
And I think a similar dynamic is happening in the AI realm, where they don't want to hamstring themselves, because they believe AI is such an important part of both their military and economic development going forward. And so at the political level, I think it's going to be difficult, from a government-to-government perspective, to come to binding agreements to do AI arms control or even to exchange information on AI capabilities.
But I think, given the gravity of the situation and the importance of these technologies, we need to try as hard as we can to create a shared understanding, even if not binding agreements, of where the two sides see risks coming from and where there are opportunities to discuss those risks.
And I hope, again, that this paper shows that within the Chinese system, which can be so opaque, there is evidence that people are concerned, and that maybe through some kind of dialogue going forward we can discuss those risks, or convince the Chinese that those risks deserve to be discussed, even if it's difficult from a political standpoint.
Tyler McBrien: So you mentioned that this is quite valuable research because of the opacity of thinking in the Chinese government. But there is still quite a bit of opacity left. So I wonder what your views are on maybe the shortcomings or the blind spots associated with this research or this methodology.
What are we not seeing? What's the generalizable aspect of this research, to level set a bit?
Sam Bresnick: Yeah. The first big thing is that the influence of the authors is completely unclear, right? The majority of these defense experts are either affiliated with the PLA or work in the Chinese defense industrial complex.
But it's very unclear how much power they have, how much influence they hold within the Chinese policymaking system. It's also totally unclear how plugged in they are to the latest happenings, the latest technological breakthroughs in the PLA, for instance. And even if they are aware of those breakthroughs, it's unlikely they could discuss them publicly in journal articles like this.
Generally, though, I think this is a very good approach for getting a sense of where Chinese academics, or defense academics, let's call them, stand and what opinions they hold. Again, because a lot of these journals are managed by the PLA or affiliated institutions, or, again, these big SOEs, it appears as though they are potentially less censored than Chinese media, right?
Because in these journal articles, these academics are talking to each other, right? Whereas the mass media is for wide dissemination, and that's more controlled. Of course, there could always be censorship, and there could be self-censorship, and we'll never know the extent to which either of those is affecting the articles in this sample, or in this corpus of articles.
And another shortcoming is that this only goes from 2020 to 2022. An area of future research would be to try to track these opinions through 2023, 2024, and beyond, to really keep a finger on the pulse of how these defense academics are thinking about Chinese military AI achievements.
Tyler McBrien: What other areas do you have in mind for future research? Are you going to keep pulling at this thread, expand the corpus or the time limits that you've set, or what other future projects do you have in mind?
Sam Bresnick: Yeah. So the way I think about my research at CSET is in two buckets. This project falls into the first bucket, which has to do with perceptions, right?

That sort of work explores what Chinese experts think of the PLA's capabilities and Chinese AI development more broadly. And I think that's really important. But the second bucket is more focused on what we can actually tell is happening in the military AI space in China. And so we have a couple of projects looking at the defense industrial base in China, trying to determine which companies are involved in AI and other related emerging capability development for the PLA.
And then we have a project looking at the Chinese drone market and drone capabilities. Seriously hardcore military projects. What I will say is that what everyone wants to get a sense of is: is China ahead of the U.S. in military AI, right? And this is pretty impossible to figure out; it's very hard on the classified side, and it's very hard on the unclassified side.
So what I'm trying to do is take bites out of that apple, looking at different measurements, different data, let's say, to try to get a sense of China's military AI capabilities and the directions in which China wants to develop them, while not trying to take on that whole question. Because I think it's pretty much impossible to figure out at this point.
Tyler McBrien: I think I speak for many, if not all, listeners when I say that I'll definitely keep an eye out for your research for CSET's future projects. And I want to thank you for joining me today.
Sam Bresnick: Thanks so much. Great to be on here.
Tyler McBrien: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You also get access to special events and other content available only to our supporters.
Please rate and review us wherever you get your podcasts. Look out for our other shows, including Rational Security, Chatter, Allies, The Aftermath, and our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Noam Osband of Goat Rodeo.
Our theme song is from Alibi Music. As always, thanks for listening.