
Lawfare Daily: Aram Gavoor on the Biden Administration’s AI National Security Memo

Kevin Frazier, Aram A. Gavoor, Jen Patja
Monday, October 28, 2024, 8:00 AM
Discussing the first-ever national security memo on AI

Published by The Lawfare Institute in Cooperation With Brookings

Aram Gavoor, Associate Dean for Academic Affairs at GW Law, joins Kevin Frazier, Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin and a Tarbell Fellow at Lawfare, to summarize and analyze the first-ever national security memo on AI. The two also discuss what this memo means for AI policy going forward, given the impending election.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

Please note that the following transcript was auto-generated and may contain errors.

Transcript

[Intro]

Aram Gavoor: There's a lot of policy in here that undoubtedly will survive, although some of it would be within that classification of at least political disagreement, the civil rights emphasis and components. That, I think, is at least one of the main areas of contention across the political line.

Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin, and a Tarbell Fellow at Lawfare, joined by Aram Gavoor, Associate Dean for Academic Affairs at GW Law.

Aram Gavoor: The structural change that I think is the sleeper in all of this is the procurement. The fact that there's gonna be streamlined procurement is gonna send some waves, and that's a big policy call.

Kevin Frazier: Today we're talking about the first ever national security memo on AI. This long-awaited document provides a chance to analyze how the U.S. aims to position itself in a competitive and perhaps combative race to lead in AI.

[Main Podcast]

Aram, we're talking the day after the release of the National Security Memo on AI, or to impress my friends in Washington, the NSM. It didn't emerge out of thin air though. I think there's a sort of temptation to always see these new documents as pathbreaking, as emerging from thin air and from brand-new policy thinking.

But instead, there's a long history of AI policy building towards this moment. So before we dive into the actual content of the NSM and its significance for AI policy and national security, Aram, can you give us a sense of how this all came to be? How did we end up with this really important document?

Aram Gavoor: Sure, and thanks so much, Lawfare, for having me on, and I really appreciate, Kevin, this conversation. So, I guess the big backdrop, zooming out a little bit, is there is no federal statute directly on point that regulates AI. There's a whole bunch of other federal statutes that regulate other subject matter for which AI is just a manifestation.

And Congress has been pretty good at funding things or requiring certain types of training, etc. But this has been a dominant executive branch policymaking model with regard to the advancement of policy, and for quite some time.

The Obama administration laid out a couple of general EOs, President Trump as well. The latest from President Trump, Executive Order 13960, actually is a holdover that was not rescinded but indeed adopted by President Biden. In the Biden administration, the big policy products are these: in 2022, the Office of Science and Technology Policy, OSTP for Beltway people, issued the Blueprint for an AI Bill of Rights, which introduced not just AI principles and safety and security, but also a significant weaving in of Biden administration civil rights components.

Then the president on October 30th of last year, so almost a year ago, five days shy of a year ago, signed Executive Order 14110, which laid out the broad enunciation of AI policymaking and did include a significant amount of civil rights. I mentioned the civil rights aspect because that's part of what I think is on the political line right now. There's a lot of common sense, and a lot that's even tautological, and some of the advanced policymaking talks about keeping the technology within American supremacy, fostering it, utilizing it within government. That's all within the cone of what I would view as relatively nonpolitical. And then the application of civil rights principles is really where there's a bit of a disagreement on how it's played out.

So following Executive Order 14110 and the build-outs of it, the Office of Management and Budget promulgated an M memo, M-24-10, on April 26th, if I recall, of this year, to all the non-national security, non-intelligence community agencies, laying out a pretty predominant model of strictures, in part borrowing from the GDPR for data privacy with regard to high-impact, high-use cases. M-24-10 lays out that if there's civil rights-impacting or safety-impacting AI, especially GAI, generative AI, there has to be a higher level of scrutiny and red-teaming. Then in September, I think the 26th, M-24-18 was promulgated by the Office of Management and Budget, which really lays out procurement particulars with regard to AI.

And then we have the instant product, or series of products, that we're talking about today, the National Security Memorandum that was rolled out yesterday, as required by Executive Order 14110, Section 4.8. And associated with it is a Framework to Advance AI Governance and Risk Management in National Security. So, the National Security Memorandum is really meant to be a doctrinal document, laying out U.S. military, intelligence community, and national security doctrine for how AI is to be used, adopted, kept safe, and kept responsible.

The Framework is meant to be a somewhat modular document that is updated with regard to AI use restrictions, risk management, cataloging and monitoring AI use, and training and accountability. And then the last of the policy products, which we do not have access to, is a high-side, so subject to classification, document that I understand relates to export controls. So that's how we get to today, procedurally.

Kevin Frazier: I can't resist the temptation because we're a mere, I think, 12 days away from the election. So, there's an election-sized elephant in the room, which begs the question: why does this even matter? Not to sound too cynical, but we're on the precipice of some change in administration.

And so, a lot of the conversation around this NSM has been: is this actually a sort of NSC 68, the historic document that changed our policy approach in the Cold War? Or is this just a nice, hey, we care about AI and it has national security implications? How can we think about that before we dive into the weeds of the actual documents?

Aram Gavoor: Sure. So, to unpack that a little bit, there are, I think, three components I want to lay out. One is that, from the Biden administration perspective, at least from what I saw, because I was at the War College yesterday when National Security Advisor Jake Sullivan rolled it out and I got the vibe in the room, had a lot of different side conversations, this, in one respect, is the delivery of a product that was promised last year, right.

And there was a 270-day timeframe for which the National Security Memorandum draft had to be provided to the White House, which triggered sometime, I think, in mid-to-late June. And then, given the breadth and the reach and the depth of the National Security Memorandum, the framework, and then the high-side document, which I haven't seen, it makes sense that it might take up until around now for it to be issued.

Will it survive? Well, that's highly probabilistic, in part based on the election. President Trump, or at least of late, Candidate Trump, in the Republican Party platform, I'll read you the text: Artificial intelligence. We will repeal Joe Biden's dangerous Executive Order that hinders AI innovation and imposes radical left-wing ideas on the development of this technology. In its place, Republicans support AI development rooted in free speech and human flourishing.

So that's quite critical of Executive Order 14110, for which this National Security Memorandum is a follow-up document. So there's some non-zero risk of this going away if President Trump is successful in the election that's taking place in a couple of weeks. However, I think if you look a little bit deeper, there's a lot of policy in here that undoubtedly will survive, although some of it would be within that classification of at least political disagreement, the civil rights emphasis and components; that, I think, is at least one of the main areas of contention across the political line.

In terms of the longevity, also in the presidential election, if Vice President Harris is successful I think there's some likelihood of all of this surviving and perhaps in a Harris presidency, based on the political rhetoric, she may lean into the civil rights component even more. Right? So, I think this takes time to pressure test, it takes time to work through, and the longer this policy, or at least aspects of it, remains stable, that's when we'll be able to tell what this means.

Because ultimately, if the executive branch changes a lot of its functionality, and there are some significant structural changes that we can get into, and also if the private sector takes this as a demand signal, which in some respects, at least Jake Sullivan yesterday said this was asked for by significant aspects of the private sector, then you do have your durable change.

I think really the big premise that I want to talk about is the nature of the technology and how it's developed versus other transformative national security technologies.

Kevin Frazier: And that sounds excellent. In terms of getting into the weeds now, looking at the specific National Security Memo provisions, what are we seeing this suggest about policy? We see some big headlines in the memo doubling down on AI, focusing on AI, channeling AI, directing AI. It really is emphasizing that no longer are we taking perhaps a wait-and-see approach to AI and national security, but instead using the weight of the government's procurement power, as you were hinting at, to really direct AI in a specific direction, and that direction is making sure that the U.S. maintains its supremacy, especially vis-à-vis rivals like China. So what are the core provisions that you want to call out here that listeners should be attentive to?

Aram Gavoor: Sure. So with the AI itself as a technology, let's look back to the pages of history and see other transformative technologies that were fostered by the U.S. government and its research arms, and spending and funding.

Nuclear physics led by the U.S. government, information technology, significant advances in computing, rocketry, stealth, submarines, any kind of really advanced communication technologies. Many of those were at least funded, developed, if not conceived in the context of government investment.

This is the first technology where it's the private sector that led the way and laid out a completely new model for the national security infrastructure of the country to take a technology that it itself did not develop and is looking to apply and then further advance within the executive branch.

So, that's why this is a little bit different. All of the government structures leading up to this point were based on that prior model that I laid out. Here, I think there's a significant change. And there is a big press, I think, in this National Security Memorandum to not just encourage the adoption or the responsible adoption of AI, but to encourage the adoption and development of frontier models.

This is the advanced stuff that is not publicly available to us. This isn't about taking, let's say, an earlier model of GPT and then applying it in the context of a national security application where it's already inferior by the time it's ready to go, housed in, you know, secure data facilities, et cetera, for utilization on the high side. This is something much more advanced and sophisticated.

So, for example, Jake Sullivan yesterday gave an example of a precision missile system. Well, the model that we currently have is we have static weapon platforms, right? It typically takes, you know, some level of years before you have additional variants of them. And then you build upon the platform, usually in a combination of hardware and software. The model that the National Security Council is thinking about and that it is looking to advance, especially with frontier AI models, is something very different.

So let's go back to that static platform, a precision guided missile. So the hardware might be static for a longer period of time, but it's entirely possible that for it to be most effective, the software might be updated on a far faster interval, maybe monthly, especially to keep up with, and perhaps get beyond, any sort of battlefield adversary's electronic warfare capability.

So to do that, that requires a completely different mindset, one that is much more agile, much more flexible, and also the goal is kind of building off of, like, the post-9/11 counterterrorism/CT, ODNI-level cross-siloing, so getting rid of silos by having cross-cutting mechanisms. The goal also, I think, is for this NSM to foster sharing of these frontier technologies across agencies, sharing potentially even data sets, although that gets a little bit trickier with regard to the civil rights components and the high-impact, high-use cases.

So that's something that's very different. And if that holds, that idea, that concept, that is something that's quite transformational.

Kevin Frazier: I think what also stands out with respect to the NSM's coverage is the fact that this isn't just a national security document in terms of just referring to the Pentagon or just referring to the armed forces, but instead we see a whole panoply of issues being addressed. There's industrial policy here. There's energy policy here. There's administrative law and procurement law. So, can you give us a sense of some of those specific provisions and what sort of directions we saw issued to specific agencies in the NSM?

Aram Gavoor: Sure. So, Kevin, I want to acknowledge: I've read everything, but I've only read it once, and there are all of these different documents, and I'm still getting my head together as I'm writing work product on what that might look like in the context of a blog post. So we can walk through this.

So just looking at the title of the document, the document tracks with the subject of the memo, three distinct things: first, advancing the U.S.'s leadership in AI; second, harnessing AI to fulfill national security objectives; and third, fostering the safety, security, and trustworthiness of artificial intelligence. So the audiences are many. Audiences are U.S. policymakers, govies, us, you know, listeners on this pod, strategic partners, strategic competitors, as well as industry.

So this is meant to be, as Advisor Sullivan described yesterday, a demand signal to industry as well, to indicate here's what the U.S. government wants to do, here's what it's looking for, here's how you should design products and design offerings. And it also serves as a permissioning mechanism for the government to move forward.

Now, I think the challenging piece is that if I'm looking at the rollout of M-24-10, which is that M memo from the Office of Management and Budget that applied 14110 to the non-national security agencies, the juxtaposition of you should be comfortable using AI, use AI, but then also red-team a lot, is a little bit of a challenge. It's a little bit of a shock, I think, for some of the agencies. They're trying to figure out how to balance it through.

Now, this NSM is not just sort of tepid with regard to GAI like M-24-10 is. It's actually saying you need to really reach for the stars here and try to do some very advanced things, while still having some of those countermeasures in place. Now, keep in mind, some of the countermeasures are tautological, like comply with the Constitution, comply with civil rights laws, many of which, if it's, you know, domestic use, definitely, definitely, definitely apply.

If it's, you know, foreign use, you're looking at the National Security Act of 1947, Executive Order 12333. That's where that type of U.S. constraint has historically ended, right? That's how the U.S. government and the president are able to order extrajudicial killings abroad, in the interest of national security, with almost no oversight; there's just, you know, the War Powers Resolution of 1973 and a disclosure to Congress, et cetera, that type of thing.

So that, I think, is going to be a little bit tricky. But also, on top of that, a lot of the agencies have already erected and have been working on privacy considerations, et cetera, and some level of responsible thinking on all of this. Now, of course, there's a level of opacity with regard to the IC, but at the same time, looking at the pages of history, the U.S. has actually been the world leader in privacy, as a concept even, and also in many of the civil rights concepts that are sort of adopted in the world today. So if you just kind of look back there, in some respect, there's something to latch on to. But there's a little bit of a tension there between full supremacy, fully leaning forward on national security, and then also some restraint.

But I think also what the drafters of this document would be thinking, and I'm able to infer it from the broader text, is that because of the private sector lead on all of this, there have been some bad examples where some of the purveyors of the technology in Silicon Valley, some of their employees, got cold feet about helping the government on some of the use cases.

So it makes sense that this document is aligned in such a way where it doesn't create too much blowback for the private sector where they feel like they can lean in a little bit more without the hesitation, oh, what's this stuff going to be used for? So there's a lot of balancing going on here.

Kevin Frazier: I would say so, and I think it's important to call out for listeners as well that when OpenAI, for example, initially was getting a lot of attention for ChatGPT's initial release, there were provisions there that said they did not intend to use their models or to allow their models to be used for military purposes. Over time, we have seen those sorts of provisions disappear or be softened.

Similarly, Anthropic, the arguably more safety-oriented AI lab, has seemingly been leaning more and more into national security. And so the sort of inevitability of the militarization of AI seems to have finally happened, for better, for worse, what have you. As you pointed out, this is the doubling-down moment. This is the moment where the Biden administration, most likely for any future administration, has decided that the U.S. is going to lead in the national security uses of AI, especially vis-à-vis China.

And there are provisions spread throughout the document that really emphasize the importance of pushing back against rivals. Sometimes China's explicitly named, sometimes it's just implied. But we see this huge focus on making sure that China isn't able to, for example, steal IP from the labs, and not able to interfere with the supply chain with respect to AI development. So seeing the national security ramifications of this, it's hard to miss that this is very much in the context of a greater wave of AI being a somewhat inevitable force for U.S. aims, especially against China. Have you picked up any other insights? Did Advisor Sullivan have much to say about that China dynamic?

Aram Gavoor: Yes, yes he did. So the posture of the Biden administration, and this is pretty consistent, is that it's strategic competition.

It's not just strategic competition, but also strategic alignment where there is an alignment between U.S. and any sort of foreign adversaries' interests. So one example of this would be the coordination between the U.S. and the PRC to disrupt the supply chain of fentanyl precursors to the United States, so essentially on drugs. That is one example that he enumerated yesterday of where there's an alignment of interests.

In another respect, what this document says, although not naming the PRC, and there are other adversaries besides the PRC as well, or at least strategic competitors, I wouldn't even say adversary, it's a strategic competitor, is that the U.S. should vigorously compete where interests are not aligned with any strategic competitor and potentially even strategic allies as well. And that's how the U.S. maintains supremacy. But the goal also, and this is in the backdrop, is with a concurrent emphasis on increased communication, having some sort of communication line, even mil-to-mil communication, PRC-U.S. mil-to-mil communication, so that vigorous strategic competition does not devolve into conflict.

The goal is, nobody wants conflict. The plan is strategic competition, open communication where it is appropriate, maintaining some level of connection, searching for ways to have commonality with regard to interests, because this is also designed to be a U.S. export as well. It's a vision of a free and democratic world with regard to the responsible and sound adoption of AI. So, you know, outside of this document, the U.S. has participated at the Bletchley Convening a couple of years ago, and last year in Seoul as well, laying out that type of a framework and trying to build connections and camaraderie.

Yet you're right, there's a significant component with regard to maintaining intellectual property and the technology itself. So that's why there's a high-side export control. But that's sort of unsurprising, right? Like, this is not the first time the U.S. has engaged in this. This is just a manifestation of a longstanding U.S. policy on these types of things.

Also, in reference to certain types of data, right? The U.S. government has done a fair bit to catch up from the types of conversations that we had years ago, where something like 8 percent of machine learning Ph.D.s find their way into public service, just because if you can get $800,000 right off the bat at a Bay Area company, you're going to do that. So there's an increased number of tech industry-sponsored fellowships in the U.S. government. There are now AI officers, chief AI officers, sometimes dual-hatted with the Chief Data Privacy Officer, like you have with the Department of Defense. So this type of learning process is moving forward.

Even with regard to the GSA's 2024 AI Training Series, where a colleague, Jessica Tillman, and I were responsible for one third of those sessions. We handled procurement with regard to AI, and then Stanford and Princeton handled leadership and then technology, the technical aspects of it. This really is, I would say, in the good sense, a whole-of-government approach with regard to the technology. And I think regardless of who wins in the general, a lot of the non-controversial stuff, of which there's a majority, is, in my view, just good old, well-thought-out, well-developed policy that many people should agree on and that should not be politicized.

I think that's going to continue on. And then, of course, depending on the outcome of the election, it's pretty classic. You know, if you have a cross-party Oval Office transition, you know, look in the first two weeks for a lot of things being rescinded.

Kevin Frazier: As you pointed out, I think you're one of the few folks who I would call a sort of government AI whisperer. If anyone knows how government is doing with respect to adopting AI, it's gotta be you.

And one of the provisions that really stuck out to me was the emphasis on recruiting and retaining AI talent. And so with respect to the idea of perhaps a change in immigration policy or a liberalization of some immigration rules to try to bring in more of that AI talent, is that one of those bipartisan measures you think might withstand the test of either administration, Harris or Trump?

Aram Gavoor: Yeah, I think, so you mentioned immigration, I think there's a significant commonality of interest with regard to making certain types of visas, like nonimmigrant visas in particular, and potentially even immigrant visa categories available for people who have that type of talent and knowledge.

Those types of categories already exist, right? So there's L visas for specialty knowledge, exceptional folks can have an O. And certainly this could serve as a mechanism for which there could be commonality and sort of a moment of bipartisan support on immigration with regard to this type of tech and keeping that talent in the United States.

And then, of course, there's F visas. There's STEM OPT. There's all these different types of mechanisms for which there's a lot of discretion within the executive branch to be hospitable. And I would be very surprised if that wasn't exercised. Although there's also a national security risk to that, right? So that, I think, has to be balanced.

Kevin Frazier: And thinking about the political significance of this NSM, for those of us who weren't at the War College listening to Advisor Sullivan and then getting the sort of immediate spin from stakeholders who are in the know, what were the vibes in the room, right?

Was this regarded as some groundbreaking moment, or was there a certain degree of ambivalence about its ramifications? What have you seen from key stakeholders in response to the release of the memo?

Aram Gavoor: So there's definitely enthusiasm, definitely among the political ranks, but that's unsurprising, right? What I was most impressed by was that, for nonpolitical folks throughout the federal government that I've been in touch with, there's general support for this. I think there is a significant level of almost apprehension, if not some level of fear, with regard to the adoption of advanced AI models and algorithms.

And this document is a permissioning mechanism in some respect. It's a facilitation mechanism, because undoubtedly Silicon Valley, you know, Bay Area influence is in this, right? This is not necessarily like a restrictionist document. It also isn't necessarily like a heavy, heavy competition document either, right? This is about, there's almost like a pragmatic nature to the document, which is, well, if we're talking about the most advanced frontier models, well, there's only a couple players who can produce those.

And if you're, you know, trying to get into that type of, you know, mass compute, quantum computing, really sophisticated applications that require, let's say, hardware that's in the possession only of the U.S. government, that is in part the audience as well. So, what I have not observed, speaking informally, and nothing untoward, with many agencies, is any adverse, oh man, this is going to suck, reaction. That's not the vibe that I'm getting. And my guess is, no matter what, the executive branch is going to be focused on a lot of the key principles.

I think there is a distinct difference in views on fostering safety, like the NIST AI Safety Institute; that's something where there's going to have to be a fair amount of attention as to how successful that is, whether that works right as an entity. If we're taking the text of the Republican platform, the emphasis on free speech, at least what I understand it to mean in that text, really is about not really laissez faire, but more of just providing flexibility within the industry itself for the direction that it wants to go in, and that necessarily means fewer direct restrictions.

Kevin Frazier: There was certainly an early win for the ASI, the AI Safety Institute, where in the NSM, it calls out the ASI as the singular point of contact for AI industry stakeholders. And so I really wonder about the longevity of that provision, especially if we see a Trump administration that maybe isn't as willing to embrace the ASI because I can also imagine quite a few agencies are thinking, huh, you know, I was developing a good relationship with Anthropic or OpenAI or what have you, and to now have this upstart institute become the focal point of their attention, I think it's going to be a really interesting maneuver, whether that sort of coordination has any legs or not.

Aram Gavoor: Yeah. So, I mean, I think this sort of remains to be seen. I think you fairly stated sort of the skeptical position. I'll take it one step further: it is a regulatory agency that is unsupported by a statute or an express authorization of Congress, especially at a time when there is judicial skepticism for those kinds of things, with Axon, Arthrex, Loper Bright, Kisor, even Corner Post with regard to the temporality of suit, Jarkesy, and a number of other doctrinal and judicial realignments of the relationship between executive branch administrative agencies and the regulated public that are in favor of the regulated public. So I think a lot of it remains to be seen.

I've gone on record myself thinking, and I did this right after, like, the big Senate testimony of Sam Altman and Google, and actually IBM, in May of 2023, that I do think that there should be some federal regulatory presence, but almost as an advisory mechanism that has a two-year reauthorization, that has relatively weak powers, and is really sort of meant to be a consensus-building entity, providing guardrails for really the types of use cases that are pretty significant. So for example, like nonconsensual deepfake pornography, right? That's like a good one that nobody thinks the government should have any function doing in any way whatsoever, at least domestically, right?

So this is really just like the wait and see. And of course, if the thing is designed without a statutory framework to undergird it, you know, that could survive, or it could be like a day one, and therefore it is gone, type thing.

Kevin Frazier: Before we go our merry ways, before we inevitably get back together again to discuss how this NSM is evolving and being received, I wonder if there's anything that you were surprised was left out. Was there anything you were looking for and anticipating that perhaps you didn't see explicitly expressed?

Aram Gavoor: That's a good one. I haven't, I think I'd have to go through maybe two more iterations of reading all of these materials. Perhaps it's like 14,000 words in all.

Kevin Frazier: So we'll give you like two hours. And then if you could just get back to us.

Aram Gavoor: To really, yeah, exactly. To really analyze the negative space associated with it.

But really, the structural change that I think is the sleeper in all of this is the procurement. The fact that there's going to be streamlined procurement is going to send some waves. And that's a big policy call. The procurement structures that exist predominantly are meant to foster competition, correctness, thoroughness, all of those other features that are all good policy.

And the choice, the intentional choice, to focus on a streamlined, cross-cutting procurement policy certainly is an expression of seriousness about really staying on the cutting edge, because you're doing that with a necessarily, or at least structurally, higher likelihood of, or lower ability to dissuade, concepts like lock-in, right? Where there are, like, a couple of vendors and they're just there, and it's more difficult for smaller players to get in.

But that's the great game that the U.S. is in right now, and that's sort of been consistent with a lot of other major procurement mechanisms in the past. The difference here is, you know, that the U.S. itself developed the SR-71, right, with Lockheed Martin, and that was the plan. And it wanted to do so for a while, and like other major military technologies, there are always a couple of big, big, big players who can bid, you know, in a sealed bidding that's classified. And then, ultimately, one is selected and it moves forward.

But here, the technology, again, I'm circling back, already exists. So there's a latent capability that is already demonstrably proven, with adequate compute, mechanisms of adapting to confabulation, et cetera, that really needs to have the applied use case. And then also perhaps certain governmental capabilities, like, you know, there's mass compute out there, but then there's real quantum computing capability within the U.S. government's domain as well, and sort of marrying those up, stitching them together and integrating them in a way that is consistent, integrated, and sufficiently sustained to develop fruits of use cases that we don't even know of yet today, right? Like, it's still discovering, like, whoa, it'd be really good for this, that type of thing.

So, like, one example, you know, that I gave in the executive branch-wide training series on national security AI procurements, when I got a question, a simple question from the audience, but a good one, an audience of like 1,400 people: well, how accurate do these things need to be? Well, we're talking about GAIs, whether they are large language or large graphic models. Well, if you're four days out from a hurricane, you know, you're from Florida, and if it's for meteorological purposes and you're able to be 5 percent more accurate than the best model out there, that's awesome.

But if you're engaging in utilization of an integrated AI for, let's say, theater defense for an aircraft carrier battle group, and you need to have, you know, a pretty clear understanding of targeting in a 200- or even a 300-mile radius, especially to be able to deal with hypersonics, it better be pretty darn accurate so you're not mis-targeting, let's say, an innocent commercial airliner that happens to be flying a couple hundred miles away just on its merry way.

So, those are the types of like frontier capabilities that I am inferring because obviously they're not going to be saying that in public with no Chatham House rules or classification regime.

Kevin Frazier: Well, and I think, too, the thing that was maybe missing on my end was, and correct me if I'm wrong, just an appreciation for the amount of resources that are going to have to be allocated toward this effort.

I mean, when we talk about spending on AI, this is not millions of dollars, hundreds of millions of dollars, billions of dollars, hundreds of billions of dollars, but really, to be on the frontier: trillions of dollars. And the magnitude of those resources, in my opinion, maybe wasn't fully expressed there.

But from my understanding, what I've heard is that Advisor Sullivan, for example, did make some pretty astonishing statements about just how much energy development, for example, we're going to have to spin up if we're going to realize frontier AI on a sort of national scale; seeing perhaps 25 percent of all energy production go toward AI is not outside the realm of possibility with this new vision of AI as conceived by the NSM.

Aram Gavoor: I agree. I mean, that's where, just in the past couple of weeks, I mean, the only technology that exists to be able to provide that type of energy, especially if you're considering the rest of the grid that's under pressure from electrification, from EVs and things like that, is nuclear, right? And I think that actually is perhaps the strongest case for the return and reconsideration of nuclear energy that I have seen in my lifetime.

And yes, I agree that it probably is some trillions, but over a period of time, like maybe 10 years or so. But what I think this NSM does, and does well, is, I think there's a level of cognizance that AI could become, and perhaps is becoming, like the new counterterrorism for those of us who were in government 20 years ago. And I was, like, an intern 20 years ago, but I was in government, like, in '07.

Kevin Frazier: It still counts. It still counts.

Aram Gavoor: Yeah. I mean, the U.S. was still in Iraq, you know, and the Green Zone was still active. It was just everything. They were just throwing money at CT, CT, CT, CT. And a lot of it was waste. Some of it was highly effective, and there was a lot of structural change. Like, you know, the Homeland Security Act of 2002, the REAL ID Act of 2005, the Patriot Act, all these different statutory schemes, you know, amending the National Security Act, creating ODNI, all of these different concepts that laid out different doctrine, as well as structures and investment.

I think there's a cognizance that we don't want AI to be the next CT, in the sense that, you know, you have to use that flash word to get more funding and you can just do, like, random stupid things that, you know, you just want in your own sub-agency and therefore you get the green light. There has to be more of an intentional application.

So we're not talking. I mean, I'm sure Copilot is very useful, but not like for purposes of, you know, helping those aircraft carrier battle groups function. We're not talking about that. So I think this takes steps to sort of learn from some of the mistakes from the War on Terror. And again, I don't want to trash that too much because the U.S. was in a very reactive posture. Like it needed to do a lot of things really fast. And whenever you do that, you sort of get a certain result that's suboptimal.

Like, if you look at the, you know, the massive loan schemes, cash loan schemes during COVID, right? There was a ton of oversight and there was a ton of misapplication, but that was part of the design; it was just to get the money out fast, and the cost was deemed acceptable at the time of decision.

So here, I think there's a more intentional, structured way to do it. Some of the provisions: Section 3.3(c) lays out that AI Safety Institute under NIST. NIST getting into the space for this stuff was a Trump administration thing. Now, I think, again, the policy differences will be with regard to the civil rights things, with regard to even looking at, let's say, the framework to advance AI governance. So this is the framework document, page three. So these are, like, the big no-nos, the use restrictions. Well, some of them are pretty tautological, right? It's unlawfully suppress or burden the right of free speech or right to legal counsel, especially for U.S. citizens. Well, I'm happy that it's in there because it needs to be in there, right? And that's just broader constitutional protection.

But then there's a couple other ones too, which are pure policy calls: detect, measure, or infer an individual's emotional state from data acquired about the person, except for a lawful and justified reason. So that is something that is clearly a policy call. Or infer or determine, relying solely on biometric data, a person's religious, ethnic, racial, sexual orientation, disability status, gender identity, or political identity. That's, you know, one of the types of things that I'm sure, you know, Republicans are looking at.

But then there's a lot of other stuff where it's very straightforward: do not remove the human in the loop for actions critical to informing executive decisions by the president to initiate or terminate nuclear weapons deployment. Thank you. That's good. We want that in there, right?

Kevin Frazier: Thank you. Thank you. Thank you for the human before the nuke. This is good.

Aram Gavoor: There's a maintaining of the Article Two core executive Commander-in-Chief prerogative. I am happy that's written down in a document somewhere. There it is.

Kevin Frazier: There it is. There we go. Well, for the rest of it, we will indeed, unfortunately, have to wait and see. But Aram, your two cents on this are always worth a nickel, even with inflation. So, thank you very much for coming on, and I'm sure this is not the last time we'll be talking.

Aram Gavoor: Always a pleasure.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Noam Osband of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.



Kevin Frazier is an Assistant Professor at St. Thomas University College of Law and Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin. He is writing for Lawfare as a Tarbell Fellow.
Aram A. Gavoor is a professorial lecturer in law at The George Washington University Law School, where he teaches national security law, constitutional law, federal courts, and administrative law courses.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
