Cybersecurity & Tech

Lawfare Daily: Adam Thierer on the AI Regulatory Landscape

Kevin Frazier, Adam Thierer, Jen Patja
Tuesday, April 1, 2025, 8:00 AM
What's new in AI regulation? 

Published by The Lawfare Institute
in Cooperation With
Brookings

Adam Thierer, Senior Fellow for the Technology & Innovation team at R Street, joins Kevin Frazier, the AI Innovation and Law Fellow at the UT Austin School of Law and a Contributing Editor at Lawfare, to review public comments submitted in response to the Office of Science and Technology Policy’s Request for Information on the AI Action Plan. The pair summarize their own comments and explore those submitted by major labs and civil society organizations. They also dive into recent developments in the AI regulatory landscape, including a major veto by Governor Youngkin in Virginia.

Readings discussed:

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Intro]

Adam Thierer: We're talking about will the technology policy of our nation align with the Trump administration's new focus on sort of retrenchment, reindustrialization, a move away from, you know, globalization and global engagement. And there's a real tension there for a lot of companies and a lot of policy analysts, including myself.

Kevin Frazier: It is the Lawfare Podcast. I'm Kevin Frazier, the AI Innovation and Law Fellow at the UT Austin School of Law, and a contributing editor at Lawfare, joined by Adam Thierer, senior fellow for the technology and innovation team at R Street.

Adam Thierer: And so now we're in a weird situation for something that is clearly a national and global, you know, general purpose technology being aggressively regulated at the state and local level. So what is the federal government gonna do? Are they gonna assert any sort of authority? Are they gonna talk about interstate computational commerce and speech or not, and do something to protect it?

Kevin Frazier: Today we're talking about public comments on the AI Action Plan and other developments in AI regulation.

[Main podcast]

Today we're diving into the evolving AI regulatory landscape, starting with the federal AI Action Plan, born from an executive order signed by President Trump and now the subject of intense debate. The White House Office of Science and Technology Policy recently wrapped up its request for information inviting public input on how AI should be governed, but while Washington gathers feedback, states aren't waiting around. Lawmakers from around the country are racing to pass AI regulations on their own, creating a complex and sometimes conflicting legal patchwork. What does this all mean for the future of AI policy, and more importantly, who's really shaping the rules of the road?

Alright, well let's start with, what the heck is the AI Action Plan, or what will it be and what stage are we at in the process of seeing that plan come to fruition?

Adam Thierer: Sure. Well, the Trump administration came in with one singular promise on artificial intelligence policy, which was to eliminate the Biden administration's AI executive order. Once it did that, as promised, it substituted its own AI executive order in its place, and then created a process by which the public could comment on the so-called development of an AI Action Plan for America.

Of course subsequent to that being issued by the White House, JD Vance has also made some important speeches about AI policy, and there've been a few other AI related things happening in the administration. But we're still in very much this holding pattern, waiting to see what the administration does after abolishing the executive order. Everything's still very much up in the air.

Kevin Frazier: And one of the key things that we know from this relatively sparse announcement of what we'll see from the Trump administration on AI is, like you said, it won't look like the Biden era; it won't be AI safety focused. We have this kind of keyword of AI dominance, or, as Vice President Vance has expressed it, making use of AI opportunities, something he doubled down on recently at the American Dynamism Summit hosted by a16z. But that doesn't give us a whole lot of meat on a pretty broad goal of AI dominance.

So with these comments that came in, we had the OSTP seek comments from civil society, from anyone really who wanted to submit their own recommendations for what this AI Action Plan should include.

Let's start with you and your own regulatory filing in this matter. What was your comment and what would you like to see in this AI Action Plan?

Adam Thierer: Yeah, there's a lot to unpack there, but really the, the operative phrase of the day on this front is AI opportunity, or AI opportunity agenda. And I think a lot of the comments that went into the White House in this AI Action Plan—including my own, admittedly—really doubled down on that idea of AI opportunity and tried to put some meat on the bones of it.

And that theme is a good one, in my opinion. Expressing sort of policy or technological optimism from the highest office of the land, I believe, is important for a nation. I think this is something that the Clinton-Gore administration did very, very nicely in the mid-nineties when they were articulating policy for the internet and electronic commerce. More recently, we've seen Obama and the first Trump administration and Biden talk a little bit about opportunity, but I get the sense that this administration really wants to make that the cornerstone of its policy, this opportunity agenda.

Now what does that mean? It means different things to different people, but what I think we're starting to see is a broad-based consensus around some key principles: that AI policy should be investment and growth focused; that the first order of business should be expanding opportunities for innovators and entrepreneurs relative to other nations, and especially China, making AI equal parts about competition and innovation as much as geopolitical competitiveness, standing, and national security; investment-oriented policies that focus on promoting AI more than restricting it; and risk-based, sectorally targeted types of policies as opposed to broad-brush, European-style regulation.

Those are just some of the themes that I saw echoed in many different filings—and many of those were in my own filing—and it really just came down to a question of how hardcore you wanted to go on those principles. And, you know, I was surprised; some of the trade associations fully embraced this sort of AI opportunity agenda and talked about it almost utilizing America First kinds of terminology.

And Kevin, I know you wrote a wonderful piece recently for Lawfare where you talked about this, you know, how this Trump administration, America First vision translates into AI policy. I thought you did a really nice job with that, talking about how we're seeing a confluence of those themes now in the world of tech policy in a major way.

Kevin Frazier: Yeah, and this was really born out of the recent speech by Vice President Vance at that American Dynamism Summit, where he specified that America is going to charge ahead on AI full steam, subject to one key constraint. And that's where I kind of coined, I guess, America First, America Only AI. Where in this instance, yes, it's America pushing to the frontier, but what was really telling about Vice President Vance's speech was a qualification that pursuit of the frontier would be balanced with making sure that that AI stack—from the data we use, to the labor we use, to the experts we rely on, to the compute itself—was domesticated. Not something that we'd rely on European stakeholders for, or Taiwan for, but how can we make sure that there is a pro-worker approach, to use Vance's terms, to AI development in the U.S.

And that's a tall order. There's an inherent tension between pursuit of the frontier while also making sure that you house that development right here in the states. Because looking previously at earlier waves of technology, we've seen that immigration—high skilled immigration, for example—has been a key component of that technological development.

We see similar policies appear in these AI Action Plan comments—tellingly, we saw a lot of labs try to navigate this sort of America First, America Only paradigm. So OpenAI, for example, was pretty insistent on and pretty supportive of making sure that there's a lot of investment in U.S. AI infrastructure. And a lot of the labs were very supportive of things like export controls that don't necessarily align with a free-market or laissez-faire approach to economic policy.

So this sort of more industrial hands-on approach in the AI space has been really interesting to see that develop, particularly as you pointed out, with the rise of China's prowess via DeepSeek, via Manus, we're seeing this emphasis of labs saying, hey, it would actually be okay if the administration were to give us a little bit more of a moat to make sure we stay ahead of our Chinese counterparts.

And what was surprising to me was seeing, as you noted also, that the sort of safety language was absent from most of the labs' filings. So—with the exception of Anthropic, which has always kind of been the more safety oriented of the labs—OpenAI, Google, and Meta were not really saying, hey, you know, let's be cautious, let's slow down.

In fact, OpenAI had some pretty bold claims that I'd love to get your take on. Things like, what could we do about copyright to make sure that there is sufficient training data for these models. OpenAI was very explicit saying hey, you all need to lower the barriers to making sure we have these key inputs.

Do you think those sorts of bold claims and bold proposals are the sort of lengths the administration may be willing to go to, or do you think it may try to find some middle ground between that AI safety perspective and the wishlist of OpenAI?

Adam Thierer: Yeah, let's unpack this a little bit because at a high level we're talking about will the technology policy of our nation align with the Trump administration's new focus on sort of retrenchment, reindustrialization, a move away from, you know, globalization and global engagement.

And there's a real tension there for a lot of companies and a lot of policy analysts, including myself. You know, I wrote a piece for Lawfare with Keegan McCardle about—it was called “The Trouble with AI Treaties”—and I talked about our concerns about American engagement with global regulatory officials as it pertains to AI safety and its regulation.

But at the same time, I think global engagement and investment is essential. I want to see more digital trade. I wanna see us engage with more nations, especially our allies. And so there's an inherent tension there. You know, we saw in JD Vance's speeches, the one in Paris and then a subsequent one, an effort to really distance ourselves from the rest of the world, but especially from Europe, on a lot of these things. That has real serious trade-offs. And I think a lot of companies now are trying to figure out, well, are we okay with this or not, 'cause we still have to play ball in global markets with those regulators and all those AI safety bodies and everybody else. And a lot of people are asking them to at least sign onto best practices or principles of ethical development, if not formal, actual obligations and laws and treaties.

So this is a real tension. I think a lot of these companies, a lot of these trade associations, are hoping that the Trump administration goes to bat for them in a way that the Biden administration did not. The Biden administration was very engaged internationally, but all too often it was not in support of American companies. It was in opposition to a lot of their interests in some cases, doing a lot of work with European regulators to go after America's largest digital technology companies.

So it's definitely a new day, but it's unclear exactly how this story ends because we can't entirely disengage from the globe. We can't completely retrench. There, there's an understandable desire to build back up certain lost types of technological manufacturing capacities, but that doesn't mean we can just flick a switch and it's all over overnight. And that tension is palpable in these filings and with a lot of these companies.

So, I think it very much is a to-be-determined kind of thing with regards to where the administration goes from here. A lot of the filings from these companies basically said we should continue to have international engagement, but we should limit it to sort of best practices and a focus on broad-based safety in the abstract. There are a lot of high-level aspirational principles and statements and phrases in these filings that go undefined. And so it's easy to talk a big game about international safety and engagement; it's another thing when you get into the devilish details of what it actually means.

Kevin Frazier: And I would like to highlight a couple of filings that stood out to me as being particularly engaged in the details. So for those listening or watching who want some homework—sorry, it's just a habit as a professor to assign you all some homework at home—the Institute for Progress made dozens of recommendations. In comparison to some of the other high-level, more abstract principles relied on by other companies or other commenters, the Institute for Progress outlined an R&D moonshot for AI interpretability: how we can make sure we're diving into how these AI systems actually work.

They called for more investment in hardware security and also attracting AI talent by modernizing the visa processes, and then as I talked about with Tim Fist and Arnab Datta, they reemphasized their focus on creating special compute zones.

So in terms of the actual nitty-gritty policies we need to see, I think the Institute for Progress did a great job of giving some meat for people to dig into. On the other hand, we saw just a lot of folks trying to reconcile with the fact that we're not sure which part of the administration, which part of the government, is going to necessarily play the lead on AI governance.

And I think that's a big issue that I would really encourage the administration to prioritize: giving folks some certainty and some clarity around who the individuals are, who the actors are that are going to play the lead role here. Because right now no one quite knows, right? We may see a lot of experts leave NIST. We may see the AISI—the AI Safety Institute—take on a new form; we're not sure what it's gonna look like. Will it become AI security like in the UK? Will it just disappear?

No one quite knows. And that regulatory uncertainty is almost worse than, or has similar attributes to, heavy-handed regulation, because both aren't conducive to innovation. And so that I think is the biggest question I have for this Action Plan.

Adam Thierer: Yeah, that's exactly right. You've got it exactly right.

So there's an incredible amount of uncertainty right now about the specifics in terms of what we mean by AI safety governance in the United States. Before the Biden administration left town, it not only had a major executive order on these issues, but a whole lot of agency activity going on related to AI safety, and then it specifically set up the AI Safety Institute, as you noted, to basically do these things within the Department of Commerce at a high level.

Well, this became a legislative debate. We talked about this on a previous podcast about how there were major divisions among Republicans on Capitol Hill about what to do about AI safety writ large, and specifically did they want to essentially bless this Biden administration creation called the AI Safety Institute, and give it new formal powers.

There were multiple bills pending last year that went in different directions on this front, but at the end of the day, we didn't get any of them. This is all too often the case. Congress just ultimately doesn't get around to finalizing much. It's a lot of huffing and puffing, but ultimately nothing getting done. But that's where the debate begins this year, and it's not clear where Congress is going to go on AI safety, and it's not clear where the Trump administration is going to go.

But many of the companies that filed—many of the trade associations and various other advocates and analysts—all agreed that some sort of governance capacity, writ large, for AI safety, probably was smart if for no other reason than to have a single place where you could go to talk to somebody in the U.S. government about high level frontier model regulation or governance in the abstract, and secondly, it would provide a useful type of tool when trying to counter more aggressive efforts by international regulatory authorities, including the various AI safety institutes that have been set up by other governments.

However, we still don't have statutory authorization for any of this. We still don't have an AI bill. And of course, the Trump administration's not too keen on this idea of just saying, yeah, well, Joe Biden gave us this, we'll just sort of like bless it and move on.

So I think what's gonna happen there is you'll see a reformulation of the AI Safety Institute, and you'll see maybe Congress try to pass something that renames it; gives it soft law powers, not hard law regulatory authority; sort of an effort to better coordinate standards or negotiate agreements; or to have a clearinghouse of information, just a place where a lot of stuff can be filed and considered without it actually being formally acted upon.

I think that in the short term is the messy AI safety governance model that America's going to have for at least the next couple of years. But who knows, maybe Congress moves faster on this, but when has that ever been the case?

Kevin Frazier: You know, I would say I would make some bets on what Congress is gonna do, but if anyone were to look at my March Madness bracket, they'd say, you know, Kevin, maybe lay off making any bold predictions.

What I do think is going to be really interesting is to see, in this interim between the end of the request for information and the due date of the AI Action Plan, 180 days from January 23—which, back-of-the-hand math, is sometime in July—whether we see any other sort of outreach or formal consultation, or even informal consultation. It will be interesting to see how the administration maybe tips its hand as to where it's leaning. So that's one thing to look out for.

To your point, hopefully Congress takes some step to also clarify what role it wants to have in this conversation. I'll give credit to Professor David Rubenstein at Washburn. He really emphasizes this point that this AI governance debate comes down to the who: who should be governing what. And you've thought about this in extensive detail, and we've talked about this in extensive detail. If Congress doesn't act, states are going to—and we've already seen this, and we'll get to developments in Virginia and elsewhere in a second—but you've tossed out an intriguing idea that folks are rightfully, I think, giving more due consideration: a sort of pause, but a pause that would have preemptive effect on the states.

Can you describe in a little more detail what that would look like, for Congress to intervene and maybe make sure that a patchwork like the one we saw with privacy doesn't develop?

Adam Thierer: Yeah, that's right. So, just so the listeners know what we're talking about in terms of, you know, the who, who is regulating here or who is, you know, acting on AI policy, as you noted, it's not really Congress right now. There, there's a lot of, a lot of discussion, but nothing really moving and getting to a final stage.

But meanwhile, across the United States, there are currently over 900 AI-related bills pending, and the vast majority of them are state and local in character, mostly state. And states are moving in multiple different directions with many different AI-related bills. It's hard to summarize them all; there are a couple of different buckets we could get into. But the bottom line is that this is a complete reversal of the way early internet law worked, which was very much national in focus, driven by the administration and Congress with the states and localities in the backseat. And this is completely reversed.

And so now we're in a, a weird situation for something that is clearly a national and global, you know, general purpose technology being aggressively regulated at the state and local level. So what is the federal government gonna do? Are they gonna assert any sort of authority? Are they gonna talk about interstate computational commerce and speech or not, and do something to protect it?

Of course, there's been debates for a long period of time about state preemption on numerous fronts. It was just sort of always assumed in the field of internet and cyber law that like this was more of a national, you know, focus, a federal focus. That's no longer the case, and there's a strong appetite at all levels for regulating if for no other reason than to get Big Tech.

And so it's still not very clear if Congress will be willing to preempt, but then we get into a question that's been debated in the pages of Lawfare now by many, many different authors, which is exactly how the hell do you do that? Because when you try to regulate and then preempt a general purpose technology, the scoping problem is enormously complicated. This isn't like we're preempting, you know, the sale of pork bellies in a farmer's market. I mean, we're talking about something that's ultimately a multi-dimensional, cross-sectoral, general purpose technology that affects every sector. And so that makes scoping any preemption very hard.

It would also make it difficult to do what I'm about to tell you and propose, which is my idea, which is that if you can't get any sort of formal regulatory structure or preemption structure, you can consider a moratorium. In the past, Congress has at times considered the idea of, say, a little bit of a timeout policy-wise on new regulatory enactments at the federal and sometimes state and local level.

So, for example, in 1998, Congress passed the Internet Tax Freedom Act, which prohibited the imposition of multiple and discriminatory taxes on internet access. That moratorium was extended and then eventually made permanent by Congress—basically full-blown preemption. We've also had federal moratoria on things like commercial space travel. When Jeff Bezos sends, you know, Blue Origin rockets up into space with William Shatner on them, there's actually a federal moratorium on certain types of regulation for space tourism so that that market could develop. It's sort of an effort to have what's called a learning period: we can figure out what works and maybe what doesn't and, you know, get all our ducks in a row.

Could something like that work for AI policy? Well, I've suggested it in my filing to the administration and in a series of essays for the R Street Institute. Again, the scoping problem is enormously challenging. It's one thing to say preempt or put a moratorium on, say, frontier AI model regulation—high-powered systems that maybe California wants to regulate like they did last year on existential risk grounds. You could do that 'cause you could set a threshold, like 10 to the 24th power, for a model. You know, if they try to regulate that, that's a federal thing, and you have the AI Safety Institute handle that, or some other body.

But when you get into things like AI discrimination or AI, you know, bias, which is a big part of AI law at the state level, that's a lot harder 'cause states are gonna say, that's our purview; you know, we, we have these civil rights laws, we have consumer protection laws, you shouldn't step on our toes. The problem is, if they all do that and you have dozens upon dozens, if not hundreds of policies, then you're mucking with the interstate market of computational commerce and speech. So that tension is just really palpable right now.

And the bottom line is, even for those people like me who want preemption or a moratorium, it's not clear we're gonna get it. You know, Congress is still struggling to act. And by the way, last session, zero bills were introduced that had any sort of preemption language in them. Zero. So I don't see any this year yet either. It could still happen, but you know what, pretty soon the clock will be ticking. We'll be talking about the midterm elections and it'll all be over.

Kevin Frazier: Yeah. It's already 2026 in D.C. right? Always be looking ahead.

What I think is important here too, and for folks—besides my dad and my wife—who have read any of my earlier writing, they'll notice I've moved away slightly from a more AI safety focus to a more kind of wait-and-see approach, especially with respect to what I refer to as boring AI.

When we're not talking about the frontier, when we're not talking about systems that in the hands of bad actors could presumably be used to, let's say, hack critical infrastructure—that's a huge concern, and we can talk about that later—but with respect to boring AI, think about just how frequently we've already seen smart folks totally miss the boat on the pace of AI development and the use cases of AI development.

No one saw DeepSeek coming. No one quite knows how AI agents are going to change our daily experience. These are the autonomous systems that will be able to act on our behalf. When tasked with something like booking a trip or scheduling a reservation, they'll just do it on your behalf without any oversight. How that's going to look, what we should do to make sure we govern those responsibly—that's a tough task, and we know that Congress usually isn't in the weeds on the forefront of these technologies.

So something like a learning period is really intriguing. And I do wanna call out that some listeners may be saying, hey, all else equal, I'm really glad we have at least a privacy patchwork, where we have things like the CCPA in California, which, because it's California, has a sort of Sacramento effect where some widespread privacy protections make up for the absence of any comprehensive federal privacy act.

But as you pointed out, and something that I think is important to discuss, is the fact that a lot of these state laws may not be additive in the same way privacy laws are additive or complementary, in the sense that privacy-enhancing law generally trends in one direction, right, in terms of protecting data, in terms of giving users more control over that data. But if we see states have different conceptions of what is proper content or discriminatory content coming out of an AI model, and those conflict—there are contradictions in those laws or contradictions in the definitions, as we've seen in a lot of these bills—that is not complementary. Those are at odds and are going to make it so hard, in particular, for startups and medium-sized labs to compete with the big boys, which as we know won't foster innovation.

And I think that tees up nicely a recent development in Virginia. So can you give us a sense, Adam—you're the guy who has all these great numbers coming out about just how many proposals are pending in the states. One bill in particular grabbed national headlines this week, and that was Virginia's comprehensive AI policy bill. So can you tell us a little bit more about that bill and what just happened last night, on March 24?

Adam Thierer: Yeah, that's right. Well, Governor Glenn Youngkin decided to veto a major AI law that was passed by the Virginia legislature last month, and it was basically one of these copycat AI regulatory models that has been pursued by a group of state lawmakers working together, a multi-state AI working group. And they did successfully move a piece of legislation in Colorado that Governor Jared Polis signed last May, and then there were many, many states that had almost identical types of bills pending, including Virginia, which eventually got around to passing it.

But Governor Youngkin vetoed it, talking about the impact on small business and investment in Virginia, which is a major digital state of course, and talked about the problems associated with the kind of patchwork you discussed. And let's just be clear, I wanna put meat on the bones of one thing you said, Kevin, 'cause it's really important because there can be a lot of conflict about the substance of these bills and what they attempt to do in terms of dealing with like concerns about algorithmic bias or discrimination, but there's really a couple of things to keep in mind.

First of all, these bills in some cases don't even define artificial intelligence—the phrase, the term—the same way. And then you get into the specifics of things like how they define high-risk applications or specific categories of developers, deployers, and distributors, sometimes, again, defined differently or not at all. And then you get into the various compliance costs associated with the kinds of impact assessments that are required and other things.

At some point, if you have dozens, maybe even hundreds, of these types of bills, again, you know, regardless of the substance of them, you have to ask yourself, as you pointed out, could a small or mid-size enterprise actually deal with that kind of burden. And that's what I think prompted Governor Youngkin to veto that measure, and it also has prompted a policymaker in Texas who floated a similarly controversial bill to basically completely revise it and eliminate most of what was in the original bill that looked like the Colorado measure. And so that Texas law is being considered right now, as we're taping this podcast, in Texas, but it's a totally different bill.

And so there's been a little bit of a turn among some states—not all, but some—on like the wisdom of these rules, but this is also leading to some in Congress wondering like, should we also consider preemption? But in the, in the meantime, it's a state house by state house battle. In this particular case in the Virginia State House, the governor has vetoed the measure.

Kevin Frazier: And something I really try to stress to my students in my AI and law course is this question of institutional competency and capacity to even govern and regulate a lot of these tough questions. We can write great laws, we can be as clear as we want in the legislation, but if you don't have folks in the state government, agencies in the state government, who understand these systems, who have that experience and that expertise, then you're gonna run into issues that muddy the waters and hinder what could be a really dynamic tool.

And as you pointed out, we're seeing this actually play out in Colorado right now, where we had that bill get passed and now Governor Polis is saying, whoa, let's pump the brakes; maybe we need to rejigger this legislation to be a little bit more understandable, a little bit more humble with respect to its aims. So we'll see how that plays out.

One thing I do wanna flag is that there are some really interesting positive bills moving forward in certain states right now. One in particular, for those Pacific Northwest folks, is in Washington state, where Rep. Keaton introduced the Spark Act. The Spark Act would set up a public-private partnership grant program through the Washington State Department of Commerce. It's designed to increase investment in certain AI companies who then work with the state to develop novel use cases of AI that serve the public. So, a really interesting model of saying, if you're building something great, if you wanna serve the public, we're the state and we're here to help. I know for some, that's a naughty phrase, but in this case, the Spark Act, I think, deserves really serious consideration by the Washington State legislature. And Rep. Keaton is a Republican, right, so we've still seen that this is not necessarily a partisan issue when it comes to advancing and promoting AI.

Another one that's really interesting is an AI Literacy Act in New York State by Assembly member Torres that would help K-12, community colleges, and workforce development programs upskill and provide AI training to New Yorkers. Another really important effort that all states should be considering, and I hope more folks pay attention to some of these simpler, more humble, less spicy, less sexy—for lack of a better phrase—acts. They merit a lot of attention, but unfortunately won't draw the same sort of headlines that we're seeing for things like–

Adam Thierer: That's right.

Kevin Frazier:  –SB 1047 equivalents in various states.

Adam Thierer: Yeah. Yeah, there's been a bill like this in Utah that has a so-called learning lab in it, and sort of a sandbox approach to encouraging policy experimentation where developers who are maybe already a little bit regulated, but have a new AI product can come and work with state officials to try to figure out like a plan for how to deploy it safely.

There are also simpler steps. You talked about AI literacy and education. I'm not opposed to the idea of maybe giving various agencies and public officials more resources or technical know-how to just know how to deal with algorithmic and high-powered computational systems. For me, a big part of the problem with a lot of these state bills is they put the cart before the horse and don't think about how we enforce existing laws already on the books, or maybe staff up certain agencies to get better at doing so.

And so, you know, there's no shortage, as I already pointed out, of laws on the books about civil rights, unfair and deceptive practices, consumer harms, you know, various other things. The question is how do you go about targeting and enforcing those? And if policymakers want—and some state bills do this—they can essentially do an inventory. They can say, what do we have on the books that maybe already covers this? What agencies are already equipped to do this or maybe need more resources? And then you fill gaps from there.

That's a more bottom-up, flexible, iterative, and agile approach to technology governance, but it's not popular 'cause it's messy. It's very sporadic and messy. But that's the American system of technology policy in a nutshell, and I think it's better than the sort of top-down, one-size-fits-all, universal approach the Europeans take of trying to do everything preemptively all at once. Technology just moves too fast for that to work; we need a more flexible and agile response. But the problem is, with something like AI, there's a lot of concerns out there that have led to this sort of rushed approach, and I'm hoping we're gonna maybe now take a bit of a step back, rethink that a little bit, and come up with a more practical and sensible and pro-innovation, but also pro-safety, kind of approach to AI policy.

Kevin Frazier: Yeah, and that's certainly been top of mind for me as well. Something I need to do—and now I'm assigning my own homework—is to dive into some of these bills and to see to what extent states are even considering things like funding a technologist for their state AG.

If you talk to state AGs across the country, they'll commonly tell you, you, you know, we have a roster of experts we call from time to time, or we just wait to be lobbied or advocated by some trade association or some civil society organization, which is great, but we need to have that in-house talent to be able to inform these conversations. So eager to see if anyone listens to that approach.

But Adam, before we let you go back into diving into the weeds of 900 different bills, any trends you're looking out for in this remaining period between the end of the request for information and the AI Action Plan? Should we expect more speeches from Vance who appears to be the kind of AI John the Baptist for the administration, or are we going to see some other stakeholders kind of take over the mantle and what, what should we be looking for?

Adam Thierer: Well, I think we have to wait and see what this administration wants to do about a lot of the other issues where they're conflicted. You know, when it comes to things like export controls for advanced technology, or immigration policies for high-tech talent, or speech policies, or antitrust issues—these are areas where it's not so clear cut where the administration stands, and in some cases, they actually want more regulation.

JD Vance, just days after his Paris speech, was on CBS doing an interview where he said we should break up all the big tech companies. Well, all the big tech companies are the ones that are extracting big promises and investments from the White House at Rose Garden ceremonies. I don't know how you square that circle.

Meanwhile, a lot of the bills and things that many conservative lawmakers favor at the state level, they advocate for aggressive algorithmic regulation when they think AI is biased against conservatives. Conservatives in Florida and Texas have taken bills like that all the way to the Supreme Court.

And, and then of course, as I mentioned, the export controls fight is huge and is dividing everybody in Washington in a major way. I don't really think this administration knows where it wants to be on that issue. Joe Biden left them one of the biggest expansions in export controls in American history—it was the biggest, in fact—and the administration has a lot of hawks that say like, yeah, we're down with that, that makes sense, but then there are others that are, you know, hearing from industry and, and developers saying, oh my gosh, we have no idea how we're going to be able to exist in a world with these sorts of controls.

So there's a lot more conflict than people realize out there. And this is not all just a deregulatory push. The administration actually does want some policies here and there that are of a more regulatory variety.

Kevin Frazier: Well, plenty of open questions and plenty of excuses to have you back on down the road. Thanks again for joining, Adam. We'll have to leave it there.

Adam Thierer: Thanks so much, Kevin. I appreciate it.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look for our other podcasts, including Rational Security, Allies, The Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. Check out our written work at lawfaremedia.org.

The podcast is edited by Jen Patja. Our theme song is from Alibi Music. As always, thank you for listening.


Kevin Frazier is the AI Innovation and Law Fellow at the UT Austin School of Law and a Contributing Editor at Lawfare.
Adam Thierer is a senior fellow for the Technology & Innovation team at the R Street Institute.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
