
Lawfare Daily: Mark Chinen on International Human Rights Law as a Framework for AI Governance

Kevin Frazier, Mark Chinen, Jen Patja
Wednesday, October 23, 2024, 8:00 AM
How can international human rights law be used for AI governance?

Published by The Lawfare Institute
in Cooperation With
Brookings

Mark Chinen, Professor at Seattle University School of Law, joins Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to discuss his recent work on international human rights law as a framework for AI governance. Professor Chinen explores the potential of IHRL to address AI-related challenges, the implications of recent developments like the Council of Europe AI treaty, and the intersection of philosophy, divinity, and AI governance.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Intro]

Mark Chinen: To the extent that there are causes of action and ways to get redress for those violations, then as is true with any activity that, you know, has these human rights implications and violates the harder forms of international law, then I do think that they should be accountable for those violations.

Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, joined by Professor Mark Chinen of Seattle University School of Law.

Mark Chinen: You know, the concern is you're seeing a lot of fragmentation of AI governance, and that leads to, of course, conflicts and you know, the spaghetti bowl of norms where there's inconsistent treatment, those kinds of things.

Kevin Frazier: Today we're diving into the complex landscape of AI governance through the lens of international human rights law, drawing on Mark's extensive work in this field, including his recent articles in the Capco Institute Journal and Yale Divinity School's Reflections Magazine.

[Main Podcast]

Mark, when people think about AI regulation, there's no shortage of issues that come to mind. We know about these camps of doomers and ethicists, and we hear about Senate Bill 1047 in California, and then all of these bills in Congress. And yet you're coming at this regulatory challenge from an international perspective, from an international law perspective. What is it about AI that you think lends itself to being governed at the international level as opposed to by the state or by national actors?

Mark Chinen: Well, first of all, Kevin, thanks so much for allowing me to be a part of this conversation. I think it's fair to say that most regulation is taking place at the nation state level or at the local level here, say, in the United States. But there is an emerging, what might be called a nascent, form of governance of AI at the international level that is both purely and truly international and also very much transnational in its scope and application. So what we're seeing happening in states like California and, of course, in countries and regions like the European Union is making its way to the international level, so that the kinds of norms that are emerging at those lower levels are then sort of crystallizing at the international level. And then, of course, they will have their effects at that level of governance.

It's also fair to say, though, that it's almost as if artificial intelligence governance has a kind of fractal geometry, in the sense that obviously there are very different layers of regulation and governance, starting from, you know, the firm or individual locales, and then, here in the United States, at the state level and the federal level, then making their way up to regional and then international levels.

But within each level of governance, you see the same kinds of, say, governance tools being used to address these issues, as well as, in many cases, the same actors. In particular, say, the large technology companies are actively involved in governance at all of the levels I've just listed. And then you also see parts of civil society, particularly non-governmental organizations, that are starting to be active at all of these levels as well. And of course, at each level governance has different nuances because of the nature of that level, and at the international level we obviously bump up against international law itself, its strengths and its limitations, the power of states, and, of course, you know, geopolitical concerns. Clearly these have an influence on AI governance at that level, and to me that makes it fascinating.

And then, of course everyone acknowledges that AI applications are poised to affect, you know, almost every major domain in human life, and many of those effects are going to be international in scope. Of course, they already are, so it really raises the need for governance at that level.

Kevin Frazier: Well, and I think what's staggering about your scholarship is pinpointing that international human rights law is not just, oh, what is the UN doing, right? There are so many different bodies of international law we can look to for potential regulation of AI.

And so, for example, you've pointed out that under Article Two of the International Covenant on Civil and Political Rights, there may be concerns about a lack of transparency around automated decision making. You've also pointed to the concerns the EU Agency for Fundamental Rights has raised under the Charter of Fundamental Rights of the EU. Can you walk through some more examples of the conflicts between different AI applications and some of these international or regional or transnational human rights documents?

Mark Chinen: So just to give listeners a kind of framework for this conversation, we should know that when we talk about human rights, there are several aspects of them as they apply at the international level. One is just human rights as a set of principles about how we should treat human beings. And that is part of the conversation that's taking place at that level. And then we have, you know, international human rights as rights, legal rights, very similar to those that we're familiar with here at the domestic level. And then, as Kevin, you're referring to, there is the body of treaties, or the body of rights that are codified in what are now numerous international human rights treaties. And you've mentioned one of them. And then finally, human rights can be understood as the practices and institutions that now exist at the international and regional level that enforce and further those rights at that level.

Now, when you're referring to, you know, these specific rights, that is certainly part of the work of human rights at the international level as it applies to artificial intelligence. So you've listed articles related to the rights to participate in government, and I'll just elaborate a little bit more on that. The concern really is that as artificial intelligence is being used at the governmental level to provide governmental services or to adjudicate disputes, in the regulation of welfare benefits, for example, the algorithmic decision making will lead to adverse impacts on the humans who are recipients of these kinds of benefits or government services, leaving very little recourse to those recipients if they receive adverse decisions. And so the concern at a human rights level is whether that detracts from a person's ability, right, to participate and also to interact with their government. And that certainly will occur and does occur with rights to participate, say, in elections, those kinds of things.

You've asked for other examples. One that is frequently raised is our rights to privacy, which are embodied in several international human rights documents. And there the concerns are obvious: that, particularly with certain kinds of artificial intelligence applications, you know, inadvertent disclosures of private information can occur. And there are also concerns about the kinds of computer-human interactions where people might, because of, you know, the nature of these interactions, disclose private information, which will then, as it were, leak out, so that again, these rights to privacy are implicated.

Then there is the right to life, and, you know, obviously that's a very broad right that has several facets. And, you know, we can begin to talk about things related to lethal autonomous weapons.

Then there are rights to due process that are embodied in many human rights documents. I've already mentioned, you know, concerns about algorithmic adjudication or the uses of AI in the adjudicative process. There are real concerns, when it comes to law enforcement, about, say, for example, predictive analytics being used in connection with bail decisions or parole decisions. In all of those areas, you know, observers are very concerned that if there's an over-reliance on those applications, people's human rights are going to be violated.

Kevin Frazier: Well, there's a lot to unpack there, which is excellent. And I guess one place to start would be your initial dive into various conceptions of human rights and how it can be thought of and actualized in different forms. And I think a common critique of human rights is that it is too mushy. It is too amorphous, especially for something in need of such specific and tangible regulation as AI. Is the important role for international human rights law to play, in your view, setting broad principles, or becoming the sort of hard law that we actually hold companies accountable to?

Mark Chinen: I see it as a combination. And by the way, you're quite correct to say that international human rights and human rights generally is subject to much criticism.

As you say, one big reason is that they can be considered amorphous, particularly when you talk about them at the level of principle. Then we can, you know, talk about the extent to which states really comply with and conform to human rights norms as well. And there is also the critique that, you know, they purport to be universalistic, right, universal in application. And so there's real pushback sometimes about that, because that's sometimes very hard to justify. Often, human rights are criticized as being a product of Western culture that is then imposed upon the rest of the world.

Now, to get to your point, my sense is international human rights should serve as the sort of overarching, you know, articulation of human values and the source of, I don't know, the goals at which AI governance is aimed. So you're quite right to say that specific human rights norms might not have a direct impact on, you know, particular AI applications, the way they're structured, how training occurs, all of those kinds of things. But in my view, they should serve as the source of the more specific norms that are going to have a direct application to artificial intelligence.

So, by and large, I do see them more as a set of principles, but at the same time, you know, international human rights do have that harder legal edge to them, and states have agreed to comply with them. And there are mechanisms in place to enforce them. And there are, and will be, in my view, instances when artificial intelligence applications are going to lead to the violation of those harder rights. And to the extent that there are causes of action and ways to get redress for those violations, then, as is true with any activity that, you know, has these human rights implications and violates the harder forms of international law, I do think they should be accountable for those violations.

Kevin Frazier: Thinking through that as well, you've mentioned quite frequently the nation state as the unit around which international law is centered. And so when we think about companies like OpenAI, who have a model that is used globally and who have employees globally and data globally, what is the method of actually taking these principles and ensuring their application to companies? Companies that exist in some states and operate in many more nation states. So that's one level of analysis that I'd love to dive into.

And relatedly, is there a sort of, not to be too cynical, but are we seeing more kind of cheerleading among AI labs, maybe for international governance, and I'm using air quotes there, out of the expectation that it's a little bit harder to actually have teeth to those international norms and principles, rather than something as specific and hard as, let's say, a domestic law passed by Congress, for example?

Mark Chinen: Those are very good points, Kevin. You're quite right to say that one of the major, might as well call it a major weakness of the international system at this point in connection with human rights, as it applies to AI, is that international human rights do not apply directly to private entities. That law applies only to nation states, and to some extent, in rare circumstances, to individuals, so that any international norms that affect business activity by and large have to then be enacted into law at the nation state level in order for those norms to have any application.

Now it is the case though that nation states have entered into treaties, say for example, with regard to corruption and bribery that do have a direct impact on business activity and you know, have been enacted into law at the national level. So, it isn't the case that international law never has direct impact on businesses, but it's usually tough to have that happen because at that level, of course, you need the consensus of states.

To respond, you know, to the other part of your question, I think we can go in two directions, and we probably should try to flesh out both of them. One of them is that there are moves now at the regional level to take the norms which do exist at the international level that apply to businesses, which include the obligation of businesses to respect human rights, and to turn that principle, and as it were the components of that requirement, into harder law. We see that in Europe in particular, with moves that have taken one aspect of this respect for human rights, which is to engage in a full-bodied human rights due diligence assessment, and made it law at, you know, at the EU level with a directive that was passed this year. So you begin to see these international human rights norms, as they apply to business, starting to become harder law, not at the international level, but at the regional and perhaps also at the nation state level, too. So that's one area in which international human rights are slowly beginning to have this more direct impact on businesses, including the large technology companies.

The other half of your question, though: you're quite right to say that we have at the international level, and as I said earlier, at every level of governance, often the same actors that are, as it were, you know, jostling with and interacting with other actors as norms are being developed at each of these levels. And you're quite right to say that for the larger technology companies, I guess, you know, I don't sit in those boardrooms and am not privy to those conversations. But as you say, Kevin, you can see a move where it would be easier to press for norms at the international level, out of that expectation that they will have less bite, right, than those that are in place at the domestic levels. That certainly might be part of the strategy.

Now, I will also say, though, that because that kind of coordination and those norms are not being arrived at at the international level, and certainly not here in the United States at the federal level either, you know, the concern is you're seeing a lot of fragmentation of AI governance, and that leads to, of course, conflicts and, you know, the spaghetti bowl of norms where there's inconsistent treatment, those kinds of things.

So there are some pragmatic problems with governance at lower levels, but of course it has its advantages as well. I mean, there is a reason why, say, the European Union has adopted a concept of subsidiarity, out of a sense that, you know, most governance decisions should be made at the lowest possible units. And we could talk about why that's important for the legitimacy of those kinds of norms, as well as just for the pragmatics of achieving consensus.

Kevin Frazier: Yeah, and with that notion of subsidiarity, that makes me want to turn back to what you were mentioning about the biases we have seen historically in international human rights law. And I think an aspect of AI's development and adoption is the fact that different societies have very different levels of willingness to accept and integrate AI into different aspects of their day-to-day lives.

So I was speaking with an individual who has spent quite a bit of time in Zimbabwe and Vietnam, and she was telling me about uses of AI to detect TB. And to have a reliable means of detecting TB, that requires sharing quite a bit of sensitive personal information with the government, which from a Western point of view is generally a faux pas of sorts, if not explicitly legally forbidden. It's harder to share that sort of PII with any entity in the West, I'd say, than perhaps in other nations. And yet there's this willingness there to say, oh, well, we want this more robust TB diagnosis mechanism.

So that was a long foray into saying: is it possible to have international principles for AI governance, given just how starkly different adoption will look in a place like Zimbabwe versus New Zealand? Or pick two spots on the map and we can see that AI adoption will probably look quite different, and public willingness for that adoption will probably look quite different.

Mark Chinen: I agree completely with you. That's one of the reasons why, as I think about international human rights, I advocate them as a kind of a set of overarching norms that inform this kind of AI governance and then, as it were, trickle down. Or, at the same time, right, norms are emerging from the bottom up, and they sort of meet at various levels of governance.

But I guess I would say this: we want that kind of particularity, because, as you say, it might be that different countries have different kinds of understandings of privacy, different tolerances, right? And we certainly want that to happen, because these countries, these cultures, have made decisions about what is good for their societies, and I think, obviously, those decisions should be honored.

At the same time, you know, virtually every country in the world has signed, say, the Universal Declaration of Human Rights and is a signatory to at least one of the two major human rights instruments that fall under that declaration. So a strong argument can be made that the international community as a whole has endorsed the rights that are embodied in those documents. At a minimum, in my view, that provides us with language where we can have conversations about the way in which a particular country might view privacy.

As in your example in Zimbabwe, or particularly in Europe or here in the United States, we might have different understandings of what privacy protection means, but at least we've got, as I said earlier, this language where we can have that conversation. And there will be differences of opinion over what constitutes a violation of privacy, what is protection of that privacy, et cetera.

And then also, I just want to be quick to say, there are some actions and events that, I think, irrespective of the culture, most of us would recognize as a violation of human rights, even though they are sometimes contested, right? We can think about China's conduct in Xinjiang, for example. I think most of us would say that human rights are being violated because of those actions. Obviously, China has a different view and has, you know, defended itself even through human rights language. But even in that situation where there's conflict, I think it still gives us grounds, right, for criticizing what we view to be the actions of other cultures. So it's kind of like a tension, right, between this universalism and allowing for the kind of particularity that you're describing with your examples.

Kevin Frazier: Well, and as you've pointed out in your scholarship, AI may be more amenable to international regulation in different ways depending on what aspect of AI we're trying to govern. So for example, you've highlighted the regulation of supply chains with respect to AI and human rights as potentially a really ripe area for the application of these human rights laws and norms. Can you detail why you think supply chains in particular may be a spot where we can kind of shine that human rights lens in specific detail?

Mark Chinen: What I'd say, Kevin, is that I don't know if I've advocated that this is the area, but I guess I would say that it's an area that is ripe for that kind of regulation as well. But that also applies to businesses generally. So when we go back to when I discussed the international norm of this requirement of businesses to respect human rights, the recent directive, for example, adopted in the EU that requires large companies to engage in that kind of due diligence includes due diligence over the participants in their supply chains. Then, as it applies to artificial intelligence, I think that, you know, the concerns are that we have artificial intelligence applications being developed by, you know, large technology companies, but also startups as well.

And the real concerns are not so much about, well, there are obviously concerns about what's occurring at that level, but then we have to keep in mind that these applications are going to have end users who will then deal with, say, customers and the public, which will have these human rights implications and effects. So we're going to need, and this is what the EU is trying to do with the Artificial Intelligence Act, to try to have governance over those kinds of impacts as well. So not just with the developer, say, of the technology, but also in the relationship with the users, et cetera.

But, well, I should also point out, Kevin, that within each firm, the larger ones in particular, you already see that kind of governance occurring in the form of, say, end user agreements that prohibit users from using the technology in particular ways, you know, obviously to, say, infringe on intellectual property, but you can easily see prohibitions on actions that are going to, say, harm someone's civil rights.

So we're seeing at sort of the firm level that these kinds of norms, obviously they are not legal norms, but they're being practiced. And now they're rising to the level of legal norms, particularly in Europe. So yes, I do see, as do other observers, this room for, yeah, governance over what happens to the technology once it leaves the firm's hands. And of course, you know, in the current model the technology firms that are developing this technology are going to continue to have control over the software, right? Because it's subject to licenses, et cetera.

Kevin Frazier: And you mentioned briefly, and in case listeners didn't catch it, the EU's Corporate Sustainability Due Diligence Directive. Can you explain a little bit more about how that may apply to AI and what exactly that imposes on labs?

Mark Chinen: So the due diligence directive will apply to all companies, even those outside of the European Union, that have over €450 million of turnover in the European Union, say, two years prior to the time at which that's measured, and that will include the large technology companies based here in the United States. And because of that, they will be required to engage in human rights due diligence, which means that as they are developing products and services, AI applications, they will have to engage in these kinds of broad assessments of the possible impacts of their technology on human rights. Human rights as they are embodied in Europe, but also incorporating the human rights instruments that Kevin and I were discussing earlier in this talk.

So not only must there be that kind of an assessment, there have to be certain safeguards in place to ensure that those actions or those effects, to the extent they occur, are mitigated. There has to be this monitoring. There are going to be some reporting requirements about those possible impacts. One of the major developments is that the members of the EU are going to, or will have to, provide causes of action for victims of violations of the directive. And also, to the extent that, say, AI technology causes damage because of violations of human rights, there needs to be a cause of action so that they can get redress. It's a major development in international human rights as they apply to businesses.

Kevin Frazier: It's really fascinating and I think it's going to be something that hopefully a lot of people are keeping an eye on as we continue to see AI companies evolve both through sourcing compute, through sourcing sufficient energy, through sourcing expertise. I mean, there's just so many ramifications there.

Switching gears just a little bit, I think you're one of the few scholars in the legal academy I've ever met who's published in the Yale Divinity School, or with the Yale Divinity School. And in that article, you mentioned the aura of inevitability with respect to AI. And I was hoping you could dive into that a little bit more and why it matters with respect to AI governance efforts writ large.

Mark Chinen: The notion of aura comes from conceptions or the history of technology and the various conceptions of technology. So that technology has often been viewed as inexorable. That, you know, there is this march of progress or the quest for knowledge or even, say, of course, market imperatives, which continue to drive us to improve and develop technology. And there is that school of thought that says, there is no stopping it.

There is also an idea that technology is neutral, that it has good and bad uses, and so it's not so much that we need to regulate technology but more, you know, its effects. And then there is a notion that technology is actually a product of political decisions that societies make, decisions that not only lead to technology being used for social control, but, much more broadly, again, make it, you know, part and parcel, right, of the community which enables that technology.

But to go back then to the aura of inevitability, you can see how easily it plays into rhetoric that, well, you know, technology is developing faster than the law can ever, you know, keep up with, and to an extent that's true. But I think it also can be used to discourage people from even trying to address, you know, the concerns that are raised by, say, artificial intelligence, out of a sense that really there's not much that can be done because it's going to happen anyway.

I think, though, you know, just given the history of the regulation of other technologies, such as the automobile: every time that a new technology has emerged, you see similar arguments that the law cannot catch up or keep up with these developments. And yet we have found a way. And obviously it occurs slowly because of the nature of law. But at the end of the day, we do find, you know, societal values embodied in law that then do have an impact on these technologies. And I think that should also be true with artificial intelligence.

And in that vein, by the way, you might have heard of Collingridge's dilemma, where he says that, you know, the problem with regulating technology is that when a technology is first emerging, it's impossible to assess or predict all of the impacts, both good and bad, that it will have on society. But by the time those impacts are known, then it's too late to regulate. And it is a real dilemma and hence, you know, irresolvable.

But to the extent that you can respond to that dilemma, one response, and apparently this is one approach, is to try to get input as early as possible among all stakeholders as to what possible impacts might occur as the technology is being developed, acknowledging fully that no one can predict the future.

The reason why I think that's relevant to this aura that sometimes surrounds artificial intelligence is there are some attempts being made to get stakeholder and public input into sort of the development of some of these technologies, including AI. And I would like to encourage the public, and particularly those of us who are lawyers, to get involved to the extent we can in those conversations and not be intimidated by, you know, the technical aura that surrounds artificial intelligence. It is the case that much of it is, you know, highly abstract, and it's tempting, particularly for those of us who may not be mathematically inclined, I'm certainly one of them, to shy away from it. And of course there is a sense that experts sometimes are not open to hearing from lay people because of our lack of expertise.

But on the other hand, it's only members of a community, or those of us in the public, we're the ones who are going to feel the brunt of any effects of that technology. And we also have more information about our own communities, where those voices really should be heard. So to the extent that we can get involved in these kinds of earlier discussions, I wouldn't want that aura that surrounds AI to prevent us from getting involved.

Kevin Frazier: Well, and I think you raise a number of points, which is, especially when we look at prior efforts to regulate emerging technologies, we have found a way. The car didn't destroy humanity. Obviously it's led to a lot of negatives.

Mark Chinen: It did.

Kevin Frazier: Yeah. Something that stands out, though, is I have yet to meet a person who says we got international law right with respect to social media. And to close, I would be really interested in hearing: what have you seen, or what haven't you seen, or maybe what would you like to see in international AI governance to improve upon what we saw in the social media era? What lessons should we learn about the failures or successes of the application of international law and norms to social media in the AI context?

Mark Chinen: I do think that they're closely related, Kevin. Now, social media and its regulation, that's not something that I have a great deal of expertise on or knowledge about.

But what I can say is, again, at the international level, there are working groups within various international agencies that are looking at exactly this, the relationship between AI and social media and generative AI, et cetera. I don't think I would say this is something I want to see in place, but something that I applaud, I guess that's a better way to put it, is the opening up of those spaces to experts, to nongovernmental organizations, to, you know, engage at that level and to express these kinds of concerns at the international level in terms of how AI might be affecting folks, just as social media affects us.

The other thing I'd say is that, of course, at that level it is highly abstract, right? And quite removed from sort of our daily experience. But again, this is, I think, one of the advantages of understanding governance as kind of this fractal geometry: the kinds of inputs that we make and have at the local level, for example, can, and do, make their way up to the international level. And because of the internet and the ability of, say, international organizations to open up room for comments, that is certainly occurring at the United Nations level, where the public can actually give comments on certain kinds of proposals that are being made there around the governance of AI.

It's an opportunity for us to get involved. Fully acknowledging though, that by the time we reach that level, you know, we as individuals might have very little influence. And yet at the same time, I think it's important for us to be involved.

Kevin Frazier: Yeah. I love that challenge.

Mark Chinen: Yeah. It's like letters to the editor. I don't know sometimes how effective they might be, but it's still an opportunity to get involved. And then, of course, to support organizations and maybe state initiatives here at the local level that, you know, some day might emerge as a norm that makes its way to the international level.

And then, you know, not to end on a pessimistic note, but we have to keep in mind that, you know, again, these actors are working at all levels, and they are very powerful actors. But again, you know, the tools that have been used here, for example, around corporate governance with regard to other human rights issues, say, for example, in the garment industry, all those things, they do have an impact, right, on corporate behavior.

We could have a whole separate conversation around corporate governance and, you know, shareholder activism and other ways in which companies have been, and, in a sense, need to respond, right, to public concerns and that certainly will include artificial intelligence.

Kevin Frazier: Well, we may have to have you back to discuss just that. But before I let you go, I came across something that I think too few Americans know about, which is that before we had Social Security, there was a gentleman, Dr. Townsend, who was just an average doctor who wrote an op-ed, and that op-ed created the Townsend movement, which was basically giving senior citizens $200, and the catch was they had to spend that $200 within the month. And that was going to stimulate the economy during the Great Depression. And that notion was extreme and people knew it was extreme, but it gave rise to the Social Security system we have today.

And so, to your point, if you share an idea, especially in a field that is as novel and as ripe for study as AI, you never know what's going to catch fire. Obviously it's a little hard to go viral these days. There's a lot of competition out there, but some idea might take hold. So I'm glad you're issuing this challenge to the listeners.

Mark Chinen: Well, that's a great illustration. Yeah, that makes me feel better.

Kevin Frazier: There we go. Well, on that note, thanks again for coming on, Mark. I think we'll go ahead and leave it there.

Mark Chinen: Well, thank you very much, Kevin.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Noam Osband of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.



Kevin Frazier is an Assistant Professor at St. Thomas University College of Law and Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin. He is writing for Lawfare as a Tarbell Fellow.
Mark Chinen is a professor at Seattle University School of Law.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.