
Cybersecurity & Tech

Lawfare Daily: AI Policy Under Technological Uncertainty, with Alex “amac” Macgillivray

Alan Z. Rozenshtein, Matt Perault, Alexander “amac” Macgillivray, Jen Patja
Tuesday, July 23, 2024, 8:00 AM
Discussing how to think about AI policy in an ever-changing environment.

Published by The Lawfare Institute in Cooperation With Brookings

Alan Rozenshtein, Associate Professor at the University of Minnesota Law School and Senior Editor at Lawfare, and Matt Perault, the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, sat down with Alexander Macgillivray, known to all as "amac," the former Principal Deputy Chief Technology Officer of the United States in the Biden administration and General Counsel at Twitter.

amac recently wrote a piece for Lawfare about making AI policy in a world of technological uncertainty, and Matt and Alan talked to him about how to do just that.


To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Introduction]

Alexander Macgillivray: There's no sort of state of nature that this uncertainty could bring us back to. So the question is, under a state of uncertainty, do we like the regulatory framework we currently have, or do we want to build a different one?

Alan Rozenshtein: It's the Lawfare Podcast. I'm Alan Rozenshtein, Associate Professor at the University of Minnesota Law School and Senior Editor at Lawfare, co-hosting with Matt Perault, the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill.

We're talking with Alexander Macgillivray, known to all as “amac,” who was the former Principal Deputy Chief Technology Officer of the United States in the Biden administration and General Counsel at Twitter.

Alexander Macgillivray: It won't surprise you to hear that I'm of neither camp in terms of it being completely protected versus completely unprotected. I do think we are going to have a lot of trouble with some of the types of regulation that certainly many people have been calling for.

Alan Rozenshtein: Amac recently wrote a piece for Lawfare about making AI policy in a world of technological uncertainty. And Matt and I talked to him about how to do just that.

[Main Podcast]

Matt Perault: So you served as principal deputy chief technology officer in the Biden administration and as deputy CTO in the Obama administration.

We're at a point right now where people are starting to think about who's going to be in the White House in January 2025, and there may be new people coming into the CTO office then. I'm curious what guidance you would give to people who are stepping into that office about how to do that job effectively.

Alexander Macgillivray: First of all, it's just an awesome job. There is, to my mind, no better job in the U.S. government. It's wonderful because you get to work with incredible people, but also because you get to be a part of a whole bunch of different policymaking, some of it specifically about how public policy gets made for technology in the United States, so things like, should we have national privacy legislation? Of course we should. And things like network neutrality or encryption, that sort of thing. But you also get to try to make tech better within government, to make government better for people. That combination of things is really a fun perch to be at.

You're not really in charge of writing or running any code, so it's not like some of the other places within government where you're having a hands-on impact on services for people, but instead you're trying to think about the architecture of all that and bringing more great people into government. Both times I had just a really enjoyable time; the work itself is fun. It's sometimes frustrating, of course, but it's really great. And there's just a huge difference between the beginning of the Obama administration, when they were still getting laptops into the White House, and the latter part of the Biden administration, where you really have people in all of the different councils, the National Security Council, the Domestic Policy Council, who are strong on technology.

So then the question for the CTO's office is: you're no longer the only people in the room who have that technological background. Instead, how do you add value? How do you make sure that we're bringing that lens to things and bringing some of the insights that we have from our unique backgrounds?

Matt Perault: There's one thing I'm curious to push on a little bit, which is you're talking about it from the perspective of someone who's been in democratic administrations. I assume a Republican CTO office and a Trump-led CTO office would function differently in some respects, but I would also assume that there are parts of government that might vary a little bit less in terms of public policy, but also just in terms of how they think about effectuating the mandate of their office.

And I would think in the CTO's office, maybe there'd be a little less variance from party to party. I'm curious how you see it. What would we expect if it was a Trump CTO in January 2025?

Alexander Macgillivray: Yeah, it's interesting. In the first Trump administration, I was a little bit involved in that transition on the Obama side, and they originally just wanted to keep the whole CTO team, I think because they didn't really understand all the things that we had done.

They really liked what we were doing in terms of making the government more modern and making it work better for people. And they didn't get that we also wanted privacy and good public policy. I think now it seems like so much of what Trump has said on the campaign trail has been very anti-government.

So whether they're still in favor of a good government that gets services to people, to the people that should have the services, I just, I don't know. And so I don't know what a Republican administration, particularly a Trump-led Republican administration would look like from the CTO team.

Alan Rozenshtein: Before we jump into talking about AI policy, I do want to ask one more question on the government angle. So before you worked in the Biden and Obama administrations, you worked at big tech companies: you were the deputy general counsel at Google and the general counsel at Twitter. And I assume that when you worked in those roles, you had some assumption about how things in the government worked, right?

Either looking at what we all look at in the news, or maybe from your own personal interactions in those roles. And so I'm curious, when you moved into those government roles, what surprised you most about the reality of how actual tech policy gets done in the government?

Alexander Macgillivray: Yeah, part of it, I certainly was never expecting to get to go into government, to have the privilege of working in government.

And part of that was because I had taken a bunch of, they weren't anti-government stances, but they were pro-user stances in a way that made government life a little bit more difficult. I've certainly been around tables within government where people were talking about, for example, the troubles that law enforcement has in getting into encrypted communication and having been a part of buying Signal way back in the day for Twitter and then spinning it off into its own non-profit.

So some of it was a surprise to be allowed inside at all, that they wanted that type of a person within government. And then some of it was just that I understood you'd be able to have a tremendous impact on people's lives within government. I understood that at an intellectual level, but I didn't understand how it would actually happen.

And, in spite of going to law school and the like, the piece of advice that Todd Park, the outgoing CTO when Megan and I were incoming during the Obama administration, gave me was basically: don't change. Don't try to meld yourself into what you think government wants of you.

Instead, just be yourself. And I found that to be good advice because I didn't find government to be too tremendously different from industry. Sure, the aims might be different. The mission was really uncomplicated in government: to help people, right? But the basic stuff, like building consensus, like treating people with respect no matter whether they were the administrative assistant or the president, the idea that you wanted to try to surface the best ideas and find a way to hash them out, all of those things are very similar in government, in spite of maybe the more careful, measure-twice-cut-once type attitude in government, because we're just so deeply impactful on people's lives.

Matt Perault: I'm curious about what you're saying about the sense of mission. One of the things that I think is so misleading, at least in my experience, about perceptions of industry is that they only care about the bottom line and don't care about mission. And then the perception that other organizations, whether it's academic institutions or nonprofits or government, care only about, quote unquote, helping people and not about other bottom-line concerns. I don't think the government is aimed at maximizing financial return, but there are a whole bunch of concerns not related to just helping people that motivate the government, from political considerations to other things.

But I think what you're saying now is that the architecture was similar, but it felt like the mission's different. Can you talk a little bit more about that? To what extent is that sense of mission different in the public sector than in the private sector?

Alexander Macgillivray: Yeah. I do think, part of this is the way the different sectors look at each other, right?

Like the private sector often looks at the public sector and thinks that a lot of what's going on is really political, which is a cynical view of how government is. And the same goes the other way: certainly more than once I was told that such and such decision made within Twitter or Google or Facebook or anywhere else was really made just for money. My experience of both really says that neither is true. The amount of times that money came up as a principal driving force for decision making within either Twitter or Google at the time that I was there --- and granted, they were pretty small companies at that point --- it just wasn't as much of a driver as user growth, which is linked to money, and delivering the right value for people.

And similarly within government, part of it is I wasn't in the political part of the federal government. Granted, I was a political appointee. I was within the Executive Office of the President, which does have some political connotations. I'm not blind to that. But I was not day to day in charge of how anybody's poll numbers would do or anything like that.

We really were trying to do the things that the president had promised and trying to really impact people's lives in a positive way. And I did find that to be true in every room that I was in. Now, there are certainly political strategists within the White House, but those were not so much the folks that I interacted with day to day.

Matt Perault: We were excited to have you on to talk about a range of things, but one of the main ones is a piece that you wrote for Lawfare at the beginning of the summer titled “What We Don't Know About AI and What It Means for Policy.” You do a bunch of different things now, you blog on your own, you're active in lots of different ways.

I'm curious, what was the thing that motivated you to want to write this piece?

Alexander Macgillivray: The big thing really was that in so many of the conversations about AI and AI policy, it's almost like you're having two different conversations and people are really blowing by each other. And a lot of that is based on the assumptions that people bring into the conversation.

And it seemed to me that people weren't being as clear about those assumptions as they might be, and in particular weren't being as clear about the lack of understanding behind those assumptions. Just to pick on one, there's this assumption that the current line of AI development is going up and to the right: as we get more and more compute, we get more and more performance, and the next generation will use an order of magnitude more compute and be an order of magnitude better, whatever the measurement is.

And I had the opportunity to talk with a lot of leaders in the AI community. And I remember asking one of the leaders, like, why do you think that's going to continue? And he basically said that he thought it would continue because it had in the past, which is something, but it's certainly not a logical argument as to why there's more to be squeezed out of it.

And he turned back to me and said, why do you think it will stop? And I said, I don't know. I don't have a better answer as to why I think the trajectory will be different. But the point is that neither of us knew; we were both basically guessing about the future. And when you're guessing about the future, you can have a lot better conversation if you're just a little bit more open about what you're guessing about.

Alan Rozenshtein: So your sources of uncertainty --- cost and capabilities --- make total sense to me. But I am curious about which way that cuts for your overall argument, because you say that even though there's uncertainty, that doesn't mean we shouldn't regulate. But one can make the opposite argument and say it's precisely because there's uncertainty that we should not regulate, which is to say, if you don't know what it is that you are regulating, what exactly are you doing?

So how do you respond, not to the sort of crude, deregulatory, “markets always know better” argument, but to the more measured one: under conditions of uncertainty, which are hopefully reasonably temporary, we should just do nothing and adopt a first-do-no-harm regulatory principle?

Alexander Macgillivray: Yeah, I guess my overlay to that argument would be that we have a regulatory framework right now, and that regulatory framework is relatively developed in some respects and undeveloped in others, but there's no sort of naive space where we're in some state of nature, right?

Copyright law exists; whether and how it applies to AI models is a good question. But the idea that there is no regulation within, let's say, copyright, just to pick on one, is just false with respect to AI. And the same is true of privacy. Our patchwork of privacy regulation is a thing in the United States.

It's not, you know, a very well developed thing, but there's no sort of state of nature that this uncertainty could bring us back to. So the question is, under a state of uncertainty, do we like the regulatory framework we currently have, or do we want to build a different one? And I do think that the current regulatory framework is not as well suited as one that would be stronger with respect to individual rights and more flexible in allowing agencies or other entities to continue to craft the regulation as AI develops.

Alan Rozenshtein: That's a great point, that "no regulation" doesn't actually exist, because there are all these existing background rules. So let me ask you maybe this question in a somewhat different way.

One way of thinking about regulation is to distinguish between two types, right? What we might call substantive regulation, which is to say you are regulating the actual thing, right? And so you are regulating the toaster, you are regulating the airplane, you're regulating the AI model. And the other kind might be thought of as meta-regulation.

You are regulating so that you can regulate better in the future. And you might think of information gathering, transparency, capacity building within government, procurement reform; these are all sorts of meta-regulatory approaches. And so maybe a better way of re-asking my question is: under conditions of uncertainty, what do you think about the approach of not doing as much substantive regulation, because again, you don't yet know what you are regulating, but really focusing in on the meta-regulation piece, so that when you finally figure out what you are dealing with, you are in a good position?

And so I'm curious if, a) you think that's a reasonable way of thinking about where to put the emphasis right now, and if so, what sort of meta-regulation --- which we should distinguish from regulating Meta, which is also in the AI space, so it's a little complicated here --- meta-regulation, hyphenated, lowercase, would be most helpful right now.

Alexander Macgillivray: Yeah, I'm going to react to something and then try to go higher up. But on reacting to something, I think there is plenty we do know right now. And that's part of what I was getting at in the piece, is that one thing uncertainty can do is allow you to focus on the things that are currently certain.

And we really do have lots of good evidence of the ways AI has done a bunch of things that are harmful to society, and making sure that we're both enforcing current regulations along those lines and developing new regulations where needed to deal with those current harms, I think that's extremely important, even as we think about the broader uncertainty.

So one thing is, I think I would push back on the idea that it's all uncertain. It's not all uncertain. There are some big uncertainties, but there are also things that are happening now. And it turns out that with the various uncertainties, there's also good reason to address some of those problems we're having now, both because we can be better at targeting them and seeing whether we can do something about them, but also because those same problems exist at bigger scales with the potential development of AI, depending on how it goes.

So that's the reaction. To pick it up a notch, I got very weirdly lucky during my undergrad and got to design my own major. And my major was literally reasoning under uncertainty. So reasoning and decision making under uncertainty: it turned out that Daniel Kahneman was at Princeton at the time when I was there, and I just loved everything that whole school of thought was thinking, so I thought I would design a major on that. And one of the classic experiments there is you take someone and you ask them about their preferences: you tell them that thing A has happened, and their preference is X; you tell them that thing B has happened, and their preference is X; and then you tell them that you don't know whether thing A or thing B is going to happen, and their preference is Y.

Which breaks your brain a little bit; a classic Daniel Kahneman --- I'm not sure if it was his result --- but a classic irrationality-under-uncertainty result. And so I worry a little bit that the uncertainty here is going to cause us to have that same problem. And so what I'd rather people do is think through what they would want to do if the scenario were A, and think through what they would want to do if the scenario were B, and then try to design for how we think through regulating in either of those circumstances.

And then, as you said, Alan, and I think that was a great point, we do need to do some of that meta-regulation so that we can quickly understand whether we're in world A or world B and so that we can adjust on the fly. So transparency, of course, is important. Getting a better understanding of how to do assessments of AI is, of course, important. But we shouldn't be paralyzed by the fact that in either world, we're probably going to want to have, for example, federal privacy legislation.

And so let's go do that, right?

Matt Perault: So you just gave a couple of examples of the things that fit into this category. You have this line in the piece that I think captures this really succinctly. You say, “policy that focuses on fostering real beneficial applications of AI while cracking down on current harms remains vitally important.”

And you say it's unaffected by uncertainty. So in addition to the examples that you just gave, what are the sets of policy ideas that might fall into this category?

Alexander Macgillivray: Yeah, I still think that the blueprint for an AI Bill of Rights gives a great rundown of this. So I'm just going to basically read out the principles that we came up with there, which are first of all, we want safe and effective systems. We want protection from algorithmic discrimination. We want data privacy. We want notice and explanation with AI systems. We want human alternatives, consideration and fallback. So those are the five things that we proposed in the Biden administration. And this all happened before ChatGPT's launch.

So before generative AI was the thing that everybody is talking about now. And I think they hold up really well, because those are the types of things that both help on the meta side and will be valuable in dealing with the harms that are happening right now and with whatever the trajectory of AI brings us.

Matt Perault: So you've worked for companies of a range of different sizes, and one question that I have about the AI Bill of Rights and the White House Executive Order, and then how it's been implemented, is just what some of these ideas will look like for smaller companies. I read one of the NIST --- quote unquote “new,” I think it was issued back in April --- guidance documents on generative AI, and it was incredibly thoughtful and incredibly detailed. It was, I think, roughly 60 pages of tables, and each table had about 10 things in it that were guidance for businesses. And right now it's voluntary, not mandatory, but I think some in industry think that this will transition from being voluntary guidance to mandatory guidance.

And as I was reading it, I was thinking, this is well intentioned, it's incredibly thoughtful, and I have no idea how a small team with a small compliance function would be able to implement some of these things in practice. And so I get the general idea of we should get out in front of some of these issues and tackle the ones that we know that we can target, even in the midst of uncertainty.

But what do you think about potential competitive effects across the industry overall? Is this going to strengthen the large companies relative to the small ones?

Alexander Macgillivray: Yeah, obviously that's not what you want to do within regulation, and the Biden administration has been really strong on wanting to ensure there's a competitive environment here, although AI is weird, right?

At least if you believe some of the current AI companies, and this is the question around cost: if their cost ideas are correct, you don't really have a small-company AI push, because it's going to cost you billions of dollars to do a single run. Maybe you have billions of dollars and lots of compute, but no employees.

But the idea that those billion-dollar runs don't have the overhead to do what will be a relatively small amount of compliance seems wrong to me. So at least on one cost trajectory, the cost of regulation is a very, very small rounding error for anybody who's actually building a model.

And then on the other side, yes, you do have issues there, if you're wrong about costs, right? If the AI leaders are wrong about costs, and actually we're talking about a much more commodified thing, about incremental improvements on open-source models, then you have a much bigger question about how to do regulation well.

And maybe the right answer there is to really think through how individual models are built and to try to align the compliance costs to those models. Maybe it's to think through what impact the model is having, right? I don't think anybody is trying to go after a computer science student doing homework for their AI class.

That's not the place where you're trying to get regulation, and I do think you can finely tune regulation to try to make sure it doesn't impede competition in that way. But it could be that --- again, if the AI leaders are right about cost --- AI is a place where competition and competition law need to be extremely active, because there's going to be this natural propensity toward having only a very few players, and having most of those players be extremely well financed companies, which is not something we generally have when we think about technologies like this.

Alan Rozenshtein: In your piece, you cite to the development of fuel efficiency standards for cars as a good example of how to do this kind of regulation under technological uncertainty. And so I was hoping you could unpack what that example was and why you think it's a good lens to think about AI.

Alexander Macgillivray: That example has some problems, because there were long periods where politics really got in the way of improvements. But what I was trying to get at there, and really this does bleed nicely into the court's Chevron ruling, which makes all of this, I think, a lot harder, is that some mix of regulatory principles and a driving force, with a more agile agency able to push on particular levers that can be pushed on, and then industry actually being in charge of the implementation aspects, is a pretty good recipe for most fast-moving things. And like I say, the fuel efficiency standards: there are some good parts of that and some bad parts of that. But what it did do was this: government was able to set a target, industry was eventually able to meet that target, and agencies were able to help in that process to make it actually work. That, I think, is a fairly good way of thinking about how we could do this in AI.

Alan Rozenshtein: So another question is whether you've seen any of the existing AI regulatory proposals that you view as good --- and so there've been some at the federal level, but not a ton; obviously there's been some stuff at the state level and there's the California AI safety bill rolling through --- but there are lots of other proposals across the country.

The EU obviously passed the EU AI Act, which is, as far as I can tell, 8 million pages long and who knows what's in it. Do you think any of those are good?

Alexander Macgillivray: I think there's a lot of good in a bunch of them. You look at the transparency requirements that run through a bunch of them that are somewhat different, but also relatively consistent.

The fact that the states are experimenting is probably very good. Although, the idea that we're going to regulate AI just for a bunch of states seems wrong to me. We really do need some sort of federal bringing together. So my basic answer to that is, yes, I think we have a bunch of good starts.

I do think that there's stuff that only the federal government really can do, stuff that the federal government needs to do. With respect to our relationship to the other countries that are trying to regulate AI, we don't really have a way to talk with them about how we're doing it at the federal level.

And we need that. We very much need that. So I would say: good starts, some high points. There are states that have essentially taken the blueprint from the AI Bill of Rights and tried to implement it, which is awesome, but there's lots of work still to do.

Matt Perault: So I think if I were asked who you should go to if you want deep thinking on First Amendment issues outside of the academy, you would probably be the first name on my list.

You were known for your expertise in First Amendment issues, but also just expression issues generally, particularly when you were the general counsel at Twitter. And so I'm curious about how you see First Amendment jurisprudence mapping onto the AI regulatory landscape. In other areas of tech policy, like content moderation and child safety, we've seen some push at the state level to enact new laws, and then we've seen those regulations get pulled back on First Amendment grounds by courts. Do you think the same thing is about to happen in AI policy?

Alexander Macgillivray: Well, first of all, I would say that these days there are tons of really smart voices on the First Amendment, including lots of great people on Lawfare, which is where I get a lot of my First Amendment stuff. So that's the first thing I would say. I've been out of the game; I haven't been a lawyer in an awfully long time.

But on the second part of your question, I thought some of the things coming out of the court were really interesting, in that it seems like the court is trying to grapple with the complexity of some of the First Amendment issues as they're applied to the platforms, which is something that the trust and safety professionals who have been working at these platforms have been doing for years.

But it is gratifying that, the more the courts, the executive branch, and the legislative branch try to do something here, the deeper they dive and the more they understand the complexity, which to a certain extent is making them stop doing stuff, which is probably not the right end state. But at least trying to get to that better understanding is encouraging to me.

And I think that also comes from the view of 230, which seems to be continuing to grow and change as a lot of people who came at it with a knee-jerk "hey, it should all be thrown away" start to understand what that would actually mean in practice, and that it might actually go against some of the goals that they have, whether that be First Amendment-protected speech, or trying to enable diversity of speakers, or trying to protect people who are subject to harassment online.

These are all things that are bound up in these questions, and it's just great to see people trying to grapple with it a little bit more and going to some of the real experts on it, the people who have made these trust and safety decisions on the ground.

Alan Rozenshtein: I do want to take a moment though to dig in a little bit and at least just get your preliminary intuitions on this. And again, no one's holding anyone to these intuitions, right? Because this is so fast moving. But one can imagine a spectrum of views on the question of whether the First Amendment applies to AI.

And again, that's very broad, right? Because AI is very broad and different applications of AI are very broad. But there are extreme positions, where on the one hand you can say, look, this is not regulating speech, this is regulating action, this is regulating instructions for computers, and just as your toaster, or your toaster manufacturer, doesn't have First Amendment protections, nothing about AI should have First Amendment protections.

You can go to the other extreme and say AI is made using computer code and computer code is speech. Or you can cite the famous Bernstein cases from the 1990s and say, therefore, all of this is speech. And you can make the argument that Apple made, for example, during the San Bernardino standoff over the iPhone, saying the FBI can't make us write code because code is speech and that's compelled speech. Or you can do some very complicated, case-by-case middle ground, where you say regulating AI weights and biases isn't quite speech, but if you regulate AI outputs, that might be speech, depending on whose speech you're talking about.

I'm just curious what your intuition is for how courts should think about the First Amendment in this area, because I think Matt is right that there certainly will be challenges to this, the moment that AI regulation has really any teeth whatsoever.

Alexander Macgillivray: Yeah, and I don't think the scholarship on this is as deep as it will be.

I think there's a lot more work to be done here because people are still working-

Alan Rozenshtein: We’re on it. We're working on it. I promise.

Alexander Macgillivray: People are still, I think, grappling with substantively, what do we want before getting to and is any of it legal with respect to the First Amendment. It won't surprise you to hear that I'm of neither camp in terms of it being completely protected versus completely unprotected.

I do think we are going to have a lot of trouble with some of the types of regulation that certainly many people have been calling for, especially when you think through misinformation. The First Amendment really has a tough time dealing with misinformation. Courts have a tough time, is what I really should have said, with misinformation, because we just don't have the tools to deal with that from a legal perspective; in terms of what the government can do, it just makes it much, much harder.

And I think whenever we've tried to do that in the past, it hasn't worked out very well. So misinformation is a classic example of something that I think a lot of people are worried about with respect to AI. And rightly so; it's just unclear that the tools of government, at least within the United States, are going to be very useful with respect to that problem, because the First Amendment is going to be a real obstacle.

So I do think it's going to be case by case and all the rest of it, which is an annoying lawyerly way of saying it depends, but it's definitely going to be a filter through which everything else passes. And we too often make the mistake of saying the reason why this is such a problem is X --- and I don't mean the company there, although that could also be the case --- I mean X is something, and that something is either tech companies, or us not having passed the right legislation, or public officials being negative in some way, when really the First Amendment is the thing that is making it so that we can't do a particular regulation with respect to a particular thing like misinformation.

Alan Rozenshtein: So another potential legal impediment that I want to talk about, and I think we briefly touched on this earlier in the conversation, is the question of whether agencies actually have necessary authorities to do the sorts of regulation that we might want. So one of the biggest cases this term was the Loper Bright case, which overruled Chevron.

What exactly that means is not entirely clear, but I think it's safe to say that, at the very least, agencies are going to have less flexibility when interpreting their statutes. And that's particularly relevant as they comb through their organic statutes to find any hook for regulating AI. And then more generally, I think over the last couple of years, you've just seen more judicial skepticism of, on the one hand, increased administrative power, and on the other hand, Congress's ability to delegate to agencies. So you have, for example, cases about the major questions doctrine, which, again, just to summarize very briefly, says that if a statute is ambiguous as to whether or not an agency gets a big new power, we're going to read the statute basically not to give the agency that power.

And, I think, especially if you think that AI is a big deal --- certainly if you think it's an existentially big deal, but even if you think it's just going to be a huge part of the economy and therefore regulating it is by definition a big deal --- that might be another limit on agencies' ability to regulate.

So I'm curious to what extent you think that these decisions will, in fact, hamstring agencies, and what advice you would give to agency general counsels as to how to work around that, whether it's being creative in some way, or just running to Congress and saying, guys, you have to give us more explicit powers now.

Alexander Macgillivray: Yeah, in the words of the great Strict Scrutiny podcast, they often say “precedent is for suckers.” That's what this court is teaching us. And I think that's just a really, really harmful thing for agencies, right? Not being able to understand --- and being an agency GC must just be very difficult these days --- not being able to know what you're standing on when you try to do regulations, or even just do your job as an agency, not being able to know what standard a court is going to apply, makes it extremely difficult. That might be the thing that the courts are going for here, but it's really bad for agencies. And I would point also to Jen Pahlka's extremely good book, “Recoding America.” She talks about the way agencies can better implement and execute on their missions, but all of that requires that the agencies are standing on solid ground.

And if we make it so that there's just no way for them to know whether a particular thing that they're doing is legal or not, which seems to be the project here, that's really tough. And it's going to make the job of governing harder, regardless of whether it's a Republican or a Democratic administration.

Matt Perault: So it seems like it makes it harder for the federal government, but I assume there will be lawmakers who will come in to fill gaps either internationally or at the state level. How do you see it unfolding?

Alexander Macgillivray: Yeah, I think that's not a bad guess. Although, in a functioning democracy, we would rely on our federal government for quite a few things.

And, I've been part of governments that really did deliver important, impactful things to people. So reducing uncertainty there, I think is an extremely important task.

Matt Perault: So we've talked a lot in this discussion about your roles and your perspective based on your public sector experience.

But maybe we could just conclude by talking a little bit about your private sector experience. Your Lawfare essay focuses on regulating in the face of uncertainty, which is something that you've done in your entire career, not just in the public sector, but in the private sector in roles at Google and as GC of Twitter.

So I'm wondering when you look back at your work in industry, are there things you would have done differently knowing how much evolution there would be in the field?

Alexander Macgillivray: Of course, there are many things that you might do differently. I think part of the mistake that we make, just focusing right now on the platform speech thread that runs through a lot of my career, is thinking that there is one idealized version of the policies and procedures that is out there somewhere and we just need to discover it. And so then changes, or things that you would do differently, appear like things you should have done back at the time, or something you should have already realized.

And I think there is a question of how quickly you adjust, but these platforms and communities really change over time, and the correct and appropriate regulation of them within a specific moment, particularly at companies where you do have the ability to iterate very quickly, is not static in any way, shape, or form. Like Twitter: when I started there, it was a little bit unclear to me, and I first rejected the job because I thought it was just all about the Kardashians and what people were eating for lunch, which wasn't that important to me. It is important to a bunch of people. And then as it became clearer and clearer that we were having a bigger and bigger impact on people's lives, and on important parts of people's lives, you do need different policies there.

And then as brigading became something that was a much bigger deal on the platform, you do need policies to deal with that. You do need to get into it. Matt, I'm sure you saw this a ton at Facebook as it grew. So I guess my answer is yes, sure, there are always things that one could do better in hindsight, but I also think these things probably need to change always, so there's never going to be an end point. And it's not even clear that there is a way to do these humongous platforms, which are like public places for a good portion of the world, well. I think Mike Masnick makes this point quite a bit too.

Alan Rozenshtein: I think that's a good place to end it.

Amac, thanks so much for the great post for Lawfare and for talking with us today and for all the great thinking you do on this. I'm sure we'll have more conversations as we sail bravely into our utopian or dystopian or somewhere in between AI future.

Alexander Macgillivray: Thank you so much, Matt and Alan.

Alan Rozenshtein: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Noam Osband of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.


Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, a senior editor at Lawfare, and a term member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
Matt Perault is a contributing editor at Lawfare, the director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, and a consultant on technology policy issues.
Alexander Macgillivray, also known as "amac," is curious about many things including ethics, law, policy, government, decision making, the Internet, algorithms, social justice, access to information, coding, and the intersection of all of those. He worked on the Biden transition team and administration, and was part of the founding team at the Trust & Safety Professional Association and Alloy.us. He was also a proud board member at Data & Society and Creative Commons, and an advisor to the Mozilla Tech Policy Fellows.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.