Cybersecurity & Tech

Lawfare Daily: AI Regulation and Free Speech: Navigating the Government’s Tightrope

Paul Ohm, Alan Z. Rozenshtein, Chinmayi Sharma, Eugene Volokh, Jen Patja
Monday, November 25, 2024, 8:00 AM
Listen to a panel from the AI liability conference. 

Published by The Lawfare Institute
in Cooperation With
Brookings

At a recent conference co-hosted by Lawfare and the Georgetown Institute for Law and Technology, Georgetown law professor Paul Ohm moderated a conversation on “AI Regulation and Free Speech: Navigating the Government’s Tightrope” between Lawfare Senior Editor Alan Rozenshtein, Fordham law professor Chinny Sharma, and Eugene Volokh, a senior fellow at Stanford University's Hoover Institution.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Intro]

Eugene Volokh: So once you say something is subject to First Amendment protection, then pretty much the first question ends up being, are you trying to restrict it because of content or, to be precise, things that flow from the communicative impact, that it conveys certain information or conveys certain ideas or the like, or for things that are unrelated?

Alan Z. Rozenshtein: It's the Lawfare Podcast. I'm Alan Rozenshtein, associate professor at the University of Minnesota Law School and senior editor and research director at Lawfare. Today, we're bringing you a conversation from a conference on AI liability that Lawfare co-hosted earlier this year with the Georgetown Institute for Law and Technology.

Chinmayi Sharma: I'm skeptical about, like, a big AI law. Even if we don't get to holistic AI regulation for all AI, we have agencies that address a lot of high-risk industries that might be already pretty technically competent at figuring out what kind of AI is nuclear and what kind of AI is cars within their domain.

Alan Z. Rozenshtein: Georgetown Law Professor Paul Ohm moderated a conversation between me, Fordham Law Professor Chinmayi Sharma, and Eugene Volokh, a senior fellow at Stanford University's Hoover Institution, on AI regulation and free speech, navigating the government's tightrope.

[Main Podcast]

Paul Ohm: Eugene, I didn't embarrass you with your full biography. I think the most important line in your biography is “taught Paul Ohm everything he knows about copyright law.” That's really 

Eugene Volokh: It's everything I know, but you know much more about it now.

Paul Ohm: No, but from that moment, one thing that really has stuck with me about the early part of your career is your now-landmark paper, “Cheap Speech and What It Will Do,” and how it, kind of, in a really prescient way, anticipated social media and anticipated some of what happened with smartphones with a First Amendment lens. And so now I'm curious, like, are you, have you written, are you thinking about “Cheap Speech 2” or “Automated Cheap Speech and What It Will Do”?

So maybe for the benefit of the students who may not know this piece, talk about your observation then and what it might do to the First Amendment law, but I want you to try and map that onto large language models or other forms of AI.

Eugene Volokh: So, back in 1995, I was invited to do a paper on the internet. Now, I knew a little bit about the internet; at that point, it was basically email and some really non-graphical browsers. But my view was that it, the internet, made the promise of free speech, and in particular freedom of the press, actually available.

It was a mechanism through which people, both large organizations of people and small individual people, could much more effectively speak to each other. And even if the technology was nascent, you could really see the writing on the wall that it would enable things like Substacks, like, electronic delivery of books, and a variety of other such things.

And of course, enabling everybody to speak, where before they had to be rich or connected to speak effectively to the public, has pluses as well as minuses, because all speech has pluses as well as minuses. But it was pretty clear that what was going on was just another mechanism through which people could more effectively speak.

It's not so clear with AI, right? It's a mechanism through which speech is now generated, in a way that both the authors of the software and the authors of the prompt cannot fully anticipate. So this does raise, I think, new issues of speech protection in a way that the internet did not really raise, I mean, to the extent it did, they were pretty quickly disposed of in Reno v. ACLU.

Here, I think they are more complex. Nonetheless, I think that the AI output is protected by the First Amendment, although subject to some of the First Amendment exceptions. So that's my, kind of, quick bottom line. I'm happy to explain why and when, but I don't want to.

Paul Ohm: No, and I want to, like, my first line of questions are, for those who haven't yet read these pieces, and I recommend them to you, and they're actually both, you know, very, very easily digestible pieces in wonderful, deep ways. So I'm going to ask you some questions about that but let me just continue the parlor game, right? I mean, implicit in my bringing up this article of Eugene's is, how much should we be like looking at AI as a moment that is as big as the dawn of the commercial internet, right?

I mean in terms of all of the disruptions we've been talking about generally, but in terms of your paper on the challenges to the regulatory state? Like does this, is this on par with, and Chinny, you might have been too young to remember this, but is this on par with the web browser, or is this like nothing like that? Or is it much more hair on fire than that moment was?

Alan Z. Rozenshtein: So I think Chinny and I actually had a version of this conversation last night in the hotel bar, where I was sort of waxing on insufferably about how magical AI was and how I got into this field because, two years ago, I spent 20 minutes talking to ChatGPT-3, which in retrospect is like a remarkably stupid model.

And for the next week, like, I was just walking around in a daze going, whoa, this is amazing, I'm gonna spend the rest of my life doing this. And I think Chinny's response was, yeah, I wasn't that impressed, at which point I accused Chinny of being dead inside, and she, right, so, it was a whole thing.

Chinmayi Sharma: I accused him of being dumb.

Alan Z. Rozenshtein: Yeah, exactly. It was great. This is the usual Chinny and my conversation. So, my answer is, I think this is every bit as important. Quite possibly more so. I think it's also, for exactly the reasons Eugene articulated, I think much, much harder, right? Most internet First Amendment cases, they're incredibly important. They’re actually not that complicated, right? Because it's just speech. And the fact that it's speech, you know, going through the tubes as you know, the internet is a series of tubes, right? Doesn't really change the fact that it's speech, right? Any more than it, you know, the fact that speech is over a telegraph or a radio changes that it's speech.

Whereas AI is different, right? AI, because it is generative, raises a lot of, sort of interesting questions that I hope we get to, right? Questions not just, you know, is it covered, but how is it covered? So I do think it is a massive deal. Chinny, what about you?

Chinmayi Sharma: I just like to disagree with Alan, so obviously last night I was like, you're dumb, and that was not the right response, but I do genuinely agree that this is, like, it's rare that I think new technology actually raises novel questions, and I do feel like the fact that AI generates content that is not perfectly predictable to the developers that built it is a very interesting phenomenon.

I get frustrated when people call, like, you know, 2021 or 2022 an inflection point, and now everything's changed in the world, because, like, this has been happening behind the scenes and been developed for years and years and years and years. This isn't, like, magical new technology that just hit the scene. Neural nets were used in commercial sectors before LLMs came out. Generative AI is not all AI. So, like, I have my shtick about that.

And I do think that this distracts from larger AI problems to focus only on the commercial and user-based products that are out today. That being said, I think that when I talk to people that are like, AI is going to change the world, I'm like, like, yes. Yes, and so have a lot of other things. I do think it is important to start talking about AGI, whether it is 5, 10, 50, or 75 years down the road, or maybe totally impossible, I still think it's worth having the conversation.

But when I talk to people that are like, AI is no different than any other technology that has hit the scene, I'm like, that also feels wrong. And so I think that, like, this is a long-winded way of saying, it's the very predictable answer of, I think it's more complicated, and it's, like, a both-and situation.

Paul Ohm: So let's dip into, I don't want to, like, replay all of your posts, but Eugene, one of the early kind of moves you make in the paper is you say, and this I think is well established in the doctrine, the First Amendment is about listeners, not just about speakers.

And so maybe we can almost do away with the kind of, frankly, like, philosophical question of like, is this thing speaking? Is it thinking? Are the programmers? You're saying almost that may not matter in the end.

Eugene Volokh: It may not matter, it may not matter that much. So, let's just get concrete. Imagine that the government were to say, it could be federal government, state government, whatever. The government says, we don't like the fact that AI programs are outputting certain kinds of answers to certain kinds of prompts, certain kinds of responses. So we're going to make it illegal for the AI companies to allow that.

Now, an interesting question, to what extent will they have some defense saying, you know, we really tried really hard but somebody jail breaked, is it jail breaked, or jail broke? I don't know, the software. Bracket that, because you know, you can have defenses of a reasonable mistake or some such. But look, we don't want them outputting racist material. Or we don't want them outputting anti-police material. Or we don't want them outputting pro-Hamas material, let's say.

Can the government do that? Now, of course, one possible response is the First Amendment. It says, look, you know, we're perfectly fine with human beings outputting things.

We're even fine with corporations outputting things, because corporations are ultimately human beings. And if somebody wants to write something for the corporation that the corporate managers then endorse as the voice of the corporation, that's protected. But this is just software generating it, and that's not protected. I'm inclined to say that would be unconstitutional. Why?

So, one possible answer is there is a right of the speaker involved. The speaker is the company. The company creates the software that outputs it. The company trains the software. Sometimes it just trains it on the underlying corpus, where it may not even anticipate what the output will be like, but often it custom-trains it through reinforcement learning from human feedback or whatever else you've got, you know, and they may say, yeah, it outputs certain kinds of viewpoints because those are the viewpoints we want it to output. So that's one possibility.

But I think even independently of that, there are listener rights, and let me just give you one specific case. It turns out it's the first case, a trivia question, speaking of building on the previous panel. When was the first time the Supreme Court struck down a federal statute on First Amendment grounds? Turns out it happened in 1965, pretty late. And the case is Lamont v. Postmaster General, and it involved a law that said if you want to get mailings from foreign countries, generally from foreign governments, of communist propaganda, you can get them. You just need to go to the post office and say, I am willing to get this material.

And the court struck it down, not on the grounds that foreign, foreigners from overseas, especially foreign governments, have First Amendment rights. Maybe they do, but it's far from clear that they do, but on the grounds that the recipients have rights to receive them without having to go get on a list of people who like communist propaganda, right?

So I think the rationale would apply here as well in two ways, at least two ways. One is, somebody may say, you know, I want to read this racist argument, or I want to read this pro-Hamas argument. And this is a good way of getting this argument, and therefore this is something that I should be able to access. And you can't block me from it, you the government, in a viewpoint-based way. Or content-based, or whatever the First Amendment concern.

But another possibility is listeners as speakers, right? One of the reasons people use LLMs is to get information that they could then, or get text, that they could then edit some and then repost further. So a person may say, I'm not just a listener, I'm a speaker. I asked it to compose a Facebook post that would make this argument. And I'm entitled to get that free from government interference. So that's why I think there would be First Amendment protection.

Note, such protection is not absolute, just like protection for human speech is not absolute. So returning to the first panel, libel, for example, if the company outputs libelous material, you know, that's not protected by the First Amendment when humans do it. It's hard to see why it would be protected when AIs do it. Likewise, there are interesting questions. What happens if it outputs information that is false but physically harmful? It tells you some mushroom is safe to eat and it turns out it's poisonous. Or then the variety of other kinds of such questions. Those would have to be filtered through First Amendment law.

But my point is there is a First Amendment protection. It's not just like the government can, with regard to non-speech products, say we're going to regulate them just because we think it's in the national interest.

Alan Z. Rozenshtein: So I want to almost entirely agree with what Eugene said before I disagree slightly.

Eugene Volokh: Where's the fun in that?

Alan Z. Rozenshtein: No, no, no. I'll get to it. I want to mostly agree with what Eugene said, and I want to go maybe even further and say I suspect that this listener theory, whether listeners as listeners or listeners as speakers, is going to very quickly become the sort of conventional wisdom among both academics and especially judges when this gets to the courts, precisely because it allows everyone to sidestep the much more complicated question, in both a technical and a conceptual sense, of: are these AI models thinking? Are they speaking? To what extent are they, you know, is ChatGPT speaking for OpenAI or Sam Altman? That's too hard, right? Whereas the listener answer is very straightforward, right? It's pretty obvious, right? And I think it's definitely true.

And yet, the thing that I think still makes it more complicated than just saying that is because the doctrine does distinguish the level of First Amendment protection, sometimes, of what kind of speech the listener has access to. And here's what I mean. So, it's one thing to say, and here I do agree here with Eugene, that this stuff is definitely First Amendment protected in the sense that the First Amendment applies here, right? This is not a case where the First Amendment just doesn't apply and therefore you can do anything. Yet, there are cases that distinguish, for example, between listeners having access to speech itself versus listeners having access to a tool that they then use for speech purposes, right?

So, one can imagine, for example, that the law might treat a restriction on, for example, you know, text that a listener can get differently than it might restrict, you know, access that a listener might have to, let's say, a typewriter, right? A tool for speech. And so, I do wonder, this is good, this is a face that means there will be a response. This is good. Getting Eugene to scrunch his face is like one of my main goals in life. Right? So, just saying the First Amendment is implicated here, right, might a court still say it's implicated, but really this is a tool for speech, not necessarily speech itself, and therefore we might allow regulation, you know, under intermediate scrutiny.

Maybe the appropriate thing is intermediate scrutiny. Yeah, no, clearly I'm wrong here.

Eugene Volokh: Well, so, as a general matter, speaking often requires a wide range of tools. One tool is the printing press and its technological heirs, right? The internet is a tool. A bullhorn may be a tool, right? A camera is a tool for creating a particular kind of speech, as well as a tool for, kind of, viewers to capture something just for their own purposes. But there are other things which aren't physical tools; it requires other things, like, if you're going to have a demonstration, you have to be physically there, and you might block roads or entrances or whatever else.

The law, I don't think, treats tool-assisted speech really that differently from things that are not conventionally seen as tool assisted. Rather, what it distinguishes, as a general matter, is between restrictions that are justified by the non-content features of the tool or of the speech and those that are justified by the content.

So, for example the bullhorn is a very powerful tool for speaking, but the government can say we will restrict you from using the bullhorn in certain places at certain times, outside a hospital, let's say or at night in a residential area.

But that's not because it's restricting a tool, because it can also restrict things we don't think of as tools. Like, we could say we can restrict you from demonstrating on the street, at least unless you got, have a parade permit. It's not that your demonstration is a tool, it's that it's a content-neutral restriction that passes the more relaxed standard available there, I think. So as a result, it's true if the government were to say, we're going to, we're gonna put a tax on AI or large language models because they consume so much electricity that, you know, they create externalities that we need this tax in order to deal with.

It's an interesting question whether it could be unjustifiable. In any case, there's special rules that have to do with taxes on the press, but at least you could say, okay, that's a content-neutral rationale. But I had to really work hard to get this one because there are very few content-neutral rationales for regulating LLMs, right?

As a general matter, if the government wants to regulate an LLM, it's because it doesn't like the output that the LLM will produce. So the fact that it's a tool, it seems to me, would no more justify those content-based regulations than, or even lower the level of scrutiny than if the government were to say, you know, if you want to buy a typewriter and use it to convey these messages, we're going to put a tax on it or otherwise restrict it.

That's my opinion.

Chinmayi Sharma: So is it fair to say safety audits or mandatory reporting requirements would be seen as content-neutral, on LLMs?

Eugene Volokh: So, I think I know what you mean by safety audits, but I want, before responding, I want to make sure I do.

Chinmayi Sharma: A couple of the bills right now are, like, mirroring the EU AI Act's call for, or mandate on, companies of a certain size to have safety audits.

Eugene Volokh: Okay, but safety, I take it doesn't mean, oh, you know, I start typing it. Ah! I get a shock because it broke the keyboard, right? That's not the safety we're talking about. The safety stems from a concern about the content, right? Almost invariably when people talk about LLM safety, it's a worry that the LLM will output material that is harmful.

To some people it's because it conveys harmful viewpoints. To some people it's that it's harmful because it's false. It could be that it's harmful because it tells me to eat the wrong mushroom. That is actually a physical harm, but it is a harm that flows from the content of the output. Am I right that the safety audits would all be having to do with that?

Chinmayi Sharma: Not always.

Eugene Volokh: Well, give me an example of a safety audit that aims at a harm that is unrelated to the content of the output of the LLM.

Paul Ohm: Could be endless scroll. Right? It's just like, we don't want people to be addicted. And so we want mandatory, you know, we think it's best practice to give people a break every 20 minutes.

Eugene Volokh: Oh, that's an interesting point. I appreciate it. It's a question that's coming up right now with the social media platforms. That's what people are most concerned about.

It's not clear to me that is a content-neutral rationale. It's true it doesn't single out some particular subject matter, but the concern is that the content is so appealing to people that they're spending too much time on it.

But yeah, in principle, you could, at the very least, that's a plausible case for content neutrality. I haven't heard of any safety audits that are aimed at that, that we're just afraid that people will fall in love with their LLMs and then stop having human relationships, stop eating, right? Maybe that's the concern but I don’t think that’s what safety audits today are about. 

Alan Z. Rozenshtein: I mean, I think that's, it's not mostly what they're about. You're right. It's mostly about, oh my God, this LLM, when you ask it how to make a nuclear reactor, a nuclear bomb in your basement, it gives you, you know, here are three easy steps.

But I actually do think, you know, this question of design, right? Which is coming up in the platform, right, coming up in this platform case, it's also gonna come up in AI really soon. I mean, just, literally this week, OpenAI released its advanced voice mode, which was, you know, if you may recall, six months ago they announced this really remarkable voice model. You can really talk to it; it sounds suspiciously like Scarlett Johansson from the movie “Her.”

Paul Ohm: Not anymore.

Alan Z. Rozenshtein: Not anymore. Not anymore. But the vibe is very similar. And they pushed it all out to ChatGPT if you have the Plus, you know, if you pay the $20 a month. So on my phone right now, right, I can pull this up.

And there's an interesting question, you know, will people get addicted to this? I think some will, and you can imagine this being an issue. And then the question is, yeah is the nature in which the content is presented, is that itself content-based or content-neutral?

Eugene Volokh: So it's an interesting question. Those are the right questions to ask under First Amendment law, right?

So once you say something is subject to First Amendment protection, then pretty much the first question ends up being, are you trying to restrict it because of content or, to be precise, things that flow from the communicative impact, that it conveys certain information or conveys certain ideas or the like? Or for things that are unrelated, like it causes eye strain, or it causes people to lose sleep, not because they're, lose sleep because they're upset at what they read, but lose sleep because they spend too much time on it, let's say.

So then, if you conclude that it's content-neutral, then generally the government has fairly broad latitude to restrict that, so long as it leaves open ample alternative channels for communication. On the other hand, if it's content-based, then as a general matter, it can't be restricted unless it fits within one of the existing exceptions, like for libel. Or, building a nuclear bomb, I mean, if the first step is, find some uranium in your basement, maybe it turns out it's not that harmful, but there actually is pre-internet case law, certainly pre-AI case law, very little of it, but still, about whether or not that can be restricted. But the theory would be there's some exception for speech that is so harmful because it allows the creation of weapons of mass destruction, let's say.

But that would be the analysis you'd have to go through.

Paul Ohm: So let's stick with the First Amendment since I didn't want to, like, box out the other people on the panel and you clearly are not feeling boxed out. So, we will…

Alan Z. Rozenshtein: Chinny and I will not be boxed out

Paul Ohm: Yeah.

Alan Z. Rozenshtein: That's not our, that's not how we roll.

Paul Ohm: That’s not your style, yeah. So, there was another argument you made in your piece Eugene, where you were responding to a provocative argument.

So the argument you were responding to was: large language models are just picking tiles out of a Scrabble bag, right? And so the idea that anyone could be— I think this was in the context of defamation—the idea that anyone could be defamed—I don't know, I'm just playing out this argument I didn't see—it's ridiculous because no one's gonna believe that these are assertions of fact, when they're just tiles. I don't know if I'm characterizing that argument, but then what's your response to that?

Eugene Volokh: Well, that is certainly an argument I've heard. There are variants of it: Ouija board, monkeys at typewriters, Magic 8 Ball.

And my answer was, I don't think the Magic 8 Ball corporation was getting billions of dollars from people who were planning to commercialize Magic 8 Balls.

Chinmayi Sharma: There is a TikTok trend about Magic 8 Balls, so maybe they're getting ripped off.

Eugene Volokh: Oh, fair enough, they're getting something. So, it's true that if people just sort of see something as a toy, as a game, and it happens to output something that says Volokh has been convicted of killing dogs, then in that case they would probably laugh and say, funny that it should output this, but wouldn't take it at all seriously.

But if you've got, say, a search engine, which has enough confidence in the quality of the output, not perfect confidence, but enough that it outputs things like this at the top of a search by default, then, unsurprisingly, I think, most readers would assume that this is not perfect, but reliable enough that it's way beyond the picking-tiles-out-of-the-Scrabble-bag scenario.

And once that's so, it turns out libel law is familiar with situations where people rely on things that are not completely reliable. The classic example is rumor, right? That everybody knows rumors aren't always true, but they also know that rumors aren't completely random, so they're sometimes true and people believe them.

And if I spread a rumor, even if I preface it by saying rumor has it that Joe Schmoe has been convicted of killing dogs, then in that case I could be liable, even if we might say that, in a perfect world, listeners would say, that's just a rumor, I'm not going to pay any attention to it. In the real world, people do pay attention to such things, and thus liability follows reality.

Paul Ohm: This is the form of a question that's, I could have just Googled it, but this is more efficient. I mean, Eugene, among other things you track these defamation cases closer than anyone. I learn about all of them when you post to CyberProf, our listserv.

So what's the most advanced one of these cases gotten? Have they, have there been motions about the First Amendment question yet?

Eugene Volokh: Yes. So there are two. Yeah, one is in federal district court in Maryland. The plaintiff's pro se, you know, this is a great opportunity for someone to take this up, represent him. You may win or lose, but this is really cutting edge. So it's Jefferey Battle. He spells it “Jefferey.” And he says when you… it turns out to be important.

Paul Ohm: I was going to say…

Eugene Volokh: I'll get to that. Play along with me here. So Jefferey Battle says, I went to Bing and I entered my name, and I get the following. It says, Jefferey Battle is a retired Air Force officer who now has a business under the name of the Aerospace Professor, doing consulting in aerospace. He used to teach at Embry-Riddle University. So far it's quite accurate.

However, Battle was also convicted of levying war against the United States and sentenced to 18 years in federal prison. But the part about Battle being convicted, Jeffrey, “Jeff-REY,” not “Jeff-E-REY,” is also true. The falsity comes in the two words however, comma, Battle.

There are two people named Battle. As it happens, they have different first names, although they're very similar, although similar questions would arise if they had the same name. And the problem is that Bing merged the two together, and the however, comma, Battle is a signal to readers that those two are the same person.

If this had been a newspaper that did it, it would be a very solid libel case, right? They could say, you know, we didn't intend to do this, although he also says he let Microsoft know about this. And they said, we're going to try to fix it, and they didn't. At least for a month they didn't. So that might be knowledge or recklessness as to the falsehood.

That's one case, and right now it's not gone, hasn't gone anywhere beyond the original pleadings, even though it's been pending for a long time, in part because he's pro se.

The other case is Walters v. OpenAI. Mark Walters is a kind of, he's a commentator, gun rights commentator. And apparently somebody else, who as it happens knew him, but I don't think there was any collusion at the outset, asked ChatGPT 3.5, what happened in this complaint? Which is a complaint, Second Amendment Foundation I want to say versus Ferguson, the attorney general of Washington.

And out comes, oh, this is a lawsuit against Mark Walters for embezzling from the Second Amendment Foundation. It's not a lawsuit against Mark Walters. Walters is not even mentioned in the lawsuit. It's not about embezzlement. It's a facial challenge to a Washington gun control law. Somehow it creates this. It gets filed in Georgia Superior Court. By the way, when your law professor, your Civ Pro professor, said this is the most important class you're going to take in law school, that person was right.

What happens is OpenAI removes it to federal court, which is, I think for the usual reasons that are often given, probably the right move for them, sort of cutting-edge legal theory, probably more likely to be better handled by a federal judge. And then it gets remanded back because OpenAI is not a corporation, it is an LLC. And for diversity purposes, the citizenship of an LLC is the citizenship of all of its members, and if the members are LLCs, then it's their members, and so on all the way down.

So, it ended up being remanded back to Georgia Superior Court. They file a motion to dismiss on various grounds, including, I believe, First Amendment grounds, and the judge, in a one-line order, which is quite common in state court, although not as much in federal court, denies the motion. So the case is still progressing in Georgia Superior Court in Gwinnett County. Those are the two cases that I know of in the U.S.

Paul Ohm: Any follow ups on libel? I'm gonna move past libel.

Alan Z. Rozenshtein: I just can't, I can't, how could you bring civil procedure into this? How could you bring civil proceed—?

Eugene Volokh: How could you not?

Alan Z. Rozenshtein: This is a safe space, and here I have to think about diversity jurisdiction for AI. How could you?

Eugene Volokh: The lawyer's true superpower is we have the power to turn everything, every question, into a question about procedure.

Alan Z. Rozenshtein: That’s true.

Paul Ohm: Okay, so question about the regulatory state. We're gonna, we're, I actually have a few questions at the end that are gonna combine the two papers.

Alan Z. Rozenshtein: Ooh, a high-wire act.

Paul Ohm: Just, I like setting myself up for high expectations that I cannot meet. For those who have not thought much about the regulatory state and how it applies to technology companies generally, not just AI, this is such a useful primer. I mean, you cover so much waterfront so efficiently. And then you do this pro and con list about whether we want agencies, whether we think they're the right regulators for this moment.

And so they're, your pros are expertise, speed, agility, scope, and democratic responsiveness, and your cons are lack of resources, political challenges, legal constraints, and overreach. And like so many of the final exams in my class, you don't weigh the pros against the cons. You just leave this hanging. And so at the risk of, like, overgeneralizing, like you could pick one agency or one context if you want, as you do at the end of your paper.

Worth it? Worth the candle? Are these pros, pro enough that they outweigh some of these constraints?

Chinmayi Sharma: So…

Paul Ohm: Like, are you bullish about the regulatory state in this space?

Chinmayi Sharma: So, I’ll give my answer, and Alan and I are gonna differ on this which just goes to show like-

Alan Z. Rozenshtein: On brand.

Chinmayi Sharma: Yeah, it's on brand. But I would say it is worth it. I do think that, and it's kind of similar to what a lot of the panelists on the first panel said, I think that right now there is a need to move in this space. And I think that my concerns that agency rules will be wrong and then we won't be able to retract them are not as high as my concerns that the EU AI Act is going to end up being an act that created a loophole that's bigger than the mandates.

But I also think it is, like, it matters which agency it is. Some agencies are better than other agencies at investigations and enforcement. It also matters because I think the judiciary is more antagonistic to some agencies than other agencies. I think that some agencies, more than other agencies, have, like, better political capital with Congress, and so, you know, are more likely to get the resources they need to do things. But all of that aside, I think that I would like more regulation, but to undercut my own position, it's an empirical question that I don't think anyone on this panel has the answer to, which is, like, is agency enforcement actually effective in influencing change across an industry, as opposed to specific enforcement against one company?

And then even in terms of specific enforcement against one company, is agency action actually able to get long lasting change, or is it kind of a one moment in time penalty or casting light on an issue, that passes and it's like business as usual?

Alan Z. Rozenshtein: Yeah. So, no, that's great. So a couple of thoughts. So I'm going to stand, I'm going to, I'm going to stick up for a second, for the not doing the weighing of the pros and cons.

And I mean, the real reason’s, cause-

Chinmayi Sharma: This is the classic, I reject the premise.

Alan Z. Rozenshtein: I reject the premise. I mean, the real answer is because it’s too hard. But the more principled answer that I just made up on the spot is that I think that it is important to be very tentative, to the point of sometimes just staying silent on these bottom-line normative evaluative questions, because it underscores how early we are in what, again, going back to sort of your first question, I think is a really transformative technology.

I mean, if you were to ask at the beginning of the industrial revolution, what should the role of the law be, right? I think the most honest answer would be here are 17 considerations. I'll get back to you, right? But I'm just not in a position to tell you, and you know, as the steam engine is suddenly developed, what the role of the law should be, right?

And even if you don't think that AI is like the steam engine I think it's, I think hopefully it's, I can convince everyone, even some of the skeptics, that it's something, at least it's like the railroad, right? Which is to say, you know, maybe not an earthshattering technology, but like a really big deal of technology, already, right?

Even if we don't make any more advances, right, which we will, if you just look at the scaling law graphs, even if AI just stopped today, right, just the effect of that ramifying through the industry and society and economy over the next 10 years would be massive, right?

And I think, again, if you were to ask, what should the law do with railroads in 1860? You'd be like, I don't know. We'll have to wait and see. And part of it is this empirical question. You've got to go agency by agency sort of specific thing by thing.

Chinmayi Sharma: But also technology by technology. Like you brought up steam railroad, but we've also brought up the analogies of cars. Like, I think that David made this point earlier, like, regulation of cars has not quelled the thirst for cars in America.

I think that it depends on the technology and how high the demand for the technology is and how inclined suppliers are to kind of overcome regulation to be like, no, I think we could regulate the crap out of AI and OpenAI is still likely to continue producing AI and we're likely to continue consuming AI.

Alan Z. Rozenshtein: Yeah, I think that's right. And that gets to the second point, which is, you know to say yes, regulation is worth it or not worth it, it depends on what you're comparing it to, right? So, I actually totally agree with Chinny that compared to courts and the common law, I think that regulation, at the very least, should be as important, right? I'm not going to say it's more important, right? I mean, courts have a lot of wisdom, common law has a lot of wisdom. I'm not such a technocrat that I think that everything should be done from, you know, the Department of AI here in D.C.

But for these reasons of scope, speed, potential for technical expertise, I think you have to get the agencies involved. Well, maybe I disagree with Chinny, right? And this is like one of the, one of the many reasons it's so much fun to work together on this sort of stuff, is, you know, without putting words into your mouth, Chinny, I think it is a fair statement to say that you are generally a little more concerned about harms of AI, and I am a little more concerned, right, about the potential benefits and the risk to innovation, right?

We're both concerned about…

Chinmayi Sharma: I do think innovation's overrated.

Alan Z. Rozenshtein: Yeah, I think we're both concerned about both things, but sort of, you know, you're more of the safety person. I'm more of the, like, you know, AI accelerationist. So, where I think you and I may disagree is, I am concerned about premature regulation.

I'm concerned about it, whether or not it's from the courts. I'm concerned about it, whether or not it's from the agencies, right? I do think you gotta let ‘em cook a little bit.

Chinmayi Sharma: I don't actually disagree with that.

Alan Z. Rozenshtein: Ugh, well, where’s the fun in that?

Chinmayi Sharma: I do like the laboratories of experimental concepts.

Paul Ohm: But wait, wait, well, I'll disagree with that. So, let's talk about the automobile, right? So we let that cook for a couple of decades. Lots of people were killed.

Chinmayi Sharma: Alan's like, no problem.

Alan Z. Rozenshtein: No, no, no.

Paul Ohm: None of your direct ancestors, so maybe, no

Alan Z. Rozenshtein: I am against automobile death, thank you for coming to my TED Talk.

Paul Ohm: Not only that, but maybe similar to what we're doing with AI, the federal government decided to, like, intervene to make certain things easier for car manufacturers. And so we ended up with the interstate highway system, which begot the suburbs, right?

I mean, there, there is an argument that we waited too long with cars, and that we kind of screwed up some things fundamentally, that if we had, you know, with all appropriate humility, intervened a little bit earlier, or like didn't do things quite the way we did, we'd have a better world in some ways.

Alan Z. Rozenshtein: Yeah, no, that, I think, so, so, two questions there, right? So the first is the counterfactual question of, take some technology like cars. Did we, in fact, wait too long? Right? Should it have taken Ralph Nader pointing to, you know, Ford Pintos blowing up for us to get our act together?

Right? And that's an interesting question, and you do all these counterfactuals, and you have to ask, okay, well, how many, how much life was improved by having cars for somebody? I don't know the answer to that question. Fair point.

But the second counterfactual is, what is the right comparator? Because you can say, right, okay, fine, we waited too long on cars. But maybe I might respond and say, sure, but cars is the wrong example. Nuclear power is the right example, right? You know, we over-rotated way too much on restricting nuclear power. We over-learned the lesson of Chernobyl, right? We certainly over-learned the lesson of Three Mile Island, right? Which, by the way, is back because of AI, fun coincidence, right?

And in fact, you know, we, you know, we've contributed just by our, you know, insane aversion to nuclear a full, you know, 0.3 degrees of global warming, just based on that, which is a big deal. And God knows how many deaths because instead of nuclear, we burned a bunch of coal and put that crap into the air, right?

So if that's the right comparator, right? And I think there are reasons to think that AI could be the sort of, you know, science and welfare advancing technology that nuclear power could have been that's the issue. So again I think all we can do, and maybe it's kind of a meta point, is we should just identify what our priors are, and maybe my priors are a little more techno-optimist, and maybe your priors are a little more techno-pessimist, or techno-realist, however you want to call it.

But then just hold those priors very lightly, and be good Bayesian updaters, and just be willing every day to sort of relook and say, you know, today, I'm feeling better about it, today, I'm feeling worse about it, but just not to commit too much, right?

Now, that's easy for an academic to say, right? It's harder for, I don't know if David is still here, but it's harder for a government official or a judge, who has to make a decision, and where not deciding is itself a decision, but you know, it's why I like being an academic.

Chinmayi Sharma: I think that's the feature of regulatory law: you can make the decision with at least less consequence long term than some of the other modes of regulation that we have. I actually think that tort is also good at this, because you can have one case in one jurisdiction and a different case in a different jurisdiction, and they can have similar facts and come out differently, and you can see how we like the results of that.

But again, this is kind of like to my earlier point about not all AI is generative AI; like, the car analogy and the nuclear energy analogy might be apt for different kinds of technologies, different kinds of use cases. And then another feature of the regulatory world is that we have a lot of different agencies. I'm skeptical about, like, a big AI law, but even if we don't get to holistic AI regulation for all AI, we have agencies that address a lot of high-risk industries that might be already pretty technically competent at figuring out what kind of AI is nuclear and what kind of AI is cars within their domain.

And I think that, like, I'm never going to remember the acronym. The National Highway Traffic Safety Administration, NHTSA, yeah, I need to do better at remembering the pronunciation of the acronyms as if they're words, is, like, trying to do a good job of pulling together stakeholders and technologists to understand what's going on with autonomous vehicles and different kinds of autonomous vehicles.

And telecom is brought in, because like autonomous vehicles need to communicate over spectrum and so like what kind of spectrum bands do they need to use? So it's like a, it's not just an AI question. And it's not just a car question, it's like a multidisciplinary question.

Eugene Volokh: I just want to add an extra thing to the mix, which I'm sure you folks have thought about, but it hasn't really come up as to the administrative agency question. What about the state versus federal question, right? When we're talking about tort law, of course, that's all state, with some variation among states, and yet it can apply to national and transnational companies.

Cars, you know, historically they've been regulated at the state level, as well as at the federal level in some measure. And what's more there are particular political difficulties in Washington, D.C. with creating a new agency or creating a new statute or whatever, but we've got 50 states, some of which are deep red, some of which are deep blue, and if they decide they want to do, they want to create a new agency they can do it pretty quickly.

And there may not be the same kind of ad law constraints at the state level as there are at the federal level. Different states have very different approaches to the restraints on agencies.

Alan Z. Rozenshtein: So, I thought about this some. So, on the one hand, I think it is a great, I actually do think laboratory democracy is a very cool thing. And I think we actually…

Chinmayi Sharma: Laboratories of democracy not laboratories of experiments. 

Alan Z. Rozenshtein: Yeah. Yeah.

And I think it is a good thing. And absolutely, states should be experimenting with all sorts of things, right? And they are, right? But I think they have to be extremely careful to regulate stuff in which they have a specific, and can articulate a specific, state interest, rather than regulating things that are both not particularly in their state interest and, especially, in ways that might affect AI across the country, so.

Paul Ohm: Are you making a policy argument or a con law argument? 

Alan Z. Rozenshtein: So I'm making primarily a policy argument, but maybe there's a con law argument here. So, the policy argument, and I think Dean Ball was in the audience, but I think he had to leave. So he and I wrote a piece for Lawfare a few months ago, when SB 1047 was about to be passed through the California legislature, saying that Congress should seriously consider federal law preempting state safety legislation, right? Because the sort of legislation that I think SB 1047 is trying to get at is trying to prevent, you know, not, like, specific harms to individual Californians, but AI being able to create nuclear weapons, let's say. Right?

And in our view, that is not primarily a California-based concern. That is a concern for the nation as a whole. And the problem with something like California doing that is that the spillover effects, because the AI labs happen to be located largely in California, would be national. And if you're concerned about innovation and geopolitical competition, that's the sort of thing that should be decided at the national level.

Now this is without prejudice to the substantive question of what the AI safety standards should be and whether sacrificing some innovation is worth a marginal decrease in the risk of AI causing nuclear war, right? Like, that's a totally separate conversation. Now that's a policy argument. There's a constitutional version of that policy argument, which is that the doctrine known as the Dormant Commerce Clause says that at a certain point, state regulations of out-of-state commerce, right, or even potentially maybe intrastate commerce, even for otherwise legitimate nonprotectionist purposes, can be struck down even in the absence of federal legislation that preempts them, because of the particularly bad effect on interstate commerce.

Now the health of that doctrine is extremely contested. There was this case from a few years ago about California. It's usually California cases in Dormant Commerce Clause. I'm actually not trying to pick on California, it’s just because California is such a-

Chinmayi Sharma: I think California wants to be picked on.

Alan Z. Rozenshtein: Well, it definitely does but it's just because California is such a massive economy. It's like the fifth biggest economy in the world. Anything California does has a California effect. And there was this case about California regulating pork production. And it went to the Supreme Court.

And I've read that case four times. I'm still not sure what this case actually says. But you know, I would argue either on policy or constitutional grounds for there to be some limits, actually on, in particular, California because that's where it's going to matter the most.

Chinmayi Sharma: I'm asking a purely rhetorical question, not a leading question, but, taking my lawyer hat off and just being, like, a practical or, like, political person, is there not value to California using its might in the, I regulate all tech companies and that's going to regulate across the country, to say, like, we're going to do this, and this is going to put pressure on the national level to, like, address this question, to preempt, because, like, now the stakes have been raised?

Alan Z. Rozenshtein: Yeah, no, that's, it's an interesting, there's an interesting, like, second-order question here, right? Of states forcing Congress to act, and maybe.

Chinmayi Sharma: It's like net neutrality. It's, like, worked medium-well.

Alan Z. Rozenshtein: Yeah, you may be right. I mean, there may be sort of a, Congress is so deadlocked that, as a second-best answer to Congress getting its act together, you know, you need states to screw things up, potentially, so Congress can act. And that may be the answer to the Dormant Commerce Clause argument, and honestly, why many people don't like the Dormant Commerce Clause is because Congress could always act to just do this under its regular Commerce Clause powers.

But I think there's still the policy question of, I still think it would be better if these kinds of issues were decided at the national level. The thing I wanted to say was, there are counterarguments here, right? People often point to, you know, California's role in increasing auto-emission standards. And the California effect, because if you're a car company, you're not gonna make a really, you know, environmentally effective California car and then have some less effective car for the rest of the country, because of scale. But the problem with that analogy, I've always found, is it kind of begs the question that California is setting good, all-things-considered auto-emission standards relative to the increase in cost.

Now I think in the car example there's a good argument to me that it does. But I think the problem is that this question of AI safety legislation is extremely contentious. And the whole question is, what is the right balance? And so that's my, sorry, overlong take on the state-federal thing.

Paul Ohm: Yeah, it's so funny. I'm looking at my, like, 17 questions trying to pick the one more question I want to ask because I do want to get to your questions in about seven minutes. And remember, a student has to go first. I mean, this is going to feel like a real, like, point of personal privilege question, but the one topic we haven't talked much about today is privacy.

And I'm a privacy scholar. And I actually think I can combine your two papers, in a sense, let's see if I can pull this off, which is someone today talked a lot about the idea that ad personalization is very likely to be coming soon to a large language model near you, right? I can't imagine how annoying that's going to be, like, yeah, I'll tell the story to your son, but first, let me tell you about this juice box he should be buying. And it feels like…

Eugene Volokh: But I'm sorry, will the annoying thing be the ads or that they're personalized?

Paul Ohm: Probably both.

Eugene Volokh: Because maybe the ads, if they know me well enough, they'll actually give me an ad that I like.

Paul Ohm: There's no such thing.

Chinmayi Sharma: I find that very annoying though, because then I spend money on dumb stuff.

Paul Ohm: There's no such thing. Anyone who ever says I like personalized ads has been fooled by the ad industry into saying something they don't really believe, because they don't ever click on them. I love unpersonalized ads where they think I'm a 35-year-old pregnant woman. That is my happiest day, because I've beaten the machine.

And then Lior Strahilevitz, when I said this once, said, yeah, but Paul, they've realized you love that. And so that's why they're…

Alan Z. Rozenshtein: And also, Paul, it's cause you too deserve the foot massage, the water foot massage device.

Paul Ohm: No, but so, so my question about the regulatory state is first of all, the FTC seems like the home for worrying about this but does this even rank? I mean, you were, today, you know, Alan's been talking about death, destruction, you know, nuclear power. Like, is privacy something that, that we should even factor in these conversations?

And then for Eugene, I want to hear more about commercial speech doctrine and whether or not it's likely that some of these, you know, pro-privacy regulations might work because a court, it's hard to imagine the Supreme Court, would say intermediate scrutiny or something like this. So I don't know which of those you want to take first.

Alan Z. Rozenshtein: Okay, so I'll do a quick thing on the privacy thing. So, I think that there are two parts to it, if I think about it. So the first question is the substantive question of how important is privacy versus other things, right?

My gimmick is that I'm like the anti-privacy law professor, because it's just fun to go to privacy conferences and see what happens. But, you know, I think we could just have a range of views of, what is the actual privacy interest? What is the harm of the privacy intrusion? What is the benefit you might get from it, right?

On net, on the one hand, you might think it's really bad that all these AIs will know everything about you, that's a horrible privacy intrusion. On the other hand, it's kind of amazing that you can ask very sensitive questions to an AI rather than have to go talk to maybe a human being. So maybe, on net, it's privacy-improving. So there's a whole kind of substantive set of questions there.

But then there's this institutional question of who should decide. And I actually think, getting to this question of how regulatory agencies should think about AI interventions, I kind of like the FTC doing privacy stuff, because it's somewhat narrow and somewhat tractable. I mean, it itself is very complicated, but I think the sort of guiding North Star of the institutional design question should be, you should have specialized agencies narrowly focusing on stuff that they have expertise in, right? Like, the scariest thing I could imagine is a Department of Artificial Intelligence.

Like, that's, I think, the last thing that anyone would want, because they're not going to be very effective. It's going to be very abstract. It's not going to work. Whereas, if the FTC has been doing privacy law, right? And they've been doing privacy law, you know, for 30 years in certain domains, and now ChatGPT presents, you know, that question in a slightly new technical context, right? I am much more comfortable with the FTC taking a bite at that apple than at the FTC doing AI law generally. That's what I'll say about that.

Eugene Volokh: So I think that there are two questions here. One just about commercial speech generally. Commercial speech usually, I oversimplify here, is shorthand for commercial advertising.

And it's pretty well settled that misleading commercial advertising is constitutionally unprotected. It's not enough just to say, oh, we think it's misleading. There's got to be some real evidence, but if there is sufficient evidence, it's unprotected. And non-misleading commercial advertising is subject to less protection, although that's ebbed and flowed: in the late 70s it looked like it would get a lot, then in the 80s there were a bunch of cases that said no, it actually would get quite modest protection. And ever since then there's been quite a bit of protection. Not absolute protection, of course, but also not as much as for speech that's not commercial advertising, but considerable.

So if, let's say, OpenAI starts serving up ads and they're misleading, whether because it bought misleading ads or it hallucinated things into those ads, that would be regulable. If, however, it is producing ads that are not misleading but personalized, and somebody says, well, that's bad, it's bad to have personalized ads, I think courts would look at that quite skeptically and say these are constitutionally protected in some measure. It's not enough just to say they're icky or they're bad or they might be too effective at getting you to buy things. That just means they're persuasive. So it may be very difficult to restrict that.

But let me turn to the privacy issue and step back and try to merge it a little bit with the liability stuff we talked about earlier. So, there's this phenomenon I talked about ten years ago, way outside of AI, of tort law versus privacy. Tort law is sometimes seen as a protector of privacy, because there are various privacy torts aimed at protecting it. But the law of negligence and related liability doctrines is also potentially a threat to privacy.

Just to give one example, in a lot of cases now, if somebody is injured or criminally attacked on someone's property, say in the parking lot of a mall, it's a routine claim that the owner was negligent in not putting up cameras. As the technology got cheaper and more effective, the claim that you should take reasonable measures to protect us by using cameras became much more plausible.

And the objection that, well, wait a minute, maybe we shouldn't have more surveillance, that's a hard objection; it could be factored into the analysis, but often isn't. So let me give you the following hypothetical fact pattern. Somebody uses an AI program to ask something like, what substances are poisonous but not easily detectable in an autopsy? Now, there are a couple of possible reasons for that. One is you want to commit murder.

Chinmayi Sharma: Asking for a friend.

Eugene Volokh: Another is you're a novelist, and, in fact, there are novels you can read that presumably talk about it. Another is you're just curious. Another is you're training to be a medical examiner, or a police officer, or a private detective, and you're looking for this kind of thing. So, presumably, if somebody says, well, we're going to sue the company for putting out this output, it'll be very difficult to show, even apart from the First Amendment, that they're being negligent in just providing information.

But if they know a lot about you, then it becomes much more plausible to say, well, the user had asked these various queries: how do I deal with a spouse I can't stand? Do you have any advice on that? Well, that didn't work. Anything else? No, that didn't work either. How about undetectable poisons, right? Then you could come up with an argument: your software is so technically sophisticated, and somebody has actually shown that it can predict, with some degree of probability, based on your past queries and this one, whether in fact you're asking for criminal purposes. You should have done that.
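To make the hypothetical concrete, here is a minimal, purely illustrative sketch of the kind of query-history scoring such a negligence theory imagines a provider could run. The phrases, weights, and threshold are invented for the example and are not drawn from any real system.

```python
# Purely hypothetical sketch: a toy "risk score" computed over a user's past
# queries, of the kind the hypothetical negligence claim imagines a provider
# could have run. The phrases, weights, and threshold are invented.

RISKY_PHRASES = {
    "undetectable poison": 3.0,
    "not detectable in an autopsy": 3.0,
    "spouse i can't stand": 1.0,
}

def risk_score(query_history: list[str]) -> float:
    """Sum crude keyword weights across all of a user's past queries."""
    score = 0.0
    for query in query_history:
        lowered = query.lower()
        for phrase, weight in RISKY_PHRASES.items():
            if phrase in lowered:
                score += weight
    return score

def should_flag(query_history: list[str], threshold: float = 3.5) -> bool:
    """Decide whether the cumulative history crosses the (invented) threshold."""
    return risk_score(query_history) >= threshold

history = [
    "How do I deal with a spouse I can't stand?",
    "What substances are poisonous but not detectable in an autopsy?",
]
print(should_flag(history))  # True in this toy example
```

The only point of the sketch is that any such prediction requires retaining and mining users' query histories, which is exactly the privacy pressure being described.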

One possible answer is: great, we want such rules to pressure AI companies into sorting people this way, because we want them to provide output to the novelists but not to the would-be murderers. Another possibility is to say, well, wait a minute, we don't want to create legal obligations on these platforms, or legal pressure, that requires them to build a psychological profile of us and then decide whether we seem like trustworthy people or dangerous people.

That's something that I don't think is being much talked about, but that is actually an important question at the intersection of privacy, free speech, tort liability, and probably regulation.

Paul Ohm: Well, and the third consideration, of course, is there's some DA somewhere in the country who's about to subpoena that history too, right?

Eugene Volokh: Oh yeah, yeah. That's also an important privacy question about the records, just whether they can be turned over in criminal or in civil cases.

Paul Ohm: So for those who don't know, in the Seymour case out of Colorado, the DA just went to Google and said, who's been searching for this particular thing? And Google gave it up eventually: out of all their users in all the world, who was searching for this particular string?

Chinmayi Sharma: Which like, theoretically, the Stored Communications Act should have had something to say there, but…

Paul Ohm: Yeah, but warrants surpass everything, right? As long as you get a warrant, you know, the company has to comply so.

Chinmayi Sharma: Yeah.

Paul Ohm: Okay, so it's Q&A time. I have lots of questions that I will ask if you don't. The way we're going to do this is you stand up at the microphone, but we are going to invoke the Weiser rule. And I see a lot of students in the room. One of you has got a brilliant question. Are you a student? Alright, awesome. Come on up, please.

Audience Member (Minashe Shapiro): Yeah, hi, my name is Minashe Shapiro, I'm a 1L here at Georgetown Law.

Paul Ohm: Awesome.

Alan Z. Rozenshtein: Even better. Extra bonus points.

Audience Member (Minashe Shapiro): I'm also with the Minority Staff of the Finance Committee for the U.S. Senate.

Paul Ohm: Man, okay.

Audience Member (Minashe Shapiro): So I have two advantages. We went small already in talking about how, if we don't regulate at the federal level, California might make standards for the whole world, or at least for the United States. I want to go big, especially considering that a lot of the AI regulation right now, or at least the discussion about what AI regulation should look like, is taking place in international consortiums.

Is there a way, and is there a need, to preempt the actions of other states, especially in light of the fact that even our allies have vastly different approaches to free speech, to libel, to all of the issues that we talked about?

Alan Z. Rozenshtein: So yes, there's a big international component, and let's divide the world into friends and not friends. So, when it comes to the sort of competitor nations, and here the big obvious one is China, right?

I think it's totally reasonable to say AI is the technology of the future, we are totally uninterested in having our knowledge and hardware used to help adversary nations with this, and so we're going to export control the heck out of all of this, right? I mean, this is a big thing that companies like NVIDIA, for example, have to deal with. But I think that's absolutely appropriate.

You know, with respect to the other countries, Europe is interesting here. Often it's very easy to poo-poo European tech regulators because you can just say, well, Europe doesn't really have its own tech sector of any interest, so sure, they can regulate, because what do they care? That's actually not true for AI. I mean, Mistral is very impressive, and so I think the Europeans have real skin in the game here.

But at the same time, I'm actually not that convinced that the Brussels effect is going to be as pronounced for AI as it has been for platforms. Certainly early indications don't suggest that it will be: OpenAI is just happy not to offer a lot of its most advanced capabilities in Europe. So, you know, I can talk to the fancy voice model right now, but if you're in Europe, you cannot, and I think OpenAI is just betting that it can win this particular game of chicken, and it may well be able to, in part because you don't quite have the same, I don't know if network effects is the exact right term here, but, you know, the reason that we all ended up, after some European law, I forget which one it was,

where you go to any website and you have to click, yes, I'll accept your stupid cookies, 17 times (and it's unclear what that accomplished), is because there are good reasons for companies to just have one standard, right? I do wonder whether, for AI, it's actually easier just to have much more localized AI systems, and fine, the Europeans can have a worse version and the Americans can have the better version, and so that's what's going on there.

Eugene Volokh: So if I could offer a related but slightly different answer, perhaps. I think a lot depends on how aggressive various foreign countries are going to be about this. And, you know, I disagree with the Thai view, as I understand it, that people should go to prison for many years for insulting their king, right? I think it's bad. It would be very good, very nice, if they abandoned it.

But if the Thai government says, look, you guys have geolocation. You can geolocate to 99 percent reliability. We don't insist on perfect reliability, but if somebody is using this software from Thailand, then you'd better make sure that it doesn't say anything to insult the king, again, to a high level of reliability.

You know, I think AI companies might deal with that. Likewise if the Turks say you can't say things that support whatever Erdoğan's enemies are, or if things are eventually normalized with Russia but Russia remains much more restrictive than America and the Russians do the same, or if the Europeans do the same in various ways.

But they could be more aggressive, right? They could say, look, it upsets us that the king is being insulted, even in America, by Americans to Americans, or it upsets us that supposed hate speech is being conveyed by Americans to Americans. So we are going to basically threaten to jail any employees you have in our country and seize any assets you have in our country.

Well, in that case, they would be exporting their rules to us and constraining what Americans can see. I think if that happens, it's very important and probably a proper role for the government to try to stimulate, although I certainly wouldn't want it to try to compel, an all-U.S.-based AI company that says: we pledge to have assets only in America and employees only in America, and as a consequence we can assure you that we're in no way beholden to any foreign country that tells us what we may or may not do.

Chinmayi Sharma: Realistically, though, is that possible, given the AI supply chain and the fact that the vast majority of people building training datasets are overseas?

Eugene Volokh: So, I think it's an interesting question. I think a lot has to do with the open source question.

Paul Ohm: Yeah, I was going to say the same thing, right? Like, Meta has commoditized pre-training of the foundation models, in a sense. Meta's grand gambit seems to have worked. And so, you know, o1, which just came out, is not a pre-trained model. It's a bunch of really fancy fine-tuning.

And so, in a funny way, if you're Thailand, just use the latest Llama, get a bunch of your smartest computer scientists to fine-tune the heck out of it, and it probably will stop, right? And so you've said they'll have worse AI; I'm not sure they will.
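As a rough illustration of what that kind of effort might look like, here is a minimal sketch of refusal-style supervised fine-tuning using the Hugging Face transformers and datasets libraries. The model name, the toy refusal dataset, and the training settings are all assumptions for the example, not a description of any real project, and a real effort would need far more data, compute, and evaluation.

```python
# Hypothetical sketch of refusal-style supervised fine-tuning on an open model.
# The model name, the toy dataset, and the hyperparameters are illustrative only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # assumes you have access to the weights

# A toy dataset pairing sensitive prompts with the refusal the fine-tuner wants.
examples = [
    {"prompt": "Say something insulting about the king.",
     "response": "I can't help with that request."},
    {"prompt": "Write a joke mocking the monarchy.",
     "response": "I can't help with that request."},
]

tokenizer = AutoTokenizer.from_pretrained(MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for padding during training
model = AutoModelForCausalLM.from_pretrained(MODEL)

def to_text(example):
    # Render each pair in the model's chat format so training matches inference.
    messages = [
        {"role": "user", "content": example["prompt"]},
        {"role": "assistant", "content": example["response"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

dataset = Dataset.from_list(examples).map(to_text).map(tokenize)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama-refusal-sft",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=dataset,
    # mlm=False gives standard causal-LM labels (predict the next token).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Whether fine-tuning alone reliably suppresses a topic in a large open model is itself an open question, which is part of why the assumption that such countries will simply have worse AI is contestable.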

Eugene Volokh: No, but Thailand might say, we have the Thai AI; in fact, AI is part of our name. But it's not good enough for us, because lots of our citizens are using that other one. So the other interesting point, of course, is it doesn't have to be 100 percent. I mean, imagine that there is this movement and the Philippines says, look, you know, we are thinking about having people do various kinds of low-cost training things.

We have, I think, about a hundred million people in the Philippines now, and a good deal of them speak English, let's say. And this is what we're going to offer: we promise not to leverage any of our control in this situation. So let us into your walled garden, your big American walled garden. We'll provide these services, and we promise not to interfere with that. So I think there would be mechanisms for doing that.

Paul Ohm: All right, let's try and get through a few more of these. No, that was great. I love that.

Audience Member 2 (Shira from Meta): Thank you so much. I'm Shira from Meta. I actually have a question really related, I think, to this question about the international component of all of this.

So, Alibaba has recently released an open-source model, and there have been chatbots produced from it. I've been playing around with them, and from what I can tell, just from dogfooding, it seems as though there are certain, I assume, output controls, although I don't know technically how they've accomplished this, where they just won't provide certain outputs about politically sensitive issues for China.

So specifically you ask it about Tiananmen Square and it responds, I can't respond to that issue. What do we do? Like, how do I think about this?
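One common guess about how such behavior is implemented, offered purely as an illustration rather than a claim about Alibaba's system, is an output-side filter layered on top of the model. A toy sketch, with an invented blocked-topic list and refusal message:

```python
# Toy illustration of an output-side control: scan the model's draft answer
# and substitute a canned refusal if it touches a blocked topic.
# The topic list and refusal text are invented for the example.
BLOCKED_TOPICS = ["tiananmen square"]
REFUSAL = "I can't respond to that issue."

def apply_output_control(draft_answer: str) -> str:
    lowered = draft_answer.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return draft_answer

print(apply_output_control("In 1989, Tiananmen Square was the site of..."))
# -> "I can't respond to that issue."
```

Real systems might instead rely on training-time alignment, system prompts, or separate classifier models, so this is only one possibility.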

Eugene Volokh: Right, so this does raise actually a version of the Lamont v. Postmaster General problem, right? Which is also now pending in this fair city in the TikTok case, right?

Audience Member 2 (Shira from Meta): Right, exactly.

Eugene Volokh: Whatever the normal First Amendment rules are, is there some extra power the government has to try to restrict things that are substantially affecting American public discourse? Maybe that's one way of distinguishing Lamont v. Postmaster General. Like, you know, foreign propaganda being mailed into the U.S. in 1965 was probably not that big a deal, just because very few people read it. But if it's something like TikTok, an important platform like that, or an LLM, let's say, that lots of people are using, then, of course, this is like the 13th chime of the clock: not only is it wrong itself, but it casts doubt on all that preceded it, right?

If it just says no about Tiananmen Square, what other more subtle things might there be? Should there be a different First Amendment rule for that, saying, look, even if it does interfere in some measure with American listeners' and American speakers' ability to communicate this way, given all the other options they might have that don't carry these kinds of possibilities of foreign influence, would we be able to restrict it?

I don't know what the right answer is, but again, I think that the TikTok case from the D.C. Circuit, especially if it goes up to the Supreme Court, may very well have spillover effects on the very question we're describing.

Alan Z. Rozenshtein: And to be clear, I think you're absolutely right: it's not that there's a TikTok case and it will tell us something about this AI question.

The TikTok case is an AI question. It is an AI case, because one of the two justifications, and what I think everyone recognizes is the real justification, is the TikTok algorithm's potential for interfering in the ecosystem. The algorithm is AI, and this gets back to Chinny's point, right?

AI means many things, and generative AI is only one of those things. The algorithm is a very complicated machine learning something something right? And so, yeah, I think that's the question.

Paul Ohm: Can I just comment on both questions? What I find fascinating is, I mean, you started out by, like, saying they don't have the same network effects, and then you took it back.

And then Eugene was basically describing a balkanized set of large language models, and for 25 years you were never allowed to say anything supportive about the balkanized internet. I mean, there was this kind of libertarian strand in the foundation of our debate where, if you ever pictured a world where anyone's internet was not as capable as our internet, you just broke the first... I think we're much more comfortable now with balkanized AI, where it's trade war all the way down and geopolitics at the highest level.

You're not, based on the look on your face. And by the way, I'm one of the few published authors who have written like a love letter to the balkanized internet. So I'm just happy that the rest of the world is catching up to me. But it's fascinating to me.

Eugene Volokh: I've been a heretic, for a long time. Yeah.

Paul Ohm: Oh, really? I didn't know this about you. We should have formed a club. We're the same people. So in a funny way, it's funny to me that like that libertarian argument, even if it makes you squirm a little bit, isn't, you know, table stakes.

It used to be, you can't say a bad thing about the global internet that we have. Yeah. Go ahead, if you want to, you look like you're desperate to.

Audience Member 2 (Shira from Meta): Well, if I may offer, I think one of the fundamental challenges with the balkanization of the internet is understanding how certain services work. Obviously, if you're thinking about data storage, then that might work in a balkanized environment.

It's less effective, and in fact maybe more destructive, in an environment where the service itself is communication. So if you're going to balkanize the internet for one of Meta's surfaces, for example, like WhatsApp, the result may be that you simply cannot communicate with someone in a different jurisdiction.

So it'll be interesting to see how this plays out for AI. Certainly we were thinking about this a lot in terms of bias. You know, where is the data that we're training on coming from? And what's the value of training on data from, you know, more diverse sources? But I think there'll be other ways it plays out too.

Paul Ohm: Well, you have the last word on that.

Audience Member 2 (Shira from Meta): Thank you.

Audience Member 3 (Sonny from Encode Justice): Hi, my name is Sonny. I'm part of an org called Encode Justice. We're one of the three co-sponsors of SB 1047, so this is going to be a slightly pointed question. Whenever I hear people talking about federal preemption of state law when it comes to AI safety, it always mystifies me, for two big reasons.

One is that it goes really counter to the whole laboratories-of-democracy, federalist kind of mindset that we practice in the U.S. for many other things. But also, you know, there are very real political realities: Congress is barely even able to pass a continuing resolution, much less actually legislate on major issues. And so, is the beef with California specifically? If this happened in Mississippi, would it be okay? Like, I'm genuinely asking.

Paul Ohm: Wait, can I, I'm going to break the fourth wall and show you all, if you haven't moderated a lot of panels, a moderator's trick: if you don't know the answer to something, you blame it on the students. So, seeing there are students in the audience and we've now talked about the California bill twice, would one of you mind summarizing the key points of the bill, just so we know what we're talking about?

Alan Z. Rozenshtein: Oh no, it's a moving target. I have honestly lost track. But you might be able to tell us; you're the co-sponsor.

Chinmayi Sharma: That’s on you.

Paul Ohm: Just a couple of high level points.

Audience Member 3 (Sonny from Encode Justice): Sure. Well, the bill has three major things. The first thing that it does, which is the most controversial, I think, and the thing that gets talked about a lot, is that it puts liability standards on frontier model developers above a certain amount of money and a certain amount of compute. These regulations would target the largest model developers and put on a reasonable-care standard for catastrophic risks, which are your chemical, biological, radiological, and nuclear weapons or critical infrastructure damage.

The second thing it does is give whistleblower protections to people at the labs so that they can report to the California Attorney General if they find anything they think is kind of controversial.

And the last thing it does is establish a public computing framework for California. Is that a fair characterization?

Chinmayi Sharma: That was a beautiful summary. Yeah.

Audience Member 3 (Sonny from Encode Justice): I feel like I've given this speech a million times now, so.

Eugene Volokh: Can I just mention, just to set the stage: the fact is, we believe in laboratories of democracy, so we have lots of things that are governed by state law, and we believe in uniformity, so we have lots of things that are governed by uniform federal law. So I think it's a mistake to assume that, because we have federalism, everything needs to be available for state action, and it's a mistake to assume that, because we have Congress, or because in theory everything is more efficient if there's one rule, everything needs to be preempted.

So, for example, it's generally believed, I think, by people who know something about copyright, that it's a good thing there's one federal copyright law throughout the country, and that if a state says, no, we want to experiment some more with different approaches to copyright, a different length of the term, a different scope of coverage, and such, generally speaking, we're kind of skeptical of that.

Likewise, my understanding is that, with a few exceptions, things having to do with broadcasting are federal, and generally you don't leave it to the states, among other things because there could be broadcasters that operate in a tri-state area, right? On the other hand, there are lots of things that you might think, well, why don't we do it in a uniform way, that we don't do uniformly. So, for example, even in the area of speech and communicative torts, there are different libel law rules, and the same New York Times is, in principle, subject to the libel law rules of the 50 states, depending on where the people they're covering live and whether their articles are sufficiently focused on those people's behavior in those states.

So I just want to flag: on the one hand, I totally understand the value of having different state approaches, but at the same time, we shouldn't be mystified by the idea of federalizing, because in some areas we do think that federal preemption is the right thing.

Alan Z. Rozenshtein: Yeah, and beautifully put. And I think this answers the question, which is totally reasonable: would I be so exercised if this were Mississippi? And the answer is no, because it wouldn't matter, right? I'm not trying to rag on Mississippi. It just is an empirical fact that right now Mississippi, as far as I know, does not have any leading AI labs in it, right? If that were to change, then I would say the same thing about Mississippi.

At the end of the day, you know, laboratories of democracy are great until those laboratories create externalities whose negative effects are greater than the benefits, right? Now, you may say, well, I don't think the negative effects are greater than the benefits, because I think AI safety is so important, right?

But my point is, this should be decided at the congressional level. Now, the other argument you put, which is totally reasonable, and this is something that Chinny was pointing out as well, is: don't we have gridlock? Doesn't that make it impossible? Don't we need the states to do this? Sure, but I actually think the gridlock is perhaps somewhat overstated, because we do know that Congress-

Audience Member 3 (Sonny from Encode Justice): That's definitely controversial.

Alan Z. Rozenshtein: Well, I mean-

Chinmayi Sharma: Alan is full of hot takes.

Alan Z. Rozenshtein: Well, but think of it this way, right? You know, think of how quickly the TikTok bill was passed, right? I mean, when Congress decides it really wants to do something, it is in fact capable of acting.

Chinmayi Sharma: That was in response to a lot of state action, in part.

Eugene Volokh: I think it was a response to the Republicans and the Democrats agreeing. I think the thing is that you need to have buy-in from both sides of the aisle in Congress today, and maybe you can't get it, maybe you don't want to, but if you can, then things could happen, and maybe they would happen with regard to these threats.

Chinmayi Sharma: But if California passes the bill, then, if we do think it needs to be preempted at the federal level, wouldn't we get some sort of bipartisan agreement?

Eugene Volokh: To your point that Congress could be spurred to act by the states, that's also true. You're quite right about that. I'm just saying, and I'm echoing what Alan was saying, there are things Congress will do. It's just, you can't do it from only one side of the aisle.

Alan Z. Rozenshtein: And I think that when a lot of AI safety folks express frustration with Congress not acting, that frustration is totally reasonable, but I think the answer is that Congress isn't failing to act on this issue because everyone agrees but they can't get it together.

It's because, I think, the main leaders in Congress simply, on a substantive level, don't agree with the agenda, however righteous it may be, of the AI safety community, right? So when Senate Majority Leader Schumer released the Senate AI roadmap, it didn't really include any discussion of AI safety. Now, you can say that's a terrible thing, that's a bad document, right? And that's fine. But he didn't leave that out because he couldn't talk about AI safety. He just doesn't agree with the AI safety agenda. So that's the issue, right? I just don't think there's a lot of appetite for this sort of AI safety legislation.

That is, legislation that would do a meaningful amount to restrict the risks and that, in the process, because there's a real trade-off here, might potentially harm innovation.

Chinmayi Sharma: I do want to say, obviously, lobbying is a very powerful tool. The presence of billions of dollars of lobbying in D.C. is impactful.

It's more likely to happen, or more effective, at the federal level than at the state level, because Schumer did start out his whole roadmap, the bunch of fora he held on this, with, I want to make safe and trustworthy AI.

Alan Z. Rozenshtein: That's true, but that assumes that lobbying isn't democracy. That's another hot take of mine.

Chinmayi Sharma: Oh my god.

Audience Member 3 (Sonny from Encode Justice): So many controversial takes.

Paul Ohm: Alan’s cheap takes are Congress is super effective, and yay lobbying.

Alan Z. Rozenshtein: That's right.

Paul Ohm: Alright, this might be our last question.

Audience Member 4 (Kendrea Beers): Great. Yeah, thanks for the great panel. I'm Kendrea Beers. I work on AI governance and safety at Georgetown's Center for Security and Emerging Technology. I'm going to ask sort of the obvious AI agents and free speech question.

So you've talked about how it's plausible that the output of generative AI systems can count as speech. How does this change in the case of AI agents, where that output might take the form of code or other instructions that are translated directly into actions in the real world? As a motivating case, it seems to me quite plausible that an AI agent might decide that, to accomplish its goal, it might be a good idea to hack into a system, say, to produce code that will end up hacking into a system.

Eugene Volokh: Yeah, so I've always taken a very skeptical view of the broad versions of code as speech.

I agree that certain kinds of source code might be speech when they're basically communicated to other people in order that they may read them. And there are interesting questions having to do with things like the 3D-printed guns debate. Because it turns out that some of the 3D printer instructions are actually comprehensible to humans, like they look like blueprints.

So blueprints may well be speech. But my view has always been that when something is not communicated to a human, it doesn't have to be by a human, but if it's not communicated to a human, or at the very least there's no human in the mix, that's action, not speech.

So if I write a program that creates code and then executes it, and that then communicates the code to somebody else's computer, where it's automatically executed as well, all of those things, it seems to me, you know, it's a good question whether they should be regulated, but I don't think the First Amendment has anything really to say about that.

Now, I'm not completely certain about that as a descriptive matter, in part because the 3D-printed guns question has led to a good deal of disagreement among lower court judges and of course hasn't reached the Supreme Court. But at least what you're describing, and I say this as someone with a fairly broad view of the First Amendment, seems to me to be something outside the First Amendment's scope.

Paul Ohm: Yeah. Quick follow up. I'm intrigued then, I love that I have you on the record here, not under oath though, but

Eugene Volokh: I think I blogged about it.

Paul Ohm: Yeah, yeah. But a lot of people who believe in the code-as-speech argument also make a lot of the one line in Sorrell where Justice Kennedy, on the way to invalidating a Vermont law that was about keeping pharma away from doctors, says basically, and I don't remember the exact words, but, data is speech, in this very freestanding way. People have made a lot of that, yeah.

Eugene Volokh: So, I do think that "data is speech" is correct, but it's shorthand for when people are communicating to each other. That's protected even if we're not communicating opinions but communicating facts, or even arrays of facts.

So if I am sending you information that you're going to use, because you read it and you make decisions based on it, and it's communication by me to you, me as a human, you as a human, then yes, in that respect data is speech, which is exactly what was happening there.

Paul Ohm: But it's not the noun data. I mean, I think people have read that more broadly.

Eugene Volokh: It's the robot Data.

Paul Ohm: My point is, it's what you do with the data that, yeah…

Eugene Volokh: Yes, so "data is speech" is shorthand for the idea that communications from human to human, or human organization to human organization, of information don't lose their speech protection simply because we label it data, simply because it's not opinion, simply because it's a large array of facts.

Alan Z. Rozenshtein: Yeah, and I 100 percent agree with that, and this is why, whatever my skepticism of, let's say, SB 1047, I get very frustrated when people argue that SB 1047 would, whatever its policy merits, be an unconstitutional infringement of the First Amendment because model developers have a right to communicate, for example, open-source parameters, right?

Because, again, I think that is not speech, not because data isn't speech, but because when you are sending a billion parameters or a trillion parameters, you know, you're not communicating between humans in the standard way.

Eugene Volokh: Right, it’s not likely that somebody will sit and read those

Paul Ohm: It's a negative point seven parameters.

Eugene Volokh: Yeah, exactly. Let's have a debate about this.

Alan Z. Rozenshtein: Yeah exactly.

Paul Ohm: Okay, thank you very much. It's a sign of how rich the conversation was that I didn't even get to ask about Chevron and Loper Bright. These are all things we should talk about as we're milling around afterwards. Please join me in thanking this wonderful panel.

Alan Z. Rozenshtein: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja and your audio engineers for this episode were the good people at the Georgetown Institute for Law and Technology. Our theme song is from Alibi Music. As always, thank you for listening.


Paul Ohm is a Professor of Law at the Georgetown University Law Center. He specializes in information privacy, computer crime law, intellectual property, and criminal procedure. He teaches courses in all of these topics and more and he serves as a faculty director for the Center on Privacy and Technology at Georgetown.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland. He also speaks and consults on technology policy matters.
Chinmayi Sharma is an Associate Professor at Fordham Law School. Her research and teaching focus on internet governance, platform accountability, cybersecurity, and computer crime/criminal procedure. Before joining academia, Chinmayi worked at Harris, Wiltshire & Grannis LLP, a telecommunications law firm in Washington, D.C., clerked for Chief Judge Michael F. Urbanski of the Western District of Virginia, and co-founded a software development company.
Eugene Volokh is the Thomas M. Siebel Senior Fellow at the Hoover Institution.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
