Lawfare Daily: Gavin Newsom Vetoes a Controversial AI Safety Bill
Published by The Lawfare Institute in Cooperation With Brookings
California Governor Gavin Newsom recently vetoed SB 1047, the controversial AI safety bill passed by the California legislature. Lawfare Senior Editor Alan Rozenshtein sat down with St. Thomas University College of Law Assistant Professor Kevin Frazier and George Mason University Mercatus Research Fellow Dean Ball to discuss what was in the bill, why Newsom vetoed it, and where AI safety policy goes from here.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://
Please note that the transcript was auto-generated and may contain errors.
Transcript
[Intro]
Dean Ball: I think the transparency about safety plans is just a good idea in general for the industry. So, I think that's one idea that came out of the bill that was pretty smart, but it also helps clarify what the reasonable care standard was going to be.
Alan Z. Rozenshtein: It's the Lawfare Podcast. I'm Alan Rozenshtein, associate professor at the University of Minnesota Law School and senior editor at Lawfare, with Kevin Frazier, assistant professor of law at St. Thomas University College of Law and co-director of the Center for Law and AI Risk, and Dean Ball, a research fellow at the Mercatus Center at George Mason University and author of the popular email newsletter Hyperdimensional.
Kevin Frazier: SB 1047 became a litmus test. If you supported it, then you were within the AI safety community. And if you didn't, then you very much weren't going to be someone who would necessarily be received with a wide embrace by folks who identify as members of that community.
Alan Z. Rozenshtein: Today we're talking about California Governor Gavin Newsom's veto of the controversial SB 1047 AI safety bill, what was in it, why Newsom vetoed it, and where AI safety policy goes from here.
[Main Podcast]
So, on Sunday, California Governor Gavin Newsom vetoed SB 1047, the high profile and quite controversial AI safety bill. Before we get into what he said in his veto message, I would just like to figure out what it was that was in this law that he vetoed. So, Kevin, what was in the latest version of this bill, since it of course had gone through some evolutions?
Kevin Frazier: Yeah. So the progression of SB 1047, like any good maturation process, has seen quite a few twists and turns. And there's, oddly enough, quite a bit of debate about whether it should have included more or less. I think it's kind of ironic that one of the main attacks by some folks was that it didn't include enough things, while other folks said it tried to accomplish too much in one fell swoop. So, with that in mind, I'm just going to kind of run down the main provisions.
The first is really looking at developers of quote covered models. And so one of the evolutions we saw in this process was what actually constituted a covered model. Covered models were AI models whose training required a certain compute threshold or amount of spending. And then fine-tuned models could also be covered depending on the amount of spending that went into that fine-tuning effort. And before training any such model, there were requirements regarding cybersecurity protections, the creation of what's been called a kill switch, something that has the capacity to promptly shut down the entire model, and the establishment of certain safety and security protocols.
And then some of the key language here, at least with respect to the veto debate, would be a requirement on any such developers to take quote, reasonable care, end quote, to implement appropriate measures to prevent critical harms. And critical harms here saw a bit of evolution through the drafting process, but it referred to harms resulting in mass casualties or at least $500 million in damage, or other comparable harms. Not sure what other comparable harms necessarily includes, but I'm sure we'll get into that.
Then before deployment of any such model, there had to be efforts made to determine whether the model was reasonably capable of causing or materially enabling a critical harm, performing sufficient testing to assess whether that critical harm could result, and then taking reasonable care to implement appropriate safeguards. So those are some of the key provisions about pre-training and pre-deployment expectations of developers.
Some other provisions also merit quite a degree of highlight. So, beginning in 2026, SB 1047 was going to require developers to annually retain a third-party auditor to perform independent audits of whether a developer was complying with the rest of the bill. Other key provisions included requirements on compute cluster developers, or anyone operating a compute cluster, to, for example, institute know-your-customer provisions. There was also an authorization of the attorney general to bring a civil action for any violations of the bill, which again was a provision that underwent quite a degree of scrutiny.
Then, importantly, there was also the creation of whistleblower protections for employees to be able to report to the attorney general or the labor commissioner if that employee had reasonable cause to believe that the developer was out of compliance with certain requirements or that the covered model posed an unreasonable risk of critical harm.
And then finally, one of the other provisions to point out here would be the creation of the Board of Frontier Models within government ops. And that would consist of nine members: five appointed by the governor and four by the legislature. So that's all that was included in SB 1047. Thanks for dealing with the rant.
Alan Z. Rozenshtein: No, no, that was great. So, you know, I want to talk about the whistleblower and the new agency stuff later. But I want to focus for now on the core provision, which I think was really what people were spun up about on both sides and what ultimately led Newsom to veto it.
Is it fair, just kind of abstracting, 'cause there's a lot in there, to say that basically what this law does is create a kind of statutory negligence or products liability regime for certain advanced AI systems, and that can be enforced by the government even in the absence of a specific harm? Like, if you just let one of these scary models out, then they can come after you. Is that a fair summary? Dean, you should also jump in if your understanding kind of differs from Kevin's.
Kevin Frazier: I would say that's a pretty fair, plain reading of the text. I mean, if you just look at what is required of developers pre-training a model, the requirement to take reasonable care to implement appropriate measures to prevent quote critical harms end quote, we're getting a better sense of what we're actually looking for from specific AI developers as they go through that initial process of trying to develop and then deploy a model.
And so in many ways, the creation of a reasonable care standard is the imputation, as you pointed out, Alan, of kind of just a traditional expectation of a liability regime. And critically, this is different from what was originally required in one of the earlier drafts of the bill, which was reasonable assurances to prevent these sorts of critical harms. And so, over the evolution of the bill, we saw a switch from that reasonable assurance language to a reasonable care standard, which moves the bill much more toward a traditional kind of liability regime.
Dean Ball: Yeah, and Alan, I'd say, you know, I think that mirrors my understanding of the bill as well. The one thing I would add, of course, is just that with rapidly advancing frontier AI models, it's very hard to know what exactly reasonable care means in practice. And I think another thing that is underrated is that we don't know what materially enabled means. So the critical harm happens when a frontier model materially enables some sort of harm above $500 million in damage or in that ballpark, a mass casualty event, et cetera. I think we just don't know what materially enabled means. So there's a lot of ambiguity there in my mind. But beyond that, I think that the basic read of the bill is the same.
Kevin Frazier: And I think that's important to point out, though, too, just to loop in a conversation I had on the Lawfare Podcast with Senator Wiener, the sponsor of this legislation. Ironically, he was pretty insistent that he didn't regard this language as ambiguous, that he thought this was a pretty well understood bill.
And that ambiguity, I do think, was a big reason why this was pretty easy for a lot of people to attack, both on a plain reading of the text and also bringing in the broader context, as Dean was pointing out: we don't know which tests, which evals, if we look at research from the Ada Lovelace Institute, are actually going to tell us the information we need to know to have any sort of reasonable care or understanding of whether or not a model is going to cause critical harm.
Alan Z. Rozenshtein: Yeah, I will say, maybe this is in defense of Senator Wiener, I'm not sure, but I think a legislator who writes a bill tends to think it's a lot clearer than the people who didn't write it and have to read it do. That's a pretty common problem.
So I want to turn to you now, Dean. Just, how did we get here? So, I mean, SB 1047 has gone through many iterations. I don't remember the last time anyone has focused on a state-level bill with quite such intensity as SB 1047. So just kind of walk us through a little bit of the main ways this bill has evolved over time, and also, for lack of a better term, kind of the political economy of the debate here. We'll start at a high level and we can sort of drill into the many interesting coalitions and sometimes odd bedfellows that we saw on both sides of this.
Dean Ball: Yeah, fascinating, fascinating political dynamics at play here. In terms, first of all, of the evolution of the bill: Kevin did a good overview of how we got there. I would say, you know, the conversation around this bill, so the text was released in early February, and the conversation really picked up starting around May and kind of just grew in intensity from there over the summer.
There have been a lot of criticisms brought against Scott Wiener and against the coalition that drafted this bill that I think are unfair, or at least overstated. But one thing that I think is fair is that Senator Wiener and the coalition that supported the bill, their willingness to change core provisions was really quite lacking until the very end.
So, in a certain sense, a lot of us felt like we were just repeating ourselves over and over again. And the bill went through many different iterations, rounds of amendments. And many of us still felt like, well, we're still sort of relying on the same foundational criticisms. So I think that's fair. And by the end, you know, I think some of those criticisms actually were incorporated, to Senator Wiener and his colleagues' credit.
Alan Z. Rozenshtein: Like, like what? What were kind of the main ways in which the bill, for those who were critical of it, improved? I guess maybe those who liked it from the beginning maybe didn't love the final changes, but what were the main ways it did change at the end?
Dean Ball: So I think one of the pivotal moments in the evolution of the bill was when the AI company Anthropic, which makes the Claude line of models, released a support-if-amended letter, which means an oppose letter, California's polite way of saying oppose, where they listed a bunch of very reasonable complaints. Everything from, like, the whistleblower protections, which as the bill was originally written applied to, like, Anthropic's janitors and, you know, the people who serve lunch at OpenAI's cafeteria and stuff like that, and they said that's a little too broad.
All the way up to: the bill creates all kinds of ways where, rather than liability attaching when a model does something harmful, liability can attach when you violate the provisions of SB 1047, which is a kind of different sort of problem. Like, if you violate the process in some small way, then you have significant liability that can attach. And Anthropic said, well, you know, we really don't want that. We really do just want to focus on the harms.
And so, I think the edits to the bill that came at the end sort of changed things such that frontier labs were required to release their safety plans, what in the industry are called responsible scaling policies. And the extent to which they were considered to have met the reasonable care standard would be based in large part on how those safety plans compared to their peers'. And I think that, a) the transparency about safety plans is just a good idea in general for the industry, so I think that's one idea that came out of the bill that was pretty smart. But also, it helps clarify what the reasonable care standard was going to be.
There are also a number of major criticisms that I've had and others have had that were not addressed, that, you know, certainly we'll get into. But the political dynamics evolved in a way that I really did not expect. First, just the kind of media environment in which this took place. A lot of the debate around this bill took place on Twitter and Substack, and I would say like conventional media-
Alan Z. Rozenshtein: And Lawfare to be clear.
Dean Ball: And Lawfare. That is true. That is true. Lawfare was on this way before the New York Times. And conventional media came in much later in the process. But for that reason, you know, a lot of online debates end up fracturing in sort of weird and unpredictable ways. And that definitely happened here. So you had your maybe traditional crop of techno-libertarians who were against the bill from day one. You had the open source community, which has a lot of people, you know, it's a broad political coalition, but you had academics in that world, startups, a lot of investors who want those startups to succeed. And the startups need open source AI, at least they think they do, to succeed.
Alan Z. Rozenshtein: And let me just pause on that actually for a second, because I think this is an important point. And maybe I can go back to Kevin for a second: what would the effect of SB 1047 have been on open source development, especially kind of frontier models like Llama?
Kevin Frazier: I think the short answer is, it depends; the long answer is, it also depends on who you ask. Some of those provisions underwent quite a degree of amendment throughout the process. And so initially it was quite unclear how the bill was going to address open-source models and the amount of liability developers of open source models could face. The provision we ended up with, as I briefly mentioned, was including within the definition of developers anyone who fine-tunes a model by spending more than $10 million.
So if you have Llama 3, for example, that's out there, and then you go and you fine-tune it to the amount of $10 million, well, now you find yourself subject to those developer provisions and oversight. So for some folks, that was a fair compromise. But I think for folks like Fei-Fei Li at Stanford HAI, that open source provision was quite concerning with respect to efforts by academics, researchers, and nonprofits to be able to use open source models, to do research and more, and to hopefully facilitate some more competition.
Dean Ball: Yeah. And I'll just take a quick step back here and just say, what is an open-source model? It's a model whose weights, the numbers that really define its capabilities, are available for free for download by anybody on the internet. As opposed to a closed source model like ChatGPT where you can use the model on the internet, but you can't access the weights.
And what access to the weights gets you is a lot more flexibility in how you can deploy it, a lot more flexibility in how you can customize it. So, Kevin mentioned fine-tunes. A fine-tune is basically when a developer or another user takes a trained model and then does further training on their own data. So let's say I want a model to be able to write bids for me. I'm a contractor and I want it to be able to write bids for customers in this format that my company uses. Well, what I might do is take a thousand bids that I've written in the past and fine-tune a model on them so that the model has some context for the specific use that I'll be using it for.
Now you can fine tune closed source models. OpenAI has a way of doing that, but it's significantly less flexible than, you know, when you're talking about an open source model. But what that also means, the access to the weights and the ability to do fine tunes on arbitrary data means that there are many, many, many different ways that an open source model can be used, including conceivably for harm. Now, we haven't seen that. In practice, we haven't seen a lot of open source language models used to accomplish anything particularly dangerous, but it is a worry that some people in the AI safety community have.
And so the liability risk for a company like Meta, which is the most prominent company that produces large, you know, frontier models that are open source, possibly the only one in the long term, is that there are just so many different ways that an open source model can be deployed, right? It can be fine-tuned with anything. Meta has no way to surveil what people are doing with it, which is not true of ChatGPT. OpenAI, when you're using ChatGPT, knows what you're asking the model. That won't be true of Meta with their open source models. And, you know, open source models can be deployed into apps, like software applications; all kinds of different things can happen there.
So the combinatoric possibilities with an open source model are just staggeringly high. And that has been true in software in general, which is why liability for open source software has been a sort of major, complex issue for many years. I know Lawfare did a podcast on this topic recently.
Alan Z. Rozenshtein: Yeah. I want to kind of, I want to kind of push this even further, maybe because open source is like, we all have our thing we're obsessed with, and open source is kind of the thing I'm obsessed with. And I will admit that I like the idea of open source AI almost on kind of ideological grounds. So I'm trying not to let that totally blind my view of SB 1047.
But, personally, at least my big concern about the law was that it wouldn't just discourage open source. I mean, the way I read it, it basically banned open source because maybe not Llama 3, or 3.2, but like Llama 4, Llama 5, right? You know, at some point these models, and we're going to get to the sort of actual AI safety merits later in the conversation. But like at some point, very plausibly, these models could really do real harm.
And, I mean, one of you please correct me if I'm wrong, but to the extent that this law imposed liability based on what an end user could do, and/or required a quote unquote kill switch: I mean, the whole point of an open source model, for the reasons Dean pointed out, is that you can fine-tune or train out any quote unquote kill switch that you put in there. So didn't this bill, to ask a leading question, effectively ban advanced open source, or am I being uncharitable here?
Dean Ball: Well, I think actually just to be fair to the drafters of SB 1047, the kill switch provision was part of the original bill, but they did remove it for open-source models.
Alan Z. Rozenshtein: Got it. Okay. That's helpful. That's, that is good. Good for them.
Kevin Frazier: With the caveat, though, right, that if you fine-tune a model to the tune of $10 million plus, you are defined as a developer, and then you must have the capacity to shut it down, aka have a kill switch.
Dean Ball: So, yeah, I think the idea there was you have to be able to shut it down. You have to remember that for the drafters of this bill, part of the concern is not that a person is going to use a model in a dangerous way. It is that a model during training will, in essence, wake up and decide to start harming humans, right? It's an assumption of the bill that that's possible, and a risk that we should be worried about. So the kill switch stuff, to give the most charitable interpretation, I think is primarily intended to deal with that eventuality where the model starts doing dangerous things while you're training it.
But I also think it is more generally true that, if you're talking about AI systems, you know, three, four years from now that are dramatically more capable than current ones, it's very hard to imagine how SB 1047 is compatible with open source versions of those models. I just don't see it. I continue to not see it. I've heard people who support the bill try to explain to me how that would work. But I've never really bought those explanations. So I think that, you know, at the core of what you're saying, Alan, there's still a significant amount of truth to that.
Kevin Frazier: Well, and I think, too, building off of the earlier political economy conversation, it's important to point out who the folks were who helped bring this to Senator Wiener's attention originally. As Dean was discussing, they tended to be far more AI safety or x-risk oriented than, I'd say, a lot of other individuals. And so, with respect to concerns about open source going off the rails, they have a different risk profile than, as we saw, Governor Newsom, for example, or than Alan, as Alan may be telling us, or folks concerned about competition right now.
So, I do think that the conversation about open source is changing quite a bit and has changed quite a bit since SB 1047 was initially released. Because we've seen folks like FTC Chair Lina Khan all of a sudden see open source as a very important competition approach to making sure we don't end up with a Western Union equivalent. For those who don't know telegraph history, I apologize for telling you what I've been reading these days, but it's way too much about the telegraph. Stay tuned for a forthcoming law review article.
But having some sort of Western Union that controls the entire AI ecosystem, like an OpenAI or OpenAI plus two, is something that folks are very concerned about. Now, knowing just how resource intensive it is and how the scaling effects tend to allow for greater concentration, open source has become, in the political discourse, much more, I'd say, palatable and acceptable and even encouraged among some strange bedfellows.
Alan Z. Rozenshtein: I want to get to Newsom's veto letter, but before that, I do want to get back to the political economy stuff, because Dean, you were in the middle of a very interesting explanation before I took us on this open source tangent. So let me kind of tee this back up for you a little bit.
So, just kind of finish talking about who the main players were on both sides of the coalition. And I'm especially curious about your thoughts about how national politicians engaged in this, because I think Nancy Pelosi's role in this is really interesting. And I think also comparing how, in particular, congressional Democrats have responded to this and other efforts versus how sort of AI policy people in the White House have. Tyler Cowen was on Brian Chau's podcast, I think earlier this week, and he made the interesting point that the White House AI people seem a little bit more safety-pilled than sort of the congressional Democrats, and I thought that was an interesting intra-Democrat fight on this.
Dean Ball: Yeah, you've got fascinating dynamics there, for sure. So yes, a number of people from D.C. weighed in on this bill, which, in and of itself, is just abnormal for federal policymakers to weigh in on a state bill. Many of them don't do that almost as a matter of principle.
Alan Z. Rozenshtein: Yeah, and especially California, like, people often don't badmouth their own state's bills because that often is very awkward for them when they come back home. So it took a lot, right, for them to do that.
Dean Ball: It took a lot. And I think, you know, I mean, one thing, Alan, you know, in our Lawfare article that we wrote together back in June, we kind of made the point that this bill has national security implications. And I think one thing we haven't mentioned in this conversation that is important context for D.C. weighing in is that the intention of SB 1047 was to regulate AI models all across America. So, any model distributed in California and any company that does business in California, by which is meant distributes any product or employs any person there.
Alan Z. Rozenshtein: And this is just to be clear. All of them, right? I mean, are there any major AI labs that don't have a major presence in California?
Dean Ball: No, this is every AI company in the world.
Kevin Frazier: I think what, 31 out of the 50 largest AI companies are in California, per the Newsom letter?
Dean Ball: Yeah. And even those that don't have a major presence might be covered, because if they employ a single person in the state, then they have an economic nexus and would therefore be counted as covered by SB 1047. So the bill is, or was, approximately federal. That is actually how Dan Hendrycks, one of the sponsors of the bill, described it.
So, yeah, I think that D.C. actually had a very legitimate sort of reason to weigh in here. It wasn't just pure politics. There was some of that, for sure, but we had Nancy Pelosi, Ro Khanna, Zoe Lofgren, and other members of the House Science Committee all weighing in against the bill. I don't know of a single federal policymaker who weighed in in favor of it. Lina Khan, as you said earlier, never came out explicitly against the bill, but she did talk a lot about the importance of open source. She went to an event hosted by Y Combinator and spoke right before Senator Wiener, and talked about the importance of open source, so I think that was meant to indicate something.
So you certainly had that. But then also, on the other side, no friend of Scott Wiener and generally no friend of government regulation, Elon Musk, came out and supported it. He supported SB 1047. So-
Alan Z. Rozenshtein: Is he still grumpy that OpenAI refuses to rename itself ClosedAI?
Dean Ball: There's definitely a part of that. And I think, you know, one big part of this is that, not entirely, but a substantial part of the sort of fissure over this bill really comes down to what you think of as the capabilities and risk trajectory of AI and how you sort of weigh the different likelihoods of these things.
So again, Dan Hendrycks, the head of the Center for AI Safety, which really authored the bill and was a co-sponsor, has, I believe, said in public that he has an 80 percent probability of doom, p(doom). Eighty percent is really high. You know, that means that you think a substantial number of human beings, billions of human beings, will be dead in 5 to 10 years because of AI. So, you know, that is certainly one opinionated take on the capabilities trajectory of this technology.
And other people have far more circumspect opinions about how that's going to develop. So it's been an interesting conversation in the sense that it boiled down in some ways to an intellectual dispute about, yes, the capabilities trajectory. But the capabilities trajectory is like a very academic discussion about information theory and the nature of intelligence and the nature of the world and all these kinds of philosophical issues. So it's been fascinating to observe.
Kevin Frazier: Well, and I think, to your point, Dean, SB 1047 became a litmus test. If you supported it, then you were within the AI safety community. And if you didn't, then you very much weren't going to be someone who would necessarily be received with a wide embrace by folks who identify as members of that community.
And that branding alone made discussions very suitable for X and for Bluesky and for Mastodon, right? Just screaming at one another about, well, if you don't support this, then you support the end of the world. Or if you do support this, then you hate innovation. And so, it just became a really interesting litmus test. And one aspect of the political economy that is concerning for me is the outsized role that OpenAI, Anthropic, and Meta played in conversations about both the content of the bill and actually shaping the bill. So obviously in any regulatory process, you of course want to know what's the input of the regulated entities. What are they going to think about it?
But to the extent that we had Senator Wiener, for example, going on Bloomberg and really touting and celebrating that, you know, OpenAI released this letter that said this bill isn't the worst thing in the world, and that means that this is great legislation. Seeing how reliant we were on these three companies to really shape the discourse of this entire legislation, for me, raises a lot of red flags about needing more voices, more independent expertise who can contribute to future legislation, both at the state level and in D.C. Because if our test is just, do these AI labs like the regulation or not, well, then of course they're always going to find a reason to oppose it, whether that's in Sacramento or in D.C. So I just wanted to call that out, because I thought it was quite concerning to see the amount of oxygen and the amount of deference that these labs received.
Alan Z. Rozenshtein: So let's now talk finally about Governor Newsom's veto message. So I've read it. I will admit I found it extremely puzzling. I will shout out Zvi Mowshowitz, who's a very interesting one of the Substack commentators, Dean, that you were alluding to earlier. His views are perhaps not my views about AI safety generally, but his views on the relative incoherence of Governor Newsom's veto message I found pretty helpful. So, folks can go read his latest Substack for that.
But let me just point out one part that I found extremely confusing, because on the one hand, Newsom seems really concerned about innovation. He sort of starts with his paean to how innovative California is, which is true and good and should be supported. But then he criticizes the bill for only focusing on the biggest models, which he says, quote, could give the public a false sense of security, which is on the one hand true. But that doesn't make sense as a critique of the bill; if anything, that would argue for, you know, regulating all models, not just the biggest ones.
And there are kind of these similar statements throughout the veto message, where on the one hand, he seems to be very concerned about innovation generally, but all of his critiques of the bill are that it doesn't go far enough, which of course would harm innovation. So, I cannot figure out what this veto message means. So, help?
Dean Ball: So yeah, I think it is a very confusing message for sure, and I think there are sort of a lot of contradictions in there. To a certain extent he's certainly right about the frontier model regulation challenge of how you identify the right thresholds. OpenAI's o1 models that came out in mid-September really challenge a lot of the assumptions that people had about, oh, well, we're going to make the models bigger and bigger, and that is going to be how we'll regulate them, because those models would be very expensive to make.
Still true. We are still going to make models bigger, but the o1 models complicate that because they don't appear to be models that would have triggered SB 1047 regulation, yet they are the most capable AI models in terms of benchmarks on the market today.
Alan Z. Rozenshtein: And just explain, because this is I think an important point, so just spin out a little bit why it is that o1 really challenges the it's-all-just-about-training-compute kind of paradigm that we've been using.
Dean Ball: Yeah. The way to think about the o1 models is basically as the existing version of GPT-4, but OpenAI has found a very clever way to train the model to think before it answers your question. So you'll ask it a question and, instead of just spitting out an answer the way ChatGPT does when you normally use it, the o1 models will say thinking, and they might think for 10 seconds, or they might even think for up to, you know, something like two minutes. And during that time, they're writing thousands and thousands of words that you don't see as the user. And after that thinking is done, they've come to an answer, and then that's what you see as the answer. And you see sort of a summary of the thinking.
So, this is what's called inference time, as opposed to training time. Training time is when, you know, the model is not released and it's being trained. Inference time is what it's called when it's running; when you're using it, the model is doing inference. And so, this idea of inference-time compute is applying more computing power at the moment when the user is using the model so that the model has time to think. People have been saying inference-time compute is valuable for a long time. The challenge has been that nobody knew how to make models think at a technical level. Sort of, how do you actually engineer that?
OpenAI seems to have figured that out. We don't really know the details of how, but because of that, we're seeing similar performance increases where you give the model more time to think and it gets better, even if you don't make the model bigger. So, some models from OpenAI, like the o1-mini model, might be quite small, like surprisingly small, far below the SB 1047 threshold, and again, as I said, able to get top-tier performance on mathematics benchmarks, things like that.
Kevin Frazier: And I would also add that I think this has been late breaking. The Allen Institute for AI recently released its OLMo model? I'm not quite sure how you say it. Dean, do you know? O-L-M-O.
Dean Ball: OLMo, I think.
Kevin Frazier: OLMo. I mean, can we just get some Senate interns to come up with some better acronyms for some of these models? I would really appreciate that.
Alan Z. Rozenshtein: We should do a separate podcast on why AI models, the naming is just, it's the worst in all of human history.
Kevin Frazier: It’s a concerning problem.
Alan Z. Rozenshtein: Never has a group, never has so much IQ failed so spectacularly at a basic task of naming things.
Kevin Frazier: We could have so many fun names, so many fun names, but instead we're stuck with OLMo? Anyways, so this is an example of a model that was trained on far less data than is traditionally used by OpenAI, right? Everyone's heard about OpenAI and other labs scraping the internet for all of the data that's ever existed. The Allen Institute for AI instead used just a really high-quality training databank.
And I think it's showing us that some of these smaller efforts, which may not meet the threshold, as Dean was pointing out, are going to achieve quite a degree of capabilities that rival the leading models but wouldn't otherwise have fallen within SB 1047. And I think that is, partially, a reason why Governor Newsom would want to veto this: perhaps just making sure that there's that sustained innovation.
But Alan, to your point, one of my favorite paragraphs of the veto message was the one walking through how he was happy to sign legislation from the legislature detailing all these different aspects of AI, covering a whole gamut of potential risks, but then again saying, I want to make sure we're pro-innovation, so this one piece of regulation I'm not going to support.
Dean Ball: Yeah. And I would also say Governor Newsom's veto message certainly indicates what could be a very troubling regulatory future for AI, where we create use-based regulation for kind of every conceivable use. So, the government will publish rules for if you're using AI in dentistry and if you're using it in construction and plumbing. And that just seems like crazy-making, given that I think firms don't really even know how to use AI. The idea of creating proactive rules just for the sake of being proactive probably hurts more than it helps.
And a good example of this, actually, is Governor Newsom's own executive order from a year ago relating to generative AI, which has this provision that would-be government contractors have to submit a form to the government that details all of their own internal uses of generative AI that are relevant to the project that they're bidding on. The problem with that is that if you use generative AI extensively- I mean, imagine if you were writing a bid or a grant application or something, and the application asked you, what are all the ways you use computers? What are all the ways you use microelectronics? That would be really hard to answer in a comprehensive way. And what you might just say is, well, you know what? I'm just going to do all the work I do for this grant on paper. Because I don't want to bother. And I think that kind of thing is a real possibility if we're not careful.
And so I hope that to the extent Governor Newsom is interested in innovation, he understands that innovation doesn't just mean having a light touch regulatory regime on the inventors of new models. It also means ensuring that AI can diffuse in productive ways throughout the economy, can be used by people and businesses to do things. That's what we want. We don't just want to have the top-tier language models. We want to have the top-tier uses. The most creative uses of AI, of anyone in the world.
Alan Z. Rozenshtein: So I want to clarify one thing you said though, Dean, because, you know, you said that you don't want a future where AI is regulated on a kind of per-use basis. I actually think that's the only potential future. And so I want to push you for a second on what you mean by that, because, you know, if your concern is, we don't want a bunch of proactive regulation that imposes a lot of burdens for no obvious reason and involves a lot of box checking. Sure. I agree with you. Fine. Fair enough.
But if the choice is between regulating things at the sort of super high frontier-model level, like we see here, or instead saying, look, and I'll just use an example close to my heart, if you're a lawyer, right, using AI models to help write your briefs, you have an independent obligation to do the sorts of things that we want lawyers to do with AI, for example, to make sure that you're not hallucinating cases that you submit to judges.
Now, I don't know what the version of that would be for dentists. I'm of course not a dentist, so I wouldn't know. But I would imagine that's actually exactly how you do want to regulate. You want to go sector by sector and say, you know, there are specific or special considerations for dentists when using AI, whatever those are, and then let's do a cost-benefit analysis there and figure out what the regulations are, and then go down the list. So is your objection to use-based regulation generally, or just to kind of silly, performative, fact-free regulation, which I think we can all agree is not a super good use of anyone's time?
Dean Ball: Yeah, so I have tried to distinguish in my writing between what I call use-based regulation and conduct-based regulation. This is a distinction that has not caught on with anyone else except for me, but let me try to elaborate on it.
Alan Z. Rozenshtein: Are we still trying to make fetch happen? This might be the podcast for it. You gotta start somewhere.
Dean Ball: We're going to make this use-versus-conduct thing happen. We're going to make it real. So, use-based regulation is like what I described: we're just going to assume that AI needs a set of new rules in every industry, and we're going to write them down.
Conduct-based regulation, the way I think of it, starts with the assumption that the existing body of law that we have, you know, soft law, statutory law, common law, et cetera, that all of that taken together does a pretty good job of codifying the normative standards for the type of conduct we want to see people and firms exhibit in the world. That law should be the basis, and we should only change that law, or update it, or supplement it, in the event that we have a demonstrated reason to do so.
I think deepfakes are a really good example of this, where, and it depends on the state, but in a lot of ways, between common law and statutory law, you basically do have protections for name, image, and likeness from someone making a malicious deepfake of you, but in a lot of places you actually don't. And there might be room for a targeted statute to say that if somebody knowingly distributes a malicious deepfake of you, that is an actionable crime. So there are examples where that can happen. And I think we just have to see it based on demonstrated experience, demonstrated harm, as opposed to just assuming from the get-go that we need new rules.
Alan Z. Rozenshtein: So, Newsom has vetoed the bill. We can read his veto statement a million times, but I still don't think it's going to explain why he did it, but he did it for whatever reason. Maybe there were merits reasons. Maybe there were politics reasons. Obviously, he has national ambitions and maybe he thought this was the right play. I have no idea.
I'm kind of less interested in that than in thinking about the future and where we go from here. So, I want to throw this out to both of you. Where do we go from here? Right? We can start at the most specific level and work up to the most broad. My understanding is that California has a veto override process, so maybe SB 1047 might still be passed. I'm curious what you think about that.
But then, more generally, what should people do about this? Because, you know, I think this conversation has had a bit of an AI safety-skeptical tenor to it, and that's fine. But I want to be fair to the AI safety folks, because, I mean, these models are accelerating at this incredible rate. And look, whatever your p(doom) is, and I agree that 0.8 is a little high, maybe it's 0.2 or 0.1, or for me, maybe it's 0.05. But 0.05 times doom is still a big problem. And so I just want to put our AI safety hats on for a second and say, if we think this is a potential problem, which I think we all accept, what sort of regulatory interventions would in fact address, to a meaningful extent, this concern without at the same time crippling innovation and, you know, foreclosing our post-scarcity utopia, which is also a possibility?
Kevin Frazier: Yeah. So, I think here is where I'd really like the folks who pushed back on SB 1047 predominantly on federalism grounds saying that, hey, this is an issue for Congress to take on, to step up, right? So, for Zoe Lofgren for-
Alan Z. Rozenshtein: So, basically Dean and me in our Lawfare piece?
Kevin Frazier: Let's go, let's step up. Let's see it, prove it, right? Prove it, get something across the table, because I am sympathetic to a lot of the AI safety arguments; in particular, I have concerns about what it means for our democracy, what it means for job displacement. So, let's go, Congress, get your game together. If you're going to deny states, or kind of put pressure on states not to act, I get that. I don't think a patchwork is helpful, but now prove your work.
And I think starting with some low-hanging fruit would be really beneficial. I've long been an advocate for a right to warn, or an AI safety hotline, creating more mechanisms for us all to be more aware of what risks these models may pose. And empowering folks who know the most about these models to share those concerns in a meaningful way, I think, is low-hanging fruit that we can all agree on. And so I would love it if Congress kind of took this as an inflection point and said, we will take AI seriously. We don't need another SB 1047 fight. Here's what we're going to do about it.
Dean Ball: Yeah. I mean, I absolutely agree that the risks are serious here. I mean, at the end of the day, we're trying to make inanimate matter think. And that, in and of itself, is a novel endeavor for the human species. And I think we should take that seriously, because it seems like we're maybe succeeding at it. So, you know, I think that right now the best thing to do is to focus on transparency and insight before we focus on proactive and pre-emptive regulation. I think we just don't know what the shape of regulation should even be.
And to some extent, I think when you're at the frontier of things, there's a lot of benefit to letting experimentation happen, and kind of like self-governance happen, to some extent, in such a way that you can then codify, you know, what we find from those experiments in statute later on. But for now, I would focus on transparency from the frontier labs. I think there are ideas inside of SB 1047 that got refined and whittled down in just the right way, such that if you were to disentangle them from the rest of the bill and try to pass them either at the state or federal level, I'd prefer federal, you could maybe, you know, get something productive.
I think auditing of AI models is potentially an example there, or at least of frontier lab safety plans. I think that whistleblower protections for employees of frontier labs could potentially fall into this category. There are a number of different things; I will have my own proposal on transparency, which I'm co-authoring with a friend of mine, out soon. So, I think that there's plenty to do in that regard that just basically boils down to, we need a more informed public discussion.
Because as I mentioned earlier in this conversation, the AI policy debate, the SB 1047 debate, in particular, was about what is the future capabilities trajectory of AI. And that is a question on which there is a substantial information asymmetry between what I know and what people at OpenAI, Anthropic, DeepMind, and Meta know. And so, I think that finding ways to reduce that asymmetry without expropriating the intellectual property of these companies or harming American national security is something that can really, potentially create a lot of value.
Alan Z. Rozenshtein: Well, obviously this is not the last time we talk about this issue or potentially even SB 1047. We'll see what happens as it returns to the conversation.
Dean Ball: Thanks very much.
Kevin Frazier: Thank you, Alan. Thanks, Dean.
Alan Z. Rozenshtein: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts. Look out for our other podcasts including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Cara Shillenn of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.