Lawfare Daily: Alexandra Reeve Givens, Courtney Lang, and Nema Milaninia on the Paris AI Summit and the Pivot to AI Security

Published by The Lawfare Institute in Cooperation With Brookings
Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, Courtney Lang, Vice President of Policy for Trust, Data, and Technology at ITI and a Non-Resident Senior Fellow at the Atlantic Council GeoTech Center, and Nema Milaninia, a partner on the Special Matters & Government Investigations team at King & Spalding, join Kevin Frazier, Contributing Editor at Lawfare and Adjunct Professor at Delaware Law, to discuss the Paris AI Action Summit and whether it marks a formal pivot away from AI safety to AI security and, if so, what an embrace of AI security means for domestic and international AI governance.
We value your feedback! Help us improve by sharing your thoughts at lawfaremedia.org/survey. Your input ensures that we deliver what matters most to you. Thank you for your support—and, as always, for listening!
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/
Please note that the transcript below was auto-generated and may contain errors.
Transcript
[Intro]
Courtney Lang: The U.K. has publicly come out and said for them, you know, they didn't sign on because they felt that the declaration lacked kind of critical guidance around global AI governance and didn't provide enough clarity for how that should look going forward.
Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, contributing editor at Lawfare and adjunct professor at Delaware Law. I'm joined by Alexandra Reeve Givens, CEO of the Center for Democracy & Technology; Courtney Lang, Vice President of Policy for Trust, Data, and Technology at ITI and a non-resident senior fellow at the Atlantic Council GeoTech Center; and Nema Milaninia, a partner on the Special Matters & Government Investigations team at King & Spalding.
Alexandra Reeve Givens: You cannot help but note the scary vacuum that is left when the U.S. is silent on these questions of AI governance.
Kevin Frazier: Today we're talking about the Paris AI Action Summit and its significance in the broader AI policy landscape. In particular, we're discussing whether the summit marks a transition from AI safety to AI security as the primary framework for AI governance at home and abroad.
[Main podcast]
Alright. So we have a lot to get to today following the Paris AI Action Summit. There's been a bit of an awakening to perhaps a pivot to a new understanding of how AI governance is going to proceed, not only in the U.S., not only in the EU, but perhaps globally.
And we are fortunate to have three experts on the topic, two of whom were actually in Paris—and I'm incredibly jealous because they didn't bring me a croissant, but we'll leave that for another moment—who can share their insights into why this action summit was particularly important for understanding the direction of AI policy going forward.
So Courtney, this wasn't the first AI summit we've seen. Can you walk us through what happened in the U.K. and then South Korea with prior summits?
Courtney Lang: Sure. Happy to go over the prior summits and what we saw unfold there, certainly.
We saw the first summit held in Bletchley Park in early November of 2023. This was really focused on AI safety at its core. This was the first time that a group of countries was brought together to talk about AI safety, and in particular, there was a focus on the risks that frontier AI systems pose. As a result of the Bletchley Park Summit, we saw a declaration that a number of countries signed onto, including the United States.
So the primary outcome from the Bletchley Park summit was the declaration. Then, moving into the Seoul Summit, which was held six months later in South Korea, there were additional outcomes that were reached.
Again, the Seoul Summit was focused on advancing the AI safety conversation a bit more. And so from the Seoul Summit, we saw both organizations sign on to frontier AI safety commitments around what they were going to do to advance safety around frontier AI. And then we also saw a smaller group of governments and countries sign on to the Seoul Statement of Intent around International AI Safety Cooperation, which essentially outlined a plan for them to work together to advance discrete areas of AI safety.
So those were the two primary global scale convenings that we saw leading up to the France AI Action Summit, although there is a plethora of other work also going on in multilateral fora.
Kevin Frazier: Okay. So we have this common theme of the prior summits definitely being grounded in the notion of AI safety, right? Responsible AI development, how do we make sure these frontier models don't cause the end of humanity or, maybe closer to Earth, don't necessarily produce models that are going to have discriminatory outcomes or perpetuate systemic biases and things like that. And the combination of declarations being a key output of these summits.
I think it's also worth noting that following these two summits, we saw kind of mass adoption of, or increased adoption of AI safety institutes by various countries. So not only agreeing that safety should be a priority, but then seeing at the domestic level an adoption of some institutions with this safety focus.
So Nema, we had a U.K. focus on safety, a South Korean focus on safety. What were the expectations going into the Paris AI Action Summit in comparison to those prior gatherings?
Nema Milaninia: The expectations were of a new discussion concerning a new administration, since from the first day of this administration taking office, there's been an effort to start removing and dismantling many of the executive orders and administrative rules that had been put in place by President Biden, which had aligned with many of the rules and regulations we currently see being promulgated in the EU and other parts of the world that do look at safety and look at risk assessments when evaluating different AI models.
So for instance, on the very first day when President Trump took office, he enacted Executive Order 14148, where he rescinded, amongst many other executive orders that were instituted by President Biden, his landmark AI safety executive order, which had been put in place for the purposes of directing different departments and agencies to come up with risk assessment standards for AI models.
Three days later, the president then also issued another EO, Executive Order 14179, which directs agencies and departments to, in fact, remove barriers towards the development of AI models and anything that could inhibit innovation. And the thrust of both of these actions obviously put into place a point of tension between this president's desire to make the United States—and U.S. companies in particular—models of innovation, with that of the EU and other countries who wish to be models of regulation. And it is that tension that then went into this summit and became one of the, I would argue, hallmarks of what came out of the summit as well.
Kevin Frazier: Courtney, correct me if I'm wrong, but from the outset, we also saw that there was maybe a tension in the continent itself, where Macron and France weren't really embracing this safety narrative from the outset, whereas the U.K. summit was very much grounded in safety. South Korea also made clear from the outset that they were going to have a safety focus. When this summit was announced, we didn't even see safety feature prominently in the agenda.
So what do you think France was really trying to get out of this summit, being the host, trying to really position itself as maybe a hub of EU AI innovation?
Courtney Lang: Well, I think you kind of put it there at the end of your question, Kevin, which is yes, I think going into it, right, France really articulated that they wanted to kind of shift the narrative, shift the conversation away from just focusing on AI safety to really focusing on what they call AI action or AI opportunity.
And they really saw it as an opportunity to demonstrate kind of the valuable use cases that AI is enabling, and, you know, frankly, it was also a way for them to bring the French tech ecosystem into the conversation a little bit more directly and a little bit more starkly. And so I do think one of the things that we saw at the summit, both, you know, kind of in the events on the margins, but also, you know, in the business day itself, was a major focus on, you know, those French companies that are innovating and bringing these solutions to market.
And so for them, I think, you know, their overall goal was to bring this broader focus to the conversation. Global governance and safety was, you know, one track of this overall event that they had put together, but really their goal was to change the narrative a bit and bring a broader focus. And so I do think that in that regard, they succeeded in that.
Kevin Frazier: So Alex, we have these headwinds and maybe suggestions of the fact that this summit was going to be a different summit. Let's set the stage for listeners who didn't have the great pleasure of spending some time in Paris.
What was it like being on the ground at this summit? Can you tell us a little bit more about the participants, the setting, and just the general vibes before we get into some of the more substantive issues of the summit itself?
Alexandra Reeve Givens: Sure. So Courtney and Nema have teed up kind of some of the differences in dynamics, and it's interesting to do a compare and contrast with how the Bletchley Park Summit was run in the U.K. I was there as part of a small delegation of civil society representatives. But when I say small, it literally was about five of us that had the privilege of attending alongside government representatives and then a number of the CEOs.
And it was this interesting thing for the Brits. They had chosen to host the summit at Bletchley Park to hark back to the historic foundations and incredible role that the Enigma team had played at Bletchley Park during the Second World War. But as they did so, it meant that it was very cut off from the people that are actually using and are impacted by AI systems.
So the French went a different way. They had this at the Grand Palais in the heart of Paris, a really large exposition hall that dates back over 100 years and where there have been a lot of different trade fairs and kind of celebrations in the course of Parisian history.
And they wanted it to be more inclusive from the start. Courtney alluded to the broader themes that they had. They had pillars that were focused on public interest AI, the future of work, innovation and culture, openness and trust in AI, and global AI governance. So these five pillars, each of which had a special envoy who was leading the programming around that subject matter.
And very intentionally a schedule that had two main days of kind of summit engagement, where global leaders and CEOs would be there. But then many other days built in of activities around Paris, including cultural activities, art installations, a whole number of side events focused on the science of AI, on responsible deployment and adoption. And then also kind of these questions of–as Courtney was alluding to–the business day, where kind of different startups and companies in the French ecosystem were profiled.
So from even just the styling of the schedule, they were trying to signal a very different moment, which shows that we've gone from this more narrow conversation to a broadening of again how AI is showing up in people's daily lives and how we center the voices of people who are using and impacted by these technologies.
Kevin Frazier: And that's a really interesting point to see just how, over the span from November 2023 at the Bletchley Park Summit to now, in terms of AI adoption and the sophistication of these tools, we can't have necessarily just that narrow conversation of a few stakeholders. Everyone now realizes that AI is here. It's here to stay. It's going to be a part of our daily lives for every day to follow.
So Alex, I'm also keen to get your sense of what the response was to Vice President J.D. Vance's remarks. This was the speech that sent shockwaves across the AI community for sure. And I'm assuming, but I would love to hear more about what it was like before and after these remarks. And if you could first give us a bit of a substantive taste about what the vice president said.
Alexandra Reeve Givens: Sure. So I have described this summit as really a tale of two summits and a split screen. And we've mentioned already how for a number of the leaders, they really came in with a strong focus on innovation, on deregulation and on driving adoption.
And you saw that not just from the U.S.—and I will get to the vice president's remarks in a moment—but also from President Macron speaking for France, talking about an investment of 109 billion euros that's coming in to develop data centers and other kind of supercomputing power in France. You saw it from von der Leyen speaking about a 200 billion euro investment for the EU, again building out compute power and data centers and driving kind of innovation in the sector, and a real focus on saying Europe is open for business.
With that came some pretty significant conversation around deregulation from the French and the Europeans. So we saw, for example, discussions from Henna Virkkunen, who's the executive vice president of the European Commission focused on the tech portfolio, who talked about cutting red tape and saying that the EU needs to be attracting more, you know, more investment and more innovation. And von der Leyen herself talking about cutting red tape.
So there were seeds of this already, and then enter Vice President Vance, who took that deregulatory tone and, you know, 10x'd it by really saying for the U.S., like, we need to focus on a pro-innovation agenda. You can understand why he did it. I will say, I think, for many of us sitting in the audience watching him, it was shocking to see him pivot from just an innovation frame to one that actually was almost making fun of people who have concerns about how AI is adopted.
So he dismissed the notion that we have to think about safety and responsible deployment as hand-wringing, talked about, you know, kind of this notion of the precautionary principle, that we need to throw that away, that that's not a way that we foster innovation, in a dismissive tone and then one that actually had real specificity to it.
This wasn't just the standard "we need to encourage innovation" political line. He went act by act on the three major tech regulations in the EU and criticized them one by one. So the European AI Act, the Digital Services Act, and then even harkened back to GDPR, the original privacy regulation from 2016, really taking, you know, this very pointed critique that, of course, he then doubled down on a couple of days later at the Munich Security Conference.
I will say that for me, the critique was so pointed that it actually felt out of touch with the perspectives of people that are actually thinking about how to use and build these tools in a way that people can trust, which was the vast majority of the conversation around the perimeter of the summit.
All around the summit, you had people that are just trying to build these tools and actually sell them to enterprise customers and have people use them. You had impacted communities and civil society organizations saying if AI is going to be built and it's going to be used in decisions about us, it should be tested, and it should be transparent. Really a pretty mature conversation around how we make sure this technology works and can be relied on.
And so for him to come in with such a pointed critique was pretty stark and, you know, pretty noticeable, particularly when you contrast it with the remarks even of the vice premier of China, who spoke at a state dinner the night before and was oddly talking about ethical AI and responsible deployment in a way that the U.S. historically spoke about these issues. So a real sea change. It was very noticeable.
Kevin Frazier: Yeah. I've heard it described as an elbows-out speech.
And I think it's worth noting also that recently the AI and crypto czar David Sacks went on his former podcast All In and specified that he didn't have a role in developing the vice president's remarks. This was Vice President Vance speaking about his views and obviously the administration's views on AI, but I think it's really telling that this is coming from the Oval Office itself in terms of the perspective on AI.
So, Nema, can you give us even more color on the significance of this speech? It obviously ruffled some feathers. I think it's also worth noting that after Vice President Vance's remarks, he didn't stick around. He got out of there. He left. So, can you provide us a little bit more color on the significance of these remarks?
Nema Milaninia: Yeah, I mean, I want to take a step back and then I'll, I'll go into the significance in a second.
Taking a step back, I don't think this is, by the way, just a concern or a position that's being taken by the vice president. I mean, at the end of January, you had the House Judiciary Chair Representative Jim Jordan, in fact, send a letter to the European Commission's tech boss Henna Virkkunen questioning the European Union's decisions on the Digital Services Act and noting that they constitute a violation of the free speech rights of U.S. citizens, but also of tech companies which are putting out information and which the DSA targets, because the DSA has regulations and rules concerning misinformation and disinformation.
The EU AI Act, in fact, talks about high-risk models also in the context of misinformation and disinformation, and therefore juxtaposes this notion of free speech with, obviously, what the European commissioners are concerned with, which is preventing the proliferation of hate speech or information that is misleading to their population.
So I don't think it's just a Vice President Vance issue that we're seeing play out here; more generally, there are also congressional dynamics that are supporting it and that come out in light of Elon Musk or Tim Cook, who have been outspoken against EU regulations as well as the fines and investigations that have been levied.
But what I do think you see are three points of tension that are playing out and which are going to continue to play out. I think this is going to be something that is going to put companies and individuals in a lot of difficulty.
One is this space that we've already discussed, which is this tension between the desire to deregulate, to allow U.S. companies to remain competitive with Chinese counterparts or counterparts in other spaces, because we see this competitive concern that our companies are disadvantaged as a result of EU regulations. And that obviously comes to loggerheads with the institution of new regulations, including the EU AI Act, which went into place just at the beginning of this month.
The second point of tension is the one I just mentioned, which is the free speech agenda, which this president and this Congress are concerned about, versus the concern that we see in Europe and other places of the world in relation to misinformation, disinformation, and the proliferation or promulgation of hate speech, which they want to combat. One can see this as more of a cultural conflict, but it is playing out obviously between regulators in the EU, as well as the legislature and the executive in the U.S.
And then the third point of tension and conflict is in relation to the fines that have been levied and could be levied by the EU as a result of violations of these new regulations that have taken place in the past few years or are taking place this year, because they are quite hefty—you know, some of them amount to seven percent of profits that are accrued in the European Union—versus the anti-tariff, pro-trade posturing by this administration, which views any taxation or any penalty on U.S. companies as amounting to a tariff on U.S. citizens. And that's playing out as well in the context of these disputes.
So these, I think, if you take a step back, are the three points of tension that we've seen emerging in the current ecosystem and which are going to have consequences, as U.S. companies are going to have to navigate both forces going forward.
Kevin Frazier: Yeah. And speaking of tension, maybe one of the more tense moments that came out of this summit, or after the fact, is that neither the U.K. nor the U.S. signed on to the declaration at the end of this summit.
So yeah. Courtney, what the heck happened here? We have 61 countries that signed on to this declaration, and yet these two outliers opted out. What was the rationale behind the U.S. deciding not to sign and the U.K. deciding not to sign?
Courtney Lang: Yeah, so this has of course been a conversation ongoing post summit where, yes, two of the major folks engaged in the AI conversation decided not to sign on to the declaration.
The U.K. has publicly come out and said for them, you know, they didn't sign on because they felt that the declaration lacked kind of critical guidance around global AI governance and didn't provide enough clarity for how that should look going forward. And they also cited a lack of kind of addressing national security concerns related to frontier AI models, which given the way in which they have now changed the name of their AI safety institute to focus on AI security seems to be the kind of focus that they are taking in this conversation moving forward.
The U.S. as well didn't sign on. I think, you know, this perhaps is not super surprising given this is, for all intents and purposes, a new administration coming in. This was the first time the vice president had provided remarks and shared, you know, the administration's vision around AI policy. And so, you know, given the fact that they are still relatively new and coming in, I think, you know, we shouldn't be surprised that they decided not to sign on to the declaration.
There's currently, I think as Nema mentioned earlier, kind of both the rescission of prior executive orders from the Biden administration, as well as now the introduction of a new executive order, part of which tasks the Office of Science and Technology Policy along with others in the administration to develop this AI action plan, which in theory will articulate kind of the vision of the Trump administration for AI policy moving forward.
And so I do think there is interest in getting stakeholder feedback on what that should look like, and international engagement will play into that in one way or another, and certainly we believe that international engagement remains a key component of the AI policy conversation. But again, I think, you know, given the fact that there was still a degree of newness to the administration coming in, you know, we didn't see the U.S. sign on either.
Kevin Frazier: Alex, if you could add just a little bit more commentary on the significance of the U.S. and the U.K. not signing, and perhaps also comment on: are we experiencing a paradigm shift from AI safety to AI security, and does that mean anything? Is it just a transition to a different set of buzzwords, or what does AI security mean for the future of AI policymaking? So Alex, love to hear your thoughts.
Alexandra Reeve Givens: Well, as for the U.S. not signing, I think Courtney is right to say it's a new administration, things were moving very quickly as they were coming in. On the other hand, you cannot help but note the scary vacuum that is left when the U.S. is silent on these questions of AI governance because it wasn't just not signing the declaration.
The other kind of empty space that was so palpable throughout all of this was the lack of any representatives from the U.S. AI Safety Institute, which before really had been the galvanizing force of what it is to have scientific expertise housed within the federal government that can be a thoughtful counterpart to these companies as they develop increasingly sophisticated AI models, and to U.S. government purchasers of this technology, so that they can have an informed source of expertise housed within NIST, which historically has been deeply apolitical, you know, not a political bone in their body. They're just there being the standards guys.
And so the U.S. AI Safety Institute delegation did not attend. The only people from the U.S. government who were present were either from the Executive Office of the President or from the State Department, kind of doing the logistical aspects of summit management. And that is a real loss if that silence and absence continues.
And the reason is you cannot just assume that these conversations are going to stop. If the U.S. stops talking about responsible AI governance, if they stop talking about kind of international collaboration—it's not like the dialogue ceases altogether. It just goes on without us. And so the notion of kind of other safety institutes that came up out of the Seoul meeting and that are continuing to do work, doing that without the U.S. having thought leadership in the space is a real loss for American society, for American values, and for these companies, mainly U.S.-based tech companies, as well. So I hope that that is a blip.
The fate of the AI Safety Institute, you know, as of this present day, is still undecided. We're still waiting for the news, but the absence was really palpable, and it was an important reminder as to why the Trump administration has to keep investing in that as a way of shaping these conversations.
So you're right that a few days after this happened, we had the U.K. Safety Institute rebrand, right? So they're now the U.K. AI Security Institute. I will say I also found that a little bit depressing just because it did feel like political pandering in this moment. The notion that safety is somehow now a passé term and we need to only focus on security really is minimizing the needs that society has as we think about the broader adoption and deployment of these tools.
You know, when you actually think about the people who are building these systems, who are trying to integrate them into their daily business, the people that are actually going to drive AI adoption, which is financial services companies, healthcare companies, enterprise software solution providers; when you think about all of them, and then, of course, the communities that are going to be impacted by AI, which is the constituency that my organization represents, we all benefit when there is a mature conversation around what appropriate transparency, appropriate testing, and appropriate checks on these technologies look like.
And so the notion that either that is fading away or is getting a rebrand to now only focus on national security shows governments failing in a moment where we actually really do need their continued investment and leadership in this space.
Kevin Frazier: So Nema, we see this transition in the U.K. to an AI Security Institute. We're left in this weird spot of not knowing exactly what's going to happen to the U.S. AISI. We know that the head of the U.S. AISI is no longer there, the head that was there under the Biden administration. And there haven't been additional details. What's the significance of this all from your perspective, and what should we be paying attention to?
Nema Milaninia: I think the first thing that I'd pay attention to is what the states do. Whenever the federal government and the executive have decided to take a step back, the states have always taken a step forward, and in fact, we've seen this in the context of privacy legislation.
We don't have federal legislation equivalent to GDPR. However, at the state level, we have a number of states that have enacted legislation concerning user privacy. And last year alone, we saw 600 pieces of proposed legislation by 45 states across the union concerning AI, at a moment when the former administration did seem like it was going to centralize AI policy.
So even in a circumstance where we had the executive step in, we still saw the states take a more aggressive role. And I would anticipate that that's going to be even more so the case in a circumstance where this president decides not to, for instance, enact legislation or enact rules and regulations pertaining to AI safety.
The second is we've always seen legislation by litigation. And even though our regulatory agencies are being tested—the CFPB and the FTC—we still will see, and I still anticipate that we'll see pending investigations and new investigations whereby regulators step in and say enough is enough and enter into consent decrees or settlements that not just force specific companies to do more as it relates to safety, but also set expectations for others in the industry.
And then the third is, and I think Alex already indicated this, that even in circumstances where we step aside, I don't expect that regulators, including those in the EU, will abdicate their responsibility or diminish the role that their new legislation or promulgated legislation is intended to have. And so we do expect that companies will still be subject to onerous litigation and regulatory requirements at the international level, especially since most of the companies that are in the AI space are by definition international companies.
It is unfortunately a circumstance where the executive might have less of a voice as it relates to centralizing safety standards as it relates to AI, but that also just means probably a more fragmented atmosphere for purposes of AI safety regulations and not one where AI safety becomes less of a priority.
Kevin Frazier: And speaking of fragmentation, I think it's also fascinating to put this summit in the context of what prior to these events was really framed as a race between the U.S. and China, right? After DeepSeek, everyone was saying, okay, the race is really on now. China isn't two years behind, China's maybe six months behind, the U.S. needs to accelerate.
And Courtney, we saw this change in perspective among the EU, sort of stepping back from the EU AI Act, saying, well, maybe this wasn't the path we wanted to go down. You have von der Leyen and Macron saying the EU is going to be a force for AI, kind of trying to wiggle their way in, perhaps, to this so-called two-way race for AI dominance.
What is your takeaway from the state of AI development internationally now? Are we in a three way race for pushing out the frontier of AI? Is the EU even on the track or are they still warming up in the back benches? What's the lay of the land now following the summit?
Courtney Lang: Yeah, so I mean, I think, of course, as you note, there were two leaders in the race, and then coming out of the summit, you know, we did see some potential new entrants, but at the same time, I mean, I think it is very clear U.S. companies remain dominant in the development of AI technology.
And certainly, yes, DeepSeek has presented an opportunity for U.S. companies and the U.S. government more broadly to look internally and figure out, you know, how do we want to continue promoting innovation and what does that holistic framework look like and what should it look like moving forward.
We also saw, yes, as I noted earlier, kind of France stepping up to the plate and saying, you know, we need to be more engaged in developing our technology ecosystem. But I do think the regulatory environment in Europe is such that it is going to be more difficult ultimately for companies to continue to innovate there, and I think we saw that with the introduction of GDPR as well in some regards.
With that being said, I think the other important point to note is that at this AI Action Summit, you actually saw the emergence of the Global South in a way that, you know, as Alex mentioned at the outset, was not as present at these prior summits. And so the fact that India was acting as a co-chair and is now hosting the next AI Action Summit is something that I think is really interesting and important to keep in mind, right?
It's not just, you know, the U.S. and the EU and China kind of writing the rules of the road for this. There are going to be other countries that have a role to play in figuring out, you know, what does this kind of AI governance ecosystem look like moving forward? And so I think it's really interesting to pay attention to kind of the way in which India decides to scope their version of the summit moving forward and what sorts of things they choose to focus on.
But, I mean, I think one of the high level takeaways from my perspective right now, in the context of kind of the overall international conversation, is just that I think we're not necessarily at a place where we're going to achieve kind of universal rules for AI that every single country agrees upon, which, you know, maybe felt a little bit more tangible at the prior safety summits.
I think, you know, this summit indicated that countries are taking kind of different approaches. There's definitely more of a nationalistic flavor to a lot of the ways in which countries are thinking about AI regulation and AI governance. And so I think, you know, that is also something that matters for how we think about these conversations on global AI governance moving forward.
But that's not to say that we should not still aim to make progress in these international discussions. I would like to underscore that that is still incredibly important and is an area that we would like to see the U.S. remain engaged in and continue forward.
Kevin Frazier: Yeah, Alex, I'm really keen to hear what you think about this. Courtney just teed up that we have this India summit on the horizon.
You noted earlier, Alex, that this appears to be the U.S. kind of leaving a vacuum in a lot of international conversations. Might it be the case that the U.S. doesn't even show up in India, or what can we expect going forward from U.S. engagement on some of these international regulatory efforts?
Alexandra Reeve Givens: I hope that's not the case and the U.S. remains engaged in India. There's no question that U.S. companies and U.S. deployers of AI tools and U.S. civil society representatives will be watching that closely and want to be there. It would be nice if our government is there as well.
But the point I was going to pick up from Courtney is that there were notes of light that came from this. I worry that we're leaving listeners with the impression that this was some nihilistic summit, you know, where there's now just a graveyard for the future governance of AI. And I don't think that's the case. So let me make the argument as to why it is not.
The first is, as we alluded to, there were incredibly rich conversations by most of the major stakeholders in the development and the deployment of AI systems that reflected a really mature conversation about where we are in actually governing these systems today.
So a lot of conversations, again, focused on financial services, focused on the healthcare sector, focused on enterprise adoption, around how they're approaching auditing, how they're approaching kind of transparency and documentation requirements, how they're building up responsible AI teams. A lot of people who just, they're doing the work every day. It's their day job. And that is actually how AI governance happens. It's not in some abstract kind of government declaration. It is individual decisions made by individual people every day. And so that was one silver lining.
And then, you know, going back more to declarations. There were a couple important initiatives that were announced and launched here that I think are really good points of light.
So one is that the OECD, the Organization for Economic Cooperation and Development, launched the reporting framework for something called the Hiroshima Process International Code of Conduct. This came out of the G7 a couple of years ago. It's a code of conduct for the leading AI companies to talk about the testing and evaluation disclosures that they're making.
And one of the main critiques that those of us in civil society had at the time was that it looks great on paper, but who's actually monitoring for compliance and accountability? How are we starting to coalesce around this as an organizing principle where we check progress year over year?
And the OECD has stepped up, with the directive of the G7, to actually now have a reporting framework for this. So the companies will do consistent reporting. It's a shame that that was on the sidelines of the summit rather than being embraced as a full part of the summit. But being in that room watching the companies, watching the countries celebrate and commit to that launch was a really good example that international governance efforts do still have valence in society today and can be useful.
Another one was that we saw Canada and Japan sign on to the Council of Europe's treaty on AI, which again was adopted last year. The U.S. was involved in the negotiating of this under the Biden administration, along with a whole range of countries, not just in Europe, but outside of it as well. And so having two more major players, Canada and Japan, sign onto that again kind of keeps up that drumbeat that, of course, we should subscribe to standards around AI being built and used in connection with democratic principles, with transparency, with accountability mechanisms.
We also saw a two-day conference of a new International Association for Safe and Ethical AI. And it was a really interesting summit because what you saw was people from the traditional safety community and people from the AI ethics community—who historically have not always been deeply aligned—spending two days together kind of talking about common goals, a common research agenda, common approaches to implementation, again in that vein of just the people that do the work, for whom AI governance is a day job, continuing to have a day job and do it well and do it in community.
And then we also saw the launch of an environmental sustainability coalition, which brought together 91 partners focused on AI's environmental impact. This was the year that environmental impact and sustainability really entered the main stage. And so the French do deserve credit for surfacing it in the way that it deserves. And again, we saw a lot of countries talk about this as a priority, really begin to prioritize it.
Of course, there's a lot more work to do to actually fill in the details on that and add accountability to it, but this was the year where that went from being a side conversation that some of us in civil society were trying to elevate to a main stage conversation, saying, of course, if we want AI to be adopted at the scale at which ultimately the world is going to want, to need, in the direction we're moving, we have to do it in an environmentally sustainable way, and companies need to begin thinking about that right now.
Kevin Frazier: I really appreciate you bringing up all those various other aspects going on because there is such a temptation—and I am guilty of it and I think a lot of people are guilty of it—of focusing only on the frontier of AI, right? Who's winning this so-called race of DeepSeek versus OpenAI versus Mistral, right?
Everyone's focused on these frontier use cases of AI when in reality, most of us are going to engage not with that frontier model, but right, is it your doctor using an AI model to diagnose something? Is it your kid who's using an AI personalized tutor? All of these things are going to be the determinant of whether AI really does benefit all of us more so than saying who won some race that doesn't even have a good benchmark at this point. So thank you for, for adding that color.
Unfortunately, we're getting pretty close to the end here. So I want to go around the horn and get a sense of what trends you all are looking for going forward. What should we keep an eye on? What entities or events or potential laws are you looking at right now? So Nema, let's start with you. Then we'll go to Courtney and Alex.
Nema Milaninia: If I'm a company, or if I'm curious as to the trajectory of how the EU might regulate, I would actually focus on how Congress and the executive are responding to other regulations that have already been put into effect and that are impacting tech companies.
So as I mentioned, the Digital Services Act and the Digital Markets Act, which are both EU legislation that concerns content moderation but also antitrust, went into effect a couple years ago. And already we're seeing congressional pushback and executive pushback in terms of what those two regulations mean for companies based in the U.S., including on ideological lines, as we've discussed concerning free speech versus regulation of misinformation.
And how that discourse and how those conflicts bear out in the next year will also tell us a lot with regards to how we should expect similar conversations concerning the EU AI Act and AI regulation more generally to play out, because fundamentally the standards that the EU has imposed under its AI regulations are fairly similar to the protective aims of the DSA and the DMA in terms of protecting competition, but also protecting against harmful conduct that might arise out of systems that are deployed by tech companies.
So for me, as a lawyer who is trying to help clients navigate very difficult and changing waters, this is my focus, and this would, I would argue, be the focus, or should be the focus, of many who are looking at what might happen to companies who are operating globally.
Kevin Frazier: Yeah. Nema, you have your work cut out for you. I mean, I just saw that with the Colorado AI Act now, there are new negotiations about the extent to which they're going to enforce that. So, I don't want to interrupt what could be billable hours right now, but really appreciate those remarks. Courtney, what are you looking out for?
Courtney Lang: Yeah, so I'll talk about this a little bit more from the broader international perspective.
And I think, you know, one of the things that we're looking out for moving forward, which we talked a little bit about, is the shift in focus of the AI Safety Institute in the U.K., but also, you know, whether or not that happens more broadly. That's something that I will be interested to see how it unfolds, particularly in the context of the U.S. AI Safety Institute.
I also think, you know, an important thing that we didn't discuss necessarily, but that happened back in November, was the international convening of AI Safety Institutes and that network that was set up, which the U.S. was in theory the inaugural chair of. Obviously, I think we'll be very interested to see kind of what happens with that network moving forward.
As I was saying earlier, I think it's really important that we recognize that, you know, safety and innovation are not necessarily mutually exclusive and safety in many ways helps to promote innovation and vice versa. And so, you know, there is still work to be done on AI safety.
And so I will personally be looking out for kind of what the International Network of AI Safety Institutes is looking to achieve moving forward. They had some solid outcomes after the November convening, particularly around a joint framework on risk assessments. But you know, kind of seeing how they prioritize their research agenda moving forward, and whether the U.S. retains that inaugural chairmanship, is something to keep, I think, on the radar.
And then, you know, maybe a couple of other things just to flag—I know Alex mentioned a lot of the international and multilateral conversations that are still going on. Again, I think it's important to still pay attention to kind of political statements that come out of G7, efforts that are ongoing in G20. The OECD continues to do really important work on advancing a lot of the foundational kind of policy principles and kind of frameworks and agreements amongst the OECD countries that can have a really, I think, critical role in advancing the conversation. So we'll be looking out for that and then, of course, you know, things that are ongoing in the UN and elsewhere.
So I would say, you know, all of those things kind of feed into this broader global AI governance conversation. And it will be interesting to see, especially in light of the upcoming India summit, kind of how they all fit together and, and whether, you know, the international group of institutions is able to articulate that a little bit more specifically and the state laws, of course.
Kevin Frazier: And the state laws, just 50 different versions. No, no big ask there.
And I appreciate you raising that point about safety and innovation, because it's worth remembering that labs like OpenAI and Sam Altman have been some of the loudest voices calling for more international cooperation and regulation, out of the theory that regulatory certainty might actually help further innovation.
So, Alex, no pressure, last word goes to you.
Alexandra Reeve Givens: Well, I'll say it's a strange thing to have a conversation focused on AI governance when there are such broader sands shifting in the U.S. as a general matter. It's a difficult time for people who believe the role of the federal government is protecting people's rights and making sure that our economy works for all of us.
And in the AI space, I think the way that that translates over is, I am left with a very strong feeling that it is left up to us. One of the things that you saw happen at the summit again was the people that are using these tools, that are impacted by these tools, really having to take the lead in framing the conversations around the governance and responsible deployment of this technology.
Personally, I don't think that's the way that this should work. I think government should be playing a role, but in the absence of that leadership, the conversation can't cease. It's going to continue. And so keep looking for the helpers. Look for the people that are continuing to drive conversation around the responsible adoption and use of this technology. They are the ones that ultimately will actually make sure that AI is working for all of us and is being deployed in a responsible way.
And then a more specific example of that: the one thing we haven't discussed coming out of this summit was the Current AI launch, which was the $400 million fund for building AI in the public interest. And what I thought was so interesting about that is that they're reclaiming public interest data sets, public interest kind of tools and deployment of AI, but they are modeling the approach that responsible development should take, because the third pillar of their work is around auditing and accountability.
So they're saying we're going to invest in building and we're going to invest in building responsibly. And as we look for points of light in this moment, the launch of Current AI with that framing, I think, was a really good example of it, where again, it's not always going to be government money, government leadership coming in. Some of it is going to really just be left up to us. And we are the ones that can help build this future in a direction that works for everyone.
And that's going to be my day job going forward; you know, that's what CDT does every day. But I think all of us from our different vantage points together are going to have to push this common enterprise forward.
Kevin Frazier: Well, you all have a lot of work to do. So, enough having fun on podcasts. Get back to it. Thank you all for joining. We will have to leave it there.
The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.
Please rate and review us wherever you get your podcasts. Look for our other podcasts including Rational Security, Allies, The Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. Check out our written work at lawfaremedia.org.
The podcast is edited by Jen Patja and our audio engineer this episode was Cara Shillenn of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.