Cybersecurity & Tech

Lawfare Daily: Geoff Schaefer and Alyssa Lefaivre Škopac on AI Adoption Best Practices

Kevin Frazier, Alyssa Lefaivre Škopac, Geoff Schaefer, Jen Patja
Friday, September 27, 2024, 8:00 AM
How can AI be adopted ethically and responsibly?

Published by The Lawfare Institute
in Cooperation With
Brookings

Geoff Schaefer, Head of Responsible AI at Booz Allen Hamilton, and Alyssa Lefaivre Škopac, an independent responsible AI strategist, join Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to detail the Responsible AI Top-20 Controls. As governments, corporations, and nonprofits face increasing pressure to integrate AI into their operations, how to do so in an ethical and responsible fashion has remained an open question. Geoff and Alyssa offer their insights on jumpstarting AI governance within any institution.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Intro]

Alyssa Lefaivre Škopac: Ultimately, the job is to figure out the best way to leverage the opportunity and mitigate the risks of AI inside of your organization.

Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, assistant professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, joined by Geoff Schaefer, head of Responsible AI at Booz Allen Hamilton, and Alyssa Lefaivre Škopac, an independent responsible AI strategist.

Geoff Schaefer: If you're implementing these controls, either in part or in full, by the very nature of that work, you, you are already at a pretty high state of maturity.

Kevin Frazier: Today we're talking about the Responsible AI Top 20 controls they helped develop. Though national and international efforts to regulate AI have garnered much attention, comparatively less has been paid to how organizations should actually go about adopting AI. I press Geoff and Alyssa on the potential for responsible AI washing, the odds that self-governance efforts might diminish legislative energy, and why there are currently only 15 controls in their top 20 list.

[Main Podcast]

Kevin Frazier: Geoff, these Responsible AI Top 20 controls didn't just come out of nowhere. As you all mentioned in the report itself, this is in part a reflection of prior efforts, for example, in the cyber space. So can you tell us a little bit more about what motivated you all to take on this effort?

Geoff Schaefer: I think this was originally inspired by our collective sense that our field, responsible AI, AI governance, et cetera, was still operating too much in the land of principles and high-level policies and guidance, with too little attention being paid to providing specific actions and to-do lists and action lists, et cetera, that were really practical and tangible for folks to be successful in implementing responsible AI on the ground. And there are many gaps out there, from risk assessments that are specific and useful to technical tooling that facilitates this work.

But one of the gaps we also saw was basically, we have all of this guidance out there. We have international policies and regulations. We have more frameworks than we can count at this point in time, more sets of principles than we can count. But precisely because we have all of this guidance, it's, it's, it's not useful or it's really difficult to make use of. And so I had spent some time in, in cyber previously and remembered you know, when I was, was doing this work, the SANS Top 20 was a really useful way to just kind of check, okay, what do we need to focus on? What does maturity look like across those, those things? And therefore, you know, what are the obvious ways I can get started?

We thought to bring this full circle here that we needed something similar for our field, and we thought it could look actually pretty darn similar. And, you know, why not take what's worked so well in, in, in cyber and, and kind of apply it to this space as well. And so this is, this is our first attempt to do so.

Alyssa Lefaivre Škopac: So my background was working with individuals and practitioners within organizations. And as much as there are all these high-level principles, they, they're all still coming to me and saying, what do we do? Like, tell me just exactly what to do. Because even if there are things that are emerging, like regulations and standards development and best practices, it's still not super clear for the practitioner what the discrete actions are and where it's coming from.

And so some of that translation is coming from people saying, like, despite all of the progress we've made with responsible AI requirements and best practices, it's still not super tangible and clear. And, or, I don't have time to read 58-page documents that tell me exactly what to do, because we're still standing up our AI governance and I need things that are a little bit more turnkey. So that was another kind of genesis story of the need for the top 20 controls.

Kevin Frazier: Right. So you're not going to go to the EU AI Act and suddenly learn, oh, okay, this is how I implement AI in my organization. And you can't just tell all your employees to go to Khan Academy or some coding bootcamp and say, oh, now you understand the ins and outs of AI and expect that your organization will be able to adopt AI. So I see how you all are meeting a need here, but there's still quite a bit to unpack. So I guess to start, Alyssa, can you define or give us a sense of what the heck responsible AI means? Listeners to this podcast have heard doomers. We've talked about accelerationists. We've talked about all of these different camps. Where is responsible AI on that spectrum? Is it on that spectrum or is it a whole ‘nother concept?

Alyssa Lefaivre Škopac: So the definition of responsible AI is kind of changing and you're going to hear ethical AI, you're going to hear trustworthy AI, you have responsible AI. I think that's still honestly up for discussion about what this whole domain is being called. We don't use responsible AI very lightly, but it's really, you know, the tools, practices, teams that help you implement AI in a risk mitigated way at scale. That sounds sort of boring, but ultimately the job is to figure out the best way to leverage the opportunity and mitigate the risks of AI inside of your organization.

We're seeing a lot more around trustworthy AI because it doesn't necessarily mean that you're irresponsible if you're not doing things exactly as people are dictating. And we move a little bit away from ethical AI because your ethics might be different than my ethics and one country's ethics might be different than another country's ethics. So it's really the spectrum of guardrails, tools, processes, and personnel that help you harness the benefits of AI in a, like, a safe, trustworthy, and risk-mitigated way. That isn't a standard definition necessarily, because I don't think there is a standard definition, but that's how, that's how I certainly see it.

Geoff Schaefer: You know, we, we started getting questions from clients about a year or so ago: hey, tell me the difference between ethical AI and responsible AI. You know, I hear these terms used interchangeably. Can they be fungible or do these mean specific things? And we started getting that question frequently enough that we realized we needed to put some thought into this. Like the field is clearly not resolving this definitional issue, which is fine, but, you know, we, we need to kind of wrap a structure around this so we can, we can actually have a good answer for clients.

And so we came up with our REGS formula. So responsible AI equals ethics plus governance plus safety. And the idea is that, at a minimum, you know, common-sensically, you can't consider an AI system responsible if it's not ethical, well governed, and safe. And so we thought that was a good way to just kind of, again, wrap our arms around what this broad topic area could mean and should mean. And we've since seen a lot of light bulbs go off when we, when we use the REGS formula, when we, we structure our conversations around that. We've even done actual exercises where people, you know, take a use case and, and talk about the ethical dimensions, how they would govern it, and then what are kind of the safety aspects of that problem set as well. So it's, it's proven to be really useful, but as Alyssa said, the whole field is still kind of evolving and grappling with what these, these different terms mean. You know, are they sub-fields? Are they sub-disciplines? Are they interchangeable? Does it even matter? So at a minimum, we're just trying to make this a little bit more useful for folks.
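For readers who want to see the ethics-plus-governance-plus-safety framing made concrete, here is a minimal sketch of one way to structure the use-case exercise Geoff describes. The class name, fields, and example use case are illustrative assumptions, not part of Booz Allen's published framework or the Top 20 controls.

```python
from dataclasses import dataclass, field

# Minimal sketch only: the field names and logic below are assumptions for
# illustration, not the published REGS framework or the Top 20 controls.

@dataclass
class UseCaseReview:
    name: str
    ethics_concerns: list = field(default_factory=list)      # e.g., bias, consent, transparency
    governance_measures: list = field(default_factory=list)  # e.g., review board sign-off, audit trail
    safety_checks: list = field(default_factory=list)        # e.g., red-teaming, monitoring

    def unaddressed_dimensions(self):
        """Return whichever of ethics, governance, and safety have nothing documented yet."""
        dimensions = {
            "ethics": self.ethics_concerns,
            "governance": self.governance_measures,
            "safety": self.safety_checks,
        }
        return [label for label, items in dimensions.items() if not items]


review = UseCaseReview(
    name="resume-screening assistant",  # hypothetical use case
    ethics_concerns=["possible bias against protected groups"],
    governance_measures=["human review of all rejections"],
    safety_checks=[],  # nothing documented yet
)

# Under this framing, a use case is not "responsible" until all three dimensions are addressed.
print(f"Unaddressed dimensions for {review.name}: {review.unaddressed_dimensions() or 'none'}")
```

In a real review the lists would be replaced by the organization's own criteria; the point is only that the formula decomposes a fuzzy question into three concrete checklists.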

Alyssa Lefaivre Škopac: And I think what is responsible or trustworthy or safe for one organization doesn't necessarily mean it's the exact same for another organization. So organizations are grappling with how to even define this for themselves internally. So for example, if I'm going to implement an AI system that brings a lot of productivity, but it means potentially laying off part of my workforce, how do I grapple with that? Where is my responsibility in that? And that is not the same for every organization. And so that's part of the, the friction we're getting in definitions: it depends. It's context specific.

Kevin Frazier: That's fascinating to hear you all map this out, because I think it gets to the fact that so much of the AI discourse right now has been at either the meta level, right, back to those doomers versus accelerationist conversations, or very personal, looking at your own views on AI, but that sort of organizational approach has sort of been missing. And so when you think about someone who's going to pick up these Responsible AI Top 20 controls, are we thinking, you know, it's the Denny's diner equivalent down the street, or who are the ideal clients? Who do you expect to actually pick up these controls and implement them?

Alyssa Lefaivre Škopac: You know, when we first thought about this, it kind of came from the fact that maturity is all over the map as it relates to responsible AI. And also the realization that every organization already likely has AI inside their, inside their company, whether it's embedded within software or they're purpose-building their own AI.

So the idea was that this would be useful for anybody that is looking to scale their responsible AI maturity, and for multiple types of practitioners inside of organizations. Right now the responsible AI portfolio is a little bit of an organizational hot potato. And so you're getting people that are in security, they're in privacy, they're in marketing, they're in IT, and this might not be in their wheelhouse, and so they need a tool so that, even if you don't know AI, if you're a manager who has to understand and evaluate a tool your team is bringing to you, you know where to go for practical guidance.

And so we're seeing probably lower-maturity to medium-high-maturity type companies adopting this, and it fitting and supporting multiple roles inside of an organization. There are a lot of organizations that are quite mature with their responsible AI practices. So you're going to see, you mentioned Apple or Microsoft or others, hopefully this is something that they contribute to and bring their expertise to, but we're definitely seeing those in the more emerging space looking to adopt things like this.

Geoff Schaefer: Yeah. And I'll tell a personal story here. I was the acting chief security officer for a crypto startup back when that was still cool. And-

Kevin Frazier: It's still cool in Miami, Geoff. I don't know what you're talking about. Crypto is still big here.

Geoff Schaefer: You still have a stadium named after it, I think. I told that team, hey, I, like, shouldn't be that person. Like, I, I don't know what I would do there to approach that, that work, it's incredibly technical, et cetera. And it was full of Bridgewater guys. And so they're like, well, that's exactly why we want you to just come in, do common sense things, help us figure this out, get us from zero to one, and then we'll re-explore. And so that made sense to me.

But you know, here I am starting on day one. I'm like, okay, well, what are these common sense things? What am I going to do? And again, I remember the SANS top 20. I thought, well, let, let me just start with a basic maturity assessment against these controls. Like, what does this startup have already? You know, where do we really need to be focusing? Is it four out of these 20 controls, is it all 20, et cetera. And I did that assessment and I was actually pretty blown away by how much of a roadmap it provided us. It just made so many things clear.

And so on the one hand, I think it's real useful for any organization at any size, but particularly organizations that are starting with very little, or they just simply don't have this type of capacity. This gives them, like, just basically an almost self-evident roadmap of what to do, how to start, and what good looks like for each of these controls. But as Alyssa said, at the same time, on the other hand, it doesn't matter how mature you are, this is a useful way to orient your, your body of work, your program, your practice in AI governance, responsible AI, whatever you term it. This is meant to be timeless and universal, so you can always kind of come back to this and do kind of a sanity check on whatever cadence makes sense. You know, where are we against these controls? Have we neglected some that have become more important based on how our business is evolving, et cetera. So it's really meant to be for all organizations at all sizes and all states of maturity.
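The maturity self-assessment Geoff describes can be pictured with a very small sketch: score each control, sort by the gaps, and read the result as a rough roadmap. The control names below are paraphrased from the list read out later in the episode, and the 0 to 3 scoring scale and scores are assumptions for illustration, not part of the framework.

```python
# Hypothetical self-assessment against a handful of the controls; the 0-3 scale
# and the scores are made up for illustration only.
controls = {
    "Engage executives": 2,
    "Inventory your AI assets": 0,
    "Conduct impact assessments": 1,
    "Develop incident response plan": 0,
    "Continuously monitor your AI lifecycle": 1,
}

TARGET = 3  # "fully mature" on the assumed scale

# Largest gaps first: these become the starting points of the roadmap.
for control, score in sorted(controls.items(), key=lambda item: item[1]):
    print(f"{control}: {score}/{TARGET} (gap of {TARGET - score})")
```

Repeating the same scoring on whatever cadence makes sense gives the periodic sanity check Geoff mentions.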

Kevin Frazier: So before we dive into some of the explicit controls you all have outlined, Alyssa, I'm keen to hear your take on why this list. As Geoff mentioned, there are principles of principles. There are lists for lists. There are libraries of compendiums, all related to AI, so many things out there. What about the process that was used to form these controls, or the stakeholders who were involved, makes this the go-to list for an organization, for a small government, for a small business? Why this list?

Alyssa Lefaivre Škopac: Great question. Really, it was developed in a variety of ways. One, it was meant to be very plain language and very obvious and easily transferable. So, that was part of the rationale of the list and the language and how it was used.

Two, it was developed with a multi-stakeholder group, and Geoff can speak a little bit more to kind of how that came together. But practitioners from all types of organizations, in government, in different industries, different levels of maturity. So it wasn't developed in a vacuum. It was developed with actual practitioners who have maturity and experience in the space, who say, this is what I did. This is what needs to be done.

And then number three, it was actually done in evaluation of all of the things that have been developed already. So, trying to use language that we see in NIST, trying to look at what's happening in, you know, Biden's executive order and using language like that. So it still connects to some of the regulations and principles and things that are emerging. The idea is that this is also an active, living thing, by having governance and an industry advisory group around it. Not only will we continue to develop new controls as the technology emerges; it's not static and it's open to change, because we realize that people are going to start to implement this, come back with feedback, and we're like, you know what, that does potentially need tweaking. So that was, that was how it was developed.

Kevin Frazier: Awesome. And Geoff, yeah, can you tell us who was involved in this process?

Geoff Schaefer: Yeah. Our, our partner Ramsay Brown from Mission Control AI started this really unique summit called the Leaders in Responsible AI Summit at Jesus College, Cambridge. We're bringing it to Georgetown actually this fall. So it's going to be bicoastal, biannual.

But the idea here is to bring together practitioners from across the, across the globe, across sectors, industries, et cetera, and really try to solve like big practical problems in the field. It's very hands-on. It's like a giant workshop versus, you know, a traditional conference. And this past March we had about, you know, 75 practitioners from across the EU, UK, and Europe come together and help us pull together the first draft of these controls. And it was really designed to basically get all of their pain points, all of their lessons learned, all of their ideas: if they had to start from scratch and go from zero to one today, knowing everything that they know now, like, what would they do and how would they approach that work? And so we got so many incredibly rich insights from, from these practitioners across, you know, major companies, smaller organizations, academia, et cetera. And it really helped inform, you know, this first set of controls and what they should be, how we should contextualize them, and what good looks like for each one.

Kevin Frazier: Thinking about this unique group that formed, I have to put on my public policy hat, which I guess has a sort of public policy bias, which is to say, shouldn't the government be doing this? Why isn't Congress coming in? Why aren't states coming in and setting these specific guidelines for how AI should be adopted by different organizations? Are you all hoping that this eventually leads to some sort of legislative effort? Or is there a case for this sort of emerging self-governance approach, especially when it comes to how folks are integrating AI within organizations?

Alyssa Lefaivre Škopac: The truth is that it's a bit of a sandwich approach. You're going to have like regulation at the top line and then you're going to have things like standards in the middle. But there has always been a place in policy for things like best practices, certifications, industry guidance, because it can move a little bit more flexibly and more quickly than even standards development or regulation.

And so when we think of, you know, the trust sandwich, if you will, of regulation, plus standards, plus experts in the field actually implementing these things and providing guidance, there's definitely a place for it. And that's where it can flex, shrink, grow, and evolve based on the speed at which the technology is going to move and how people are using it. And so it's not that regulation won't have its role. It's just not going to be able to meet the need and the demand as quickly as, as the technology is going to, to be deployed. So I think that there's always a place for these types of things, not in the absence of regulation or standards or even law, but in addition to them, to help buttress and help give people, you know, really quick, easy guidance that's simpler to implement.

Geoff Schaefer: Yeah, that's exactly right. This isn't meant to replace policy or regulation, and it's not meant to be a replacement for standards. As Alyssa said, this is really extracting the commonalities, the most important focus areas from each of those individual things, to create a sort of a common framework or common language for everyone to use.

Moreover, I'd say we probably don't want policy to be prescribing things in this way at this level. I think so much of the legislative activity and the global policy making that's happened in the space so far is kind of the right body of work at the right time focused on the right things by and large. And that is specific types of use cases. So can we, can we, should we be using AI for surveillance with facial recognition technology? Yes or no. Those types of sort of high level but also very specific types of questions based on certain risk profiles, based on how we want to organize our societies. And what future life is going to look like, you know, working with one another and living with one another.

I think that's where policy has lived, and should by and large live for the foreseeable future. And that is not this. And I think that's right. You know, this, this set of controls is meant to extract, again, those best practices, the lessons learned from the types of legislation around different risk profiles of use cases, and extract what's common amongst those things so that anyone, anywhere that's using AI in any capacity can be assured that if we start with these controls and we end with these controls, we're going to be operating at the minimum viable maturity level that we really need to kind of sleep, you know, at night with confidence that we're doing the right things and we've got the right guardrails in place.

Kevin Frazier: With that in mind, we've seen similar standards in the environmental space, in the cyber space, and there have been concerns about things like greenwashing. And so I wonder if you all have thought through whether compliance with this list might amount to a sort of responsibility washing, AI responsibility washing, right? You know, we hired our chief AI officer. We had a flashy press release about them and she's brilliant and she's doing all these things. And we checked off the list of the top 20 controls, and now we can wash our hands. AI is over. We solved it. What are your concerns there, and how do you think the controls will actually mitigate against that? And I'll start with Alyssa.

Alyssa Lefaivre Škopac: I would respectfully just change the language a little bit, because we're not actually calling for compliance with the list. You know, because we're policy standards folks, I am going to be a little bit of a stickler on language just for clarity. But we're not actually, like, testing compliance against these lists. I think, you know, the things where certifications and other soft law approaches, like standards and the checking of standards like ISO 42001, will have that place inside the checks and balances.

I do, you know, a lot of times, think about responsible AI or ethical AI washing. Like, congrats, you put your principles on your website and you hired one role, cheers for you. But the whole idea of the, of the top 20 is to actually take it past that into implementation. Am I worried that people are going to say, hey, we implemented the top 20 controls, we're done? The idea is if you implement the top 20 controls, you're going to be on the higher end of the spectrum of maturity.

And in this case, when there's actually not a huge driver, unless potentially you're looking to comply with the EU AI Act or certain other regulations that are in force, you don't have to do anything. And so if someone were to say, like, we worked really hard and we implemented, you know, the top 20 controls to the best of our knowledge, I would say that's a pretty significant motion inside of an organization when compliance with regulations that may or may not exist or be enforced isn't necessarily on the horizon.

Geoff Schaefer: Yeah, it's a good question because we, we designed these in part to prevent that by their very nature. So as Alyssa said, if you're implementing these controls, either in part or in full, by the very nature of that work, you, you are already at a pretty high state of maturity. And so we, we detailed these things. We, we created these controls for that very purpose. They're extremely practical. They're extremely specific. And so they're basically building blocks for your mature foundation.

I would worry more about ethics, setting ethical principles and using that as a benchmark for maturity, because to your point, Kevin, that's really easy to, to publish, no matter how well intentioned and no matter how well thought out, and then stop there and feel like you've, you've really achieved something. But obviously connecting principles to practice, and having principles drive actual specific actions and decision making across an organization, are two radically different things.

And so we do have, you know, a control that speaks to setting the values of your organization, et cetera. But by and large, you won't find a lot of things like, you know, determine your ethical principles or make sure people understand what your ethical principles are. Because we kind of recognize, and this was one of the great insights from the wisdom of the crowd that we brought together in Cambridge, that just won't get you very far. You know, there's so much work to do beyond that and around that to actually have a practice that's going to allow you to deploy and use AI with confidence and, again, the right guardrails that actually mean something.

Kevin Frazier: I'm a law professor. So folks will know that math is not my forte, but even someone with a JD can count to 20. And running through your top 20 list, I only count 15, and that's, that's within my wheelhouse. So, Alyssa, where are the missing 5, and when will we see them, and why are we missing 5? Are you already engaging in false advertising? This just seems like an irresponsible top 20 list. I'm just, I'm concerned.

Alyssa Lefaivre Škopac: Thank you for calling that out. I actually think it's, it's really important and math is also an area of growth for me. So I did double check this. The idea is that we wanted it to be a recognizable name that follows like certain other standards that have existed in the space. But we also wanted to build in innate flexibility, understanding that the controls may need to flex as the technology and different use cases and applications come into place.

So, you know, even in our kind of our materials as they go live, we're certainly going to call out the fact that we can count. But being able to build in the flexibility for the future, for when new controls need to be developed, so it's not the top 19 and then the top 24 and things like that. So, you know, I'll let Geoff speak to it a little bit more. We're aware of the number, and it's really about building in that, that future, that future-proofing for where the industry and the technology may go.

Geoff Schaefer: Yeah, math, math is also a growth area for me. I am innumerate. But yeah, we want these to be timeless and universal. So we recognize that, you know, we, so we have many technical practitioners, folks that are hands-on-keys developing AI systems, as a part of this effort, and they will remain a part of this effort moving forward. And because of that, we, we recognize that over the next 18 months, we're likely to experience pretty radical development technically in this space, whether it's more competent agentic systems, whether neurosymbolic really becomes a thing or more of a thing and dominates kind of the next generation of architectures. We anticipate over the next 18 months we'll have GPT-5, maybe 6.

We're just assuming there's going to be a lot of technical change and evolution over the next 18 months. And wanted to ensure that these controls span from organizational level controls all the way down to system specific controls. But didn't want to just create those controls in a vacuum when we're on the precipice of such radical change. And so basically we strategically left them at 15. So with the, the broad community here, as we work to make this kind of a common framework across the community and drive adoption with that community, we want to make sure that we are adding the remaining five controls with the level of sort of technical realism as that technology evolves. And so we feel pretty confident about that approach.

But you know, we'll see. Things are moving fast, and we may finish the remaining five controls and, you know, three months later, they may prove to be completely irrelevant. So as Alyssa said, we're going to try to maintain some innate flexibility in the framework, but wanted to finish the swing on all 20 when we felt confident that we could make them reasonably timeless and universal.

Alyssa Lefaivre Škopac: We debated. We did have a big debate about how to name this set of controls and things like that, and I think part of it is making it, you know, easy, easy to talk about too, so that, that was a consideration.

Kevin Frazier: Branding is important. We all know it. It's a, that's, that's a fact of life. So, I know listeners are just dying to hear some of these. For the listeners who didn't do their pre-reading, which is fine, occasionally one or two of my students don't do all of my assigned reading, which is fine. Anyways, so we've got: engage executives, align organizational values and incentives, activate your AI governance team, integrate responsible AI into roles and responsibilities, engage employees, continuous education, establish AI risk management strategy, inventory your AI assets, conduct impact assessments, implement adaptive risk controls, continuously monitor your AI lifecycle, manage third parties, manage emerging regulatory compliance, develop incident response plan, and engage impacted stakeholders.

All right, I won't ask either of you to repeat all 15 off the top of your head, although I'm sure you can. And we don't have time to do all 15, but there are a few I really want to focus on. In particular, I'm keenly interested in engage employees. For me, I think this is one of the most difficult conversations to have, because Alyssa, as you hinted, if you introduce an employee to their potential replacement, that's not a great conversation, right? If you walk up to, I don't know, Joe Schmoe point guard on the Warriors and you say, hey, we just drafted Stephen Curry, the greatest three-point shooter of all time, that point guard's going to be a little sad. And potentially a little mad, and they're not going to want to be a part of the team anymore.

So what does it mean to engage employees in a way that isn't just, hey, you know, we're actively working to automate more of your tasks and we'll try to give you a good severance package or so on and so forth. What does, what does engage your employees actually look like under the controls you all have outlined?

Geoff Schaefer: Yeah, it's a, it's a, it's a great question. And as you may have guessed, Kevin, this was maybe one of our more not controversial controls, but one of the more considered ones. For me, this is really about making your employees a part of the conversation of what you're using AI for and how, and not only to let them know, hey, we're going to, you know, develop this AI system to automate X percent of your work. And we just want to let you know, you know, just in case, you know, we'll see if this job remains.

It's, it's really more so, hey, we are doing this body of AI work. Here's the business reason. Here's the strategic reason. We want you a part of this development and this design process. Does this make sense to you? How would you shape this AI system? How would you scope this use case? You know, is this actually going to make your life better here? Or is this going to add complexity or complication to the work that you're trying to do? Or is this even where we should be focusing our AI time and attention? Please employee, be a part of this conversation with us and help, help us figure it out.

And if this does happen to impact your, your job, your bundle of tasks in any sort of meaningful way, like, again, we want you to be a part of that conversation to understand that impact and to mitigate it, address it, rescope it, et cetera, as, as appropriate. So that is control number one also because, or one of the top controls, because it's going to inform so much of the rest of the work that needs to be done across the organization. And if the employees are not a key input to that body of work, then at best, you're going to miss opportunities. And at worst, you're going to fail to achieve responsible AI.

Alyssa Lefaivre Škopac: I like to think of, like, specific examples where some of the use cases with the most impactful ROI actually come from employees experimenting with AI in their tasks. And so one of the things we've seen is that crowdsourcing ideas for applications or uses of AI from people that are actually on the front line and can become more efficient in their tasks is where some of the highest value from AI has come from. This has been my experience, versus, you know, executives in a boardroom being like, how are we going to use AI to, to transform our business?

That has validity too, but it comes from engaging employees on how they want to use AI, helping them understand where the policies are, and then making sure that you have diverse representation across your organization. One of the things we're seeing is people standing up committees or standing up communities of practice that have a diverse perspective, not only in function and role, but making sure you have women and people with different backgrounds and ethnicities. You're actually going to get a more fruitful conversation about what responsible AI means for your organization.

So it looks like crowdsourcing ideas for applications of AI, helping that inform your policy development, and then, you know, hopefully setting up communities of practice where there is trust and transparency and being able to flag where they see challenges with AI, but also see the opportunities for, you know, task augmentation. I'm not naive enough to say that AI won't replace jobs. It's, it's, it's going to happen. And I do think that we're maybe overstating the amount. But I do think that the employees that get used to experimenting and using AI and offering ideas where it can provide value are going to have a really strong conversation and foothold inside of organizations as well.

Kevin Frazier: Yeah, there's a, a great line. I forget who to attribute it to, so whoever is listening and this was your quote, please let me know, but in the legal field, someone said, AI isn't going to replace lawyers, but lawyers who are good at AI will replace lawyers who aren't. And I think that's a, a very interesting sort of aspiration for organizations to say, how can our employees be the folks who are good at AI so that they can continue to add value and lean into the technology.

So two other controls that I want to hit on, as a sort of melded one, would be engage impacted stakeholders and align organizational values and incentives. And the particular context I'm thinking about is some of these large AI labs: suddenly their climate goals have gone out the window and we no longer see them talking about how they're going to keep their, their low carbon footprint, and suddenly AI has just become more of a priority. Are you all thinking at all about things like the environment and perhaps the impact on even folks doing content moderation or training of these AI models in the Global South? How expansive are we thinking about the impacted stakeholders and, and sort of these organizational values?

Geoff Schaefer: Yeah, let me, let me answer that in sort of a meta way. So the answer is, is definitely yes. But one of the things that we did is, for at least this first iteration, we gave some, some details about what we thought, you know, kind of the nature of each one of the controls looks like. It implies some level of maturity or focus. But we weren't being too prescriptive about what the control means in different contexts, and therefore what the specifics are around how you would measure maturity in different industries or sectors or for different problem sets.

That was on purpose again, to A, make the controls as timeless and universal as possible. But B, one of our hopes is through the governance committee and the industry advisory group that these controls eventually evolve such that we have specific implementation criteria or roadmaps or just simply guides for different sectors and for different industries. And so, you know, the financial services industry may see an instantiation of these controls at a level of detail that looks pretty, pretty different because it's, it's really tied to their specific body of work and how that gets done and the unique use cases and risk profiles of that work that's specific to their industry. Versus something like energy production or, you know, just more broad environmental considerations, you know, the level of maturity that, or the way we would engage an impacted stakeholder in that context is going to be, is going to be different.

And so, you know, what the compute does to your environmental goals, or how you're actually building physical infrastructure to do this work, or using AI to help you design physical infrastructure, those environmental considerations are going to be really different depending on the use cases. So, we try not to solve for this at a use case level, because it's going to be again, so variable based on industry and player and organization. But absolutely the intention behind that control in particular is understanding no matter what AI work you're doing for whom and in what context, you're going to have people who are impacted by it sometimes positively, sometimes negatively. And to be responsible, you know, the nature of the control is you understand that impact at a really nuanced level. So you can actually take appropriate mitigations.

Alyssa Lefaivre Škopac: So Kevin, you've asked the question that keeps me up at night. Honestly, the, the reason AI is so unruly to govern is because it's so massive and it has so many of these different threads and it's so context specific. So I, on a separate initiative, was even a co-author on a paper around how, you know, AI impacts ESG, and I know ESG can be political, but basically other types of organizational goals.

You're using AI to help meet an environmental objective potentially, you know, whether it's carbon capture or something like that. But have you considered the compute implications of that? Have you looked at this holistically? I think all the time about where data centers are being built in Latin America, in water-scarce areas. I think about all of these things. And to that end, when we think about where the controls fit in, it's about building that muscle, that risk assessment inside your organization, as Geoff mentioned, to think about: have you looked at this holistically? Have you considered the counter effects of the AI in different areas of your organization?

And I do think the controls will begin to flex, and we'll hopefully work with other organizations and partners that are experts in these things. Geoff, before you started, Kevin and I were talking about how I don't believe in the broad umbrella term of AI expert, because you simply cannot be an expert on all of it. If you're an expert in the testing and evaluation or kind of technical side of things, you may not be the policy expert and you may not be the environmental expert, which is why our community is so important, because if anyone's an expert in everything, they're a liar. But as a collective, there's a huge amount of mind share that can build this out for our future.

Geoff Schaefer: Yeah. One of the things I, I say is the amount of things I need to have an intelligent opinion on, on a daily basis is simply unreasonable.

Alyssa Lefaivre Škopac: Not possible.  

Kevin Frazier: It's too many things, too many things, but I know you two have at least five to do items on your list, so we'll have to leave it there and let you get to it. Thanks again for joining.

Geoff Schaefer: Thank you so much, Kevin. This was great.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.


Kevin Frazier is an Assistant Professor at St. Thomas University College of Law and Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin. He is writing for Lawfare as a Tarbell Fellow.
Alyssa Lefaivre Škopac is an independent responsible AI strategist.
Geoff Schaefer is the chief AI ethics advisor at Booz Allen and heads its responsible artificial intelligence practice.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
