
Lawfare Daily: Jake Effoduh on AI and the Global South

Kevin Frazier, Jake Effoduh, Jen Patja
Tuesday, October 8, 2024, 8:00 AM
How are AI advances impacting human rights in Africa?

Published by The Lawfare Institute
in Cooperation With
Brookings

Jake Effoduh, Assistant Professor at Lincoln Alexander School of Law, joins Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to share his research on the Global South’s perspective on AI. Jake has carved a unique and important research agenda looking into how AI advances are impacting the pursuit and realization of human rights in Africa. 

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Intro]

Jake Effoduh: There need to be context-driven approaches; that is very necessary just to translate the principles of explainability. Like, what is this technology? What does it do? What do you know about it? Even explaining that some aspects of this technology are not fully understood is part of what I'm talking about.

Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, Senior Research Fellow with the Constitutional Studies Program at the University of Texas at Austin and a Tarbell Fellow at Lawfare, joined by Jake Effoduh, Assistant Professor at Lincoln Alexander School of Law.

Jake Effoduh: There's a blind spot when it comes to the regulation of AI. We don't hear much about what people in the Global South want or how they engage with the technology. It's always about, like, oh, maybe what tool can we put in the Global South to help them X, Y, Z, you know, without asking what they want AI for, or if they want AI at all.

Kevin Frazier: Today, we're analyzing how the rapid development and deployment of AI is affecting human rights in Africa and more generally, the Global South.

[Main Podcast]

Jake, let's start with this concept of explainable AI. You've emphasized the importance of distinguishing technical explainability on the one hand, and on the other hand, a broader sort of socio-technical conception of explainability. Can you outline the difference between the two and why this concept of explainability is so important?

Jake Effoduh: Yeah, I mean, I think it's one of the most critical aspects of governing or regulating AI systems. If technology is going to serve us as humans, we need to have an explanation as to what it is, what it does, and what we should expect from it. So, the difference between a technical definition of explainability and a socio-technical one is really not that different, except that we don't want to lose people, because AI is not a purely technical tool.

It's a socio-technical tool. It engages an environment; it engages with people in different ways. And so the reason we push for explainability in AI is to ensure that AI systems are designed to provide some clear, understandable explanation of what their decisions are and what actions we should expect from them. And with this push for a more socio-technical explainability, people should be able to understand or contextualize what AI systems are, what they do, where they get their data from, who they represent, who made them, who created them. Like, the whole explainability concept is pretty much about ensuring accountability and ensuring trust. We want to make sure AI systems are more understandable so we can better identify where they might be biased or where they might be discriminatory, you know, and the rest of that.

But the most important thing about a socio-technical drive for explainability in AI is that it actually empowers people; it empowers individuals to ask questions, to challenge, and to seek redress where any decision might negatively impact them. So if we care about a just society, if we care about what technology does for us and within us, we want to ensure that people understand AI systems within their local context. It's about the technology meeting us where we are and explaining itself to us, rather than us going to computer science school just to understand what an app or an AI system is doing within us.

Kevin Frazier: Well, I certainly don't want to go to computer science school, so I'm glad that's off the table. But let's break this down a little bit further. Is it fair to kind of think about the difference, like, understanding how an internal combustion engine works in a car? That's one way to understand how a car works. But a broader understanding of sort of the explainability of a society oriented around the automobile is to think about, okay, now we have to change our roads. Now we have to see who's driving. We have to think about what materials go into the car. Is that a kind of good analogy for thinking about the difference between technical and socio-technical explainable AI?

Jake Effoduh: Yeah, it's quite similar. I mean, if you're going to buy a car, you want to have a manual, and people don't realize that it's actually by law that those manuals have been put in place. The manual should also contain instructions in a language that you understand. So, if you notice, most cars come with, you see, English, Spanish, French, Arabic, you know, Mandarin. You see it in different languages because it needs to explain what the car's functionalities are, how to set it up, how to service it, all of that.

But you see, we need to realize that AI systems are way more technical, right? Because there are aspects of the AI system that are just black boxed. Even the people who developed it don't know how these machine learning algorithms come up with their outcomes. So, it requires even more explanation, right? How do you prescribe or explain something that you don't fully understand, but that has implications for people's lives and would affect people differently? So that is why I think it's a bit different, because there need to be context-driven approaches; that is very necessary just to translate the principles of explainability. Like, what is this technology? What does it do? What do you know about it?

Even explaining that some aspects of this technology are not fully understood is part of what I'm talking about. Yes, there is a certain level of education that you need to have to be able to buy a car, have a manual, and then know what the car does. Which is why I've sometimes said we need interlocutors, or I'll call them intermediaries: people who understand the technology and can explain it to us in the language that we get, in the context that we live in. That way we would reduce some of these complaints, some of these issues around bias and discrimination, because if people know what the technology does, they can better handle the technology. They can better adapt it to their own circumstances. And then we won't have these problems of, you know, people raising issues around bias or discrimination or issues around just algorithmic misuse.

Kevin Frazier: Yeah, and I think it's really interesting and inspiring that this conversation is happening relatively early in the age of AI, because if we look to the social media age, for example: if we had known more about the Facebook algorithm and what sort of behavior it was driving before the Facebook Files came out, well, then even though you may not understand how those algorithms actually work, if we had understood that they were driving increased user engagement on the basis of kind of sensationalistic or highly emotional content, we would have been in a better position to think about who should use these platforms, when, how we should regulate them, and those sorts of questions. So, it's good, I think, that we're putting an emphasis on this explainability notion early. Can you share a little bit more about how some international organizations are already working on these sorts of explainability and transparency efforts?

Jake Effoduh: Yeah. So like right now there are no international treaties or conventions that mandate the industry to make sure that there's a lot of explainability around the AI systems that they use. But there's a lot of international effort. For example, I think in March this year, the UN General Assembly unanimously adopted the first global resolution on AI, to protect, you know, personal data, to monitor for AI risks, pretty much to safeguard human rights. Of course, they did not mention, like, explainability there. But the entire rationale behind this resolution is for AI systems to be more respectful of privacy, to be more context driven, to be more contextual for their users, and to be as transparent as possible.

So when it uses that transparency mantra, we can infer that, of course, AI systems should be more explainable. The industry should try as much as possible to tell people what these tools are and what they might cause, to the best of their knowledge, right? There are also some other, like, I'll call them soft international agreements. They're not like hard law, but UNESCO proposed a framework; they have a draft recommendation on the ethics of AI. There are the OECD AI principles. There are the G20 AI principles. There's the G7. There are tons of international documents that prescribe: hey, we want to use AI, we want to see AI, but we want to use it in a way that it advances us and helps us.

To take your example with social media like Facebook and Instagram: we want to ensure that these tools are not making young people want to, you know, harm themselves, and are not sexist in the way that the algorithms would flag things like menstruation and take them down but would allow, like, gun violence, for example. So it requires a symbiotic process. The industry needs to engage with its users and be transparent and open with them, while we as users, I get it, should also have the patience to realize that this technology is not perfect. It's not fully understood. It doesn't have explanations for certain things. So we have the right to choose where and how we want to use it.

There are more laws coming through internationally, but these are mostly like declarations, guiding principles, ethical frameworks. I don't know if we will come to have like an international law or treaty. As an assistant professor of law, I would love to see that, but I don't know if that would come soon.

Kevin Frazier: On the flip side of things, obviously, one of the most important questions is whether the labs themselves are listening to these declarations, to these principles, or adopting their own equivalents. Have you seen any instances of labs trying to adopt and really embody these principles?

Jake Effoduh: Yes. In fact, a lot of the tech companies come up with their own principles to say, hey, these are the things we are doing to maintain ethical use of AI. These are the ways that we get feedback.

For example, you mentioned Facebook. Meta does this thing where it does algorithmic impact assessments. I think, every year, they get experts to go into the field and collect data about how their systems are doing: what are the impacts on the environment, the impacts on marginalized groups? I'm sure you must have heard the issue where there's a big, almost billion-dollar lawsuit against Meta for what happened to Muslims in, you know, Myanmar. And that was one of the first times we saw that, oh wow, a platform like Facebook, which is a social media platform to engage with your friends, share pictures and updates, could actually be ostracizing, or could actually lead to the displacement of, like, hundreds of thousands of people, because the platform became a very threatening space to a minority group in some parts of, you know, the Asian world.

But we also now think, well, maybe this could happen in our parts of the world, and it could happen to any other community. So we need to be aware, we need to be more critical. So Meta has come up with its own guiding principles. Google has theirs, OpenAI has theirs. But there's all this conversation about, you know, should they self-regulate? Should the people who are in charge of deploying these AI tools be the ones to set the standards and principles as well? I don't know about that. I feel like there is a lot of value to the principles that they set up, to the values and goals that they uphold when they do their work, and I respect that. And I know there's a lot of goodwill in, you know, this space. But we also need some top-down regulatory frameworks that could put checks and balances in place and also provide accountability where harms are caused.

Kevin Frazier: Well, and to think about the other end here, which would be bottom-up regulations. I think one defining attribute of your scholarship is that you really are paying attention to how regulations and technologies are affecting users themselves, individuals themselves, rather than just kind of operating in this amorphous policy space that's, you know, just a bunch of classroom conversations or conversations in capitals and smoke-filled rooms, or I guess now we have to say vape-filled rooms, but that's a whole 'nother conversation.

Jake Effoduh: Yeah, I should borrow that.

Kevin Frazier: Oh, go for it. You can steal all my bad jokes, please. So, when we're thinking about this bottom-up perspective to AI regulation, why do you think this kind of approach has been neglected by other scholars or perhaps by policymakers and labs?

Jake Effoduh: Very great question. And I am all for the bottom-up approach because that is where, well, it's the most difficult, to be honest, right? It's easy for states to say, hey, these are the laws, these are the rules, done. It's easy for a top-down approach. A bottom-up approach sometimes is difficult because how do you get the sentiment of what people want and what their challenges are? It's very tough. It's very expensive. But it's more helpful. It's actually more productive, because if it doesn't affect or improve the lives of people at the very grassroots, it doesn't do much.

Some scholars call for, and I have supported this, what I call a middle-out interface. It's in between top-down and bottom-up, because at some level, states have the heavy hammer. They can, you know, make that injunction, give that judgment over a tech company when their tools go berserk. But we also need the bottom-up, where people can co-produce or co-create this technology in such a way that they can make it adaptable to their lives. Some states have said, we don't want self-driving cars. We want to use AI for agriculture and finance, period. We don't need it monitoring our, you know, day-to-day lives or, you know, the lives of our kids. We want AI to help us wash our clothes, help us deliver our parcels, not tell us who to date. People should be able to choose where AI systems should operate in their lives.

And so a bottom-up approach would help do things like give people the right to contest bias. And I think this can help the industry provide meaningful explanations for where biases come from, or where biases have occurred. And to the first point I made about people choosing where and where not to use AI: that bottom-up approach is supported by the UN High Commissioner for Human Rights' proposition that for any AI system that is high risk, we should place a moratorium on it. If you notice, citizens almost don't have a right to refuse facial recognition when taking a flight. Sometimes they don't even know that their face has been sampled. I think people should have the right, you know, to do that.

And then another reason for a bottom-up approach is, when it comes to algorithmic impact assessments, if we want to ensure that AI systems don't harm or hamper human rights, people should be able to contribute to the study of the technology, to be able to tag when the technology is inaccurate, unfair, biased, or discriminatory, and we can't achieve that through a top-down approach. So top-down is great. We need those powers at the top to tell the industry when they're going left or right and to slam them with those damages. But we also need bottom-up, where people can actually flexibilize, use this technology in ways that meet their needs and can inform the development of the technology to their benefit.

Kevin Frazier: And what I really like about this emphasis on bottom-up, and hopefully it becomes a sort of embraced approach, is that if you are going to go the bottom-up route, well, then you actually have to take the time and make the investment in educating the public on how these tools work, how they're deployed, in what way, and what risks they may pose.

Right now, it's so frustrating to me that we see all of these polls get sent out to the public that say, do you like AI, or do you think AI is going to kill you? You know, depending on the framing of the question, the answer is pretty obvious, and it's not a really meaningful poll if you don't have any information, right? If somebody polls me on who I think is going to win the college football championship, I'm just going to say the Oregon Ducks, not because I have any understanding of whether or not the team actually has good players. It's just because I love the Oregon Ducks. If I'm already fearful of AI or fearful of technology, and you poll me about it, and I don't understand it in a really meaningful way, then we're not really assessing which fears mean the most to the public or which beneficial uses the public is most excited about.

Because to your point, you know, I may not want an AI telling me who to love. That makes a lot of sense, because we know that an AI wouldn't have led my wife to me. She surely would have found someone better. But we may want AI to actually help us with some issues, right? Making sure, for example, that we have access to an AI tutor for our students. That could be a great thing. So I really like that emphasis on, let's take the time and make sure people are actually spending the money and investing in this difficult process of a bottom-up approach.

And with that in mind, I think, I'm also curious to hear your thoughts on how we've also seen so many of these AI regulation conversations be biased towards a Western perspective. How is that kind of showing up in current policy conversations? And how would you suggest we kind of reorient that perspective?

Jake Effoduh: Yeah, it's a very vital point you've made. The AI conversation is very lopsided. I think people in, like, North America, people in the West generally, have this assumptive notion that, you know, AI affects lives the same way across the world, without recognizing that while the conversation here is, like, should we use AI or should we not, or, you know, what the impact of generative AI is on our work, some other parts of the world don't even have the luxury to have that conversation.

We still have people who have not even gotten into the Second Industrial Revolution. There are parts of the world where they have no electricity. They are yet to witness the age of the internet, and then here comes, you know, artificial intelligence systems. And it's easy to say, well, they should catch up, like they say, they should leapfrog. But they are also impacted by how these tools will be used in their domain, without their consent, without their knowledge, without them even being up to speed on how these technologies function. So if we are confused, if we don't know what the AI is doing, imagine what that does for communities that have absolutely no idea what the internet is. And the first time they come to terms with technology, it would be an AI tool.

Let me give an example with Myanmar, like I mentioned before. The reason why there were a lot of issues in Myanmar was because when Myanmar got in contact with Facebook, that was their coming of age with the internet. They didn't have the luxury to experience the internet first and then, after some years, Facebook as one app. No, Facebook was the internet for them. So it makes sense that they trusted the platform, and a lot of things went down on that platform that affected the lives of individuals.

So when it comes to regulating AI systems, 92 percent of the technology is created in the West, and it will be deployed to every other part of the world. I've traveled to China, for example, where I've used publicly deployed AI tools that just did not recognize me; because the system could not see me, it just locked off. The system just says, I'm sorry, we can't, like, because I'm Black, and not many Black people use the tools that exist in that particular city in China. And so maybe the developers didn't realize that Black people were going to travel to China.

And we see some of these exact tools being deployed to some parts of the Global South, and they're exacerbating some of the extant issues that people deal with in those regions. So, none of the AI tools I've come across in sub-Saharan African countries have been really adequately designed for that space or adequately explain what the technology does for them. Sometimes they have to adapt it, put in a new API, do certain things to meet the local parlance, right? So, there's a blind spot when it comes to the regulation of AI. We don't hear much about what people in the Global South want or how they engage with the technology. It's always about, like, oh, maybe what tool can we put in the Global South to help them X, Y, Z, you know, without asking what they want AI for, or if they want AI at all.

I had a small, I won't call it an argument, but there was a company based in North America that was going to deploy a credit, a financial AI tool, in Chad. Part of this process was to help Chadians become more financially accountable, but more importantly, to help them save money towards their retirement. And I questioned that part of the AI tool, helping them with their creditworthiness and helping them save money towards retirement. I told them, if you had only Googled or gone to Chad, you would realize that the average life expectancy in Chad is not even up to 65 years. So people are not thinking about saving money for their retirement. They need the money now. They need to eat. They need to send their kids to school now.

So, the reason why your app failed and did not get the buy-in of the people was because you just lacked cultural awareness. You did not know anything about these people, but you are interested in helping them, which is good. But if you had only engaged with them, if you had collaborated with them and asked them, where exactly do you want AI to help in your lives? What are your immediate needs at the moment? You would realize that you did not need to think about retirement, because most of them didn't see themselves living up to 70 or 75. And so, saving money for that future just didn't make sense, especially because they have imminent needs at the moment.

Again, this is just an example, but I think to get a more balanced view, and I've tried to write a bit about this, we need more Global South perspectives, in ways that would even help the Global North. When I talk about explainability, I say sometimes we need what I call midwives or griots. These are like intermediaries who can help people who don't understand AI systems, explain it to them in the way that they understand, AI for Dummies. Somebody who doesn't understand English, explain it to them in their language. Somebody who doesn't really know about cars but knows something about food, explain it to them using food.

Someone who doesn't know much, and I try to do this in my writing: I love to eat a lot, so I talk about, like, cooking, for example, and I say, you know, no matter how sophisticated an AI tool is, it can only achieve specific outcomes when people understand it. So, I have drawn a parallel to a cooking pot by saying, no matter how nice a pot is, no matter the pot's utility, it requires a cook who knows how to use it, so that they know, like, what meals to make with it.

Because people have differing culinary needs. People have different preferences, different cooking traditions, and so AI systems should not be positioned as, here is a solution. AI systems should be tools for people to cook whatever food they want in whatever way they want it. And lastly, the pots, the AI tools, should come with a manual that people understand in their language, so they can maximize the utility of the pots and trust them to cook their food. So yes, we can have the highest-quality AI tool to solve the biggest problem in society, but it will be very insufficient if it lacks adaptability, and adaptability both here and in other parts of the world as well.

Kevin Frazier: I have to applaud you, because you may be one of the few legal scholars whose article left me hungry after reading it. That was a unique skill set of yours, to inspire an emotion other than, okay, now I need to go watch something funny. So, kudos to you for some great legal scholarship, and I do think your analogies are quite helpful.

And so, I am curious now, now's my time for kind of crazy; I get to go into Professor Frazier mode and throw you a hypothetical. Which is: in so many of these emerging technology conversations, there seems to be a sort of assumption that if you create the technology, or if you are the host nation of the creators of that technology, then you have some sort of right to set the rules of that technology. But it seems to me like the folks who should really be in control of governing that technology are the majority of the people who are going to be affected by it.

And I think something that's just staggering is, we all know that AI is going to have decades-long, if not centuries-long, ramifications. Well, as soon as 2050, one in four people on the planet will live on the African continent. And so if you just take a slightly more future-oriented perspective and ask who's really going to be affected by AI, is it the 400 million Americans, or is it the billions of people living on the African continent, living across the Global South, who arguably should be the people setting the rules right now? So am I crazy? Or what are your reactions to that?

Jake Effoduh: You are good crazy, and we want that kind of thinking. I think the fact that you even know about what the future looks like across the entire world is very commendable because sometimes people struggle with what the other world looks like and I say other world like the subaltern really. If there's no money there, if there's no power there, if there's no value there, people feel like, well, are they really that important? But I agree with you.

Sub-Saharan Africa, for example, is not just a region that is underdeveloped, as they say, but it's also a region that contributes to the development of AI technologies. There have been cases with OpenAI. There are articles that show that people were paid less than $1 to help, like, do some data sorting across Sub-Saharan Africa to develop this technology. So, the labor contribution, the environmental impact of AI, will be shared by all of us. I'm sorry, it requires, I don't want to be very UN-speak, but it requires international solidarity, a collaborative approach, to be able to achieve this.

And whilst we romanticize what AI would do, we must not forget that there is also AI as hegemony. There's this push towards global domination. I don't know if anybody would beat the U.S., but there's, like, Israel, and China, and, like, Russia; people are trying to be, like, the global power when it comes to AI. So yes, states will always want to make the rules, and we want the states to make the rules, but with the unction and the understanding of what this technology does at the very grassroots, including outside the U.S., outside the country of domain. Because AI systems are de-territorial. There is no big AI system in the world today that is developed in only one country by one country's people. Very rarely. Very rarely so. The data comes from literally all around the world. Sometimes the technology is hosted in one country and trained in another. It's de-territorial. You can use an AI tool in India and control something completely, 1,000 kilometers away.

And so, therefore, yes, states can regulate, but states should regulate even beyond the auspices of what they understand as regular regulation. It's not like banking or agriculture. It's a technology. It's software that doesn't even need a headquarters; it doesn't need an office. So, yes, we want states to regulate, but regulate with the understanding that people should drive where the regulation should go.

What we see these days with the U.S. and Canada and even China is that they look at technology and AI as more of, like, a trade-related issue, more of, like, a communications device. So they're regulating around communications, regulating around, like, a technology tool, like a car. They're not regulating something that has impacts on people's mental health, impacts on people's immigration status, impacts on people's quality of life. They're not looking at the technology as a socio-technical one. They're looking at it like another technology, like a car or a robot or something else that does not have the massive impact that we see AI has today.

AI is going to change literally every aspect of human endeavor, from how people sleep and wake up, to how they date, who they see, how they envision the world, how they see themselves, to even lowering their, like, physical capabilities. AI, like, there are even brain-machine interfaces now; we don't even want to go there. But there will be disproportionate abilities, I mean, to affect the existence of literally almost every human being in the world.

So yes, we want states to regulate. We don't know if human beings would have that. There are all these people who, I don't know what to call them, the people who assume, what if we all get together and decide? But we really don't get together. It's very difficult for us as citizens to get together. That's why we elect people into office to make those decisions for us. And if they're not making them on our behalf, we can question them and push them to do so. What we're having now is people don't know what to ask for.

People don't know or even understand the impact of AI on their lives. Even when they haven't, like, come into contact with the technology, it's already impacting their lives. 400,000 people were displaced, 400,000 people were displaced in the last three months, because AI tools have been able to do their jobs. I'm not saying AI always displaces; it also creates new types of jobs. But the point is, the people in Ghana, Nigeria, Ivory Coast, who were told, I'm sorry, don't come to work anymore because we don't need a factory floor of you assembling shoes in boxes anymore, they don't know that their jobs were replaced by an AI tool. They don't know. They're like, well, maybe the company shut down. But yeah, the company shut down because the factory floors were closed. Robots can now assemble those goods in cartons way faster, way quicker, and done. So even though they don't know anything about the technology, their lives are impacted by it.

I was in a negotiation, and I'll stop here, I was in a negotiation last year where a cement company in Nigeria got a lot of complaints that the men who worked for this industry had a lot of back problems because they were carrying heavy bags of cement every day. These were casual workers, independent contractors, as we say. And so the company felt very embarrassed by all the news media articles about these men having problems with their backs from carrying heavy cement. For me, I engaged them on things like, what if you give them breaks? What if you give them something like a brace to help them? You know, what if you do exercises? What if you limit the number of bags a person would carry per day?

But these things don't equate to more economic advantage. So I guess maybe they didn't take those suggestions, but boom, a German company suggested this phenomenal machine that would pick up the cement bags, arrange them very evenly, even weigh them in the process, giving way much more value than these human workers would provide. And the company said, voilà, let's take it. So the AI tool was deployed straight to Nigeria from Germany, and 250 workers, all men, were asked not to come back anymore. Literally, they came to work one day, and the next day they were told, there's no need for you. There's a massive, phenomenal, AI-enabled device that will pack all the cement. Don't worry. You don't have to complain about back pain anymore. You'll be fine.

But these 250 men have lost the only jobs they've known all their lives. And they are most likely heads of their households. They have to support their wives and their kids. And that was it. That was the end for them. You know, that's it. That's a literal impact in a space I've engaged in. And now we have to engage with the employer. And in fact, the employers were proud to email us and tell us, guess what, guys? We solved the problem. No more media articles about us destroying the backbones of these men. AI can do it. Maybe these men can do something else.

Kevin Frazier: Wow.

Jake Effoduh: Yeah. It's not something fun, but again, it's uncomfortable for me to even step back in and say, well, maybe dismissing these people is not really the right option. And the CEO was like, well, what exactly do you want, Jake? Do you want us to hire them, or do you not want us to hire them? I thought you were the AI lawyer here. Blah, blah, blah.

Kevin Frazier: Wow. Wow. Jake, on this notion of, you can't report what you don't understand, right? If we're not even tracking that these jobs were lost because of AI, well then, you know, when the U.S. Senate or the Canadian Parliament says, oh, I haven't seen any real reports or any statistics on jobs being lost. If we're not more methodical about making sure all impacted users understand AI and that all impacted communities have some responsibility to share how AI is impacting lives and impacting their economies, then these regulators in Western parts of the world can just say, ah, I didn't see that report. You know, that never made it into an official UN report or anything like that.

So I just think your work, making sure that we are actually engaging with the impacted communities, is just so essential, especially when it comes to getting a sense of how AI is changing lives and changing communities for better and for worse. So, I'm curious: now that you've done so much work, you've spent time on the ground in Africa, you've been talking with individuals, what's next on your research agenda? What questions are you studying these days? What are you looking into?

Jake Effoduh: Well, my research still borders on the impact of AI systems, especially on like marginalized groups, like people of African descent, pretty much how AI systems disproportionately affect people of like different racial backgrounds, ethnicities, sexual orientations, gender identities, pretty much recognizing that AI systems would not affect people the same way. And so how do we use the law as an instrument to constantly protect these individuals and see that we benefit from the good of AI as opposed to people using the tool to disenfranchise or further harm other people?

A small example: in some parts of Africa, you know, people can't live to their full, like, the entirety of their full selves. And so there are these AI tools that can detect criminality from the look of people's faces, or AI tools that can detect sexual orientation, which can be a fun project in, like, California. Oh, let's see who's gay through the AI tool there. But if that tool gets to a country like Uganda, for example, we know what the impact is. So I'm constantly just helping people see, hey, you need a more globalist approach towards how you're developing your AI tools. Whilst you're doing this great work that you're doing with your AI systems, how do we ensure that we protect people whose lives, in the general world, are still hampered by a lot?

I always reemphasize how law can be a tool for hegemony. Like, legal frameworks can often reinforce dominant power structures. So there's always some ruling class and some marginalized class. And even when the law is like, well, we're using AI tools to help the immigration process, we're using AI tools to help in the criminal justice process, it still reinforces the dominant structure. If, in a particular space, Black people are more targeted by the police, the AI tool used in that police force will likely enhance that targeting as opposed to limit it.

I always talk about data as extraction, right? The collection of data from all around the world. Everyone's collecting data, collecting data, collecting data. For some people it's just data, but for other people it's their lives, right? Some people are part of vulnerable communities. Some people need privacy and autonomy. Some people have indigenous, cultural affiliations to their data. So it's not just data. It's people's lives. And do you respect it? Do you honor it?

I also talk about technology as a weapon, because no matter how far we want to advance with AI, it can be weaponized. Not everyone wants to use AI for agriculture, healthcare, transportation. There are people who are thinking crazy stuff in their heads about what AI should and can do. There's a lot of, like, lethal autonomous robots. There's a lot of, like, all kinds of things you don't even want to know. And I wonder why people think like this. Like my dad used to say, there's a reason why some people see the moon and want to dance in front of it and play drums and sing songs, and some people want to go there and dominate it. Right? People have different ideas around certain things; that is within them. But I think the last point is this.

And I think people think I'm an AI pessimist. No, I used to even be an optimist, but the more you see as a lawyer in this space, well, I've now become like a pragmatist. I need to balance the equation. What we see is people just throwing themselves into AI, using the word AI any which way possible. If it's not AI-enabled, then you're not, you know, you're not savvy enough. Everything is, like, everything's AI'd up.

Yes, in my class, I allow my students to use all the AI tools possible. I show them which ones exist, how to prompt engineer their large language models. I show them the ways AI tools can advance their work as lawyers. Because the truth is, in the next five years, lawyers who use AI will take the lawyers who don't use AI out of their jobs. AI is not going to replace lawyers, but the lawyers who use AI will replace the lawyers who do not. So my students should be equipped with the state-of-the-art tools, understand how AI systems work, how they impact their job. But with the consciousness that they, as lawyers, have the responsibility, and liability would fall on them, if they use some output that is hallucinated or if the AI tools used for their clients become weaponized or dangerous.

So the underpinning biases, the prejudices, the systemic algorithmic inequalities, they have to be aware of them. They have to be aware of what this tool does to the environment: how much cold water the generative AI model you're contributing to is using, how much energy is used by a data center, the implications of that. If we're going to talk about offsetting carbon, it applies to the AI industry as well. So I'm not like, oh, let's just not use AI. No, I'm all about, we need to interrogate this technology hard enough so that it serves way more people than it currently serves at the moment.

Kevin Frazier: Well, Jake, you've got a lot of research questions to dive into and I've already taken up enough of your time. So we'll have to leave it there, but thank you again for coming on.

Jake Effoduh: Thank you so much for having me.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Cara Shillenn of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.



Kevin Frazier is an Assistant Professor at St. Thomas University College of Law and Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin. He is writing for Lawfare as a Tarbell Fellow.
Jake Effoduh is an assistant professor at the Lincoln Alexander School of Law, focusing on artificial intelligence, technology law, international human rights law, and international institutions.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.