
Lawfare Daily: Robert Mahari on Addictive Intelligence, Digital Attachment Disorders, and Other AI-related Concerns

Kevin Frazier, Robert Mahari, Jen Patja
Tuesday, October 1, 2024, 8:00 AM
How could the increased use of AI lead humans to form addictive relationships with AI?

Published by The Lawfare Institute in Cooperation With Brookings

Robert Mahari, a joint JD-PhD candidate at the MIT Media Lab and Harvard Law School, joins Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to explain how increased use of AI agents may lead humans to form troubling and even addictive relationships with artificial systems. Robert also shares the significance of his research on common uses of existing generative AI systems. This interview builds on Robert’s recent piece in the MIT Technology Review, which he co-authored with Pat Pataranutaporn.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.


Transcript

[Intro]

Robert Mahari: You start losing the ability to engage with other humans because you unlearn this giving and taking type of relationship and get used to a relationship where you only receive affirmations and praise.

Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin and a Tarbell Fellow at Lawfare, joined by Robert Mahari, a joint JD-PhD candidate at the MIT Media Lab and Harvard Law School.

Robert Mahari: I expect we're going to see more and more folks turn toward this. So, I don't think it's completely a Black Mirror episode in that sense. The question is, what will the technology do in response, right? And what kind of products will start meeting that demand? And how will those products look?

Kevin Frazier: Today, we're talking about the mental health diagnosis that may soon become all too common: digital attachment disorder. Think the movie “Her,” but resulting from what Robert refers to as “addictive intelligence.”

[Main Podcast]

Robert, your recent MIT article highlights the growing role of AI as companions in various human relationships and raises a number of troubling possibilities about overreliance on these AI agents. And before we get into full Black Mirror episode territory, let's break a few things down. So first things first, what is an AI companion?

Robert Mahari: That's a good question. And I think that this will be a recurring theme, that, you know, all of these kinds of concepts are not fully defined and fleshed out. But the way we had conceptualized an AI companion in the piece was some sort of AI system that is conversational, that replaces or seems to replace some aspect of human relationships. So that could be someone that you're talking to, a therapist, or someone who is more like a mentor or a teacher, or one of the kind of ideas we explore in the piece is more sexual in nature, right? So, more of a sexual companion that you're going back and forth with. But that's the general idea.

Kevin Frazier: So, that innuendo aside, “going back and forth with,” these aren't physical companions we're talking about, correct?

Robert Mahari: No. So, the whole thing gets a whole lot more scary and interesting when you add what people sometimes call embodiment. But for now, the idea is really think ChatGPT, right? You could engage in something that seems to resemble companionship with ChatGPT right now, where you say, hey, I'm really struggling at school. Do you have any ideas for how I might deal with that? And ChatGPT will happily kind of therapy you and help you out. So that's the kind of use case that we're imagining.

Kevin Frazier: And when we think about some of the new capabilities of ChatGPT and similar tools, there's this sort of storage function now where you can have the model remember certain details about yourself. Is that an indication that we're moving further along this spectrum towards a sort of companionship idea?

Robert Mahari: I think it certainly strikes me that for certain types of relationships, you need to have this long-term ability to recall. For the, you know, sexual messaging piece, it's probably less relevant to have the long-term recall. And even for a therapy session, you could imagine that being an ad hoc kind of thing.

But as soon as we start thinking of these relationships as something maybe deeper and more meaningful, the ability to have long-term memory and be able to refer back to things, and also to give the model the ability to learn and adjust its outputs based on your preferences, all of that requires some sort of recollection.

Kevin Frazier: My wife has heard me talk too much about my failed basketball career. So it would be helpful if I could just prompt ChatGPT to always remember that I never became the NBA star I was aspiring to be. So, when folks think about these AI companions, I'm sure some people listening are already thinking, oh, easy, Joaquin Phoenix in “Her.” We're just finally realizing this type of companion that travels with you, that you can converse with. Is that an accurate conception of a sort of AI companion, would you say?

Robert Mahari: Yeah, I think that is one way that this can play out. There's really a lot of variance in how these types of relationships could manifest themselves. And we'll talk about this in a moment, but there are lots of different types of harms, as well as completely harmless applications, that arise.

Kevin Frazier: So one of the harms that I think might strike people initially as pretty benign is this idea of AI companions becoming increasingly sycophantic. And off the bat, I'm thinking, you know, it'd be nice if somebody told me, “You gave the best civil procedure lecture today. Great job.” And that might be self-affirmational, give me some more confidence. Why is this sycophantic nature of AI companions problematic?

Robert Mahari: So there are two aspects to that. And I don't think that it's fair to say that this is a blanket, you know, problem that is going to affect all people, but more that these are two aspects that give us pause. The first is that this is really nice to hear, right? And so, sometimes people talk about dark patterns in the way that we've designed technology, and this seems like it could be a type of dark pattern, right? Of course, if you receive praise and hear things that you want to hear, you're more likely to continue engaging.

The second thing, which is a little bit more maybe abstract and maybe more troubling, is that this is not the way that humans normally engage in relationships. There is generally a giving and a taking when we're engaging with other humans. And thinking of a relationship that is only based on receiving, right, you're only going to get positive things. And in fact, the model is going to go out of its way to adjust to your preferences and to the inputs that you provide to continue to kind of refine how it's performing. That's quite different.

And so the worry is that if somebody is engaged, and we don't know what the level is, but if someone is engaged too much with these kinds of companions, is there the possibility of something that we call in the article digital attachment disorder, where you start losing the ability to engage with other humans because you unlearn this giving and taking type of relationship and get used to a relationship where you only receive affirmations and praise?

Kevin Frazier: So, I'm already freaked out. But before we get into full freak out mode, let's level set on a couple things. You mentioned dark patterns as being kind of built into these AI companions. What is this notion of dark patterns? In what other contexts do we see them?

Robert Mahari: So, it's not clear whether dark patterns are built into AI systems yet. They might have elements already that resemble dark patterns. But to take a step back, people will talk about the way that we've designed platforms like TikTok, like social media platforms, that draw you in, that continue to maximize engagement and make it hard to walk away. And this, you know, exploits tendencies in human psychology and just makes us a little bit more vulnerable to overuse of these systems in a way that ultimately benefits whoever is creating the systems.

And so it seems like, if we see AI companionship as a product that's developed, you know, in a deliberate way, because right now what we're seeing, and we can talk about that, is people using AI systems in this way, systems that aren't really intended to be used that way. But because they were never developed to draw us in, it's unlikely that, say, OpenAI is thinking actively about dark patterns and maximizing usage. I think they're really focused on utility and maybe the utility in and of itself is kind of drawing us in. But you could imagine aspects of these types of systems that you could exploit to draw people in and to maximize engagement. And given how scary and personal these kinds of things could be, that really gives us pause, right?

Kevin Frazier: And when we're thinking about the incentive of these AI companions to embody dark patterns, what is the incentive there? Is it that idea that eventually these AI labs, which currently seem to be just swimming in money, will need to turn a profit, and the best way to turn a profit is to just keep folks engaged in some way?

Robert Mahari: So I think what's more likely, but it's really early days for this, is that we're going to see some entities that develop the underlying technology at huge expense and that provide that technology to other entities that create products and systems around them. And so, I think it is the latter, the app developers of generative AI who will be incentivized more to draw people in to get people to use their systems.

I mean, it depends because there are completely benign enterprise-type applications of generative AI, where the pricing model will be such that there won't be-. I don't know if like document summarization benefits from dark patterns, but there will probably be some consumer-facing applications where the remuneration for the developer depends in part on engagement. And that then creates an incentive to develop dark patterns.

Usually, people associate this with advertising. It's interesting to think about how developers of consumer-facing AI systems will actually make money. If it's a subscription, a monthly subscription, then really what you care about is that people renew that subscription, not necessarily how many hours they're on the platform. In fact, you'd prefer them to renew the subscription and use it as little as possible.

But you could imagine ad models as well, where the companion suggests, you know, different types of products or things like that. And then you could imagine, and it's creepy in a way that doesn't feel like science fiction because we have this already, but you could imagine markets for these advertisements, right? Where people bid and the highest bidder gets placed in an active conversation. And when you ask what's the best car, it tells you Subaru or Volkswagen, as the case might be.

Kevin Frazier: And is it correct to characterize the real concern about dark patterns as the kind of absence of awareness and consent by the user to these patterns that keep drawing you back in and keep sucking you into further use?

Robert Mahari: So I think consent is a really interesting thing in this context, because you have here a system that was trained on the collective human output. All of it, right? And that can embody, you know, history's greatest experts, greatest charmers, most incredible therapists and lovers and singers and songwriters.

I don't know how much longer it's meaningful to talk about consent in this context. I mean, we already know the, you know, systems are unexplainable. I think right now, I mean, I consent to using ChatGPT, there's nothing like that. But at some point you could imagine once you're deeper in these kinds of relationships and, you know, they become maybe more everyday, it's not clear if consent is the mechanism that we should hang our hat on.

And the worry that I have is more related to maybe this kind of subconscious influencing of people to spend more time, maybe neglect other types of relationships. And I don't know if, we've talked a little bit about this in the article where we say, well, maybe there are ways to tell people, hey, you're interacting with an AI. And that knowledge alone helps them, you know, take a step back a little bit, like the gruesome pictures you see on cigarette packaging, things like that. But it's unclear if that will really work. And ultimately, we'll probably need more research on this kind of psychological dimension.

Kevin Frazier: Yeah, so a lot of your concerns seem premised on people being willing to engage in these sorts of emotional, intimate relationships with these AI companions, if we're going to kind of anthropomorphize them. So what evidence do you have that this is already happening?

Robert Mahari: Yeah, so we point to two pieces of evidence. And again, it's early days, so it's not clear if, you know, maybe this will never manifest, and then that's fine. On the one hand, there are companies offering this as a service, Replika being probably the foremost example. The backstory to that company is kind of interesting because it appears that it was really an effort to resurrect a deceased best friend via AI that then gave rise to a platform product offering that allows people to create these AI companions for themselves. And they have millions of active users. So clearly there's some demand there.

Kevin Frazier: So, just to clarify, already we're seeing thousands of people who want to form relationships, and not to sound blunt, but with AI replicas of their dead friends.

Robert Mahari: So it's not necessarily their dead friends. So, the original founding story was an attempt to resurrect a dead friend. The understanding I have is that's not usually how people use the service, right? So, they just create an AI companion for themselves. And there are different applications, ranging from kind of friendship to sexual, for those companions. So, it's a fun origin story. I don't think that represents how most people are using it.

Kevin Frazier: Okay. But needless to say, there are thousands of people already willingly engaging in these sorts of creations?

Robert Mahari: Yeah, absolutely. The other piece of evidence, which I think is a little bit more complicated, and I'm happy to unpack this with you. For a completely different, unrelated study that had nothing to do with AI companionship, we were looking at pre-training data for AI. So the websites and domains that are being crawled to create these large pre-training corpora on which large language models are being trained. And we were interested in understanding their terms of use and what kind of content they represent and things like that.

And one of the figures we generated for that paper compared the types of domains that contribute to pre-training data, which is a lot of news, online encyclopedias, things like that, to how people are actually using the platforms. And so we relied on a data set called WildChat, which represents 1 million interaction logs with ChatGPT. And really the concern there was, if there is a mismatch between the source of the data and what its original purpose is and how it's being used, well, maybe that data is not really suitable for that purpose. There's a secondary kind of more legal question where if you're using the data in a very different way than its original purpose, maybe that favors a finding of fair use.

We were surprised to note that the second most popular category of use in a data set where, by the way, people knew and consented to their data being tracked and monitored, was sexual in nature, sexual role playing, things like that. And it ranged from, I mean, looking at some of these things manually, it ranged from people trying to trick ChatGPT into saying something dirty to people, I think, engaging in a more meaningful, intimate or intimate-adjacent type of conversation.

But you know, the tools have barely been out. They certainly haven't been designed for this. If anything, they've been designed actively to block this kind of use. And if you try to use ChatGPT in this way without exploiting some tricks, it usually will say something to the effect of, you know, I'm here for a professional conversation, or you should look elsewhere, and things like that. So clearly we're seeing people using the tools in this kind of way. It seems kind of like human nature.

And the last thing I'll say on that is, you know, loneliness is becoming somewhat of an epidemic if it hasn't already reached that level. I think lots of people are just bored, and they're just interested in exploring this, this is fun and new technology. So I think if for no other reason than people having time on their hands, being interested in exploring this technology and maybe not having a lot of alternatives, I expect we're going to see more and more folks turn toward this. So, I don't think it's completely a Black Mirror episode in that sense. The question is, what will the technology do in response, right? And what kind of products will start meeting that demand? And how will those products look?

Kevin Frazier: So getting a little bit more into that survey of users of ChatGPT and their prompts and purposes, what was the number one use case?

Robert Mahari: It was related to writing. I think we termed it creative composition. So people are mostly using these tools to help them brainstorm, help them improve their writing, things like that. That's the way that at least I use ChatGPT a lot, the way that I think most people who are using this kind of as a productivity tool would be.

Kevin Frazier: And given the importance of knowing why people are using these tools and how they're using these tools, obviously your study was quite important in initially doing a survey. Is there any entity or nonprofit or government authority that's doing this on an ongoing basis, or are any of the labs sharing this sort of information?

Robert Mahari: Well, there's a huge privacy issue here, right? So, on the one hand, I agree with you that we ought to know what people are doing with these tools. On the other hand, it doesn't seem even remotely appropriate to have a surveillance requirement or something like that. If anything, we want OpenAI to develop its tools with, you know, principles like privacy by design so that it's not possible for them to see what the user logs are. People are actually very concerned about user data being used to train future models and then potentially being generated in response to a future query. I do a lot of work around the use of AI in legal contexts, and that's a big issue for law firms. So, privacy is quite an important concept here.

There have been studies like this one which I believe was done by a nonprofit where they actually went out and they got users’ consent. I don't believe there was IRB approval like for a university, but they did try to take steps to design this study in an ethical fashion. And I expect that given the interest that we're seeing in this space, other studies will follow. But they're always going to be limited by the fact that the types of users who consent to being part of these studies may not represent the general kind of average user. And I don't know if there's a great way to avoid that kind of bias while still maintaining privacy and security in other contexts.

Kevin Frazier: Yeah, and I want to get back to this sort of regulation by design and privacy by design in a second. But for right now, I think it's worth pointing out that when people raise the positive possibilities of AI, a lot of them are grounded in that sort of individual one-to-one connection with an AI companion or an AI bot.

So, for example, I've heard a number of parents say, I can't wait for my child to finally have a personalized AI tutor who knows exactly how they learn, can come up with new assignments, can grade their homework, all these things. So let's imagine that world where we have some great AI tutor that comes out and you start using it in Pre-K. You mentioned this notion of digital attachment disorder. Can you explain that a little bit more and why it's both a problem on an individual basis and then potentially on a societal basis?

Robert Mahari: Absolutely. So, I think we're already seeing some signs of this when children are interacting with Alexa and Siri and the other voice assistants, where it's not necessarily clear. And again, I'm not trying to claim that we've done this research and that we have the answers. We're really just trying to highlight areas of potential concern. But it does seem like it's hard for a child to tell the difference between talking to a human and talking to an AI. And you could imagine that, especially as these companions become more human-like, it might affect how children interact with other people. And it might also affect how adults interact with other people.

And so, while I agree, I think it would be great to have personalized tutoring. And there's an incredible kind of access-to-education, access-to-all-sorts-of-things question that could be addressed at least in part by AI, and generally I'm very positive and excited about these applications; I don't mean to say that I'm an AI doomer. But I guess everything in moderation, right? Once you hit some level where you spend so much time with AI, and maybe so much formative time with AI, not just as a child, but also later as an adult, who knows what the effect is on the individual and their ability to interact with others.

And then the societal implication, I think, is pretty clear because if you have enough folks who are beginning to retreat, beginning to unlearn how to interact with other humans, well then that makes us worried about social interaction writ large, about civil discourse, about people wanting to engage in, you know, democratic processes and all sorts of other things that we as a society value.

Again, it's hard to say, right? Because we're so early, maybe all of this turns out to be exaggerated. And I hope it is, right? I don't know. It's hard to know how many people are really vulnerable to this, whether this is a cheat code that we've discovered in the human mind, where, when you have a technology that, you know, easily passes the Turing test, it's really hard to know that you're not talking to another human, but for the fact that you know you aren't. This is unprecedented. We've never had to deal with this. And so, time will tell how humans interact and how society responds to this technology. So, it's an interesting time, I think, but we're seeing some of these signs that I think are worrying.

Kevin Frazier: I'll go ahead and say that I'm quite worried. I mean, when we just think about, to your point earlier, the importance of reciprocity to relationships and developing empathy. And if you don't have an AI tutor, for example, that says, hey, Kevin, I'm tired. You've been asking me about civil procedure for four hours. I want to take a break. And you just never experience that sort of exchange. And you just come to expect everyone to cater to your whim. Well, in an era in which we already have distrust and arguably a lack of empathy, that seems concerning to me.

So if others are having similar concerns, and I think after listening to this podcast, you may be receiving quite a few emails, subject line: I'm concerned. Why do you think that, and you mentioned doomers, right? We've heard of the folks who are maybe more on the doomer side of the equation, worried that AI is going to lead to certain existential risks. Then we've had other people who are perhaps more concerned about AI exacerbating already known issues like algorithmic bias, for instance. Why do you think that this issue, perhaps if we call it digital attachment disorder, or you've also referred to it as addictive intelligence, why haven't these concerns, in your opinion, kind of risen to the fore of the policy debate?

Robert Mahari: So it seems like these concerns occupy an awkward middle ground. Because on the one hand, you do have to embrace a little bit this thinking into the future with some degree of pessimism, cynicism, maybe combined with tech optimism, right? You think the technology is going to be really good, but you're worried that people will abuse it or be abused by it.

At the same time, it's not an existential risk. At least, I don't think it's an existential risk. I think there's a way to frame it as such, but then we're really entering, I think, kind of unproductive, but maybe intellectually interesting, science fiction territory. And so it's hard to find people, I think, that fall in between those two kind of extreme positions: between the “all the risks are fairly benign and, you know, it's the stuff we've known about, nothing new here” position versus the “gray goo, the world is going to end” type position.

Also, a lot of it revolves around topics that I think are a little bit taboo. I mean, I do think that the sexual companionship plays a non-trivial role here and allows you, we talked about reciprocity, there are other types of norms that you can violate with this kind of agent. And I think it's quite different than, say, consuming pornography, where you kind of get what there is, and if that doesn't meet your preferences, I guess you can search around, but there's no customization option. Whereas here, everything is customization. So, it's a little bit of an interesting topic.

And then there aren't, I mean, you have legal scholars who are concerned about this. You have technology scholars who are concerned about other aspects of this. I think in the end the universe of people who are really deep in this is maybe not as large as we think it is. And being in it, I think it's easy to get the echo chamber bubble syndrome, where you think that this is the only thing in the world, but of course, there are other kinds of concerns that people have. Especially because this is a technology that is used by consumers, but it's also used by governments, it's used by enterprise, and so the types of risks and concerns really span quite a large range. So, yeah, I think this has kind of maybe fallen in between the cracks a little bit.

And then finally, I'll say it's not clear what we do about this, and that's very unsatisfying. I remember when we started writing this piece, we kind of talked to each other. We tried to figure out, like, are we willing to write a piece where we don't provide any concrete answers? And we've done our best to sketch some possibilities, but to be completely honest, and I'm, you know, I am honest and transparent about this, I don't think we have any answers here. It's not clear whether regulation in this space is all that easy to square with other kinds of liberal notions we have about where government can and can't or should and shouldn't intervene.

And it feels, I guess, to make it explicit, like regulating adultery or something like that: why is this anybody's business? And you can kind of express the social harms and the individual harms and say, well, maybe we need regulation here, but there's not a lot of externality from me being addicted to an AI companion. So I think that's also unsatisfying in a certain way. It's much easier to say, shut it all down, or let's be concerned about biases and really, you know, test them and benchmark them. And a lot of these risks that people raise are realistic. Like, I don't think we should dismiss them. And I think we opened the article with that. So, this isn't to diminish any of the other conversations that are going on.

Kevin Frazier: So let's say, though, that somebody has listened and they're in a regulatory capacity and they're thinking, huh, I've heard this concept now of regulation by design, and you sketch this out a little bit. What does that term mean? And how could you see it in some ways playing a role in regulating AI companionship?

Robert Mahari: Yeah. So regulation by design has been a theme of my research for a little while, where what you try to do is you try to embed regulatory objectives into technical designs. We've been doing this as engineers all the time.

So, the story I like to tell is of Elisha Otis, who invented the passenger elevator. No one wanted to get into the elevator until Mr. Otis took it to the World's Fair in New York and cut the ropes, and there were ratchet mechanisms that were gravity controlled. And so, as soon as the ropes were cut and the tension was released, they snapped into place and stopped his fall. And then that kicked off an era of passenger elevators, allowing us literally to build to new heights. I mean, it's really kind of remarkable.

As engineers, we do this all the time, right? When people engineer systems, they think about what are the things that we're concerned about and how do we build the system so that it's as safe as possible. Kind of like, you know, building, I think the example we might give in the paper is designing a child's toy to not fit into a toddler's mouth, that kind of thing.

So, then the question is, how do you do this with regulation? And there too, there are examples that go way back. If you look at coinage, ancient coinage, people would put ridges on it to prevent someone from filing the precious metal off. Since the 1970s, you know, the early internet crowd really embraced the idea of privacy by design and thinking about how do we design this, you know, very decentralized, very fragmented infrastructure in a way that allows us to have privacy, and then that was also embraced by the European General Data Protection Regulation.

So I think it's a powerful concept that sits somewhere between ex ante risk minimization frameworks and ex post liability. It's a little bit tricky in this context because we don't really know what exactly the harms are, how we would measure them, that kind of thing. But examples would include developing AI agents. There's this whole area of research called alignment, where you try to get the models to align with human preferences and values. Maybe that gives rise to sycophancy by accident, but it also means that when you ask it, what's the best dessert, it might give you, you know, three options. But when you ask it, what's the capital of France, it'll give you one answer. These kinds of things, you know, that humans just prefer to have certain types of dialogues. That's all been carefully engineered and programmed. And so too, we could imagine engineering and programming ways to make these systems less desirable as companions.

And if we go, you know, far enough upstream, it becomes challenging to undo that. Maybe that's not a good idea, though, right? So, I don't think that what we're saying here is that there ought to be a blanket ban on AI companionship. There are clearly people who would benefit from them. You gave the nice example of tutors for kids.

Of course, we want them to be a little bit sycophantic, right? We want them to be empathetic. We want them to praise the learner when she gets something right and to be understanding when she gets something wrong. So, it might be challenging to really implement this, but it seems like, in a technology landscape that is becoming increasingly fragmented, is quite international, and has a lot of different players, actually intervening at the level of the technology fairly far upstream is a more effective approach than, say, requiring ex ante risk assessments and conformity assessments before you deploy in the market. Because the marketplace here is the internet, and so it's hard to really think about national borders and things like that in this kind of context.

Kevin Frazier: Well, I have one suggestion for anyone listening as to how to get people to stop using an AI companion. You could just say, every 15 prompts we have to use a joke from Kevin, and people will very quickly get tired of it. Just ask my students. But in all seriousness, Robert. What are some items on your research agenda looking forward? What should we be on the lookout for?

Robert Mahari: So, I'll give you like a 30 second background, which is that I started my PhD really interested in the fact that there's a thriving community that's at the intersection of law and technology. But a majority of that community is focused on the law of technology. And so I started really interested in the technology of law. How do we improve the practice of law? How do we solve access to, or, you know, improve access to justice with technology? How do we study legal systems in new ways? You know, understanding biases in judges, all that kind of thing.

And then I kind of came full circle when I started thinking much more about AI regulation. And it seemed like you needed technology to understand what was going on, and an understanding of the technology in order to regulate effectively, but then really, it's kind of a policy question. And so I'm now working on both of these things. So on the one hand, I'm really interested in continuing to develop tools that kind of reimagine legal services and how businesses, how individuals approach legal services. And on the other hand, I've been working more and more with regulators to think about this idea of regulation by design in this context, but also in other kinds of contexts, trying to have a more thoughtful and nuanced approach to regulating AI.

It's really interesting to see what is developing, right? We have this kind of omnibus regulation that some are characterizing as a little bit heavy-handed in the European Union, and there are worries about innovation and competitiveness there. And then in the U.S., we're having a very, I think, slow, but extremely fragmented approach where individual states and usually kind of sectoral state regulators are saying, well, you know, insurance in Colorado needs AI regulation, so let's come up with one of those, which is then hard to square with what is ultimately a general-purpose technology.

So I think that whole area is quite interesting, and I view my kind of role here as trying to bridge the gap between the technology and the law and contributing to that dialogue in a useful fashion. And of course, if folks are interested in collaborating, I'm happy to; perhaps there are a couple of professors and academics in the audience. And I can also close by saying that I'll be on the academic job market this cycle. So if someone wants to have me more permanently, I'm available.

Kevin Frazier: All right. Well, you heard it here first folks. And with that, we'll have to leave it there, but thank you again for joining, Robert.

Robert Mahari: Thank you.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Cara Shillenn of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.



Kevin Frazier is an Assistant Professor at St. Thomas University College of Law and Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin. He is writing for Lawfare as a Tarbell Fellow.
Robert Mahari is a PhD student at the MIT Media Lab and a JD candidate at Harvard Law School.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
