Lawfare Daily: Ryan Calo on Protecting Privacy Amid Advances in AI
Published by The Lawfare Institute
in Cooperation With Brookings
Ryan Calo, Professor of Law at the University of Washington, joins Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to discuss how advances in AI are undermining already insufficient privacy protections. The two dive into Calo's recent testimony before the Senate Committee on Commerce, Science, and Transportation. Their conversation also covers the novel privacy issues presented by AI and the merits of different regulatory strategies at both the state and federal levels.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.
Please note that the transcript below was auto-generated and may contain errors.
Transcript
[Introduction]
Ryan Calo: Techniques that are used to derive sensitive things about us are fine if we understand what's going on and they're used to our benefit. I'm worried those are not the incentives that corporate America has in all contexts.
Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, assistant professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, with Ryan Calo, professor of law at the University of Washington.
Ryan Calo: The nature of the internet and AI is that everything is distributed all over the place. And many of the harms that we're worried about might originate outside of the United States altogether, but at a minimum they cross over state lines. You know what I mean? It's really exactly what the Constitution has a Commerce Clause for.
Kevin Frazier: Today we're talking about AI and privacy protections, or should I say lack thereof.
[Main Podcast]
Kevin Frazier: Professor Alicia Solow-Niederman warns that we now live in an inference economy. Ryan, in your July 11th statement to the hearing on “The Need to Protect Americans’ Privacy and the AI Accelerant” held by the Senate Committee on Commerce, Science, and Transportation, you warned that this new inference economy creates, quote, a “serious gap” in privacy protections. Let's break that down a little bit. What does this inference economy mean? What are we worried about here in this inference economy?
Ryan Calo: The basic idea, as I understand Alicia's work, is that so much of what's driving the consumer data economy has less to do with the information that you willingly share about yourself, or that can be readily ascertained online about you, and more to do with the inferences that companies can make about you and similarly situated people.
The way that I articulate it in my work is I say that AI is increasingly able to derive the intimate from the available. Meaning, what is AI good at? At the end of the day, AI is good at spotting patterns. Even so-called generative AI is just trying to guess the next word or sound or pixel, or what have you, from a large model of what's come before, right?
And so my concern is that increasingly companies in particular, though also the government, will be able to derive insights about you just based on information that you thought was innocuous, not dissimilar to the way that Sherlock Holmes was able to tell whodunit from a series of seemingly disconnected, irrelevant facts, right?
And what Alicia's work does is talk about the incentives and the gaps and a lot of what that creates and so on. We are aligned on that, she and I.
Kevin Frazier: And so I guess the kind of difference in kind that we're talking about with respect to AI and privacy law is that now those breadcrumbs you're leaving can be used to make a whole loaf of bread.
We can suddenly learn so much more about what's going on, whereas maybe folks were okay leaving those breadcrumbs previously because they thought, ah, this won't lead anyone to anything meaningful. But all of a sudden, as you pointed out, it's leading to the discovery of intimate information. So what is it about our existing privacy laws that makes the inference economy and our existing privacy frameworks a pair of mismatched socks? Why aren't these aligning? Why shouldn't we feel adequately protected?
Ryan Calo: Yeah. As Paul Ohm, another law professor, in this case at Georgetown rather than GW, points out, the overwhelming majority of privacy laws have specific categories for sensitive information, right? Just in general, state and federal privacy laws and rules passed by administrative agencies differentiate between sensitive and non-sensitive, or perhaps health and non-health, personal and non-personal, public and non-public, and so on. What I'm saying is that these categories are breaking down. So, one example is an older Microsoft research paper where the researchers show that you can derive whether or not a new parent is suffering from postpartum depression based on subtle changes to their Twitter activity, even absent references to the pregnancy or their state of mind. A new parent may be perfectly happy to tweet whatever about their day, but would not be comfortable talking about postpartum depression online, right?
And unless we attend to the inference economy, to use Alicia's terminology, or at least unless we have a more expansive definition of what constitutes sensitive, to include not only information that is sensitive on its own, but also sensitive information that can be, and is, derived from less sensitive or public information, we're going to have a mismatch where stuff that ought to be covered, the things you're worried about, will simply not be covered by the relevant laws that govern sharing, that govern the adequacy of the protections that attend the data, and so on.
Kevin Frazier: For the folks who are perhaps less concerned about privacy, they may say, hey, Ryan, I get it, you're concerned. But all the negatives you're pointing out sound to me like positives. We discovered someone's postpartum depression earlier. You also raised in your statement an example of Target being able to determine who may be pregnant based on their shopping habits. The skeptic might say, Ryan, I just don't get it. This is all positive information. We're detecting something earlier. We're helping pregnant mothers be steered toward better purchases. Now they can notify their doctors earlier. All of this seems good for a more informed society. What's your counter to that sort of rosier picture?
Ryan Calo: I never understood that argument. When I was on my way to the testimony, I talked to my sister, Nina, okay? And she's on the board of the Breast Cancer Alliance, and she is a breast cancer survivor. And she said, whatever you do, Ryan, don't restrict how AI can be used to catch breast cancer earlier and come up with more effective treatments. And of course I think that's really important and a great idea. But we also know that depending on the context, these techniques can be misused. It's not mysterious.
For example, if someone is trying to return your pet that got away, you definitely want them to know your address and your phone number. Okay? If you have an ex who has been stalking you, you do not, right? Techniques that are used to derive sensitive things about us are fine if we understand what's going on and they're used to our benefit. I'm worried those are not the incentives that corporate America has in all contexts. Rather, they are trying to use what they know and the power of design in order to extract as much from the consumer as possible.
So, I'm not saying that's what industry does in all contexts. I think no one thinks that, but we certainly have examples of them doing it, and Senator Cantwell used the example of a car company giving away your data to an insurance company, which then raises your premium. I was using examples, and Albert Fox Cahn has a great article about this, of price discrimination, where you end up getting charged your reservation price, meaning the most you'd be willing to pay, rather than some number that everybody pays, where some people benefit by getting it for less than they would have been willing to pay, all the way down to really being nudged in directions that are not good for you. And so that's what I'm worried about. I think that's also what the senators who sponsored the privacy bill we were talking about at the hearing care about.
Kevin Frazier: To dive into a more traditional understanding of privacy law, what we're really talking about, on the nose here, is consent: are you consenting to this disclosure of information, and do you know how it's going to be used? Do you want it to be used by certain people? Is the idea of informed consent even possible in this inference economy?
Ryan Calo: I hesitate to answer specifically about the inference economy per se, right? But what I would say is that it's well understood in the privacy law community that notice and choice has not been enough.
If you want to see a very good example of this, you can see Dan Solove writing at a Harvard Law Review symposium and talking about the problem of privacy self-management, right? We just know, and have long known, that consumers are not in a position to protect themselves and police the market without any help from government, right? Now, that doesn't mean that consent and notice can't be tools. I've argued, in a piece against notice skepticism, that we're not leveraging as well as we could the power of design and contemporary communications technology to give consumers an accurate mental model of what's going on and control over their information.
I think everybody who looks at the situation sees that it's necessary that we have these things. But as Woodrow Hartzog told the same body, the Senate, in the context of AI regulation, transparency and control alone are half measures, to use Woody's words. They're not enough on their own, and you can't just say, hey, here's the ability to figure out roughly what companies are doing --- mission accomplished, right.
Ultimately, if you look back at the history of consumer protection in the United States at the turn of the century, you look at the Sherman Act, you look at the FTC Act, what those lawmakers --- speaking of original intent --- what those lawmakers were interested in was protecting the consumer from being taken advantage of due to the inordinate market power of certain big firms. Okay? And if you just substitute AI for market power, you can see what the role of the government ought to be in this context. Protect consumers from being taken advantage of by superior information and power that comes from leveraging AI.
And notice and control are not going to be adequate remedies on their own.
Kevin Frazier: And when we're thinking about regulatory means to shield consumers and the public writ large, is there an argument to be made that the FTC, for example, given its kind of common law approach to protecting consumers from undue power, might be more appropriate, if not the most appropriate, compared to privacy laws that are very explicit about what data they protect, in what use cases, and from what actors, and that can't really evolve with time in the same way that enforcement could under the FTC or the DOJ?
Ryan Calo: Well, as you know, you and I are very aware of the classic debate between rules and standards. And the FTC exists because the concern at the time, which I think is still the concern, was the immoral extraction of surplus from consumers.
The FTC has been empowered to pursue unfair --- that's why; that's not moral language --- unfair and deceptive practices. And under that rubric, they have pursued many different kinds of privacy violations. And you alluded to a common law approach: people like Woody, who I mentioned, writing with Dan, who I mentioned, talk about how, taken together, you can understand the FTC's privacy enforcement under Section 5 as being a kind of common law that gives a sense of what the FTC is worried about.
My concern is the same concern people have whenever you pursue enforcement entirely through standards and not rules, which is that you simply don't know what is wrong until suddenly the FTC comes knocking at your door and says, this is unfair or deceptive. And therefore, maybe you under-police, maybe you over-police, I don't know.
But the second thing is that the FTC is only one agency with a few hundred people, and it's in charge of literally the entire economy, minus financial institutions and common carriers, right? And alone, they do not have the bandwidth, they don't have the people, to pursue all of these things. And perhaps we need to quadruple the size of the FTC, I don't know, that would make a lot of people a little nervous. But I like elements of Cantwell's bill that suggest that there should be a private cause of action, and the ability of the attorneys general in the various states to pursue violations as well, right.
So for me, I think the certainty of rules coupled with a broader set of enforcers is why I like the national approach more than just leaving it entirely to the FTC.
Kevin Frazier: And I think it's really interesting to hit on the fact that what constitutes something being unfair may take too much time to evolve for the average consumer.
We've seen that consumer awareness about some of these privacy invasions maybe isn't quite as ubiquitous as you and I might anticipate, and so reaching a common understanding, where a practice is definitively perceived by the public writ large as unfair, may take too long, with some immediate harms realized in the meantime.
So let's say you've convinced the skeptic, yes, we need to find a different sock to match up with this new AI paradigm. Another skeptic inevitably pops out of the crowd and says, okay, I get these privacy concerns. Yes they're problematic. We need to address them. But if we are too stingy on the privacy front, we're going to inhibit AI development, the kind of AI development that your sister was calling for in other contexts.
We know that huge troves of data are essential to AI advances. How would you respond to the critic who's saying, hey, you know what, this is just what we have to deal with these days. If we want the massive pros that are potentially available from AI, then we just have to stomach some privacy losses.
Ryan Calo: I don't know. To me, that sounds similar to the idea that if we want the massive gains from electricity, we're just going to have to stomach some house fires. You know what I mean? It's just, I think the damage that's being done to the reputation of American tech companies is worse for American innovators and American innovation than would be putting some modest guardrails on the pursuit of these things.
Because no one's saying that you can't train a model or that you can't innovate, you know what I mean? Look, and this is an aside, and I think this is going to strike people in the technology community as somewhat controversial, but I will say I am very confident about this: if you place limits on data collection and retention, on scraping every corner of the internet, collecting as much consumer data as humanly possible though you have no idea what you're going to do with it, and storing it indefinitely so that maybe one day you can train an AI, then we can maybe pursue other technical avenues besides making large language models a little bit better by adding more data and more compute, which, by the way, is destroying the environment by gobbling up unbelievable amounts of energy, causing Google and Microsoft and others to miss their climate change goals, right? And maybe they'll have to find other, less information-intensive ways to pursue the same thing, right?
I remember having a conversation once with Lisa Ouellette at Stanford Law School, who's a patent scholar, about the impact of patents on innovation, and talking about the notion of patenting around, which is that if you cut off one way of doing something, yes, people can't build on that particular innovation, but they will find another way to accomplish functionally the same thing and then all of a sudden we'll have more innovation. And that's my intuition about what will happen here. I don't think innovation ceases because we place some modest limits on data collection and retention and use. I just don't. And I think that whatever limits to innovation we face, they're worth it if we can make sure that our consumers in America and our trading partners trust us.
Something that I said in the hearing was, what is the point of American innovation if nobody trusts our inventions? What do I mean by that? Eighty-one percent of Americans interviewed in a Pew Research Center survey --- 81 percent, right --- reported that they thought companies would use their data in AI in ways that made them uncomfortable. Eighty-one percent. Like, what do Americans agree on to that degree? In my testimony, I talked about how 30 or 40 percent of Americans are Taylor Swift fans. You know what I mean? Eighty-one percent. And then you have one of our biggest trading partners, our second or third biggest, the EU, which won't allow data to flow freely between our economies because we have not been certified as adequate for privacy protection.
It's a complicated picture. Part of that is our national security apparatus; Lawfare is no stranger to that. But a lot of it is the fact that we don't have a certified DPA, a data protection authority, and we don't have national privacy laws. So if our own citizens and our biggest trading partners don't trust our tech companies, where does that leave us? So I think that the threat to innovation is overblown. And if there is a threat, it is offset by the trust we gain. A phrase I really liked: Frank Pasquale had this paper once about innovation fetishism, which is a little bit bombastic of a term, but I like it.
Kevin Frazier: Speaking of the “I”-word, innovation, Senator Schumer, when he released the bipartisan Senate working group’s AI policy roadmap, said that the north star of AI policy should be innovation. Do you think that this “I”-word, innovation, has just become too much of a focus for regulators? I know that though Chair Cantwell seemed pretty keen and receptive to a lot of the privacy arguments, Senator Cruz's press release after the hearing was very much focused on making sure that there was not any sort of regulation that would inhibit innovation.
What's your sense of the bipartisan support for this sort of privacy legislation? Yes, we know that, and you heard it here, folks, privacy law is now more popular than Taylor Swift. Breaking news. Everyone tweet it out. But is that true on the Hill? What was your sense of whether policymakers are willing to take this on seriously?
Ryan Calo: I think that when policymakers say we should not act because innovation, right, they are not learning any of the lessons from history. Okay. Most obviously the internet, where we took a hands-off approach and the commercial internet flourished, right? Meanwhile, AT&T loses all that phone call data. You know what I mean? Rampant misinformation, hate speech, deepfakes, some of the very things that Ted Cruz says he really cares about. Apparently the one AI use that he is worried about in particular is deepfakes of young girls, which, sure, is really bad, and I think we should address that too. You know what I mean?
But we took a hands-off approach, and there was a lot of innovation, but look at the situation we have now. A small handful of companies control all of the internet, whether it's search or web services or social media, each a little tiny monopoly, okay, and they control ad revenue. All that stuff is controlled by a small handful of companies, the same companies that, by the way, Kevin, it's also bipartisan to berate.
I heard all of the Republican senators doing it. Okay. So we have a small group of companies controlling the internet, and pretty much everybody, every thinking person, acknowledges that there are all kinds of harms out there. Privacy, security, hate speech, defamation, you name it, right? And the law that supposedly kept the internet open and innovative, Section 230? All kinds of people want to revisit that law, right?
So look at that history, but even go back and look further, at contexts like nuclear power, right? Where we did get a lot of benefit from the innovation of nuclear power. And there were a lot of rules that were put in place which helped minimize the accidents that occurred in the United States. But what we didn't have was a plan to dispose of nuclear waste. And we really didn't develop that plan for a long time. If you look at the early conversations, where people were saying, wait a second, this process sounds great, but what are you going to do with all that nuclear waste? You see people saying things like, come on, we split the atom. You think we can't deal with a little nuclear waste? And here we are in 2024 with no real plan to deal with nuclear waste. And Congress didn't even intervene to create a national plan until the 90s. You know what I mean? Like 40 years into nuclear power.
Anyway, my point is just that the lessons of history teach us again and again that putting some modest guardrails around something only serves to ensure that we get the benefit of that thing without having to endure a tsunami of harms first. And that's exactly the mistake that's being made here, right?
There are people going around the Hill, specific people, people like Eric Schmidt, talking about how if we put any roadblocks in the way of AI, then we're going to lose out militarily to China and commercially to China and the EU, and it becomes this threat that it's somehow an arms race and we're going to lose it, and so on. But it's a shibboleth. It's a shibboleth deployed against halting the march toward informational capitalism, to use Julie Cohen’s phrase. We're just at a point where corporate power and government power, but corporate power in particular, is so extreme and extractive, and for some reason people are desperate to keep it that way, despite the evidence. And the shibboleth that they use is innovation.
Kevin Frazier: So I think there's obviously a lot to chew on there. And if we're going to shield ourselves from a so-called tsunami of AI risks, and we're going to get beyond something like just handing out floaties to everyone, I wonder, do you think we need a sort of independent, AI-specific regulator? Or is this just a matter, as you mentioned previously, of scaling up the FTC, scaling up the CFPB, pick the agency, and just adding a couple hundred employees?
Or do you think this merits a kind of new, massive effort to really bear down and identify and mitigate those harms?
Ryan Calo: That's a great question and something I've puzzled about for many years. Over a decade ago now, I was at the Aspen Ideas Festival and I gave a talk about why we needed a Federal Robotics Commission. I wrote a piece in Brookings about this. And at the time, my concern was that there was inadequate expertise in the federal government to handle technological change, you know, and I still think that's true. And I thought that perhaps the best way to quickly accrue deep technical expertise was by having an independent commission, a standalone commission.
And the reason is because, much like the NSA and NASA, you would have a critical mass of people who were really good at what they did and were tackling problems that no one else could tackle, right? I had the honor of interviewing Admiral Rogers from the NSA when he came through the University of Washington, and also Director Brennan when he was head of the CIA. What Admiral Rogers said was, one of our biggest recruiting tools for the NSA is to say, you're going to be working among really smart people on problems that no one else can work on. And so for me, it was an efficiency issue. But I'll tell you, today I really think that in addition to comprehensive privacy legislation, there are a few things the government can and must do.
Okay. And I don't think any of them involves standing up a brand new agency that's specific to AI. Okay. However, number one, why are we not re-funding the Office of Technology Assessment? That makes zero sense. For listeners who don't know, OTA was spun up in the '70s, around the same time as the Office of Science and Technology Policy within the White House. And OTA's sole mission was to help Congress make wiser choices about technology. There were dozens and dozens of interdisciplinary experts who produced independent reports about the technologies of the day: genetics, the internet, whatever it happened to be. And it was in place, helping Congress make better decisions, for 20 years, until it was defunded in the Gingrich revolution in the '90s. Re-fund the OTA. Obviously we need that body.
Adequately fund the agencies that have invested in technological expertise, like the FTC and NIST. Everybody's looking to NIST to set standards about AI bias and AI safety, and there's, like, mold in their ceiling. Another thing I think is super important: if you think the FTC is going to be the one to police a lot of this stuff, you've got to remove some of the impediments to that, one of which is the requirement to do cost-benefit analysis around unfairness. So in 1980, under pressure from Congress, the FTC adopted the so-called unfairness policy statement. Unfairness, this thing that literally means unfair, was written overtly in terms of morality. But the statement said, we're only going to pursue this unfairness idea if the conduct cannot be reasonably avoided by the consumer and is not outweighed by countervailing benefits to consumers or to competition. And it effectively imposed a cost-benefit analysis that's been used ever since to hamper the work of the FTC. And of course, the same Congress that defunded the Office of Technology Assessment codified the unfairness doctrine and made it into law. Decodify the unfairness doctrine. That makes zero sense.
Make sure that researchers are protected; they are often how we find out about internet harms or AI harms that are not obvious. Independent researchers, journalists, academics like us, independent think tanks like RAND or whoever it happens to be, do really important research showing that a system is unfair or too easily evaded or whatever it happens to be; the Volkswagen emissions scandal is just one example. But they do so under the threat of laws like the Computer Fraud and Abuse Act, which do not contain exceptions for security research. Make it super clear that the people who are doing accountability work, be they inside whistleblowers or ombudspeople, or be they outside researchers, cannot have some cease-and-desist letter sent to them by Meta because they happen to be looking at the problem of political disinformation on the platform, you know what I mean.
There's just a number of different things we could be doing to improve the overall ecosystem that ought to be bipartisan, ought to be obvious and clear. I think privacy law is one of those things, honestly, it's like third on my list after the things I just mentioned. You know what I mean?
I've talked to countless people. I'm on the board of R Street, which is a market-focused organization. I have conversations with plenty of people, including libertarians, who all say, yeah, Congress should probably have technical expertise that's not coming from industry. You know what I mean?
It's like, those kinds of things really frustrate me, because I think we could have a better ecosystem than we do. Legislators are just chasing whatever the New York Times has put in front of them. To her credit, someone like Senator Cantwell has been thinking about these things for a very long time in a very sophisticated way. But I think a lot of lawmakers are not.
Kevin Frazier: You're right that you would think there'd be bipartisan consensus around informed regulation via independent experts. It's kind of a no-brainer, right?
Ryan Calo: It’s such a no-brainer!
Kevin Frazier: We'd all agree on that, right? Speaking of potential no-brainers not coming to fruition, as unfortunately is sometimes Congress’s habit, what do you think of the fallback mechanisms we're seeing right now at the state level attempting to mitigate some of the harms you've mentioned? Are you encouraged by efforts by legislators in Colorado, for example, and in California, to push ahead on their own, or is this just patchwork privacy 2.0, where we should be a little concerned about the sort of ad hoc approach that may develop if we don't see congressional action?
Ryan Calo: Yeah, so what I would say to you is the same thing I told the committee: I applaud states like California that have put a ton of thought and effort into their privacy laws and hence protect not just residents of California, but people all over the country, insofar as companies comply with California's rules everywhere because it's hard to have different rules for different states. I think Congress should look to those things and so on. But ultimately, I don't think we can rely entirely on the states, for a few reasons.
Number one, so much of this is not even just national, but international. The nature of the internet and AI is that everything is distributed all over the place. And many of the harms that we're worried about might originate outside of the United States altogether, but at a minimum, they cross over state lines. You know what I mean? It's really exactly what the Constitution has a Commerce Clause for, right?
Second, the fact that California or Colorado or a handful of other states, I'm told the number is now up to a couple dozen, have some kind of privacy law is no comfort to all the Americans living in states that don't. You have biometric protections if you're in Illinois and not if you aren't, is the idea, right? Not if you're in Indiana or something, right?
And then third, I don't know, maybe it's the person in me who's done consulting for big tech companies, or who worked at Covington & Burling in D.C. with huge household-name clients, so take it with a grain of salt, but I do give some credence to this idea that it's not fair or particularly efficient to try to get companies to comply with dozens of different laws, right? When I was at Covington, and I don't think they would mind me saying this, clients would have a data breach, and we would have to go into this big, huge folder of every state and look up what states they had consumers in and tell them: in this state, you've got to notify all the consumers within three days, but in this other state it's within a reasonable amount of time, and you only tell the AG, and was the data encrypted? Because that matters. It was ridiculous, and it burned a ton of money and time and whatever. And I think it would have just benefited from having a comprehensive federal data breach notification regime.
So I am sympathetic to the patchwork argument. I think a lot of academics aren't, and that's fair, but those are my reasons, right? And then finally, Europe is not going to trust state-by-state action. There's never going to be a moment where Europe says, oh, enough U.S. states have good enough privacy that we're going to trust them and deem them adequate.
Not going to happen. As wonderful as they are over there, and I count Ashkan and Jennifer Urban and those folks as, like, personal friends of mine, as wonderful as they are in California, nobody in the EU is ever going to certify the California agency as the U.S. data protection authority, and so it just doesn't address some of the real basic issues that we have. And so I think we really need privacy legislation at a national level.
Kevin Frazier: Well, with that, I think we will leave it there. Thank you so much for joining us, Ryan.
Ryan Calo: Oh, this was really fun. Thank you very much.
Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution.
You can get ad free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts.
Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer for this episode was Noam Osband of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.