
The Lawfare Podcast: Riana Pfefferkorn and David Thiel on How to Fight Computer-Generated Child Sexual Abuse Material

Alan Z. Rozenshtein, Riana Pfefferkorn, David Thiel, Jen Patja
Monday, February 5, 2024, 8:00 AM
One of the dark sides of the rapid development of artificial intelligence and machine learning is the increase in computer-generated child pornography and other child sexual abuse material.

Published by The Lawfare Institute in Cooperation With Brookings

One of the dark sides of the rapid development of artificial intelligence and machine learning is the increase in computer-generated child pornography and other child sexual abuse material, or CG-CSAM for short. This material threatens to overwhelm the attempts of online platforms to filter for harmful content—and of prosecutors to bring those who create and disseminate CG-CSAM to justice. But it also raises complex statutory and constitutional legal issues as to what types of CG-CSAM are, and are not, legal.

To explore these issues, Associate Professor of Law at the University of Minnesota and Lawfare Senior Editor Alan Rozenshtein spoke with Riana Pfefferkorn, a Research Scholar at the Stanford Internet Observatory, who has just published a new white paper in Lawfare's ongoing Digital Social Contract paper series exploring the legal and policy implications of CG-CSAM. Joining in the discussion was her colleague David Thiel, Stanford Internet Observatory's Chief Technologist, and a co-author of an important technical analysis of the recent increase in CG-CSAM.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Audio Excerpt]

Riana Pfefferkorn: There is no incentive for platforms to be investing a lot of effort into trying to make the decision about whether something is AI-generated or not because they face a lot of liability if they guess wrong. And so it's going to place even more burden on the pipeline and potentially divert resources from intervening in actual abuse cases.

This is a system that simply was never built to deal with trying to spot novel synthetic media. And that keeps having to be kind of revamped as they go along on this massive rocket ship to try and keep it going to deal with the ongoing upticks in material year over year.

[Main Podcast]

Alan Rozenshtein: I'm Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, and this is the Lawfare Podcast for February 5th, 2024.

One of the dark sides of the rapid development of artificial intelligence and machine learning is the increase in computer-generated child pornography and other child sexual abuse material or CG-CSAM for short. This material threatens to overwhelm the attempts of online platforms to filter for harmful content and of prosecutors to bring those who create and disseminate CG-CSAM to justice. But it also raises complex statutory and constitutional legal issues as to what types of computer-generated CSAM are and are not legal.

To explore these issues, I spoke with Riana Pfefferkorn, a Research Scholar at the Stanford Internet Observatory, who has just published a new white paper in Lawfare's ongoing Digital Social Contract paper series, exploring the legal and policy implications of CG-CSAM. Joining in the discussion was her colleague David Thiel, Stanford Internet Observatory's Chief Technologist and a co-author of an important technical analysis of the recent increase in CG-CSAM.

It's the Lawfare Podcast, February 5th: Riana Pfefferkorn and David Thiel on How to Fight Computer-Generated Child Sexual Abuse Material.

I want to actually start with a really interesting and important paper that you, David, co-authored with some collaborators at Stanford and at Thorn, a nonprofit devoted to countering sex trafficking and child exploitation. It's a paper called "Generative ML and CSAM: Implications and Mitigations," and it's a technical description of the lay of the land of the use of generative machine learning techniques to create child sexual abuse material. And I want to start with it in part because it's interesting in its own right, but also because it's the empirical backdrop to the legal and policy work that you did, Riana, in your paper. I think it's useful to kind of level-set there. So David, can you just give a kind of an overview of what this research project was and what you and your collaborators found?

David Thiel: Sure. So, kind of in response to some inbound inquiries that we had from media and policymakers, we started doing some investigation into the advances that had been made during 2023, specifically in generating explicit content with generative ML. We reached out to Thorn specifically in the context of CSAM to see if they had seen an increase in activity here, and they stated that they had seen a significant uptick in material depicting child sexual abuse that appeared to be using generative ML. In 2022, this was almost non-existent. There had been attempts, but since the release of Stable Diffusion 1.5, a rather large open-source community popped up around expanding the capabilities of that model, with researchers coming up with ways to do fine-tuning of models that did not require anything near the level of resources that had previously been required to modify these generative models.

So, we did basically an overview of the technologies that had been implemented that made this very easy. Previously, it would have cost tens of thousands, hundreds of thousands of dollars to do a significant amount of retraining. But at this point, people are able to do it for a few hundred dollars and they can do it at home without having a cloud-computing environment.

So a few examples of things that had been developed during 2023, or at least widely implemented, were things like textual inversion and what are referred to as "LoRAs," or low-rank adaptations. These are tools that are basically very small, nowhere near as big as machine learning models. They can be trained on a relatively small number of images and they can teach models new concepts. And those were being used by explicit content creators to come up with, basically, more accurate and detailed explicit content. The reason why this is useful is it means that people don't have to mess as much with giving complex prompts to try and get the exact type of image that they want. They can nudge the model itself to be more likely to produce those types of images. So, that was used for legal adult content to depict different types of sex acts, to depict different types of subjects in the images themselves.

But it had also been observed in that community that there were some problems with a lot of these models that were generating explicit content. People would have to tell the generator, in what's called a "negative prompt," which is just where you describe all the things you don't want to see, to put "child" in the negative prompt and things like that, meaning that basically if you did not specify very explicitly that you did not want children to be depicted in this explicit material, they would be rendered. And this was fairly commonly known among the community that was producing this explicit content.

Unsurprisingly, there were communities that were dedicated to doing the exact opposite and making it so that the resulting imagery did depict children. They used a number of tools that could basically be age-reducers of the subjects that were depicted. They would use that potentially to age-reduce well-known celebrities and produce material that way. And then there were also newer tools like ControlNet, which allowed people to say exactly what kind of pose they wanted by providing, effectively, a human skeleton model and using that to guide the generative output.

So all of those things combined resulted in an environment where a fairly large amount of material is being generated on a daily basis. That material was getting more and more photorealistic according to detectors that Thorn had implemented that gauge the photorealism of material without actually having to examine it. And the chats that had been analyzed in those communities indicated that there was a very strong current of trying to make things as realistic as possible, as undifferentiable as possible. It's not the entirety of those communities, but it is a significant thread within them.

Alan Rozenshtein: How much more technical innovation is there to be had in this space? In other words, have we reached the point where these models are now producing images as photorealistic as anyone would want if they're trying to produce or consume CSAM? Or should we expect that as the pace of machine learning increases, as it has done in a kind of dramatic way over the past several years, the situation is going to get even worse, either because the images are going to get even more photorealistic, or they're going to get even cheaper to make, or you can make videos, or you can make 3D scenes? In other words, how do you see this technology developing on a technological basis in the next several years?

David Thiel: So when it comes to photorealism, when we've talked to various child safety organizations and to law enforcement, they're definitely getting a decent amount of material that is difficult for them to distinguish from photographic material. We've definitely gone past the point where, with adult content or generating people in general, you can fool people that are not already primed to look for the differences. So, I think we're well past the point where an average person can be fooled into thinking that this is an actual photograph. In terms of where it's going to develop, I think that's basically what we are expecting: there are going to be video models that become more compelling. As an industry, there are fewer technologies to help do detection on that type of novel content. But when it comes to static imagery, I think it'll mostly just become easier and easier; you won't have to mess with prompts as much. Newer models are kind of designed so that there's less hand-waving with keywords that don't really make sense, like saying, "Oh, this is trending on ArtStation," or something, because we know that makes it have high quality. Those weird little maneuvers are becoming less and less necessary for generative models in general.

Alan Rozenshtein: So let's turn now to Riana's paper and talk about the legal implications of all of this. Before we get to how the law should and can deal with computer-generated CSAM, let's just talk about how the law deals with regular old CSAM. In your paper, you distinguish between two offenses under, at least, federal law that apply here. One is about child pornography itself, and the other is about obscenity. And so, if you can just give an overview of what these two clusters of prohibitions are, and also, in particular, how they relate. I think for a lot of folks, it's understandable why not all obscene things are necessarily child pornography, but it might be a little confusing to know that actually not all things that are child pornography are necessarily obscene. And so if you could unpack that, I think it's really important before we then get into the First Amendment questions about computer-generated CSAM.

Riana Pfefferkorn: Sure. So it's probably familiar to most people listening to this particular podcast that the First Amendment generally has relatively few carve-outs for what is actually completely unprotected speech. And among those are carve-outs for obscenity and separately, as you mentioned, for child pornography, more commonly now called child sex abuse material, or CSAM for short, although child pornography is still the phrase on the books. Obscenity has never been protected by the First Amendment. This goes back hundreds of years. And the rationale for that is that it has basically no redeeming value and any value it might have is outweighed by protecting the social interest in maintaining order and morality. It sounds a little bit antiquated maybe to modern ears. And the familiar test for something being obscene is the three-part Miller test of whether an average person applying contemporary community standards would find that a particular work taken as a whole appeals to the prurient interest, whether that work describes or depicts in a patently offensive way, sexual conduct as defined under state law, and whether the work taken as a whole lacks serious literary, artistic, political, or scientific value.

David Thiel: And just to jump in, this is where that famous, I think Lewis Powell, Justice Lewis Powell, "I know it when I see it" comes in, in trying to figure out at what point something goes from First Amendment-protected artistic erotica to just obscene pornography, and therefore would not be protected.

Riana Pfefferkorn: Right. However, we don't just leave it up to juries and the government to decide they know it when they see it. We have a slightly more elucidated test there for juries to apply.

And separately, there's a prohibition against the creation or distribution, receipt, pandering, etc., of CSAM. And the rationale there, as you mentioned, is a little bit different, where instead of the touchstone being whether something is obscene or not, the idea is that this kind of speech is speech that's integral to criminal conduct, to underlying abuse crimes that are harmful to children in various ways, and society has an overwhelmingly compelling interest in protecting children from suffering those harms. And in the 1982 Supreme Court case that held child pornography to be outside of First Amendment protections, the Supreme Court said, "Look, whether something has serious literary or artistic value is totally unconnected to the inquiry of whether a child is being harmed from undergoing this type of abuse and having it documented." And so the court said, "The obscenity rationale for prohibiting speech just doesn't really apply here. We are going to prohibit this speech on separate grounds."

But, as you mentioned, Alan, it may be a little less than intuitive that there's a difference between these two things. And the courts since then have observed that a lot of the time, something that constitutes CSAM is often going to be obscene as well. There's not a 100 percent overlap in the Venn diagram. There is some material as we'll discuss, especially in the computer-generated space, that might not be obscene. But nevertheless, they are two separate and distinct rationales for finding that these two different categories of speech are not covered by the First Amendment.

David Thiel: And just to make that concrete, one could imagine, for example, an image of a child who is being sexually abused, but that image is journalistic, right? It's meant to explain a thing that's happening. That might be CSAM, right, as defined, but it might not necessarily be obscene. And so, different legal regimes would apply and therefore different First Amendment doctrines as well.

Alan Rozenshtein: So let's take each of these two in turn. And actually, let's start with the specific CSAM prohibitions, because my understanding from your paper is that's where most of the prosecution happens. And so, the main law here is called Section 2252A. And, in particular, a Supreme Court case from the early 2000s called Ashcroft v. Free Speech Coalition, which is, I think, the first time the Supreme Court really waded into what has since become this much bigger problem of computer-generated CSAM. So just explain what that case was about and why that's one of the touchstone cases for trying to think about how to deal with this problem today in 2024, more than 20 years later.

Riana Pfefferkorn: Sure. So Section 2252A of Title 18 bans child pornography, CSAM as we would now refer to it, and it bans the possession, it bans the receipt, it bans the distribution. There's a separate statute for production, actually. But 2252A is kind of the workhorse for federal CSAM prosecutions. And in banning child pornography, it is referring to the statutory definition in Section 2256 sub 8, which covers a few different categories. One is if the material is something that was produced using real children; it's a depiction of a minor in a sexually explicit context. Another is imagery that is, or is indistinguishable from, a depiction of a real child being abused. And the final category is what's typically called "morphed images," although that's not the phrase that's actually in the statutory definition. And that is imagery that has been altered so as to make it look like an identifiable minor is engaged in sexually explicit conduct or a sexually explicit pose.

And that is the language on the books now, but as you mentioned, there was a 2002 case involving a prior version of the definition of child pornography, which had previously prohibited, under the original 1996 statute, an image that is or appears to be that of a minor in sexually explicit conduct. And that was the portion, specifically the "appears to be" portion, that the Supreme Court ended up striking down as unconstitutional in the Free Speech Coalition case. And there, the Supreme Court held that the rationale it had elucidated 20 years earlier in the New York v. Ferber case was that the reason child pornography is prohibited and falls outside the First Amendment is that it involves harm to real children.

And the Court said here, where you have what the Court called "virtual child pornography," virtual CSAM, there is no harm to any real child in the creation of a computer-generated image, for example, and therefore that rationale of protecting children, of speech that's integral to criminal conduct, just doesn't apply. And therefore, the Court threw out that "appears to be" portion of the definition of CSAM. After the Supreme Court issued that ruling, Congress went back to the drawing board in 2003 and updated that particular provision of the definition of CSAM to its present definition, which includes computer-generated imagery that is, or is indistinguishable from, an image of an actual child being abused. And that has actually never been tested, never been subject to a constitutional challenge, although it's hard to see that much daylight, really, I think, between saying something is a computer-generated image that appears to be of an actual child being abused versus saying something is a computer-generated image that is indistinguishable from that of a real child being abused.

Alan Rozenshtein: So how then does the current case law on First Amendment protections for at least some categories of computer-generated or virtual CSAM, how do those apply today? Right, if you're trying to think through the categories, what is and is not then protected?

Riana Pfefferkorn: So there are some low-hanging fruit here, which is if it is a depiction of an actual child actually being abused, what I would call photographic CSAM in my paper, then that is prohibited. That's the New York v. Ferber case. It's still real imagery. It doesn't matter whether it's obscene or not, it can be prohibited. If we have a computer-generated image that is obscene or an image of actual abuse that is obscene, well, then it falls under the Miller rationale for finding that to be outside of First Amendment protection, and it can be prohibited on that basis. And so, if a computer-generated image qualifies as legally obscene under that three-part Miller test, it can be prohibited.

And then we get into areas where I think there's a bit more nuance. When we're talking about computer-generated imagery, particularly in the generative AI context, we're looking at both what is the input for the model that yields the image and what is the output of that image. Where you have a morphed image, meaning it depicts an identifiable minor, whether that's Sally Smith from ninth-grade homeroom or whether that's a child celebrity or child influencer, for example, a morphed image that has altered some underlying image to make it look like that identifiable child is in a sexually explicit pose or conduct, then I think that is going to be the low-hanging fruit, again, for the courts as they start to consider computer-generated imagery.

The morphed images prong of the definition of CSAM was not in front of the Court in Free Speech Coalition, but multiple courts of appeals have said that it's not First Amendment protected, or that it's at least not protected speech when it is being disseminated beyond just whoever created it to begin with. If they keep it to themselves, it's not actually prohibited under the law on the books. But when it gets disseminated, whether that's to everybody in ninth-grade homeroom, or whether that's being put up on the internet, or whether it's just being sent to the victim themselves, then the courts would say that's not protected speech and it doesn't matter if it was photoshopped or created virtually. Free Speech Coalition just doesn't apply because of the harms to the child there. Even though the child isn't being directly abused themselves, there are still privacy harms, reputational harms, emotional and mental harms from being victimized with a morphed image. And so for the sorts of cases that we're seeing out there cropping up in the news more recently of real life teenagers who are being victimized in this way with pornographic deepfakes of teenage girls most commonly, I think it will be fairly easy to say existing law, the statute on the books prohibits that, and Free Speech Coalition does not come into it.

Where things get a little more difficult, I think, is if a particular generative AI output was trained on images of actual abuse in the data set, then I think it's very likely that the courts would say that that can be prohibited on the same rationale as photographic imagery itself, like the actual underlying images in the training data, because the reuse of those images to generate a new image, a computer-generated image, is still harming the children who are depicted in the underlying abuse imagery. That's my prediction for where the courts might come out once they have to start considering generative AI outputs like this. And there, it wouldn't necessarily matter whether it's photorealistic or not. I think what the courts would focus on there is this re-harming children in actual abuse images by training the model that created the image on the output end.

And then finally, where I think the statutory language nominally prohibits an image, but where I don't think it would hold up to constitutional scrutiny, is where there is a computer-generated image that is photorealistic, because that falls so squarely within the statutory language about an image that is indistinguishable from a photographic abuse image. And there it might not necessarily matter if the training data did or did not include actual photographic abuse imagery. But if prosecutors were to rely upon that "indistinguishable from" provision on the books right now, because something is photorealistic and it's hard to tell whether it is a real image or not, that's where we might see a test of that so-far-untested language about computer-generated images that are indistinguishable from actual abuse imagery. And so, I think that there may be some incentives for prosecutors to try and find other ways to bring offenders to justice, rather than risk having another Free Speech Coalition-style case that strikes down this revised portion of the statute.

Alan Rozenshtein: So before we get to that question of alternatives to using the child pornography statute, the CSAM statute, I actually want to turn to you, David, because one thing Riana mentioned was this question of what happens if the image itself is not of an identifiable child, but the model that generated it has been trained on real CSAM. And I want to ask you because late last year you released a very interesting report, one that I think made a big splash, about how CSAM images, or at least links to those images, are actually prevalent in some major open-source data sets of images that are often used for otherwise totally legitimate image machine learning. And so, I was hoping you could just talk about that and what the status of this is in the broader community. Because if it is the case that these images have found their way into a lot of these foundational models, that just seems like a big problem even beyond the issue of those models then being used to create CSAM itself.

David Thiel: Sure. So in that study, basically what we did was we took one of the most prominent data sets, sets of images, that was used to train the models most frequently used to produce both legal explicit material and CSAM. That data set being LAION-5B, it's almost 6 billion images that were scraped from the internet in a fairly wide crawl. To do that, given that we were dealing with almost 6 billion images, we basically took a safety-predictor cutoff and ran the URLs to those original images through an API provided by Microsoft that recognizes known instances of CSAM. About 30 percent of all of those images were already down. These data sets are just links to images and not the images themselves, and this is for copyright and other liability concerns around redistributing sets of images. And what we found was that there are on the order of maybe a thousand to three thousand or so instances that we could identify, through a combination of using PhotoDNA and also interrogating the model itself to ask what other images are similar to these, and then working with a child safety organization to do manual verification.

So there are at least several thousand images that are likely to be CSAM. We were able to work with those organizations to verify that a minimum of 1,000 were known instances. And we also know that it doesn't take a lot of instances to train a model on a new concept. You can teach a model a new character, a new subject, or a style with a couple of dozen images. And some of these were reinforced because they appeared over and over again in the data set. So, we have confirmation that at least some of the training material in these models, when it comes to explicit content, was actual CSAM.
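The general shape of the audit David describes can be pictured with a short sketch, purely as an illustration under stated assumptions: filter a large image-URL data set by the safety-classifier score already recorded in its metadata, then hand only the flagged URLs to a vetted hash-matching service operated by a child-safety organization. The column names, the cutoff value, and the `check_against_hash_service` placeholder are hypothetical, not the study's actual tooling or any real API.

```python
# Illustrative sketch only; column names, the threshold, and the hash-service
# call are hypothetical stand-ins for vetted child-safety tooling.
import csv
from typing import Iterable, Iterator

SAFETY_CUTOFF = 0.9  # assumed cutoff on the data set's recorded "unsafe" score


def flagged_urls(rows: Iterable[dict], cutoff: float = SAFETY_CUTOFF) -> Iterator[str]:
    """Yield URLs whose recorded safety-classifier score meets or exceeds the cutoff."""
    for row in rows:
        try:
            score = float(row["unsafe_score"])  # hypothetical metadata column
        except (KeyError, ValueError):
            continue
        if score >= cutoff:
            yield row["url"]


def check_against_hash_service(url: str) -> bool:
    """Placeholder for a PhotoDNA-style matching service operated by an
    authorized child-safety organization. Only the URL is forwarded; no image
    is downloaded, viewed, or stored locally."""
    raise NotImplementedError("integrate with an authorized matching service")


def audit(metadata_csv: str) -> None:
    """Scan data-set metadata and flag entries that match known abuse imagery."""
    with open(metadata_csv, newline="") as f:
        for url in flagged_urls(csv.DictReader(f)):
            if check_against_hash_service(url):
                # Matches would be reported through the proper channels
                # (e.g., NCMEC), never retained or redistributed.
                print("match found; report and remove:", url)
```

The design point worth noting is that no imagery is handled directly: the filter runs on metadata that ships with the data set, and matching against known material is delegated entirely to the external service.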

We also know that these models were trained on many images of children, of real children, and many images of explicit activity. So that combination of factors, when you show that models have been trained on CSAM, models have been trained on existing children, and they're capable of producing these explicit outputs--I don't think anybody had considered what that constitutes when it comes to morphed imagery or whether you're transforming an image. So even if you're taking an image of an existing child and using the model's concept of that to produce new explicit material, it's unclear whether people were really thinking about that as morphed imagery in the classic sense. I don't know if you have any thoughts on that, Riana.

Riana Pfefferkorn: Well, I'll add that the statutory definition of morphed imagery requires that there be an identifiable child in the output of the picture. So where there is not, if it's a this-person-does-not-exist type situation where it isn't depicting a real person, or where it is even an output that isn't depicting a child at all, or not a sexual situation at all, that's where I think--I touch on this extremely briefly in the paper, because David, you put this research out right before we went to press on the paper--but I think there's some really interesting questions there about, like, how broadly can you take the rationale for prohibiting actual photographic CSAM, which is that it harms children, that its reuse and recirculation harm children, and how far can that actually apply to any First Amendment argument that you can also prohibit the output of abuse-trained models where the output is not child sexual abuse material.

I don't think it's the law, or that we want it to be the law, that everybody who unwittingly downloaded the LAION-5B data set, not knowing or having any reason to know that it contained links to actual abuse imagery, that all those people are criminals. I mean, this is why we have knowledge requirements in the laws on the books, including the statutes that we've been talking about today. And I think that you get a much more attenuated argument for why an image that has nothing about a child or nothing about sexually explicit conduct in it, but that has actual abuse imagery somewhere in its metaphorical DNA, should be prohibited. I think that's going to be quite a conundrum for the courts.

But if anything, it might be kind of the opposite of what I think is going to be the more common situation, which is rather than saying "We know what was in the training set and the output is not CSAM," the more common situation that I think investigators are going to be confronted with is the output is CSAM, but we don't know A, whether this is AI-generated or an actual image of an actual child, and B, what was in the training data to begin with, because a lot of times the provenance of an image that's been floating around amongst different communities or on the dark web is probably going to be unknown and difficult, if not impossible, to determine.

Alan Rozenshtein: So let's talk about what all this means for the fight against CSAM and, for our purposes, against computer-generated CSAM. Riana, you spent a lot of time talking about how the rise of this technology, as well as the limitations that the courts and the First Amendment put on at least certain attempts to fight it, will affect prosecutors. And so, I was hoping you'd give an overview of that. How do you expect prosecutors to deal with what sounds, from David's research, to be a deluge of this kind of harmful material?

Riana Pfefferkorn: So I think there are a few different things. It's going to be trade-offs all the way down, and triaging cases is tough enough as it is. There are so many reports that come across the desks of various law enforcement officers all over the country right now involving what has heretofore indisputably been actual photographic abuse imagery that even triaging those, figuring out what to follow up on, can be really difficult, especially when you also may have missing persons cases or homicides or whatever else that you might need to deal with as well. And having computer-generated images now added to the pile is going to make it even more difficult.

I think there are a few different things that investigators and prosecutors are going to be thinking about. One is whether there is an appetite to square up that constitutional challenge about material that may actually be fully virtual, meaning there's no abuse material in the training set and the output isn't a morphed image, and try and see, like, can we prosecute under that "indistinguishable from" language that's on the books or not. If prosecutors don't want to deal with that, then they will need to know, as an evidentiary matter, is this a real image of an actual identifiable child? What was in the training data? And that, as I said, might not necessarily be very readily apparent.

So a couple of different strategies that prosecutors might use. One might be to decrease reliance on Section 2252A, the CSAM statute, and instead rely on the child obscenity statute, Section 1466A. One thing that was surprising to me when I was doing this research was to find that for the 20 years that the child obscenity statute has been on the books, it's been used, or at least cited in federal court cases, fewer than 150 times. And the CSAM statute, Section 2252A, has been cited more than 50 times as often as the child obscenity statute. They just don't really use the child obscenity statute. If I were to speculate about why that is, it would probably be that it is effectively a strict liability offense to possess actual photographic CSAM, whereas in order to obtain a conviction under 1466A, prosecutors would need to go through that three-pronged Miller test to the satisfaction of a jury. But they don't have to prove that a real child was involved. It's a defense to a charge of possessing CSAM that everybody in it was an adult or that no real child was in it, meaning it's a fully virtual image. That's not a defense under the child obscenity statute.

And so, to the degree that there is a fair amount of overlap, where a lot of CSAM imagery, whether it is photographic or virtual, will also be obscene, we might see prosecutors shift to reliance on the child obscenity statute instead. But that will take more work. It may take longer and mean going to trial more often, rather than having defendants plead out a lot.

And I'm also concerned about the degree to which reliance on obscenity doctrine lets in jurors' biases. One of the things discussed in the paper is that even well into the 1990s, in obscenity cases, juries were finding material to be obscene under Miller because it involved homosexual conduct, where if it had been heterosexual conduct, maybe that would not have been the conclusion after all. And now that we live in an era where there is a huge backlash against the very existence of queer and trans people and a renewed tide of hatred and violence and laws to discriminate against them, I don't think it will do justice to use obscenity doctrine and thereby let the deciding factor in whether somebody goes to prison or not for child obscenity be whether the image at issue depicts a trans body instead of a cisgender body, or depicts homosexual conduct rather than heterosexual conduct. So there are some drawbacks to increased reliance on the child obscenity statute rather than the CSAM statute, but that might be something that becomes more common if it becomes difficult to tell whether this is an image of a real child or not.

But another option, and one that I've spoken with some child safety folks about and that they seem to agree with, is that instead of charging somebody for what is possibly, or believed to be, computer-generated imagery, you simply charge defendants for the photographic, actual CSAM that they also possess. Defendants in these sorts of cases tend to own a large volume of actual photographic CSAM, and they can be, and have been, charged on that basis rather than charged for the drawings, cartoons, or virtual computer-generated materials that they also were found to possess. So if an investigation, one that may have been instigated by what is believed to be an AI-generated image, shows that there are lots of images of actual CSAM on the defendant's devices, they can be prosecuted for that instead. This avoids the evidentiary questions of what's in the training data. It avoids the First Amendment questions, because photographic CSAM is incontrovertibly outside of First Amendment protection. And I think it would more properly focus limited prosecutorial resources on cases involving harm to actual children, rather than going on what might be a wild goose chase trying to determine whether we can prosecute this AI-generated image or not.

But there aren't going to be any easy answers here. Like I said, the triaging is going to involve difficult decisions and trade-offs in any event.

David Thiel: Yeah. So as far as we know, we haven't seen a case where somebody has been prosecuted for purely computer-generated imagery, because they do usually possess those other images. Part of that is because the communities that are producing a lot of this generated imagery are actually training extensions based on known victims and known instances of CSAM to generate new imagery. So because that community is interested in, or tends to be interested in, this existing landscape of CSAM that has been present for years, they have particular victims that they want to replicate. Cases where people have just generated material on their own machine and have not interacted with any other material, I think, are going to be rare.

Alan Rozenshtein: In addition to the role of prosecutors, I do want to also touch on the role of the platforms themselves, because obviously they're the front line in terms of detecting this, reporting it, screening it. Riana, what do you expect to be the effects on the platforms of, again, what potentially might be an orders-of-magnitude larger amount of CSAM, whether it's computer-generated or not?

Riana Pfefferkorn: As things stand, there's already existing federal law that requires online platforms to report, quote unquote, "apparent violations" of the CSAM laws when they detect them on their services. And the violations they have to report include violations of Section 2252A, but actually not of 1466A. But so, if they see apparent CSAM on their services, they don't have to go looking for it, although many voluntarily do and use various automated tools to try and scan their platforms at scale to find this content. Once they do have actual knowledge of it, they are required to report it through a pipeline called the CyberTipline, which is run by the National Center for Missing and Exploited Children, or NCMEC for short. And then NCMEC acts as a clearinghouse that takes those incoming reports from platforms, figures out which jurisdiction they need to go out to, and routes them out to the appropriate law enforcement agency.

And as things stand, the CyberTipline has received well over 30 million reports from platforms a year in recent years. I think they're going to release their 2023 report on those numbers sometime in March. And so, when you already have almost 32 million instances reported by platforms in 2022 alone, that is a huge burden for a relatively small organization like NCMEC to deal with, much less for the various recipients on the law enforcement end of all those reports, who have to figure out what to do about each of them: whether it's something that they need to act on and follow up on, whether it's something that they lack enough information to follow up on, or whether it maybe doesn't necessarily even violate the law in a particular jurisdiction, since the vast bulk of these reports go internationally.

And so when you have these reporting obligations, though, not only is there a lot that's being reported, it also means that there are lots of incentives to over-report. And the reason for that is that non-compliance by a platform with its reporting obligations is a punishable offense with hefty fines attached to it. And on the flip side, platforms are generally immune if they report an image that does not actually meet the statutory definition of CSAM. So failing to report actual material leads to legal liability. It leads to bad PR. You get yelled at by Congress like several tech CEOs were earlier this week, as of the week that we're recording. But you don't face liability for reporting something that isn't actually CSAM. And therefore, as we already see, platforms will report things that don't meet the statutory definition. They report cartoons, even though those are specifically excluded from the definition of what constitutes child pornography. They report photos that pediatricians ask parents to take of their children and send to the pediatrician over email. And so that's all something that NCMEC and law enforcement have to wade through to try and separate the wheat from the chaff, as it were.

And once you start adding in AI-generated material, platforms are just going to report all that too. There is no incentive for platforms to be investing a lot of effort into trying to make the decision about whether something is AI-generated or not, because they face a lot of liability if they guess wrong. And so, it's going to place even more burden on the pipeline and potentially divert resources from intervening in actual abuse cases.

This is a system that simply was never built to deal with trying to spot novel synthetic media. And that keeps having to be kind of revamped as they go along on this massive rocket ship to try and keep it going to deal with the ongoing upticks in material year over year.

David Thiel: Yeah, and one of the things that the existing ecosystem of detection and reporting of CSAM was based on was a fairly accurate and non-resource-intensive way of detecting already known material, which would just recirculate endlessly online. Production of new material was certainly there, but not nearly at the volume that it can be with synthetically produced material.

So, given how many new instances are being produced every day, the actual methods of detection for these platforms are also getting more complicated because it takes some time for those images to be identified, have their fingerprints added to centralized databases that all the platforms can then use, and then do new detection. So, it's going to be fairly complicated to resolve that from the platform's point of view.
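The known-material matching David refers to is easy to picture with a small sketch. Assuming the open-source `imagehash` and `Pillow` packages purely for illustration, a platform-side check against a fingerprint database might look roughly like this; real deployments use dedicated perceptual hashes such as PhotoDNA and access-controlled, centrally maintained hash databases, and the hash set and threshold here are made-up placeholders.

```python
# A simplified sketch of hash-based matching against known material, assuming
# the open-source `imagehash` and `Pillow` packages. The hash set and the
# distance threshold are hypothetical placeholders, not a real deployment.
from PIL import Image
import imagehash

# In practice this would be a vetted database of fingerprints of known,
# verified material, distributed to platforms by a child-safety clearinghouse.
KNOWN_HASHES: set[imagehash.ImageHash] = set()

MAX_DISTANCE = 4  # assumed Hamming-distance threshold for calling a "match"


def matches_known_material(path: str) -> bool:
    """Return True if the image's perceptual hash is close to a known fingerprint."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)
```

The lag David describes falls straight out of this design: a freshly generated image has no entry in the known-hash set, so it passes the check until someone identifies it, fingerprints it, and distributes the new hash to platforms, which is exactly the gap that a flood of novel synthetic material widens.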

I also want to maybe expand a little bit what we think of as a platform in this regard. So certainly, there's a lot of well-established practices when it comes to social media platforms, file-sharing platforms, things like that. But we also have platforms that are made to share things like machine learning data sets and generative models and augmentations to those models that do things like make subjects look younger, make explicit material easier to generate. So, part of the question that has been raised is, what are the responsibilities of those platforms that basically host community-produced augmentations and models that make producing this content easier?

And that is, legally, difficult to say. But when it comes to technical measures, there are some things that can be done to make existing models much more resistant to producing CSAM. You can use what has been called a few different things, but concept erasure is basically what it boils down to, where you effectively retrain a model so that it's really bad at producing a particular thing. And so you can actually retrain a model, basically resume where it stopped its training, start it again, and say, "Hey, this is what a child is. You don't know what that is. You're a model that knows how to produce explicit material. You should not know what children are." And that actually does appear to work to some degree from what we've heard from platforms.

So, there is a possibility that there may become a new standard of practice of taking, if you're a commercial platform distributing these models or augmentations, some degree of responsibility for making sure that they can't output both explicit material and imagery of children and thus be able to combine them.
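For readers curious what the concept-erasure retraining David mentions can look like, here is a minimal sketch of a single training step in the spirit of published erasure methods. It is illustrative only: `student_unet`, `frozen_unet`, and `text_embed` are assumed stand-ins for the diffusion model being edited, a frozen copy of the original, and a text encoder, and the shapes and hyperparameters are placeholders rather than anything discussed in the episode.

```python
# A minimal, illustrative sketch of a concept-erasure fine-tuning step for a
# latent diffusion model, loosely following published erasure approaches.
# `student_unet`, `frozen_unet`, and `text_embed` are assumed stand-ins, not
# any specific library's API; shapes and hyperparameters are placeholders.
import torch
import torch.nn.functional as F


def erasure_step(student_unet, frozen_unet, text_embed, concept, optimizer,
                 guidance=1.0, device="cuda"):
    # Sample a random latent and timestep, as in ordinary diffusion training.
    latents = torch.randn(1, 4, 64, 64, device=device)
    t = torch.randint(0, 1000, (1,), device=device)

    cond = text_embed(concept)  # embedding of the concept to erase
    uncond = text_embed("")     # unconditional (empty-prompt) embedding

    with torch.no_grad():
        # Frozen teacher's noise predictions with and without the concept.
        eps_uncond = frozen_unet(latents, t, encoder_hidden_states=uncond).sample
        eps_cond = frozen_unet(latents, t, encoder_hidden_states=cond).sample
        # Target steers the concept-conditioned prediction away from the
        # concept, toward (and past) the unconditional prediction.
        target = eps_uncond - guidance * (eps_cond - eps_uncond)

    # Train the edited model so that prompting for the concept no longer
    # pushes generation toward it.
    pred = student_unet(latents, t, encoder_hidden_states=cond).sample
    loss = F.mse_loss(pred, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeated over many steps with the frozen copy as the reference, the edited model keeps its general capabilities while prompts for the erased concept stop steering generation toward it, which matches the behavior David says platforms have reported.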

Alan Rozenshtein: So I want to finish by thinking about some other potential policy responses, whether it's from Congress or the administration. Riana, you have some interesting thoughts in the paper and I'd love for you to give an overview of them.

Riana Pfefferkorn: Yeah, sure. I think the first thing is just to say that this is a complex problem that doesn't admit of very easy solutions and policymakers need to treat it with the nuance that it requires. Understandably, the knee-jerk reaction is just to ban everything that's computer-generated CSAM, because it's hard to tell what's real from what's computer-generated, or because we find this material repugnant. But this is an area where the Constitution does have something to say, and so rather than focusing on potentially First Amendment violative proposals, policymakers need to be digging in a little further, and especially digging in, I think, on the technical issue.

It is a good question what sort of additional resources NCMEC and law enforcement will need in order to handle the influx of CG-CSAM that they're going to get into the CyberTipline. You can imagine a world where platforms are excused from reporting CG material if they have a good faith belief that something was AI-generated. I don't think that would catch on, though, so I doubt that that's something we're going to see. And so I think if you just have the existing reporting, where it's going to be even more than they have right now, it seems like there's going to need to be some investment in technical assistance for NCMEC and for law enforcement to try and do a better job on detection and provenance questions.

One thing that seems to be very low-hanging fruit is at the state level. The paper mostly focuses on federal policy, but it turns out that some states don't have a morphed-image provision in their state-level CSAM laws. And so, for the states to add that, if they don't have it already, would help with a lot of the real-world harms that are currently already happening in terms of non-consensual deepfake imagery of teenage girls, which is CSAM when it depicts underage children. And that would go a long way right now to help the existing real people who are directly being harmed and are being told over and over again by state and local police that there's nothing under state law that they can do.

On the federal level, I think there is room to potentially craft a very narrowly tailored law to prohibit the possession, manufacture, and sale of, basically trafficking in, ML models that have been trained on actual abuse imagery. In general, the law strives not to prohibit general-purpose technology. It's not illegal to own Photoshop. It's not illegal to own a digital camera. It is illegal to use Photoshop to make a morphed image. It's illegal to use a digital camera to create photographic CSAM. So there would have to be some careful crafting to only narrowly target that particular misuse of machine learning models, training them for the sorts of purposes that David was discussing. But we do have examples in current federal law where Congress has narrowly targeted trafficking in devices for purposes that are solely illegal. For example, trafficking in devices for illegally wiretapping people's conversations or for circumventing digital rights management on copyright-protected works. I'm not a big defender of the DMCA, but if you're going to use a trafficking-in-tools prohibition, it's much worthier, if anything, to apply it to the problem of abuse-trained ML models than to copyright, I would have to say.

And so, that would have to have a careful knowledge requirement, carefully tailored, but I think it's something that is a real gap in existing law because generative AI simply presents us with a different technological regime for the creation of this material than we've really been used to seeing before. So that's something that I think Congress could look into doing.

And then just overall, when the White House released its national strategy, the Executive Order on AI last fall, there were only a couple of mentions in there about prevention of AI-generated CSAM, and it just needs to be something that gets integrated into the overall national strategy that Congress and the Executive Branch use, while being sensitive to the particular wrinkles here that make research and development much more difficult because you can own as many copies as you want of an image or video of slowed-down Nancy Pelosi. You can't own as many images as you want of actual photographic CSAM. But I'm curious to hear what David thinks in terms of other policy issues and what's possible on the technical side.

David Thiel: Well, I do think that there are a couple of gaps that we have seen, particularly when it comes to this issue of undressing apps that have circulated, that have been used in multiple school environments to basically produce imagery of underage, actually existing kids. Like you mentioned, the state laws that address that are somewhat variable. It's unclear where the line is between this being a naked photo versus this being an explicit photo. I do think, and you may disagree on some of the specifics of implementation and probably be correct since you actually know things about the law, that having something both at the state level and at the federal level that addresses non-consensual distribution of nude and/or explicit imagery of underage kids is probably something that needs to be a little bit more thoroughly and explicitly addressed, in a way that we don't see right now.

Part of the reason why we were working on this paper in the first place is because we keep seeing this reporting saying, "Oh, nobody knows what to do when this happens. There doesn't seem to be any law addressing it." And it's like, no, there are laws addressing it. It's just a matter of education about it. And there are some gaps that need to be filled in at the federal level when it comes to non-consensual distribution of imagery at all, not just of underage kids. So I think focusing on those gaps is something that I would like to see.

Alan Rozenshtein: I think this is a good place to end it. Riana, David, thank you so much for joining and for the really important research you're doing on this issue.

Riana Pfefferkorn: Thanks.

David Thiel: Thanks for having us.

Alan Rozenshtein: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

The podcast is edited by Jen Patja and your audio engineer this episode was Noam Osband of Goat Rodeo. Our music is performed by Sophia Yan. As always, thank you for listening.


Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
Riana Pfefferkorn is a research scholar at the Stanford Internet Observatory. Her Mastodon handle is @riana@mastodon.lawprofs.org
David Thiel is the big data architect and chief technical officer of the Stanford Internet Observatory.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
