
The Responsibility and Power of Platforms to Tackle Inauthentic Content

Kevin Frazier
Tuesday, June 18, 2024, 9:54 AM
Lawmakers have yet to pursue one of the most effective ways to diminish the creation and spread of inauthentic content: mandating that platforms use prosocial interventions.


For a brief moment, Taylor Swift captured the attention of hundreds of lawmakers when a Telegram group, formed for the purpose of sharing “abusive images of women,” broadly disseminated fake, sexualized images of Swift online. The rapid spread of these images made politicians aware of the need for regulatory action. Specifically, lawmakers realized the need to limit the creation and circulation of inauthentic content via generative artificial intelligence (AI).

In the months following the Swift deepfakes, lawmakers introduced myriad proposals, but the window for meaningful inauthentic content legislation from Congress is rapidly closing. The House and Senate have fewer than 40 days in session before the election.

One of the most effective regulatory options to combat the spread of false information remains on the table: to direct social media platforms to use tools already at their disposal.

Platforms have the most reliable and effective tools to reduce the creation and spread of inauthentic content. Use of these tools is currently at each platform’s discretion, and most platforms opt not to tap into these effective interventions. Congress could correct that missed opportunity by mandating that platforms use proven tools to reduce the odds of users creating inauthentic content. That legislative approach would improve on current efforts to step up enforcement of outdated and ineffective laws or to pass new laws unlikely to address this important issue systematically.

The Problematic Spread of Inauthentic Content

Generative AI tools like ChatGPT have made it easier for more actors to create and spread inauthentic content, defined by the Cybersecurity and Infrastructure Security Agency as “false information that is communicated and spread, regardless of intent to deceive.” Bad actors, ranging from members of toxic Telegram groups to individuals behind Russia’s Doppelganger operation, which may have ties to the Kremlin, have exploited these tools to propagate everything from fake sexualized photos of celebrities to fake comments on social media posts about controversial topics, such as the war in Ukraine. OpenAI recently disclosed that online influence operations out of Russia, China, Iran, and Israel had leaned on ChatGPT to generate such comments in multiple languages and to debug the code of apps used to share that content more quickly.

The unchecked spread of inauthentic content can have significant and long-lasting effects. For now, OpenAI reports that “[t]hese influence operations still struggle to build an audience.” That is unlikely to remain the case as AI labs, including OpenAI and Anthropic, introduce more powerful tools and bad actors become savvier users. That combination will make the spread of inauthentic content more disruptive and pervasive.

How best to respond to that likely future is an open question. Policymakers seem keen to use a traditional set of tools to stem the spread of such content. Proposed legislative responses are, at best, ineffective and, more likely, sources of distraction that diminish the will to pursue more responsive legislation.

Advocacy for greater enforcement of existing, outdated laws or implementation of new laws gives the appearance of a proper regulatory response. Action for the sake of action, though, is not what this moment demands. Legislative resources would be better spent on efforts to require that platforms adopt interventions that have been analyzed by experts and address the creation of the content itself.

Limits of New Legislation and “Old” Laws

In the aftermath of fake images of Taylor Swift going viral near the end of January, the White House and state legislators across the country called for legislative action. At the same time, OpenAI disclosed how bad actors used ChatGPT to generate and spread disinformation, rekindling calls for regulation of influence campaigns.

In response, legislators rushed to combat inauthentic content by encouraging greater enforcement of existing laws as well as by developing new regulations. The Washington state legislature passed a law criminalizing the disclosure of fabricated (read: AI-generated) intimate images, and Indiana and Tennessee enacted similar statutes. A similar story is playing out in Washington, D.C. In a press conference on the heels of the Swift story, White House Press Secretary Karine Jean-Pierre called on Congress to pass responsive legislation. She said matter-of-factly, “That’s how you deal with some of these issues.”

There’s just one problem: It’s not clear that legislation of this sort is the best way to tackle these issues. Proposed legislation and related laws already on the books in several states may fall short due to the sheer amount of social media content, which makes enforcement at scale virtually impossible. Enforcement resources will always lag behind what’s required for consistent and comprehensive application of the law. That mismatch is true in many contexts; not all car thieves, for example, are pursued. Inadequate enforcement, though, is particularly troubling when it comes to policing content, which is often an expression of an individual’s most important views and sentiments. If and when these laws are enforced, the decision to bring the force of the law against one user out of hundreds or thousands who have authored similar content may appear arbitrary, raising due process concerns.

One big problem for state-based proposals is jurisdiction. State courts do not have the authority to hale just any American, let alone a user from another country, into their courts. Whether a court has jurisdiction over any one content dispute is a fact-intensive inquiry that requires substantial judicial resources. A recent decision by the Montana Supreme Court, in which a majority held that a New Yorker could be brought before Montana’s court as a result of numerous Facebook posts, may signal a liberalization of personal jurisdiction. For now, however, the difficulties tied to determining “where” the spread of violative content occurred might thwart rigorous enforcement of new state laws.

Federal legislation has drawn skepticism for a different reason: chilling speech. A full overview of such concerns merits a separate essay. For now, a focus on Congress’s decision to create the Foreign Malign Influence Center (FMIC) illustrates concerns about government efforts to snuff out disinformation. The FMIC’s broad authority and vague mandate provided skeptics of government intervention in the marketplace of ideas with plenty of cause for concern.

Established in 2022 and housed within the Office of the Director of National Intelligence, the FMIC aims to coordinate the government’s efforts to stymie “subversive, undeclared, coercive, or criminal activities by foreign governments, non-state actors, or their proxies to affect another nation’s popular or political attitudes, perceptions, or behaviors[.]” Judicial Watch responded to the FMIC’s formation by suing the director of national intelligence, seeking information on alleged censorship of social media users by the FMIC. Officials such as Sen. Mark Warner (D-Va.) questioned the need for the FMIC given similar offices with related aims. Most recently, Rep. Jim Jordan (R-Ohio) subpoenaed the director of national intelligence for “communications between the [ODNI], private companies, and other third-party groups[.]” Jordan alleged that the ODNI and the executive branch in general have “coerced and colluded with companies and other intermediaries to censor speech.”

Relatedly, such proposals and laws do not offer a systemic solution to the uptick in inauthentic content. A case-by-case approach to reducing the spread of such content under state laws is akin to thinking that capturing one rabbit will keep the colony’s population down. The fake Swift images were viewed by millions of users across numerous platforms in a matter of days. Even the FMIC cannot broadly alter the means available to platform users and generative AI users to create and spread inauthentic content.

Platforms Can and Should Lead the Fight Against Inauthentic Content

A novel and necessary legislative response would require platforms to use the proven tools already at their disposal. Under Section 230 of the Communications Decency Act, platforms have broad discretion over content moderation. If platforms so choose, they can make it significantly harder for users to create and share inauthentic content. The problem is that they rarely choose to do so. That’s exactly where Congress should come in and mandate that platforms use interventions shown to reduce the creation of inauthentic content.

How best to create that friction is the subject of rigorous study. The Prosocial Design Network, which aims to share “evidence-based design practices that bring out the best in human nature online,” maintains a database of potential platform interventions to foster a healthier information ecosystem. Platforms should adopt those practices voluntarily to stem inauthentic content, or legislators should require them to do so.

The Prosocial Design Network assigns a confidence rating to each intervention based on its level of empirical evidence, ranging from “inference” to “validated.” Hiding potentially violative content behind a warning screen that describes the potential violation received a rating of only “inference.” Theoretically, this moderation tool provides platforms with a middle ground between removing and promoting content. This content purgatory, though, has not been shown to achieve its intended impact. Relatedly, warning labels indicating that content may touch on sensitive topics have only inferential support when it comes to reducing the spread of such content.

“Validated” interventions can and should be prioritized by platforms to reduce the spread of inauthentic and illegal content. A small but effective step may be to simply develop norms regarding the creation and spread of AI-generated content. This type of intervention, in short, amounts to a platform prompting users to review the norms prior to posting. The theory behind this intervention (that a norm nudge prior to sharing content will increase adherence to those norms) has been tested empirically. Both a 2019 experiment on Reddit and a 2022 experiment on Nextdoor indicated that this simple nudge can have a meaningful, prosocial impact not only on the content of the post in question but also on the comments it may elicit. Whether the intervention would have a similar impact on Facebook or TikTok users remains up in the air, but it is the sort of low-cost intervention that, at a minimum, merits experimentation, as the sketch below suggests.
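To make the mechanics concrete, here is a minimal sketch of what a norm nudge could look like in a posting flow. It is illustrative only and assumes a hypothetical backend: the function names, the norms text, and the submit_post() placeholder are invented and do not reflect any platform’s actual code.

    # Hypothetical sketch of a pre-post norm nudge; names and norms text are illustrative.
    COMMUNITY_NORMS = (
        "Before you post, please review this community's norms:\n"
        "1. Disclose when an image or video was generated with AI.\n"
        "2. Do not post content intended to deceive other users.\n"
    )

    def submit_post(draft: str) -> None:
        """Placeholder for the platform's real publishing pipeline."""
        print("Posted:", draft)

    def post_with_norm_nudge(draft: str) -> bool:
        """Show the norms and ask the author to confirm before the post goes live."""
        print(COMMUNITY_NORMS)
        answer = input("Do you still want to post? (y/n): ").strip().lower()
        if answer == "y":
            submit_post(draft)
            return True
        print("Draft discarded; nothing was posted.")
        return False

    if __name__ == "__main__":
        post_with_norm_nudge("An image I generated of a celebrity at a protest...")

The point of the sketch is the extra step itself: the author sees the norms and must affirmatively choose to proceed before anything is published.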

A “convincing” intervention would be to deploy AI for “good” by using it to detect draft posts likely to violate community norms, the law, or both. On paper, this intervention gives pause to users thinking of dispersing antisocial content and, as a result, reduces the odds that they share it. A private study conducted by OpenWeb on its own platform supported the efficacy of the intervention: About half of the users prompted to reconsider their posts opted to revise or rescind their content. A similar experiment on Twitter (now X) indicated that proactive prompts caused users to post 6 percent fewer offensive tweets than users in the control group.
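The following sketch shows the general shape of that kind of pre-publication check. It assumes a hypothetical risk-scoring function, score_violation_risk(), standing in for whatever classifier a platform actually trains; the keyword stub, threshold, and prompt wording are invented for illustration.

    # Hypothetical sketch of AI-assisted screening of draft posts before publication.
    def score_violation_risk(draft: str) -> float:
        """Stub classifier: a real deployment would call a trained model here."""
        flagged_terms = ("deepfake", "fake photo", "leaked images")
        hits = sum(term in draft.lower() for term in flagged_terms)
        return min(1.0, hits / len(flagged_terms))  # crude 0-to-1 risk score

    def screen_before_posting(draft: str, threshold: float = 0.3) -> bool:
        """Ask the author to reconsider when a draft looks likely to violate norms."""
        if score_violation_risk(draft) < threshold:
            return True  # low risk: the post goes through without friction
        answer = input(
            "This post may violate community guidelines. Post it anyway? (y/n): "
        ).strip().lower()
        return answer == "y"

    if __name__ == "__main__":
        draft = "Sharing a deepfake photo of a celebrity I found online."
        print("Published" if screen_before_posting(draft) else "Held back for revision")

In a real deployment, the stub would be replaced by a trained model and the threshold tuned against false positives, so the prompt nudges authors rather than blocking them outright.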

The Prosocial Design Network includes many more interventions worthy of consideration by platforms and policymakers. Although these measures may not seem as weighty as a new law or a new tool, that’s precisely the point: Platforms have easily implementable interventions at their fingertips that have been shown to positively alter user behavior. The fact that these interventions are easier to adopt than other options is a deliberate feature, not a flaw. As the actors with the greatest amount of control over content and user behavior, platforms themselves should be the focus of regulatory efforts. If platforms fail to voluntarily implement proven interventions, then legislators should impose a legal responsibility to do so.

Though such laws would rightly face First Amendment scrutiny, they would likely survive that review. The First Amendment does not prevent Congress from directing platforms to alter the flow of content. Sofia Grafanaki, chief operations officer of Data Elite, an early-stage venture fund for big data startups, notes that the design of a platform, including the algorithms that determine which content will reach more users, lies beyond the “core of First Amendment doctrine.” She reasons that such design choices are distinct from editorial decisions about which content is permissible on a platform. Ideally, though, platforms would simply adhere to their own proffered values, like “build social value,” and adopt these interventions without legislative direction.

As a brief aside, AI labs are also in a position to reduce the creation of inauthentic content. OpenAI’s ability to detect bad actors misusing ChatGPT suggests that the company has the capacity to flag users who violate the tool’s terms. Those means can and should receive the company’s continued investment. Greater investment in detection capabilities would back up OpenAI’s claims that it is actively attempting to limit the creation and spread of inauthentic content. Labs can and should also make it harder to create an account by requiring some form of identification to do so. One final strategy may be to limit users with free accounts to using only prompts from a specified list, as sketched below. All of these strategies face no legal barriers and are relatively easy to implement.
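For illustration, a prompt allowlist of the kind described above could be as simple as the following sketch; the account tiers, approved prompts, and function name are hypothetical and do not describe any lab’s actual system.

    # Hypothetical sketch of restricting free-tier users to an approved prompt list.
    APPROVED_FREE_TIER_PROMPTS = {
        "Summarize this article.",
        "Explain this concept in plain language.",
        "Help me draft a polite email.",
    }

    def is_prompt_allowed(prompt: str, account_tier: str) -> bool:
        """Paid accounts may submit any prompt; free accounts only approved ones."""
        if account_tier == "paid":
            return True
        return prompt.strip() in APPROVED_FREE_TIER_PROMPTS

    if __name__ == "__main__":
        print(is_prompt_allowed("Summarize this article.", "free"))   # True
        print(is_prompt_allowed("Write a fake news story.", "free"))  # False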

Conclusion

The proliferation of inauthentic content, such as AI-generated images, poses significant challenges for lawmakers and platforms alike. Despite recent legislative efforts at the state level to penalize creators of such content, these laws face practical limitations in enforcement and jurisdiction, rendering them insufficient for a comprehensive regulatory response. Furthermore, traditional laws and reactive measures like watermarking AI-generated content may not effectively curb the spread of inauthentic content due to technical and behavioral loopholes.

A more promising approach lies in platform-level interventions. Compared to lawmakers, platforms can more easily implement and regularly fine-tune effective measures, such as generating warning screens, developing normative prompts, and using AI to detect potentially violative posts. These interventions, endorsed by the Prosocial Design Network, offer a proactive means to foster healthier online behavior. Platforms have the responsibility and capability to implement these strategies, creating friction that discourages the spread of harmful content. If platforms continue to fail to realize that responsibility, Congress should direct platforms to implement evidence-based interventions. These interventions would alter user behavior and contribute to a more authentic digital environment.


Kevin Frazier is an Assistant Professor at St. Thomas University College of Law. He is writing for Lawfare as a Tarbell Fellow.
