
ChatGPT and the First Amendment: Whose Rights Are We Talking About?

Alan Z. Rozenshtein
Tuesday, April 4, 2023, 8:16 AM

If ChatGPT is granted First Amendment rights, it won’t be because we are convinced that it has attained human-like personhood.

The Pioneer Building in San Francisco houses OpenAI’s offices. (HaeB, https://tinyurl.com/ycx5c85b; CC Attribution-Share Alike 4.0 International, https://tinyurl.com/4kj4nc5e)


Last week, Benjamin Wittes argued that, in developing large language models (LLMs) like ChatGPT, “We have created the first machines with First Amendment rights.” He warns us not to “take that sentence literally” but nevertheless to take it “very seriously.” I want to take up the challenge. Wittes is absolutely correct that any government regulation of LLMs would implicate—and thus be limited by—the First Amendment. But we need to be very careful about what we mean when we say that ChatGPT—or indeed any nonhuman entity—has “rights,” First Amendment or otherwise.

Justifications for free expression—and thus for the First Amendment’s prohibition on government action “abridging the freedom of speech”—fall into three broad categories: (a) furthering the autonomy and self-fulfillment of speakers; (b) enabling a “marketplace of ideas”—a legal and cultural regime of open communication—that benefits listeners; and (c) promoting democratic participation and checking government power.

Keeping these justifications in mind clarifies when and why the law grants nonhuman entities First Amendment rights. Take the controversial example of corporations. When the Supreme Court held in Citizens United that corporations had First Amendment rights to spend on political speech—and when then-Republican presidential nominee Mitt Romney infamously told hecklers that “corporations are people, my friend”—they weren’t metaphysically confused, thinking that corporations are people in the same way that you and I are. Rather, the legal assignment of First Amendment rights to corporations exists because, according to its supporters, allowing corporations to invoke those rights in litigation serves the purposes of the First Amendment. (Whether it actually does serve those purposes, or, as many critics argue, subverts them, is a separate question.)

So if ChatGPT is granted First Amendment rights in the near future, it will be on that basis: not because we are convinced that it has attained human-like personhood but because giving it the ability to raise a First Amendment defense against government regulation serves the purposes of the First Amendment. (If ChatGPT does gain sentience and becomes a nonhuman person, this all goes out the window. But at that point we’ll have bigger issues to worry about, and I for one look forward to debating the finer points of AI metaphysics with our robot overlords.) In other words, whenever you see the argument “ChatGPT has First Amendment rights,” you should translate it to the more accurate (albeit unwieldy) “the law should limit the government’s ability to regulate ChatGPT in order to advance First Amendment values.”

It’s important to keep this in mind because doing so avoids providing too little protection on the one hand and too much protection on the other. ChatGPT and other LLMs provide enormous benefit to listeners who use them to learn and to speakers who use them to communicate. That itself is reason enough to view any attempt at government regulation of this technology with First Amendment skepticism (though some regulation may well be permissible).

At the same time, it would be a potential disaster to grant ChatGPT rights on the basis that it—or, more specifically, its corporate owner OpenAI—is itself normatively entitled to speak freely, because any regulation would have to be justified not merely on cost-benefit grounds but as an infringement on the rights of ChatGPT/OpenAI.

This is the general problem with broad arguments for corporate First Amendment rights: They tend to go beyond the point at which the ascription of those rights benefits individual speakers and listeners (and thus society generally). As I have argued, this is particularly important when we’re dealing with First Amendment arguments made by technology companies. Because the potential scope of “Silicon Valley’s speech” is so broad, even small details about the contours of their First Amendment entitlements will have major consequences for users, both as speakers and as listeners.

Of course, OpenAI is itself controlled by individuals, and those individuals have their own First Amendment rights. But the whole point of legal corporate personhood is that it shields the individuals who run the corporation from its liabilities. And precisely for this reason, these individuals should not be able to augment their own First Amendment rights by drawing on the resources of the company they happen to manage. At most, they should be able to ensure that the company’s speech is not mistaken for their own (an issue that can come up when the government compels corporate speech).

So by all means, we should be open to the law granting the output of ChatGPT First Amendment protections, but we should always be clear as to whose rights we are ultimately concerned about: the speakers and listeners who use and benefit from ChatGPT, not ChatGPT itself or the people who created it.

Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
