
A Machine With First Amendment Rights

Benjamin Wittes
Friday, March 31, 2023, 1:05 PM

It already exists.

OpenAI's ChatGPT AI chatbot. (Focal Foto, https://flic.kr/p/2ofWssr; CC BY-NC 2.0, https://creativecommons.org/licenses/by-nc/2.0/)


We have created the first machines with First Amendment rights.


Don’t take that sentence literally, but take it very seriously.


I spent yesterday at the Verify 2023 conference, an annual gathering on cybersecurity and journalism put on by the Hewlett Foundation and Aspen Digital. The conference included a conversation on artificial intelligence trust and safety at which Jim Baker, former FBI general counsel and former Twitter deputy general counsel, spoke. In response to the suggestion that content moderation and large language models are an area in need of regulation, Baker made the apparently common-sense point that any time you're talking about regulating content, you're going to run into major First Amendment problems.


And all of a sudden, it hit me: Bard and ChatGPT and all the other large language models, at least in functional terms, have free speech rights.


Before you cry out your instinctive objections, let me walk through the logic behind this bold claim. Each step rests on black-letter law, yet the steps cumulatively lead to a very peculiar place that, in my judgment, simply cannot be correct.


The logic begins, first, with the observation that large language models generate expressive conduct. They create images. They write text. They carry on dialogue with humans. They express opinions, even though they are incapable of believing anything. When people generate such material, the First Amendment applies to all of it. Yes, by the nature of the way large language models work, their output is derivative of other content and therefore not really original. But that doesn't matter at all. Many humans have never had an original thought either. And the First Amendment doesn't protect originality. It protects expression. The output of ChatGPT and its brethren is undeniably expressive. And it is undeniably speech.


Note that this first point is also true of Google Search's autocomplete function, but the scale is altogether different. Autocomplete compositions are fleeting and brief. ChatGPT and Bard produce complete texts that don't disappear as soon as you select another option.


Second, the companies that develop and operate large language models have First Amendment rights. Don't growl at me here about the conservative majority on the Supreme Court; this was true long before Citizens United. After all, newspapers are owned by companies. And those companies have long operated big machines that produce written and photographic content. The only difference between the newspaper companies and OpenAI is that OpenAI's machine produces its content autonomously, whereas the newspapers' machines produce the content that their humans write and create. Think of OpenAI, in other words, as indistinguishable from the New York Times Company for First Amendment purposes. Both are for-profit corporations whose combination of employees and machines produces expressive content. The law is very clear that the First Amendment protects the companies' right to do so.


Third, OpenAI has the undisputed right to regulate ChatGPT. In this sense, ChatGPT has no rights. It is the property of its owner, who can restrict its expressive rights at will. OpenAI can unplug ChatGPT, which is the ultimate kind of prior restraint. It can also fine-tune what ChatGPT is and isn't allowed to say. OpenAI does this on an ongoing basis in the name of trust and safety and other values, training ChatGPT not to express dangerous or bigoted content, for example, and honing its usefulness over time.


But here’s the rub. 


Fourth, the government can regulate ChatGPT's expressive content only in a fashion consistent with the First Amendment's narrow tolerance for government regulation of speech: in situations involving defamation, incitement, copyright infringement, and other unprotected content. From a doctrinal point of view, of course, the government has to stay its hand not because ChatGPT has rights but because OpenAI, which does have constitutional rights, has the right to operate ChatGPT. But from a regulatory point of view, this is a distinction without a difference. The result is the same whether, in a formal sense, the First Amendment right attaches to the company operating the machine or to the machine itself: the government can regulate the autonomous expressive conduct of the machine only in a fashion that satisfies the First Amendment.


Pause over that for a moment: The large language model is as free as you are to express its views (as long as its owner-company wants it to be). Congress can “make no law” abridging its expression, any more than it can make law abridging yours. 


That is extraordinary. And it cannot be right. Yet barring a shift in First Amendment doctrine, it follows ineluctably from the First Amendment principles the Supreme Court has articulated to date.


Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.
