A.I. Programs Write for Lawfare on Judge Cannon's Ruling in the Mar-a-Lago Case
Artificial intelligence is heralded as the next big thing, threatening to replace all kinds of writers the internet over. Can it replace Lawfare editors and contributors?
Here at Lawfare, we have been getting quite a lot of advertisements for artificial intelligence (A.I.) programs that can write for us, so I decided to try a few out.
According to news stories from the past decade, computers are steadily improving in the type and quality of writing they can produce. The warnings keep coming: it was sports stories in 2010, news stories in 2015, poetry in 2021, and code in 2022. While some have cautioned that the “A.I. hype-train” is overblown, that A.I.’s abilities tend to be misrepresented in the media, and that A.I. is “powered by underpaid workers in foreign countries,” never let it be said that Lawfare is afraid to innovate.
(I began this test several weeks before the internet started buzzing about OpenAI’s ChatGPT, so this piece contains no references to it. Sorry.)
The question was simple: Could we pay some service to set an A.I. on Lawfare-type writing? I decided to set a few A.I.s to the task of generating an article on U.S. District Judge Aileen Cannon’s infamous ruling granting former President Donald Trump’s request for a special master in the Mar-a-Lago case. Our own editors had already written such a piece, which allows for easy comparison.
First up was Writesonic, which promised to help me create “SEO-optimized, long-form (up to 1500 words) blog posts & articles in 15 seconds.” Its product information pages told me that the A.I. was trained on “thousands of real-life examples from the top brands” and that the program was designed to require “minimal input” from the user.
Writesonic works in four steps: get ideas, get an intro, get an outline, get an article. Starting with “Judge Cannon's Ruling in the Mar-A-Lago Case,” I clicked through each step, choosing the option that seemed the most relevant or the most readable (although, looking at the final product, I think you’ll agree that “readable” is generous). Here’s what it produced:
I tried a few more generators but will spare you further gibberish. The common themes were incomprehensible text and programs that had no access to information about the right case.
The first remotely plausible copy I managed to get was from AI Writer, which claims to be “the most accurate” of all A.I. copy generators. In a gimmick that really appealed to me, the information about the program was written by the program itself (and edited by a human). In contrast to the others I had seen, AI Writer stressed that it was a tool to assist human writers and editors, not replace them, by helping them create more content faster. Lawfare publishes two articles a day, so it’s not like we don’t need some help.
I put in “Judge Cannon's Ruling For Trump In The Mar-A-Lago Case,” clicked a button, and actually received a relevant and legible article. Here is what AI Writer produced:
It’s pretty impressive, though there’s still a bit of gibberish in there. But this submission had a different problem: we take a pretty firm stance on plagiarism here at Lawfare. AI Writer did me a favor by providing the sources of its work, and let’s say that any writer relying on this program as a tool could end up with a lot of explaining to do.
Finally, I turned to OpenAI, the “messy, secretive” company found in many excited headlines (see also the critical responses), which also created the image generator DALL-E 2 and which has drawn criticism in recent months, along with other image generators, for both stealing from human artists and trying to replace them.
OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) required more input than the others, and it honestly returned better results. In the document below, I wrote the words in bold to nudge the A.I. into writing each successive part of the piece I needed, using the human Lawfare article as a guide:
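(For the technically curious: the nudging works by feeding GPT-3 a prompt and letting it complete the text, over and over, with your own hand-written lead-ins steering each new section. The sketch below shows roughly what that looks like through OpenAI’s Python library; the model name, parameters, and seed text are illustrative assumptions, not a record of my exact prompts or settings.)

```python
# A minimal sketch, not my exact setup: uses the legacy openai
# Python library (pre-1.0) and a GPT-3-era completions model.
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# The seed text plays the role of the bolded words: you write the
# opening of a section yourself and ask the model to continue it.
prompt = (
    "Judge Cannon's Ruling for Trump in the Mar-a-Lago Case\n\n"
    "On Sept. 5, U.S. District Judge Aileen Cannon granted former "
    "President Donald Trump's request for a special master"
)

response = openai.Completion.create(
    model="text-davinci-002",  # assumed GPT-3 completions model
    prompt=prompt,
    max_tokens=400,
    temperature=0.7,
)
continuation = response["choices"][0]["text"]

# To steer the next section, append another hand-written lead-in
# to the accumulated draft and request a fresh completion.
prompt = prompt + continuation + "\n\nWhy the ruling matters: "
```

Each round, your nudge becomes part of the prompt, and the model writes until it runs out of ideas or tokens.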
Let’s give the devil her due here. The document is readable. It’s not plagiarized. Our automatic plagiarism checker found no serious issues (see, we do trust computers sometimes!). And it’s more or less accurate.
The trouble is that no matter how hard I tried, I couldn’t get it to write more “in the weeds” like a proper Lawfare contributor. Where was the incisive analysis of the Richey factors? Where was the mention of Rule 41(g) of the Federal Rules of Criminal Procedure? Where was the discussion of anomalous jurisdiction?
What’s more, OpenAI seemed unable to write a closing sentence or paragraph that our exacting managing editor Tyler McBrien would allow through the editorial process. Its closers had no oomph, no pizazz; it could only summarize what it had already written when I needed it to make a statement tying this opinion in some way to the grand march of history.
So it looks like our contributors get to keep their jobs at least a little bit longer.
Should we try an A.I.-generated podcast next?