The Cyberlaw Podcast: When AI Poses an Existential Risk to Your Law License

Stewart Baker
Wednesday, May 31, 2023, 2:57 PM

Published by The Lawfare Institute
in Cooperation With
Brookings

This episode of the Cyberlaw Podcast features the second half of my interview with Paul Stephan, author of The World Crisis and International Law. But it begins the way many recent episodes have begun, with the latest AI news. And, since it’s so squarely in scope for a cyberlaw podcast, we devote some time to the so-appalling-you-have-to-laugh-to-keep-from-crying story of the lawyer who relied on ChatGPT to write his brief. As Eugene Volokh noted in his post, the model returned exactly the case law the lawyer wanted—because it made up the cases, the citations, and even the quotes. The lawyer said he had no idea that AI would do such a thing. I cast a skeptical eye on that excuse, since when challenged by the court to produce the cases he relied on, the lawyer turned not to Lexis-Nexis or Westlaw but to ChatGPT, which this time made up eight cases on point. And when the lawyer asked, “Are the other cases you provided fake?” the model denied it. Well, all right then. Who among us has not asked Westlaw, “Are the cases you provided fake?” Somehow, I can’t help suspecting that the lawyer’s claim to be an innocent victim of ChatGPT is going to get a closer look before this story ends. So if you’re wondering whether AI poses existential risk, the answer for at least one lawyer’s license is almost certainly “yes.”

But the bigger story of the week was the cries from Google and Microsoft leadership for government regulation. Jeffery Atik and Richard Stiennon weigh in. Microsoft’s President Brad Smith has, as usual, written a thoughtful policy paper on what AI regulation might look like. And they point out that, as usual, Smith is advocating for a process that Microsoft could master pretty easily. Google’s Sundar Pichai also joins the “regulate me” party, but a bit half-heartedly. I argue that the best way to judge Silicon Valley’s confidence in the accuracy of AI is by asking when Google and Apple will be willing to use AI to identify photos of gorillas as gorillas. Because if there’s anything close to an extinction event for those companies, it would be rolling out an AI that once again fails to differentiate between people and apes.

Moving from policy to tech, Richard and I talk about Google’s integration of AI into search; I see some glimmer of explainability and accuracy in Google’s willingness to provide citations (real ones, I presume) for its answers. And on the same topic, the National Academy of Sciences has posted research suggesting that explainability might not be quite as impossible as researchers once thought.

Jeffery takes us through the latest chapters in the U.S.-China decoupling story. China has retaliated, surprisingly weakly, for U.S. moves to cut off high-end chip sales to China. It has banned sales of U.S.-based Micron’s memory chips to critical infrastructure companies. In the long run, the chip wars may be the disaster that Nvidia’s CEO foresees. Jeffery and I agree that Nvidia has much to fear from a Chinese effort to build a national champion to compete in AI chipmaking. Meanwhile, the Biden administration is building a new model for international agreements in an age of decoupling and industrial policy. Whether its effort to build a China-free IT supply chain will succeed is an open question, but we agree that it marks an end to the old free-trade agreements rejected by both former President Trump and President Biden.

China, meanwhile, is overplaying its hand in Africa. Richard notes reports that Chinese hackers attacked the Kenyan government when Kenya looked as though it wouldn’t be able to repay China’s infrastructure loans. As Richard points out, lending money to a friend rarely works out: you are likely to lose both the friend and the money.

Finally, Richard and Jeffery both opine on Ireland’s imposition, under protest, of a $1.3 billion fine on Facebook for sending data to the United States despite the Court of Justice of the European Union’s (CJEU) two Schrems decisions. We agree that the order simply sets a deadline for the U.S. and the EU to close their deal on a third effort to satisfy the CJEU that U.S. law is “adequate” to protect the rights of Europeans. Speaking of which, anyone who’s enjoyed my rants about the EU will want to tune in for a June 15 Teleforum in which Max Schrems and I will debate the latest privacy framework. If we can, we’ll release it as a bonus episode of this podcast, but listening live should be even more fun!

Download 459th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.


Stewart A. Baker is a partner in the Washington office of Steptoe & Johnson LLP. He returned to the firm following 3½ years at the Department of Homeland Security as its first Assistant Secretary for Policy. He earlier served as general counsel of the National Security Agency.
