The Cyberlaw Podcast: Are AI Models Learning to Generalize?
Published by The Lawfare Institute
We begin this episode with Paul Rosenzweig describing major progress in teaching AI models to perform text-to-speech conversion. Amazon flagged its new model as having “emergent” capabilities in handling what had been serious problems – things like speaking with emotion or conveying foreign phrases. The key is the size of the training set, and Amazon was able to spot the point at which more data led to unexpected skills. This leads Paul and me to speculate that training AI models to perform certain tasks eventually leads the model to learn “generalization” of its skills. If so, the more we train AI on a variety of tasks – chat, text to speech, text to video, and the like – the better AI will get at learning new tasks, as generalization becomes part of its core skill set. It’s lawyers holding forth on the frontiers of technology, so take it with a grain of salt.
Cristin Flynn Goodwin and Paul Stephan join Paul Rosenzweig to provide an update on Volt Typhoon, the Chinese APT that is littering Western networks with the equivalent of logical land mines. Actually, it’s not so much an update on Volt Typhoon, which seems to be aggressively pursuing its strategy, as on the hyperventilating Western reaction to Volt Typhoon. There’s no doubt that China is playing with fire, and that the United States and other cyber powers should be liberally sowing similar weapons in Chinese networks. But the public measures adopted by the West do not seem likely to effectively defeat or deter China’s strategy.
The group is less impressed by the New York Times’ claim that China is pursuing a dangerous electoral influence campaign on U.S. social media platforms. The Russians do it better, Paul Stephan says, and even they don’t do it well, I argue.
Paul Rosenzweig reviews the House China Committee report alleging a link between U.S. venture capital firms and Chinese human rights abuses. We agree that Silicon Valley VCs have paid too little attention to how their investments could undermine the system on which their billions rest, a state of affairs not likely to last much longer.
Paul Stephan and Cristin bring us up to date on U.S. efforts to disrupt Chinese and Russian hacking operations.
We will be eagerly awaiting resolution of the European fight over Facebook’s subscription fee and websites’ broader move to “Pay or Consent” privacy terms. I predict that Eurocrats’ hypocrisy will be tested by an effort to rule for elite European media sites, which already embrace “Pay or Consent,” while ruling against Facebook. Paul Rosenzweig is confident that European hypocrisy is up to the task.
Cristin and I explore the latest White House enthusiasm for software security liability. Paul Stephan explains the flap over a UN cybercrime treaty, which is and should be stalled in Turtle Bay for the next decade or more.
Cristin also covers a detailed new Google TAG report on commercial spyware.
And in quick hits,
House Republicans tried and failed to find common ground on renewal of FISA Section 702
I recommend Goody-2, the “World’s Most Responsible” AI Chatbot
Dechert has settled a wealthy businessman’s lawsuit claiming that the law firm hacked his computer
Imran Khan is using AI to make impressively realistic speeches about his performance in Pakistani elections
The Kids Online Safety Act secured sixty votes in the U.S. Senate, but whether the House will act on the bill remains to be seen