The Lawfare Podcast: Jim Dempsey and Jonathan Spring on Adversarial Machine Learning and Cybersecurity
Published by The Lawfare Institute
Risks associated with the rapid development and deployment of artificial intelligence are getting the attention of lawmakers. But one issue that may not be getting adequate attention from policymakers, or from the AI research and cybersecurity communities, is the vulnerability of many AI-based systems to adversarial attack. A new Stanford and Georgetown report, “Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications,” offers a stark reminder that the security risks of AI-based systems are real and recommends actions that developers and policymakers can take to address them.
Lawfare Senior Editor Stephanie Pell sat down with two of the report’s authors: Jim Dempsey, Senior Policy Advisor for the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, and Jonathan Spring, Cybersecurity Specialist at the Cybersecurity and Infrastructure Security Agency (CISA). They talked about how AI-based systems are vulnerable to attack, the similarities and differences between vulnerabilities in AI-based systems and traditional software vulnerabilities, and how some of the challenges of AI security may be as much social as they are technological.