CNAS Report on War in the Robotic Age

Matthew Waxman
Friday, January 24, 2014, 9:50 AM


Robert Work and Shawn Brimley of the Center for a New American Security just published 20YY: Preparing for War in the Robotic Age. It's a provocative report about how many new technologies (in cyber, robotics, miniaturization, etc.) will reshape warfare. I was particularly interested in some things it says about autonomous weapon systems – weapon systems that can identify and engage targets on their own – and other technologies that pose challenges for the law of armed conflict and other international norms.

In our own paper on regulating autonomous battlefield weapons, Ken and I argue against proposals for a treaty banning them. Not only do we think such a ban is dangerous and morally questionable, but we're skeptical that a global ban could work. This CNAS report, though not focused on such regulatory issues, helps explain why.

For one thing, the report explains that development of many of the enabling technologies of autonomous weapon systems – artificial intelligence and robotics, for example – is being driven by private industry for many commercial and societally beneficial purposes. These technologies are developing and proliferating rapidly, independent of military demand and investment. Moreover, the report explains that optimal future war-fighting will involve combined human-machine decision-making, but the degree and nature of the human role may diminish as technology evolves and will depend significantly on context, mission, operational environment, and so on. This sliding scale of human-machine interaction, and the varying types of human involvement in increasingly rapid tactical decision-making, mean that reaching international agreement on where to draw, and how to interpret, prohibitory lines will be extremely difficult. In practice, a ban on autonomous weapons will not be an easily policed bright-line rule.

I wish the CNAS report had more to say on policy recommendations. It also raises only very briefly other major issues of interest to Lawfare readers, such as how unmanned or autonomous systems might affect deterrence and crisis stability, especially as their uses, and responses to them, might not fit within well-established norms and patterns of state behavior. This is only the first report, though, from a multi-year CNAS project that looks to be very interesting and ambitious.

Matthew Waxman is a law professor at Columbia Law School, where he chairs the National Security Law Program. He also previously co-chaired the Cybersecurity Center at Columbia University's Data Science Institute, and he is Adjunct Senior Fellow for Law and Foreign Policy at the Council on Foreign Relations. He previously served in senior policy positions at the State Department, Defense Department, and National Security Council. After graduating from Yale Law School, he clerked for Judge Joel M. Flaum of the U.S. Court of Appeals and Supreme Court Justice David H. Souter.
