Challenging the Machine: Insights from a Workshop on Contestability of Advanced Automated Systems

Susan Landau, Jim Dempsey
Friday, June 21, 2024, 9:33 AM
Not all AI is inscrutable; as governments adopt advanced technologies, design choices can ensure protection of individuals’ due process rights.
Artificial Intelligence & Machine Learning (mikemacmarketing, https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_%26_AI_%26_Machine_Learning.jpg, CC BY 2.0)

In October 2023, President Biden issued an executive order on the safe and responsible development and use of artificial intelligence (AI). As we noted in Lawfare in early March:

When AI or other automated processes are used to make decisions about individuals—to grant or deny veterans’ benefits, to calculate a disabled person’s medical needs, or to target enforcement efforts—they directly implicate the principles of fairness and accountability called for in the executive order. But contestability—a person’s right to know why a government decision is being made and the opportunity to challenge the decision, tailored to the capacities and circumstances of those who are to be heard—is not merely a best practice. Across a wide range of government decision-making, contestability is required under the Due Process Clause of the Constitution. 

But how does contestability work in practice when the government uses advanced automated decision-making technologies such as AI? The executive order raised but did not resolve this question, and it remains largely unresolved today. To advance understanding of contestability in the face of technological change, in January we convened a workshop (along with Ece Kamar of Microsoft and Steven M. Bellovin of Columbia University) that brought together technologists, representatives of government agencies and civil society organizations, litigators, and researchers, ensuring that a diverse set of voices was heard.

In early March, building on the workshop discussions and other literature, we published our recommendations. Later that month, the Office of Management and Budget (OMB) issued policy direction to federal agencies on managing the risks posed by their use of AI systems. We were pleased to see that a number of the workshop's recommendations appear there, though of course we were probably not the only source of these ideas.

We have now completed a summary of the January workshop. Though it is not a comprehensive review of all current work on contestability and advanced automated decision-making systems, the report should nonetheless be valuable to policymakers, scholars, and educators.

Three insights from the workshop stand out. First is the need to consider the human factor. When agencies turn to automated technologies to improve the processing of claims for government support, the officials overseeing the design of those systems must consider the technology from the perspective of the individuals affected. An error-prone system can have a devastating impact: missed rent payments and the prospect of eviction; inability to afford vital medication; reduction in the home care services that support a disabled person in the basic activities of eating, dressing, bathing, and using the toilet. Unexplainable algorithms and opaque appeal processes can overwhelm people already living on the margins. Quite literally, designing advanced decision-making systems to be understandable and contestable can prevent human suffering.

A second key point is that the risks of advanced technologies include their use as a substantial input to a process, even when the machine does not make the final decision. We are not aware of any federal agency that is currently using or planning to use AI or other advanced techniques to autonomously make final decisions about individuals. As far as we were able to determine, under currently contemplated use cases, a human would always make the final decision. But that may be a distinction without a difference. Suppose, for example, that an automated system assigns a credit score of 580 to a person applying for a federal loan guarantee. If the programmatic rule the loan officer must apply says that loans can be approved only for persons with scores above 580, then when the human says, "Credit score of 580: loan denied," the decision has effectively been made by the algorithm. Thus, our recommendations for contestability-by-design apply not only to decision-making technologies but also to decision-support technologies.
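To make the point concrete, here is a minimal sketch of our own, entirely hypothetical: the field names, scoring function, and 580 threshold are illustrative assumptions, not any agency's actual system. It pairs a "decision-support" score with a binding programmatic rule:

    # Hypothetical decision-support pipeline: an automated score plus a
    # hard threshold rule that the human reviewer is required to apply.
    MINIMUM_SCORE = 580  # rule: approve only if the score is strictly above this

    def automated_credit_score(applicant: dict) -> int:
        """Stand-in for an opaque model that emits a single number."""
        # A real system might be a complex ML model; the reviewer sees
        # only its output, not its reasoning.
        return applicant["model_score"]

    def human_review(applicant: dict) -> str:
        score = automated_credit_score(applicant)
        # The human "decides," but the rule leaves no discretion: the
        # outcome is fully determined by the algorithm's output.
        if score > MINIMUM_SCORE:
            return "approved"
        return "denied"

    print(human_review({"model_score": 580}))  # prints: denied

Run on a score of exactly 580, the "human" review prints "denied"; the reviewer has ratified the algorithm's determination rather than made an independent judgment.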

A third key point to emerge from the workshop discussion is that it is in fact possible to design systems that use the most advanced technology while still being understandable and contestable. System developers face a clear choice between techniques that are understandable and those that are not. At all levels, government officials responsible for programs that make or support decisions about individuals should understand the technology and insist that contractors choose approaches that build in contestability from the outset. Experts at our workshop agreed it can be done. Doing so will require technical expertise at the contracting agency as well as at the contractor, and agencies should develop in-house expertise to ensure that rights-impacting advanced automated systems are contestable.
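One illustrative pattern, a sketch of our own rather than a design endorsed by the workshop or required by OMB (the program, fields, and thresholds below are invented), is to have every automated determination carry the inputs and the specific rules that produced it:

    from dataclasses import dataclass, field

    @dataclass
    class Decision:
        outcome: str
        reasons: list[str] = field(default_factory=list)  # human-readable basis
        inputs_used: dict = field(default_factory=dict)   # data the person can dispute

    def assess_care_hours(record: dict) -> Decision:
        """Hypothetical benefits rule; fields and thresholds are made up."""
        reasons = []
        hours = 40  # baseline weekly care hours
        if record["mobility_score"] < 3:
            hours += 10
            reasons.append("mobility_score below 3: +10 weekly care hours")
        if record["lives_alone"]:
            hours += 5
            reasons.append("lives alone: +5 weekly care hours")
        return Decision(outcome=f"{hours} weekly care hours",
                        reasons=reasons, inputs_used=dict(record))

    d = assess_care_hours({"mobility_score": 2, "lives_alone": True})
    print(d.outcome)      # 55 weekly care hours
    for r in d.reasons:   # the grounds a person can verify or contest
        print("-", r)

Because the decision record names the specific inputs and rules behind the outcome, a claimant or advocate knows exactly which facts to dispute on appeal; the same pattern holds when the scoring component is a more complex model whose output is routed through documented rules.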

The OMB’s March 2024 guidance memo addressed some of these issues, but ensuring contestability will require additional effort. Following the OMB memo, multiple agencies issued guidelines for entities under their purview on the development and use of AI. For example, the Department of Health and Human Services issued guidance for state, local, tribal, and territorial governments on the responsible use of AI in automated and algorithmic systems for public benefit administration. The U.S. Department of Agriculture issued similar guidance for the 16 federal nutrition programs it manages, which serve populations ranging from infants and children to the elderly. The secretary of labor published guidance for federal contractors on nondiscrimination when using AI and other technology-based hiring systems. And the Department of Housing and Urban Development issued guidance on complying with the Fair Housing Act, the Fair Credit Reporting Act, and other relevant federal laws when using tenant screening systems, which rely on data, such as criminal records, eviction records, and credit information, that can lead to discriminatory outcomes.

Further careful analysis of how contestability is being implemented under these guidelines will be needed, along with monitoring of the AI inventories that agencies are required to publish under the executive order and the OMB memo. Given the ubiquity of AI and other advanced technologies, government agencies, corporations, advocates, and academics in multiple disciplines need to work together to ensure the protection of affected individuals’ safety, security, and rights.


Susan Landau is Bridge Professor at The Fletcher School and the Tufts School of Engineering, Department of Computer Science, Tufts University, and is the founding director of the Tufts MS program in Cybersecurity and Public Policy. Landau has testified before Congress and briefed U.S. and European policymakers on encryption, surveillance, and cybersecurity issues.
Jim Dempsey is a lecturer at the UC Berkeley Law School and a senior policy advisor at the Stanford Program on Geopolitics, Technology and Governance. From 2012 to 2017, he served as a member of the Privacy and Civil Liberties Oversight Board. He is the co-author of Cybersecurity Law Fundamentals (IAPP, 2024).
