The AI Bill of Rights Makes Uneven Progress on Algorithmic Protections
The Biden administration issues a clarion call for algorithmic justice but misses some key early opportunities.
The White House has released the Blueprint for an AI Bill of Rights—which is likely the signature document reflecting the Biden administration’s approach to algorithmic regulation. Paired with a series of agency actions, the Biden administration is working to address many high-priority algorithmic harms—such as those in financial services, health care provisioning, hiring, and more. There is clear and demonstrated progress in implementing a sectorally specific approach to artificial intelligence (AI) regulation. The progress being made, however, is uneven. Important issues in educational access and worker surveillance, as well as most uses of AI in law enforcement, have received insufficient attention. Further, despite its focus on AI research and AI commerce, the White House has yet to effectively coordinate and facilitate AI regulation.
So what is the Blueprint for an AI Bill of Rights? In late 2020, the Trump administration released its final guidance on regulating AI. In response, I argued that the document did not consider a “broad contextualization of AI harms.” Under the Biden administration, the United States is no longer lacking in this respect.
Developed by the White House Office of Science and Technology Policy (OSTP), the Blueprint for an AI Bill of Rights (AIBoR) is foremost a detailed exposition on the civil rights harms of AI. It is focused primarily on AI’s proliferation in human services, including hiring, education, health care provisioning, financial services access, commercial surveillance, and more. It is not meant to be universal AI guidance, and it gives relatively short shrift to other uses of AI, such as in critical infrastructure, most consumer products, and online information ecosystems.
The AIBoR includes a well-reasoned and relatively concise statement of just five principles, along with a longer technical companion offering guidance on implementing them. The statement first calls for “safe and effective” AI systems, in response to a broad overestimation of AI’s actual capabilities that has led to widespread failures in research and application. Its insistence on “notice and explanation” is also important: it ensures that individuals are aware when they are interacting with an AI system and are therefore better able to identify and address possible errors. The third principle, “algorithmic discrimination protections,” is strongly worded, calling for proactive equity assessments of algorithms and ongoing disparity mitigation. These are well-founded AI principles, and some form of them appears in essentially every AI ethics statement.
The inclusion of data privacy, the fourth principle, is slightly less common. But it is welcome, as data collection practices are inextricably linked to algorithmic harms. It specifically advocates for data minimization and clarity in users’ choices related to the use of their personal data. The last principle, “human alternatives, consideration, and fallback,” encourages the availability of a human reviewer who can override algorithmic decisions.
Overall, these are perfectly fine principles for the design and use of AI systems in the United States, and the AIBoR extensively justifies the need for their broad adoption. But, because they are nonbinding, the degree to which the AIBoR will culminate in substantial changes to these systems is largely dependent on the actions of federal agencies.
Criticism of these principles as “toothless” misses the forest for this particular tree. OSTP’s work was never going to have teeth. The real and lasting regulatory and enforcement work of these principles is happening, and will continue to happen, first and foremost in federal agencies. The sum of federal agency action is quite significant and has grown since I last reviewed these efforts in February. Collectively, the agencies are working on many, though not all, of the highest priority algorithmic harms.
Highlights of the agency actions include:
- The Federal Trade Commission’s proposed rulemaking on unfair or deceptive practices in commercial surveillance, as well as its orders for algorithmic deletion in response to illegal data practices.
- The Equal Employment Opportunity Commission’s technical guidance on improving the market of AI hiring software for people with disabilities.
- The Consumer Financial Protection Bureau’s assertion that the Equal Credit Opportunity Act requires companies to offer a simple explanation if they deny credit access, even if that denial was issued by an AI system.
- A series of efforts from the Department of Health and Human Services aiming to combat racial bias in health care, starting with a systematic review, to be followed by principles for health care provisioning algorithms and possibly regulatory action through Medicare policy.
- A new initiative from the Department of Education on AI in educational technology, with a first report and recommendations expected to come in early 2023.
- A multiagency effort led by the Department of Housing and Urban Development (HUD) on addressing inequity in property valuation, including the significant role of AI.
That’s commercial surveillance, hiring, credit, health care provisioning, education technology, and property valuation. The AIBoR also mentions workstreams on tenant screening, veterans’ data, and illegal surveillance of labor organizing. This is significant progress, and future AI regulatory challenges can build on the expertise and capacity that agencies are developing now. Of course, this list is not without flaws. There are some noticeable absences, especially in educational access, workplace surveillance, and, disconcertingly, law enforcement.
Notably, there is no mention of the algorithms that determine the cost of higher education for many students. Generally, the Department of Education appears a bit behind—its first project on algorithms in teaching and learning will likely not be delivered until 2023. At the White House launch event, Secretary of Education Miguel Cardona was less able to clearly articulate the risks of AI in education, and had less concrete work to announce, than his peers from Health and Human Services, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission.
Aside from the Federal Trade Commission, federal agencies have also largely failed to directly address AI surveillance issues. The AIBoR notes that “continuous surveillance and monitoring should not be used in education, work, housing,” and that these systems can lead to mental health harms. Yet there is no obvious associated effort from federal agencies to follow through on this issue. On employee surveillance, the Department of Labor’s only project relates to surveillance of workers attempting to organize labor unions, and there is no mention of the Occupational Safety and Health Administration, which could be issuing guidance on worker surveillance tools, especially their health impacts and their use in home offices.
Most noticeable, however, is the near total absence of regulation of, or even introspection about, federal law enforcement’s extensive use of AI: There is no highlighted development of standards or best practices for AI tools in that field, nor did any representative from law enforcement speak at the document’s launch event. And, glaringly, the AIBoR opens with a disclaimer that says its nonbinding principles are especially nonbinding to law enforcement. This certainly does not present an encouraging picture. One is left to doubt that federal law enforcement will take steps to curtail unapproved use of facial recognition or set limits on other AI uses, such as affective computing, without mandated direction from leadership in the White House or federal agencies.
In announcing the AIBoR, the White House has revealed a continued commitment to an AI regulatory approach that is sectorally specific, tailored to individual sectors such as health, labor, and education. This is a conscious choice, and the resulting process stands at odds with issuing direct and binding centralized guidance—which is why there isn’t any. There are advantages to a sectorally specific (or even application-specific) approach, despite its being more incremental than a comprehensive one.
In a sectorally and application-specific approach, agencies are able to perform focused analysis on the use of an algorithm, appropriately framed within its broader societal context. The Action Plan to Advance Property Appraisal and Valuation Equity (PAVE) is a great example. Originating from an interagency collaboration led by HUD, the PAVE action plan tackles inequitable property assessment, which undermines the wealth of Black and Latino/Latinx families. As part of this broader problem, the PAVE plan calls for regulation on automated valuation models, which is a type of AI system known to produce larger appraisal and valuation errors in predominantly Black neighborhoods. Critically, the PAVE plan recognizes that the use of these algorithmic systems is a part, but not the whole, of the underlying policy challenge, as is generally the case.
Agencies can also be better incentivized to address sector-specific AI issues: They might be more deeply motivated to address the issues that they choose to work on, especially if they are responding to calls from engaged and valued stakeholders. Before the PAVE action plan, advocacy organizations such as the National Fair Housing Alliance called on HUD to address property appraisal inequity and specifically called for more attention to algorithmic practices. In general, I expect more effective policy from agencies that choose their own AI priorities than from those responding to a top-down mandate.
Further, by tackling one problem at a time, agencies can gradually build capacity to address these issues. For example, by hiring data scientists and technologists, agencies can increase their ability to learn from, and consequently address, a more diverse range of AI applications. This process may help agencies learn iteratively, rather than implementing sweeping guidance about AI systems they do not yet fully understand. Application-specific regulation enables an agency to tailor its intervention to the specifics of a problem, more precisely considering the statistical methods and development process of a category of algorithmic systems.
Comparatively, the European Union’s (EU) AI Act is attempting to write relatively consistent rules for many different types and applications of algorithms—from medical devices and elevators to hiring systems and mortgage approval—all at once. The many ongoing debates and intense negotiations have demonstrated how challenging this is. It is helpful to consider that an algorithm is essentially the process by which a computer makes a decision, and that algorithms can be used to make, or help make, functionally any decision (even though they often should not be). This framing reveals how tremendously challenging it is to write universal rules for making any decision. Further, when the EU’s broad and systemic legislation is passed, many regulators and standards bodies in the EU may find themselves suddenly handed the enormous task of creating AI oversight for an entire sector, rather than building up that capacity gradually.
Of course, the United States’ incremental and application-specific approach has clear drawbacks too, which are especially apparent in the aforementioned applications that warrant immediate attention, but have so far received none. Some of these, perhaps especially law enforcement, may need more than a polite suggestion from OSTP. Generally, it can be forgiven that some AI rules are currently missing, so long as the federal government is receptive to adjusting its focus over time. The decades-long proliferation of algorithms into more and more services will continue for many years to come. This ongoing algorithmic creep means that no matter what targeted regulations are implemented now, agencies will have to continually tune and expand their algorithmic governance to keep pace with the market.
If the majority of the algorithmic oversight and enforcement initiative is to come from federal agencies, the White House should act as a central coordinator and facilitator. It can help smooth out the unevenness between agencies by increasing knowledge-sharing efforts, identifying common challenges across agencies, and placing political pressure on lax agencies that are reluctant to implement change. The AIBoR is a first step in this direction, noting the broad set of challenges that affect various agencies and suggesting action on a wide range of AI issues. It also contains an impressive collection of examples of how governments at the local, state, and federal levels have started to address different algorithmic harms—potentially providing a template, or at least ideas, for how others can proceed.
The White House, however, missed two opportunities for more concrete agency action on AI governance, and the AIBoR does not clearly articulate a plan for a central coordinating role to aid agencies as they move forward with these regulations.
First, the Biden administration could have better executed an inventory of government AI applications. In its closing days, the Trump administration issued Executive Order 13960, requiring all civilian federal agencies to catalog their nonclassified uses of AI. Twenty months later, the results of the federal catalogs are disappointing. The Federal Chief Information Officers (CIO) Council was tasked with developing guidance for the inventory but only required answers to three questions: department, AI system name, and description. Almost every federal department decided to meet that bare minimum requirement, leaving much essential information unknown: Where did the data originate? What is the outcome variable? Is there an opt-out procedure? Are the AI models developed by external contractors, as an estimated 33 percent of government AI systems are, or by the agency itself?
While the CIO Council has released a draft version of an algorithmic impact assessment (which is certainly a useful starting point), there has been no public reporting akin to model cards, the widely accepted algorithmic transparency standard in the private sector. Nor has the government produced a bespoke data standard for documenting AI models, as the U.K. has done. This is a significant shortfall in public disclosure around public-sector AI use, the realm in which the federal government has the most direct control. The lack of progress here is concerning, and it makes it more difficult to trust that the AI Bill of Rights will lead to higher standards on government AI use, as it claims it will and as Executive Order 13960 calls for.
Second, the Biden administration did not enforce guidance from the Office of Management and Budget (OMB) that was published in the last days of the Trump administration. Based on a 2019 executive order, the December 2020 OMB directive asked agencies to document how their current regulatory authorities might interact with AI. Many agencies did not respond at all, including the Department of Education, the Department of Transportation, HUD, the Department of Labor, the Department of Justice, the Department of Agriculture, and the Department of the Interior. Other responses were functionally useless. The Environmental Protection Agency’s response, for example, suggests that it has no relevant regulatory authority and no planned regulatory activity, despite having regulated air quality models since 1978. The Department of Energy functionally offered a nonresponse, suggesting that it “has no information,” despite regulatory authority over energy conservation in appliances, industrial equipment, and buildings, an area increasingly enabled by AI.
This was a missed opportunity to collect broad information on how agencies were considering the impact of AI use in their sectors. The Department of Health and Human Services provided the only meaningful response, extensively documenting the agency’s authority over AI systems (through 12 different statutes), its active information collections (for example, on AI for genomic sequencing), and the emerging AI use cases of interest (mostly in illness detection). The thoroughness of the agency’s response shows how valuable this endeavor could be, and the Biden administration should consider resuscitating it.
These two shortfalls stem from a failure to follow through on two Trump administration guidance documents, both issued just before the presidential transition. Some leeway is called for, however, as the Biden administration was greeted by understaffed agencies and a raging pandemic. Still, both are worthwhile endeavors and worth revisiting.
It is not clear what coordinating role the White House envisions for itself in the future implementation of the AIBoR, which, after all, is just a blueprint. While the White House could still take a stronger, more organizational role in the future, the AIBoR would have benefited from a list of actionable next steps for OSTP or the White House at large.
Perhaps most crucially, this could include documenting shared barriers and structural limitations that prevent agencies from meaningfully governing algorithms. Depending on the agency and circumstances, these could include challenges in hiring data scientists and technologists, for which the AIBoR could have pointed to the new data scientist hiring process developed by the U.S. Digital Service. Alternatively, agencies looking to provide oversight may be limited in their data access or information-gathering capacities, which can be a critical limitation in evaluating corporate algorithms. Now or in the future, agencies may also struggle to build secure technical infrastructure for regulatory data science. It is not clear which of these challenges are shared or systemic—finding out, coordinating knowledge sharing between agencies, and elevating the intractable issues to the attention of the public and Congress should be a future goal of the AIBoR. In all likelihood, some of this work is ongoing, but there is little indication of it in the published AIBoR.
AI regulation will remain a key issue for years to come, and the White House should give it the same attention and dedication it has directed toward AI research and AI commerce—which have a dedicated task force and an external advisory committee, respectively. Given the extensive algorithmic harms that the AIBoR has documented so thoroughly, a similar initiative for AI regulation would surely benefit American civil rights.