
The Facial Recognition Act: A Promising Path to Put Guardrails on a Dangerously Unregulated Surveillance Technology

Jake Laperruque
Tuesday, November 1, 2022, 8:16 AM

There’s consensus in Congress that facial recognition needs to be reined in, but not nearly enough action to bring about effective rules. A new bill could jump-start the debate and move the nation toward the comprehensive set of limits that are needed.

U.S. Customs and Border Protection officer assists a passenger at a Biometric Facial Recognition station prior to boarding a flight at Houston International Airport in 2018. (Official photo by U.S. Customs and Border Protection)

Published by The Lawfare Institute
in Cooperation With
Brookings

On Sept. 28, Rep. Ted Lieu (D-Calif.)—along with Reps. Sheila Jackson Lee (D-Texas), Yvette Clarke (D-N.Y.), and Jimmy Gomez (D-Calif.)—introduced the Facial Recognition Act, a bill that sets broad limits on law enforcement use of facial recognition surveillance. The legislation would enact a set of policies that effectively address both the risks that arise when facial recognition technology doesn’t work well—including wrongful arrests and algorithmic bias—and those that arise when its success opens the door to pervasive surveillance and abuse.

Many members of Congress have raised significant concerns about facial recognition technology. Despite these conversations, Congress has failed to enact any measures to put guardrails on the technology, leaving it largely unregulated. Such inaction is ominous given how rapidly facial recognition surveillance is being adopted by police departments across the country (for example, Clearview AI’s facial recognition systems are now used by over 3,000 police departments, roughly one in every six across the country). Also, congressional legislation is the only way to limit the use of the technology by federal agencies such as the FBI, U.S. Immigration and Customs Enforcement, and U.S. Customs and Border Protection.

While several other bills on facial recognition have been introduced, the Facial Recognition Act is the first thoughtful and detailed articulation of what a regulatory regime for state and federal law enforcement use of facial recognition technology could look like. The bill should jump-start a much-needed discussion in Congress about how to regulate the technology, in the hopes of prompting passage of effective limits into law.

Until an effective regulatory regime is in place, the government could implement a moratorium on law enforcement use of facial recognition. My organization, the Center for Democracy and Technology, has long advocated for this policy. Members of Congress have also discussed legislation that would implement a moratorium—most notably the 2021 Facial Recognition and Biometric Technology Moratorium Act, which would impose a moratorium on all federal government use of facial recognition and other biometric surveillance. Pausing law enforcement use of facial recognition while policymakers assess reasonable rules would be far better than the status quo, where debate over limits occurs as the technology is deployed in unrestricted and unsafe ways.

These circumstances are certainly not ideal, but that is no reason to avoid the discussion of what regulations and limits are needed. The Facial Recognition Act will hopefully be the first step in fostering serious engagement by Congress and pressing it to pass meaningful limits into law.

Elements of the Facial Recognition Act

The bill limits law enforcement use of facial recognition technology in the following ways:

The Warrant Rule: Under the bill, any law enforcement facial recognition scan would have to be authorized by a judge-issued warrant, based on a determination of probable cause that the individual to be scanned and identified has committed, or is committing, a serious violent crime. This critical measure would help prevent dragnet surveillance and abuse, such as using facial recognition to identify protesters. And because probable cause is focused on the individual to be scanned, it shouldn’t be an onerous obstacle for noncontroversial uses like identifying someone photographed during the commission of a homicide.

The warrant rule includes sensible but limited exceptions for identifying deceased or incapacitated individuals, crime victims, and missing children. It also includes an exception that generally allows law enforcement to use facial recognition in identifications during postarrest booking, a situation where the technology poses fewer risks to civil liberties since it occurs after an arrest rather than contributing to one. It has a limited emergency exception, too: If law enforcement uses facial recognition pursuant to the emergency exception, it must show a judge within 12 hours why immediate use was necessary to prevent “immediate danger of death or serious physical injury.” If the judge does not agree that the situation constituted such an emergency, all information obtained from the scan must be deleted.

The Serious Violent Crime Limit: Absent the exceptions described above, the bill limits law enforcement use of facial recognition to investigating serious violent felonies, as defined by 18 U.S.C. § 3559(c)(2)(F). That definition enumerates a set of serious offenses—such as murder, kidnapping, assault, rape, robbery, carjacking, and arson—and includes any other offense that carries a maximum sentence of at least 10 years and that either has the use of physical force against another as an element or “by its nature, involves a substantial risk that physical force against the person of another may be used.”

The class of crimes for which facial recognition could be used is narrower than the class of crimes for which wiretaps can be authorized. The Wiretap Act established the long-standing precedent that the government’s most invasive surveillance powers should be limited to investigating serious offenses, but its current list of predicate offenses—including those relating to narcotics and counterfeiting—is broader than the “serious violent felony” definition used in the Facial Recognition Act. Law enforcement will likely object to this restriction on when it can use the technology. However, law enforcement defenses of facial recognition typically focus on the technology’s use in serious cases such as homicides—a focus that is difficult to square with an objection to restricting its use in investigating misdemeanors and nonviolent offenses.

Limiting use of facial recognition to cases involving serious crimes is key to preventing pervasive use of the technology to investigate minor offenses (as occurs in China), as well as to stop selective, punitive uses (such as use by Baltimore police to identify protesters they could arrest for unrelated offenses).

Notice to Defendants: The bill requires that any individual who is arrested in an investigation involving facial recognition technology be notified about the details of how it was used. Specifically, individuals must be given details about what photos law enforcement scanned (which are commonly referred to as probe images), any modifications made to these photos, the database of reference photos used, the algorithm used and a report on its accuracy, and the full set of results from the scan including other potential matches. 

Law enforcement often hides its use of facial recognition technology in its investigations and prosecutions. This is a massive problem that denies individuals their due process rights and undermines a key accountability tool for preventing sloppy and irresponsible use of the technology. Notably, the bill requires notice to any individual who is arrested, a more effective measure than if the bill had required notice only to individuals charged or put on trial. This rule will make it more difficult for law enforcement to pressure a person accused of a crime to accept a plea bargain before that person is aware that the investigation hinged on a facial recognition scan that could be unreliable and subject to challenge. The bill also requires notice to defendants whenever evidence is derived from facial recognition, and defines that term broadly enough to ensure that notice will be effective and that the use of the technology will not be hidden from the defendant or the court.

Cannot Be Sole Basis for Arrest: The bill also prohibits a facial recognition match from being the sole basis for an arrest, or the sole basis for establishing probable cause for police actions such as a search. This is a commonsense measure. Facial recognition matches are far too unreliable—and too dependent on factors such as photo quality, no matter how accurate the algorithm—to be the sole basis for an arrest. That practice can lead to wrongful arrests, which have serious consequences even if later remedied. Erroneous facial recognition matches have resulted in innocent individuals such as Robert Williams, Michael Oliver, and Nijeer Parks being improperly arrested and jailed, with long-lasting harmful and traumatic effects on their lives and those of their families.

No Untargeted Scans: The bill bans the practice of untargeted facial recognition, in which, instead of attempting to identify a single, targeted individual, law enforcement scans everyone in a crowd or sweeps across a video feed for mass identification. (The bill defines this practice as face surveillance.) Pilots of such systems in the United Kingdom had disastrous results: Error rates of identification in London and Wales were over 90 percent. Meanwhile, in China, where untargeted facial recognition seems to work more effectively, it is deployed as a tool for pervasive, dragnet surveillance and oppression. Chinese authorities use untargeted scans to monitor the daily movements, activities, and interactions of the oppressed Uighur minority in the Xinjiang region on an unprecedented scale, and to scan and track individuals en masse in large cities.

No Use for Immigration Enforcement: The bill prohibits any use of facial recognition for immigration enforcement, where it is already deployed for sweeping identifications. This would not fully prohibit use by the Department of Homeland Security and its component agencies, which could still use the technology in cases such as the investigation of terrorist plots. Given that the current immigration system can be cruel in its enforcement and does not appear up to the task of carefully scrutinizing the technology as needed, this restriction seems justified.

This rule would remove facial recognition from U.S. Customs and Border Protection’s biometric entry-exit program, which has drawn criticism from CDT and many other civil liberties advocates. The current biometric entry-exit program uses facial recognition to identify individuals against flight manifests (or sometimes manifests of all individuals going through an airport on a given day) when flying into or out of the United States. Because the bill includes verification in its definition of facial recognition, even a one-to-one matching system—where, for example, individuals boarding a plane could be matched against their passport photo—would be prohibited for biometric entry-exit. Such one-to-one matching to verify identity is generally more accurate and poses fewer threats to civil liberties than one-to-many matching programs. Thus, the bill’s inclusion of one-to-one face verification may be an unnecessarily restrictive measure. However, even if this broad prohibition were applied, other biometric options that U.S. Customs and Border Protection has not tried—such as fingerprint scans—could still be used in the biometric entry-exit program. 

No Use of Improperly Obtained Photos: The bill prohibits using any systems or photo databases that are built on “illegitimately obtained information,” such as data that was obtained illegally, in violation of a contract, or in violation of a terms-of-service agreement. This concept was borrowed from the Fourth Amendment Is Not for Sale Act, which bars the government from buying data it would normally need a court order to obtain, or from using illegitimately obtained information. The fact that “illegitimately obtained information” includes violating terms of service is especially notable: It would bar law enforcement from using Clearview AI—a facial recognition platform that notoriously scrapes billions of photos from social media without user consent, and in violation of websites’ terms of use—or any other vendors that build their database in a similar manner.

Testing and Accuracy: The bill requires that the National Institute of Standards and Technology (NIST) test any and all law enforcement facial recognition use proposals prior to their deployment, including continued examination “as it is used in the field.” This is a critical detail, given the impact of photo quality on accuracy. The bill states that law enforcement may not use any system that fails to achieve a “sufficiently high level of accuracy” (a standard that NIST is tasked with assessing)—in terms of either overall accuracy or variance in accuracy based on race, ethnicity, gender, or age (a significant problem for many algorithms). Including such a standard is wise. But it would be even more effective to bar outright any facial recognition algorithm that displays statistically significant variance based on demographic factors: If a variation in test results is identified at a statistically significant level (and thus cannot be written off as a fluke), the system will likely produce different accuracy results for different demographic groups in real-world settings. If facial recognition systems display algorithmic bias at all, they should not be in the hands of law enforcement.

Enforcement: Any use of facial recognition that violates any provision of the bill would result in full suppression of the match results, as well as any information derived from them. This type of broad suppression rule is crucial to ensuring law enforcement compliance and preventing improper uses of the technology from being exploited. The bill also includes a provision for administrative discipline for improper use of the technology and provides a civil right of action for individuals impacted by uses of facial recognition that violate the bill, with damages of at least $50,000 per violation.

Application to States and Localities, and Non-Preemption: The bill creates an incentive for state and local law enforcement to enact similar rules by cutting 15 percent of Justice Assistance Grant (JAG) funding to any department that does not comply with its rules. JAG is a key federal grant program that provides nearly $200 million annually to state and local law enforcement to hire personnel, purchase equipment, and provide other assistance. It’s difficult to determine whether this threat would be sufficient to compel police departments to comply. By comparison, Rep. Pramila Jayapal (D-Wash.)’s 2021 Facial Recognition and Biometric Technology Moratorium Act would remove all JAG funds from departments that do not comply with the bill’s moratorium on facial recognition. A partial cut to JAG funds does, however, make the bill more likely to survive a federalism challenge that could otherwise strike down the effort to push states and cities toward responsible facial recognition policies.

The bill also includes a non-preemption clause, which ensures that it will not hamper any stronger state or local laws limiting facial recognition, such as the bans many cities—such as Boston, Portland, and San Francisco—have passed on law enforcement use of the technology. This is an important measure—if the residents of a city don’t want law enforcement to use facial recognition there, a federal bill meant to set guardrails shouldn’t end up forcing the technology on them. 

The Facial Recognition Bill in Today’s Political Landscape

In the past several years, members of Congress have repeatedly denounced facial recognition surveillance and the unregulated, Wild West landscape in which it exists. Yet despite the notable level of bipartisan consensus that the technology needs to be reined in, there has been little momentum in moving legislation forward at the federal level.

At the same time, in the past several years, states have been increasingly active on the issue, with a dozen states enacting new rules. Virtually all of these measures (or a stronger version of them) are included in the Facial Recognition Act: States have limited use of facial recognition to serious crimes, required that notice of its use be given to defendants, prohibited facial recognition matches from being the sole basis for arrests, and created testing rules. While no state has yet enacted a warrant rule, two (Massachusetts and Maine) have gotten part of the way: Massachusetts makes scans contingent on a court order (but not at a probable cause standard), and Maine requires probable cause (but does not require verification via a court order). The most comprehensive bill under consideration—a new proposal in Massachusetts that is backed by the state ACLU and has cleared the State House—would make it the first state to condition use of facial recognition on law enforcement obtaining a warrant based on a judicial finding of probable cause (the bill also includes a range of other strong policies).

These state laws and bills demonstrate momentum for enacting a strong set of safeguards on use of facial recognition. The Facial Recognition Act seeks to take that growing momentum and build on it to set forth even stronger and more comprehensive limits nationwide.


Jake Laperruque is Deputy Director of the Security and Surveillance Project at the Center For Democracy & Technology (CDT). His work focuses on national security surveillance, facial recognition, location privacy, and other key issues at the intersection of new technologies with privacy, civil rights, and civil liberties. Prior to joining CDT, Jake worked as Senior Counsel at the Constitution Project at the Project On Government Oversight. He also previously served as a Program Fellow at the Open Technology Institute, and a Law Clerk on the Senate Subcommittee on Privacy, Technology, and the Law. Jake is a graduate of Harvard Law School and Washington University in St. Louis.
