How Companies Can Help Make Police Facial Recognition Systems More Transparent

Jake Laperruque
Tuesday, September 24, 2019, 9:33 AM


Surveillance Camera (Source: PXHere)


As facial recognition becomes an increasingly common law enforcement tool, the risks it can pose are becoming increasingly clear. Recently, police in Hong Kong have been weaponizing the technology to identify protesters, who are using everything from face masks to laser pointers to try to avoid government retaliation for taking to the streets.

It may seem implausible that police in America would use facial recognition to identify and target protesters—but this has already happened. Several years ago, Baltimore police used a social media monitoring company, Geofeedia, to scan social media photos of people at a constitutionally protected protest over the death of Freddie Gray in police custody. Police used this information to find out whether any protesters had outstanding warrants and, according to the Baltimore Sun, arrested them “directly from the crowd.” Social media companies responded to this news by cutting off Geofeedia’s ability to scrape their photo data.

To the best of the public’s knowledge, law enforcement agencies have not tried to harness social media and online photo databases for facial recognition since the Geofeedia incident. But both technology companies and civil liberties advocates need to prepare for this possibility. Major companies that maintain personal photos through photo storage and sharing services—such as Facebook, Google, Apple, Amazon, Dropbox and Yahoo—should consider adding information on facial recognition to their annual transparency reports, which detail many of their interactions with the government, including how the companies responded to government demands to turn over content.

Companies should also alert the public if the government issues orders to access these photo databases to conduct facial recognition scans. Given many companies’ proactive attitudes toward increasing transparency about government surveillance demands, publishing information on facial recognition would meaningfully expand their work to protect and better inform users.

The rapidly growing use of facial recognition makes it all the more important for companies to take such a step now: Already, at least one in four police departments possesses the capability to use facial recognition surveillance, and the FBI conducts an average of 4,000 facial recognition searches every month.

Thus far, the images law enforcement proactively acquires for facial recognition surveillance have generally come from one of two sources. One is police closed-circuit TV networks. Detroit police, for example, used government-owned-and-operated cameras to build a pervasive facial recognition system through a “green light” camera network that constantly monitors huge portions of the city. A second type of system involves public-private partnerships, such as in New York City, where police access thousands of privately owned security cameras. However, it is possible that law enforcement will go beyond these sources and again seek to tap into the vast troves of photo data available on social media. Indeed, there are a variety of ways law enforcement entities could feasibly seek to use social media and online photo storage companies to augment facial recognition surveillance systems.

One potential technique for facial recognition surveillance could be based on a controversial theory of content monitoring: that there is a difference between “scanning” content and conducting searches that are subject to Fourth Amendment limits. Two prominent examples demonstrate that the government is already acting on this “scanning content is not searching content” legal theory. First, for the upstream portion of Foreign Intelligence Surveillance Act (FISA) Section 702 surveillance, the government broadly scans internet traffic but maintains (with the approval of the FISA Court) that a “search” occurs only when systems conducting the scan find content connected to a target and then subject that content to human review. Second, in 2016, news reports indicated that the government also issued a demand that Yahoo scan vast amounts of email content to find a certain unique signature and maintained that the “search” applied only to the returned emails rather than to the millions of individual emails being scanned.
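To see the mechanics behind this theory, consider a minimal sketch, in Python, of the “scan everything, return only matches” pattern. Everything here (the function name, the sample mailboxes, the signature string) is invented for illustration; it does not describe any provider’s actual system.

```python
# A minimal sketch of the "scan everything, return only matches" pattern.
# All names and data here are hypothetical illustrations.

def scan_mailboxes(mailboxes: dict[str, list[str]], signature: str) -> list[tuple[str, str]]:
    """Scan every message in every mailbox for a unique signature.

    Under the scanning-is-not-searching theory, only the returned
    matches count as the "search," even though the scanner reads
    every message it touches.
    """
    matches = []
    for user, messages in mailboxes.items():
        for message in messages:       # every message is examined ...
            if signature in message:   # ... but only hits are returned
                matches.append((user, message))
    return matches

# Three users' mail is scanned; only one message comes back.
mailboxes = {
    "alice": ["lunch tomorrow?", "see attached report"],
    "bob": ["payload SIGNATURE-1234", "weekend plans"],
    "carol": ["meeting notes"],
}
print(scan_mailboxes(mailboxes, "SIGNATURE-1234"))
```

The point of the sketch is that the scan’s footprint covers everyone’s mail, while the theory counts only the output as a search.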

The theory that scanning is distinct from searching could open the door to expansive demands to subject social media databases to government facial recognition surveillance. Let’s take an example drawn from the television show “Breaking Bad”: Imagine a scenario in which Albuquerque, New Mexico, police have a warrant to demand photos of Walter White for a narcotics investigation. They want to find out where in the city White had been on a certain day. Law enforcement might order companies like Facebook and Instagram to conduct a facial recognition scan of millions of users’ photos to return any that contain White. Officials could argue that, following the federal government’s reasoning that scanning all emails in Yahoo’s databases only counted as a “search” for one specific signature, a demand to conduct facial recognition scans on photos from everyone in an entire city is only a demand to access photos of a single suspect. In other words, Albuquerque law enforcement could look at thousands of people’s pictures, but Fourth Amendment scrutiny would apply only to the one target.
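A facial recognition version of that demand would follow the same pattern, just with face comparisons instead of a signature string. The sketch below uses the open-source face_recognition library as a stand-in for whatever proprietary matcher a company might actually run; the photo store layout, file paths, and suspect image are all hypothetical.

```python
# A hedged sketch of a city-wide facial recognition scan, using the
# open-source face_recognition library for illustration. The photo
# store structure, file paths, and suspect image are hypothetical.
import face_recognition

def scan_photo_store(photo_store: dict[str, list[str]], target_photo: str) -> list[tuple[str, str]]:
    """Compare every stored photo against one target face.

    Every user's photos are processed, but only apparent matches are
    returned, mirroring the scan-versus-search distinction described
    above.
    """
    target_image = face_recognition.load_image_file(target_photo)
    # Assumes the target photo contains exactly one detectable face.
    target_encoding = face_recognition.face_encodings(target_image)[0]

    hits = []
    for user, photo_paths in photo_store.items():
        for path in photo_paths:
            image = face_recognition.load_image_file(path)
            for encoding in face_recognition.face_encodings(image):
                # compare_faces flags faces within the library's default
                # distance tolerance of the target encoding.
                if face_recognition.compare_faces([target_encoding], encoding)[0]:
                    hits.append((user, path))
    return hits

# Hypothetical usage: millions of users' photos get scanned, but only
# photos that appear to contain the suspect come back.
# hits = scan_photo_store(all_users_photos, "suspect.jpg")
```

Note what the code makes concrete: every photo in the store must be decoded and compared before the system knows whether it is a “match,” so the scan necessarily touches everyone’s data.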

Fictional drug kingpins aside, if this technique becomes a norm for investigating minor crimes, it could become one of the most pervasive forms of surveillance in densely populated areas, effectively conscripting everyone who posts on social media into a photo surveillance system.

The government could also make similar facial recognition scanning demands to seek information on those with whom individuals are associating—again, absent suspicion for the individuals whose photos are being scanned. Under the scanning-is-not-searching theory, the government could ask companies to scan the photo databases of individuals not currently connected to an investigation and then report any facial recognition matches within those photos for existing criminal suspects. You, along with hundreds or thousands of others, might have your photos scanned just to check (without any basis to believe this is the case) whether your photos include an investigative suspect. These scans could cover not only social media profiles but also private photo databases, such as those stored by users on Google and Amazon.

Finally, law enforcement could attempt to revive the Geofeedia technique by developing new methods of scraping photos directly from public posts and subjecting them to facial recognition scans. The government might claim that so long as its methods do not violate the social media and photo storage companies’ terms of service, users have no expectation of privacy.

These potential methods of co-opting photos for government facial recognition programs would pose a range of serious risks. As has been widely reported, facial recognition is prone to misidentifications. Expanding its use in this way would lead to significant overcollection: systems would return photos not just of the actual target but also of people they misidentify as the target. Moreover, as already seen in the case of Baltimore police using Geofeedia, such unfettered access to personal photos creates the potential for improper targeting, raising both the specter of abuse and the likelihood of chilling constitutionally protected activities such as attending protests.
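The overcollection problem is easy to quantify in rough terms. The sketch below uses assumed numbers (an illustrative scan size and false positive rate, not measured figures for any real system) to show how quickly misidentifications accumulate at this scale.

```python
# Back-of-the-envelope illustration of overcollection. Both numbers
# below are assumptions for illustration, not measured figures for
# any real system.
photos_scanned = 10_000_000     # photos subject to a single scan demand
false_positive_rate = 0.001     # assume 0.1% of non-matches are flagged

misidentified = photos_scanned * false_positive_rate
print(f"{misidentified:,.0f} photos of misidentified people per scan")
# Even a 0.1% error rate over 10 million photos returns 10,000 photos
# of people who merely resemble the target.
```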

Even if most of these techniques are at this point theoretical, companies can and should begin to incorporate information about facial recognition into their annual surveillance transparency reports. All companies that offer cloud storage of personal photo libraries should disclose whether they receive orders to conduct facial recognition scans of photos or to provide the government access to photo databases for its own facial recognition scans. Additionally, social media companies for which photo sharing is a significant component of their services should describe the measures they currently take to limit scraping of photo data in ways that could be repurposed for mass government facial recognition scans.
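What might such a disclosure look like in practice? One possibility is sketched below as a simple structured record; the schema and field names are invented here for illustration, not drawn from any company’s existing report.

```python
# One possible shape for a facial recognition line item in a
# transparency report. The schema and field names are invented for
# illustration, not drawn from any company's existing report.
facial_recognition_disclosure = {
    "reporting_period": "2019-H1",
    "facial_recognition_scan_orders_received": 0,
    "orders_complied_with": 0,
    "accounts_whose_photos_were_scanned": 0,
    "anti_scraping_measures": [
        "rate limits on photo endpoints",
        "blocking of known scraper clients",
    ],
}
```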

Tech companies’ transparency reports have been valuable for improving basic knowledge about government surveillance, facilitating a better-informed debate on surveillance topics such as FISA reauthorizations, and offering reassurance to users about what protections are in place for their private communications and data. But as surveillance practices evolve, so too must these efforts. Facial recognition is now the subject of a critically important privacy debate, and companies should take a proactive approach by adding information on the matter to their transparency reports.


Jake Laperruque is Deputy Director of the Security and Surveillance Project at the Center For Democracy & Technology (CDT). His work focuses on national security surveillance, facial recognition, location privacy, and other key issues at the intersection of new technologies with privacy, civil rights, and civil liberties. Prior to joining CDT, Jake worked as Senior Counsel at the Constitution Project at the Project On Government Oversight. He also previously served as a Program Fellow at the Open Technology Institute, and a Law Clerk on the Senate Subcommittee on Privacy, Technology, and the Law. Jake is a graduate of Harvard Law School and Washington University in St. Louis.
