Overview of U.S. Lie Detection Systems for Airport Security Checkpoints

Natalie Salmanowitz
Monday, December 3, 2018, 9:50 AM

Over the next five months, travelers crossing external borders in Hungary, Latvia and Greece will have the opportunity to participate in the European Union’s latest effort to increase the security, efficiency and efficacy of its border checkpoints. The new system, “iBorderCtrl,” involves a voluntary two-step procedure. First, travelers register online, where an animated border agent asks a series of questions. As the traveler answers, an automated deception detection system measures “micro-gestures” on the traveler’s face and generates a risk assessment score. Those deemed “low risk” will face minimal security checks when they arrive at the border, while “high risk” travelers must undergo an extensive screening process—including fingerprinting and palm-vein analyses. The system then produces a new risk assessment score for “high risk” travelers, which guides the (human) border agent’s decision whether to take further action.
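To make the triage flow concrete, here is a minimal sketch in Python of the two-step procedure described above. Everything in it—the threshold, the penalties, the aggregation—is an invented placeholder, since the consortium behind iBorderCtrl has not published its actual scoring model.

```python
# Hypothetical sketch of iBorderCtrl's two-step triage. All names, thresholds
# and weights are invented for illustration; the real scoring model is not public.

HIGH_RISK_THRESHOLD = 0.5  # invented cutoff


def pre_travel_score(micro_gesture_signals: list[float]) -> float:
    """Stage 1: score the online interview from facial micro-gesture signals."""
    # Placeholder aggregation -- a real system would use a trained classifier.
    return sum(micro_gesture_signals) / len(micro_gesture_signals)


def border_score(stage1: float, fingerprint_match: bool, palm_vein_match: bool) -> float:
    """Stage 2 (high-risk travelers only): fold biometric checks into a new score."""
    score = stage1
    if not fingerprint_match:
        score += 0.25  # invented penalty
    if not palm_vein_match:
        score += 0.25
    return min(score, 1.0)


def triage(micro_gesture_signals, fingerprint_match, palm_vein_match):
    s1 = pre_travel_score(micro_gesture_signals)
    if s1 < HIGH_RISK_THRESHOLD:
        return "minimal security checks"
    s2 = border_score(s1, fingerprint_match, palm_vein_match)
    # The final score only *guides* the human border agent's decision.
    return f"refer to border agent with risk score {s2:.2f}"
```

Note the structural point the sketch makes explicit: the system classifies, but a human agent retains the final decision for "high risk" travelers.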

Though iBorderCtrl has garnered significant attention, the prospect of using lie detection to enhance border security is by no means novel. As this post will explain, the United States has developed at least four deception detection systems over the past decade. While proponents tout deception detection measures as a more objective, streamlined approach to homeland security, others, such as Gene Kosowan, view these techniques as “part of a broader trend towards using opaque, and often deficient, automated systems to judge, assess and classify people.” This post will summarize the legal debate surrounding lie detection measures in the United States and explain why a technique like iBorderCtrl—if ever implemented in the United States—may avoid the legal pitfalls of other systems.

Although psychologists have experimented with a number of general deception detection techniques, researchers and the Department of Homeland Security (DHS) have specifically designed four methods with an eye towards homeland security: Screening of Passengers by Observation Techniques (SPOT), thermal imaging cameras, Future Attribute Screening Technology (FAST), and Automated Virtual Agent for Truth Assessment in Real Time (AVATAR).

SPOT, launched in 2007, is the most low-tech of the four systems: it trains Transportation Security Administration (TSA) agents to recognize a checklist of verbal and non-verbal cues, ranging from a traveler’s furtive eye movements and breathing patterns to “excessive throat clearing.” Though SPOT is still used today, the ACLU and the Government Accountability Office have both criticized the program as “unscientific and unreliable.”

Next, thermal imaging cameras measure temperature changes on an individual’s face as he or she answers questions. At least in theory, when a person is lying, blood flow increases in specific areas of the face, which in turn raises the temperature of those regions. When a team of researchers pioneered thermal imaging for lie detection back in 2002, they explicitly hailed its potential for deception detection in the airport setting. But subsequent research has revealed mixed results. While one set of researchers reported an 87 percent accuracy rate, others reported figures under 70 percent and found that thermal imaging—when used to guide an interviewer’s judgment—is no more accurate than human judgment on its own. Although these cameras do not appear to have been implemented in U.S. airports as a stand-alone lie detection measure, the United Kingdom has tested them in British airports.

The United States has, however, publicly tested two measures that incorporate thermal imaging as just one part of a complex lie detection system. Both FAST and AVATAR use thermal imaging cameras in conjunction with remote cardiovascular and respiratory sensors, eye trackers, and motion detectors. Whereas FAST is designed to surreptitiously predict travelers’ “malintent” as they pass through an airport checkpoint, AVATAR uses kiosks that collect biometric data, conduct a short interview through a virtual agent, and generate a risk assessment score. Although the two measures are intended solely to guide a security officer’s decision to engage in further screening, FAST has been put on hold—most likely due to its low accuracy rate (70 percent) and uncanny resemblance to “Minority Report.” AVATAR, on the other hand, has enjoyed more success. The Canadian Border Services Agency and the European Union have recently tested the system, and American Airlines has plans to use the technique for certain international flights.
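For readers curious how such a multi-sensor system might turn its readings into a single score, the following sketch shows one plausible approach—a simple weighted fusion of per-sensor signals. The channel names, weights, and linear model are all assumptions for illustration; DHS has not published how FAST or AVATAR actually combine their measurements.

```python
# Illustrative only: a weighted fusion of the sensor channels listed above.
# Weights and the linear model are invented; the real fusion logic is not public.

CHANNEL_WEIGHTS = {          # invented weights, summing to 1.0
    "thermal": 0.3,
    "cardiovascular": 0.25,
    "respiratory": 0.15,
    "eye_tracking": 0.2,
    "motion": 0.1,
}


def fuse(channel_scores: dict[str, float]) -> float:
    """Combine per-sensor anomaly scores (each in [0, 1]) into one risk score."""
    return sum(CHANNEL_WEIGHTS[name] * score
               for name, score in channel_scores.items())


print(fuse({"thermal": 0.8, "cardiovascular": 0.6, "respiratory": 0.2,
            "eye_tracking": 0.4, "motion": 0.1}))  # -> 0.51
```

Even this toy version surfaces the legal questions discussed below: the weights are opaque to the traveler, and a single composite number can mask which sensor drove a “high risk” result.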

Common Legal Concerns with Lie Detection at the Border

As the technology behind deception detection systems evolves, legal scholars have raised a number of concerns. To date, legal discourse has generally focused on three doctrinal areas: the Fourth Amendment, the Fifth Amendment, and the Equal Protection Clause.

Fourth Amendment

The threshold question in the Fourth Amendment analysis is whether the lie detection measure at issue constitutes a search or seizure. Though the detection measures involved with SPOT and FAST do not in any way “seize” the individual traveler, any subsequent interrogation that occurs based on the results of the lie detection assessment may constitute a Fourth Amendment seizure if the traveler does not reasonably feel free to “terminate the encounter.” The same principle applies to AVATAR, given that travelers must stop at the kiosk to answer questions and provide biometric data. Moreover, under the framework of Katz v. United States, the Fourth Amendment applies if an individual has “exhibited an actual (subjective) expectation of privacy” that “society is prepared to recognize as ‘reasonable.’” Some scholars argue that systems like FAST are not searches under the Fourth Amendment, as eye movements and indications of stress are observable to the public and therefore cannot reasonably be considered private. Since SPOT also focuses on physiological and behavioral factors, this line of reasoning could likely apply to that technique as well.

However, FAST goes well beyond basic detection of behavioral or physiological cues, as its thermal imaging component reveals bodily signals that are not visible to the general public. For that reason, some point to the Supreme Court’s decision in Kyllo v. United States, which held that the police’s warrantless use of thermal imaging to detect heat from a marijuana-growing operation inside a person’s home was an unconstitutional search. Given the Court’s language concerning the intrusive nature of a technology that is “not in general public use” and whose mission is to detect “intimate details” of a person’s life, the analogy to Kyllo seems somewhat plausible—thermal imaging (when used in a complex system like FAST or AVATAR) is not in general public use and seeks to capture (through physical indicia) intimate details of a person’s mind, such as the intent to deceive or cause harm. Yet this analogy rests on a thin reed, as thermal imaging cameras—at least as standalone devices—are becoming increasingly popular among the general public, as evidenced by their availability on Amazon for less than $200. Even so, some scholars look entirely beyond Kyllo’s emphasis on obscure technologies and apply its reasoning to SPOT as well, noting that TSA agents use specific facial expressions as a proxy for determining “the [private] thoughts and feelings hidden behind the facial façade.”

If lie detection measures qualify as seizures and/or searches, the next question is whether they are unreasonable. In the context of airports and security checkpoints, two legal frameworks apply: consent and the “special needs” doctrine. Start with consent. Given that travelers may avoid searches and seizures in the airport by choosing not to fly, some courts have reasoned that travelers implicitly consent to screening measures by virtue of entering—and proceeding through—the security line; as long as consent is freely and voluntarily provided, security checkpoint measures are generally deemed reasonable. While the notion of implied consent likely extends to lie detection systems, some scholars and judges doubt the underlying premise of the implied consent doctrine and question whether the “choice” between submitting to screening mechanisms and declining to fly is truly a voluntary one.

Meanwhile, other courts have bypassed the consent rationale and focused primarily on the special needs doctrine, under which the government may conduct routine searches and seizures if (1) the primary motivation for establishing a checkpoint is something other than general law enforcement, (2) the screening process does not grant security officials standardless discretion, and (3) the government’s specific enforcement needs and the effectiveness of its program outweigh the individual interests at stake. Although preventing threats to homeland security is a valid purpose—not to mention a weighty interest—critics have questioned whether SPOT enables unacceptable levels of discretion and whether FAST is too inaccurate to justify the government’s special needs interest. And even though AVATAR may be more accurate than FAST and—unlike SPOT—creates specific risk assessment scores to guide security officers’ discretion, AVATAR may nevertheless falter under the special needs framework if its intensive combination of seven different screening devices (which collect about 50 different measurements) is considered too severe an invasion of personal privacy to warrant a suspicionless search.

Fifth Amendment

Depending on the specific circumstances at issue, lie detection measures may implicate the Fifth Amendment’s right against self-incrimination. To be clear, this right applies only when the government has “initiat[ed] legal proceedings” and attempted to use incriminatory evidence against the individual. In addition, the Fifth Amendment protects only testimonial, communicative evidence—not physical, identifying information.

Assuming a situation does arise in which the government initiates legal proceedings and uses lie detection evidence against an individual, there are a few points that merit discussion. First, similar to the arguments made in the Fourth Amendment context, one could point to the underlying purposes of deception detection programs like FAST and SPOT to argue that a person’s thoughts—not merely their biometric data—are the ultimate target of the screening process. While this argument may carry intuitive appeal, self-incrimination usually requires individuals to play an active role in “transferring [testimonial information] to the government.” As a result, self-incrimination arguments are inherently stronger when the deception detection mechanism compels an individual to speak—for instance, when thermal imaging is used to question a traveler about his or her intent to commit a crime. Some scholars contend that thermal imaging data is part and parcel of any words actually spoken, as the data theoretically reveals whether the statements are true or false. These concerns seem particularly amplified with respect to AVATAR; at least according to one demonstration of the technology, the system explicitly asks travelers whether they have used illegal drugs and factors their responses into the ultimate risk assessment score. Such questioning goes well beyond merely “identifying” information and may bring the Fifth Amendment into play.

Equal Protection Clause

Finally, to the extent that TSA differentially applies a deception detection measure to specific classes of travelers on the basis of race, ethnicity or gender, such a program could violate the Equal Protection Clause. But under Washington v. Davis, anyone challenging a deception detection system under the Equal Protection Clause would have to prove discriminatory intent and effect, which is often a substantial hurdle.

Even so, scholars have voiced equal protection concerns with respect to SPOT in light of the program’s vulnerability to subjective bias and the degree of discretion involved. For instance, what if the Behavior Detection Officers are only looking for micro-expressions in some travelers who pass through a given region of the airport? And are all individuals who exhibit “suspicious” micro-expressions stopped for further questioning, or just a select (and predictable) few?

Accordingly, more automated deception detection systems like AVATAR may be less susceptible to equal protection challenges given the diminished opportunity for human discretion. Yet one could still envision equal protection concerns with technologically sophisticated systems, especially if the algorithm that produces the risk assessment score is biased against particular races. There is also the question whether security officials—once armed with a high risk assessment score—use deception detection results in the same way for all travelers. Will the scores virtually control security officials' decisions? Or will other data points, such as country of origin, be an explicit (or implicit) factor in follow-up questioning? Without more data about how deception detection systems work on the ground, it is hard to predict specific equal protection challenges.

Why iBorderCtrl May Pass Muster

While there are many unknowns about the iBorderCtrl system—such as its precise level of accuracy—it may be more likely to survive legal scrutiny than its counterparts.

Start with the Fourth Amendment. At the first stage of the iBorderCtrl process, travelers answer questions from their home computer while the automated deception detection system measures their face and eye movements in real time. The system also stores an image of travelers’ faces to detect impersonation when they arrive at the border. To the extent that iBorderCtrl remains an entirely voluntary, opt-in system, any “search” is likely permissible under the Fourth Amendment, as travelers have explicitly given their consent to undergo screening. What is more, the image of one’s face—and the face and eye movements that occur when one speaks—are likely not “private” in any reasonable sense. And unlike FAST and AVATAR, iBorderCtrl does not use thermal imaging technology at either the first or second phase of the screening process; though iBorderCtrl collects fingerprint and palm vein data from “high-risk” travelers, such information may be less private or intimate than thermal imaging data since it does not reveal a person’s thoughts or feelings.

Next, iBorderCtrl likely does not implicate the Fifth Amendment at all. As opposed to AVATAR, which considers the substance of the traveler’s answers, iBorderCtrl focuses solely on micro-gestures—not the content of a person’s statements. Indeed, iBorderCtrl does not appear to analyze the traveler’s vocal responses in any form. As a result, iBorderCtrl data likely qualifies as mere “physical” evidence—unprotected by the Fifth Amendment.

Lastly, iBorderCtrl may also fare better under an equal protection analysis. As a structural matter, iBorderCtrl may be less prone to subjective misuse of risk assessment scores, as the system generates two different scores at various points in the process. Whereas a TSA agent using AVATAR has only one risk assessment score to guide his or her subsequent decisions, a security official using iBorderCtrl must recalculate scores of “high risk” travelers after performing various biometric checks and document verification processes. At least in theory, these extra steps might encourage more reasoned decision-making; when a system requires more information to justify a second high-risk score, it may be more difficult to use discretion in discriminatory ways.
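One way to picture that safeguard: the sketch below (with hypothetical field names and logic, not drawn from iBorderCtrl’s documentation) refuses to record a second “high risk” determination unless the required biometric and document checks were actually performed, leaving an auditable record rather than unreviewable discretion.

```python
# Hypothetical sketch of a two-score safeguard. Field names and validation
# logic are assumptions for illustration, not iBorderCtrl's published design.

from dataclasses import dataclass


@dataclass
class SecondStageRecord:
    stage1_score: float
    fingerprint_checked: bool
    palm_vein_checked: bool
    documents_verified: bool
    stage2_score: float


def validate(record: SecondStageRecord) -> bool:
    """Refuse a second 'high risk' score unless the extra checks were run."""
    if record.stage2_score >= record.stage1_score:
        # Escalating (or sustaining) the risk score requires that every
        # required check was performed, creating an auditable trail.
        return (record.fingerprint_checked and record.palm_vein_checked
                and record.documents_verified)
    return True
```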

***

Beyond legal questions, iBorderCtrl and other deception detection technologies raise ethical issues that have not been addressed here. But in some form or another, lie detection technologies are here to stay, and courts and legislators will increasingly be asked how the law should account for the technologies’ growing ability to detect an individual’s thoughts and intent.


Natalie Salmanowitz is a third-year student at Harvard Law School. Prior to law school, she worked as a Fellow at Stanford Law School’s Program in Neuroscience and Society. She holds a B.A. in Neuroscience from Dartmouth College and an M.A. in Bioethics and Science Policy from Duke University.
