
Fighting Insider Abuse After Van Buren

Bryan Cunningham, John Grant, Chris Jay Hoofnagle
Friday, June 11, 2021, 12:53 PM

A win for civil libertarians does not mean a loss for data owners.

US Supreme Court, Washington DC (dog97209, https://flic.kr/p/A3YAfj; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)


The U.S. Supreme Court’s decision in Van Buren v. United States on June 3 was a significant victory for civil liberties groups, researchers, the defense bar and others troubled by the broad reading of the Computer Fraud and Abuse Act (CFAA) urged by the government. Writing for the 6-3 majority, Justice Amy Coney Barrett, correctly in our view, rejected the “broad” view of the CFAA. The majority refused to adopt the government’s expansive interpretation of the statute, which would have empowered private companies, simply by the way they drafted employee policies or terms of service, to criminalize “a breathtaking amount of commonplace computer activity.” The Van Buren decision established that, going forward, to violate the CFAA, a user must access data from a part of a device or network to which the user is not permitted access. This is a far higher bar than the government’s preferred reading of the CFAA, which would have criminalized “misuse” of data to which the user had authorized access, under policies dictated by the data owner.

On its face, the Van Buren ruling puts data owners who rely on the CFAA to protect against insider and outsider threats in a difficult position. But this view is accurate only if data owners rely exclusively on law to prevent abuse of their data and systems and protect their intellectual property. We argue that modern technological controls can protect these interests better than a broad interpretation of the CFAA did, all while avoiding the damage to civil liberties and open internet research that the broad interpretation had threatened.

Post-Van Buren code is indeed better than law, both to protect these vital data owner interests and to safeguard civil liberties. By embracing newly available technological controls, data owners can turn the Van Buren decision into a win-win.

In this post, we explain how tools we are familiar with from our various experiences supporting organizations trying to responsibly manage large-scale datasets could be employed to address legitimate threats—both from insiders (employees of an organization holding sensitive information) and outsiders (external hackers or others trying to steal intellectual property or otherwise improperly access information). Such tools are available now in multiple technology platforms, enabling owners of even the most sensitive datasets to manage users’ access to sensitive data; to quickly detect and block improper access; to deter unwanted uses of protected data; and, finally, to compel—or at least “nudge”—users to access systems responsibly. These tools enable institutions—from military and law enforcement agencies to local governments and private-sector companies—to protect sensitive data from misuse.

These approaches enable employers and web-based companies to protect their important interests far better than an overbroad computer hacking law. They also vastly expand the “range of the possible” technical data governance techniques that could inform future legislative efforts, such as Orin Kerr’s proposal to create a prohibition on personal uses of sensitive databases. Any consideration of new laws in this area must take into account the technologies discussed below.

More Data, More Danger

Civil libertarians may cheer the court’s pruning back of the CFAA, but this victory will be a Pyrrhic one if government and private owners of intellectual property do not find other ways to deter, detect and prevent wrongdoing. In this era of massive data and analytics capabilities, insiders and outsiders can do great damage to privacy and civil liberties if left unchecked.

One way to view the Supreme Court’s Van Buren decision is that it creates a binary “gates up or down” test for determining CFAA liability: if you legitimately gain access to a database, you can do whatever you like with the information therein without worry of violating the CFAA (though other existing laws may punish such misuse). Without appropriate controls, this is an increasingly dangerous situation, because many of today’s data systems hold vastly more, and more sensitive, data, accessed by many more users and subjected to significantly more powerful search and analysis capabilities, than the CFAA’s drafters could have contemplated. This data may be used by private-sector companies, individuals, and law enforcement and other government agencies for economic benefit, to promote health, security and safety, and for many other beneficial activities. But outside or inside threat actors who improperly access or use data in such systems can threaten privacy, undermine economic prosperity, and even damage individuals’ collective safety.

Insider access abuse in large-scale databases is real (as exemplified by Officer Van Buren) and can be difficult to deter, detect and prevent. An early 1990s case illustrates the risks of undetected insider abuse: an IRS employee, also apparently a Ku Klux Klan leader, improperly obtained the tax returns of a woman he was dating, of the district attorney who was prosecuting the employee’s father, and of political opponents, among others. While the IRS reportedly had audit trails intended to detect such abuse, most organizations rarely (if ever) proactively monitor them, relying instead on potential after-the-fact punishment.

The risks of “all-or-nothing” access are expanding as exponentially more data is collected and stored. Once logged in, users can have access to vast quantities of sensitive data, with the CFAA “unauthorized access” approach post-Van Buren now too narrow to provide meaningful protection, if such a post hoc criminal punishment regime ever provided such protection.

Fortunately, advances in data governance technology offer a better outcome.

An Advanced Technical Infrastructure

For information technology security managers, the increasing scale and complexity of organizational datasets have rendered reasonable access control and other data security measures impossible to implement without modern technological controls. Currently available in-system tools provide users with clear and persistent notice of data-handling rules, control data at a granular, point-by-point level, enable dynamic human-driven decision-making, “nudge” or compel appropriate user activities, and deter malfeasance through aggressive oversight.

Armed with this technology, organizations can manage risk far better than through reliance on threats of criminal prosecution for the rare cases in which bad user activity is detected. Organizations—from intelligence and law enforcement agencies to health care and financial service providers to high-tech manufacturing and transportation companies—are successfully using such tools today. Given the increasing demands of more robust data protection laws, from the California Consumer Privacy Act to the EU General Data Protection Regulation to the recently updated New Zealand Privacy Act, this may mark a time when organizational compliance with applicable laws and regulations is impossible without some or all of the technical capabilities described in this section.

Notice

A baseline principle of effective data governance must be that human data owners and users have primary responsibility for determining when and how data should be used. Human judgment will remain necessary even with the most advanced technologies. Thus, system users must be provided with effective notice of the processes and procedures governing the use of data within that system. Traditionally, such notice has meant some combination of reviewing and acknowledging voluminous documentation and enduring (often self-guided) training sessions, exercises that too often become little more than bureaucratic box-checking.

Today’s technology brings this notice and user education to users immediately, frequently offering context-specific guidance for managing individual data decisions. Beyond notice and acknowledgment screen banners upon login, users can receive specific, just-in-time notifications in response to certain user activities within the system. These notifications will remind users that certain specific rules must be considered before taking these actions and will ask them to electronically acknowledge their understanding of the requirements and that they have authority and a valid purpose for the data access or use they are about to make. For example, a user attempting to print or email a specific document might be forced to acknowledge a reminder detailing the criteria that must be met before such data transfer can occur. Such granular, in-system notifications not only nudge users in the right direction and promote awareness of data use issues but also provide an auditable record that users were specifically aware of data-handling rules at the time they performed a specific action with data, providing both deterrence and a clear record if future legal action is required.
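To make this concrete, here is a minimal sketch, in Python, of what such a just-in-time notice gate might look like. The rule text, action names and audit format are our own illustrative assumptions, not any particular vendor’s implementation:

```python
# A minimal sketch of a just-in-time notice gate (all names and the
# log format are illustrative assumptions, not a real system's API).
import datetime
import json

RULES = {
    "print_ci_record": "This record contains confidential informant data. "
                       "Printing is permitted only for an active, assigned case.",
}

AUDIT_TRAIL = []  # stand-in for an append-only audit log


def gated_action(user, action, acknowledged, justification):
    """Allow a sensitive action only after the governing rule has been
    shown and acknowledged and a justification has been recorded."""
    rule = RULES.get(action)
    if rule is None:
        return True                                # no special notice needed
    if not acknowledged or not justification.strip():
        return False                               # block the action
    AUDIT_TRAIL.append(json.dumps({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "rule_shown": rule,
        "justification": justification,
    }))
    return True


print(gated_action("vanburen", "print_ci_record", True, "case 22-104 discovery"))
# True, with a signed acknowledgment now sitting in the audit trail
```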

Such notice capabilities would have required Van Buren to face several “gut check” moments as he pursued his quest. Upon attempting to access data from the sensitive confidential informants system, he could have faced one or more pop-up notifications and/or persistent screen banners clearly indicating the sensitive nature of the information he sought. He would have been unable to claim that ignorance or confusion caused him to make a mistake. A prompt requiring him to provide a valid justification for his search not only would have served as a further reminder of the valid uses of the data he sought (if any) but also, had he provided a false justification, would have given management grounds for later consequences against him.

Precision

Until recently—and still with some aging technologies—data was most commonly managed at the “system” level. That is, each authorized user was given access to an entire network of databases or, at the very least, the entirety of a single database. This approach is akin to giving every employee a master key to a building, enabling them to unlock and open every wing, every door, and every locker, file cabinet, drawer and folder in every room. Now, however, data systems can manage access to, and use of, information on a data-point-by-data-point basis, including metadata (information describing the data itself, such as when it was created or last modified). This is obviously a much more secure and privacy-protective architecture.

Granular access control allows data managers to selectively disclose data to authorized users while limiting the exposure of extraneous information. Users of a system can now access some information in a record while other, nonrelevant, data points remain obscured. Nonidentifying data can be searched, compiled, and included in statistical calculations without disclosing identifying or sensitive information contained in a larger record. Certain types of data can be classified (with access concomitantly limited) at a higher level than less sensitive information. In addition, temporal access controls allow data managers to grant temporary access to data, automatically revoking privileges at the end of a set period of time. Purpose-based access controls match user responsibilities and legitimate data needs with datasets appropriate to their duties. Context-based access controls add a further layer of control, granting access to users based on the device, network, or other electronic indicia of identity and authorization.
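A minimal sketch of how field-level, purpose-limited and time-limited access might fit together follows; the field labels, grant table and purposes are hypothetical, chosen only to echo the confidential informant example discussed next:

```python
# A minimal sketch of field-level, purpose- and time-limited access.
# Field labels, grants and purposes are hypothetical.
import datetime

RECORD = {
    "name":      {"value": "J. Doe",               "label": "general"},
    "address":   {"value": "12 Main St",           "label": "general"},
    "ci_status": {"value": "registered informant", "label": "ci_restricted"},
}

# (user, field label) -> (purposes allowed, access expiry date)
GRANTS = {
    ("vanburen", "general"): ({"patrol", "investigation"},
                              datetime.date(2030, 1, 1)),
    # No grant for ("vanburen", "ci_restricted"): those fields stay hidden.
}


def visible_fields(user, purpose, today=None):
    """Return only the fields this user may see, for this purpose, today."""
    today = today or datetime.date.today()
    view = {}
    for field, entry in RECORD.items():
        grant = GRANTS.get((user, entry["label"]))
        if grant and purpose in grant[0] and today <= grant[1]:
            view[field] = entry["value"]
        else:
            view[field] = "<redacted>"
    return view


print(visible_fields("vanburen", "investigation"))
# {'name': 'J. Doe', 'address': '12 Main St', 'ci_status': '<redacted>'}
```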

Such an advanced access control system would have given Georgia police more options for managing information about their confidential informants (CIs), making it significantly more difficult for Van Buren to compromise data about them. Some law enforcement organizations operate dedicated CI systems, which are ostensibly more secure but, of course, enable the simple inference that the presence of a name in the database means the individual is a CI. Compare this to a system that maintains records on individuals for any number of purposes but can lock down specific sensitive data fields identifying a specific person as a CI. Van Buren might have found the individual he was seeking but, without access to protected fields, could not have “fingered” someone as a CI.

Dynamic Access and Use Control

Critics of rigid electronic data governance capabilities often point out that many factors might require real-time changes in how data may be accessed and used. An inflexible “on-off” electronic access control regime might keep critical information out of the hands of users who need it, particularly in exigent or unusual circumstances. Modern technologies now facilitate dynamic—even real-time—data management, with in-system review and approval of user activities. Data stewards can review individual data access or use requests as they are made, immediately allowing or denying access and recording an auditable explanation, all with the data safely ensconced in its original location. This also enhances individual accountability.

Such dynamic, real-time approaches are more secure, more privacy protective, and more efficient than older mechanisms for data access approval, such as exchanges of email outside the data system. They also enable a more layered data control regime in which governance decisions are allocated across the organization, with different departments, supervisors, and compliance teams controlling different types of data access and use, rather than creating data management bottlenecks at a single privacy officer. In some cases, certain access and use decisions could even be reviewed and audited by outside oversight bodies.
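As a rough illustration, an in-system request-and-approval flow might look something like the following sketch; the request queue, roles and decision fields are assumptions for exposition, not a description of any deployed workflow:

```python
# A minimal sketch of in-system, auditable access requests; the queue,
# roles and decision fields are assumptions for exposition.
import datetime
import uuid

REQUESTS = {}  # request id -> request record; doubles as the audit trail


def request_access(user, record_id, justification):
    """An officer asks for access, documenting the asserted need."""
    req_id = str(uuid.uuid4())
    REQUESTS[req_id] = {
        "user": user, "record": record_id, "justification": justification,
        "submitted": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "status": "pending", "decided_by": None, "reason": None,
    }
    return req_id


def decide(req_id, approver, approve, reason):
    """A data steward allows or denies, with a recorded explanation."""
    REQUESTS[req_id].update(
        status="approved" if approve else "denied",
        decided_by=approver, reason=reason,
        decided=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    )
    return REQUESTS[req_id]


rid = request_access("vanburen", "ci/4821", "Plate check for case 22-104")
decide(rid, "sgt_supervisor", approve=False,
       reason="No such case assigned to requesting officer")
print(REQUESTS[rid]["status"])  # denied, with the whole exchange logged
```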

Such a system would have allowed the Georgia police to ensure that an Officer Van Buren committed to the lawful execution of his job could still function effectively, subject to the other data controls we have described. Had Van Buren had a legitimate need for CI data, he could have articulated that need within the data system and been required to document his asserted basis for access. Superior officers could have granted or denied the request within minutes, and the entire exchange would have been conducted within the secure virtual walls of the data system rather than over more vulnerable channels like phone, email or interoffice memo.

Deterrence and Accountability

Enhanced governance capabilities enable an increased level of user activity oversight, accountability and deterrence. Traditional data management usually relies on pre-access training, whereby system users are provided with a torrent of rules and regulations backed by the threat of penalties that may be meted out sometime after misuse of the platform has led to some external harm. Modern systems can and should be built to meet the auditing requirements of applicable information security and privacy laws. Organizations willing to commit the resources to build and operate such an oversight infrastructure can implement effective oversight regimes that quickly identify user malfeasance or, better yet, prevent it entirely.

User knowledge of the existence of such oversight, backed by the threat of penalties (up to and including termination and/or criminal prosecution), can actively deter misuse of the system. And, to the extent that a future legal regime retains the concept of “exceeding authorized access,” such a system could provide conclusive evidence that bad actors knew they were exceeding their access.

Sensitive records, like those of the confidential informant accessed by Van Buren, could easily be flagged in the system so that an automated alert to a supervisor or system privacy/security officer is generated whenever the record is accessed. Regular training and in-system pop-up messages would remind users that these systems are proactively monitored, and more than likely would have deterred Van Buren from even attempting the search.
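A sketch of such access flagging, with hypothetical record tags and a stand-in alert channel, might be as simple as:

```python
# A minimal sketch of proactive access flagging; the tag set and the
# alert channel are stand-ins for a real monitoring pipeline.
FLAGGED = {"ci/4821"}   # records tagged for supervisor notification
ALERTS = []             # stand-in for an email or pager alert


def read_record(user, record_id):
    """Serve the record, but alert a supervisor if it is flagged."""
    if record_id in FLAGGED:
        ALERTS.append(f"ALERT: {user} accessed flagged record {record_id}")
    return f"<contents of {record_id}>"


read_record("vanburen", "ci/4821")
print(ALERTS)  # the access surfaces in real time, not months later
```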

Compellence and “Nudges”

Many data governance decisions do—and should—require accountable human intervention to determine reasonableness, necessity and exigent circumstances. Modern technologies also can compel, or at least nudge, better user decisions in real time. Current technologies enable administrators to block entirely certain types of data access, analysis and management tools. Further, where absolute limits on access to functionality are impractical, thoughtful user interface design can nudge users toward better behaviors. For example, a law enforcement system could be configured to allow a user to electronically identify someone as a suspect or gang member only when a requisite number of fields have been completed, thus ensuring that people are not subjected to potentially damaging conclusions without a critical mass of supporting information.

Van Buren’s access to sensitive records could have been made contingent upon the entry of a valid case number, the name of an approving supervisor, a sufficient justification, or other supporting materials, thereby providing specific justification for his access. If Van Buren had been unable to complete the required fields, he would have been denied access to sensitive data. If he had falsified his justifications, either the system could have caught him (such as if he used a made-up case number) or his knowledge and culpability could have been more easily proved after the fact.
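A minimal sketch of such compelled-field validation, with a hypothetical case-number format and a stand-in case-management lookup, might look like this:

```python
# A minimal sketch of compelled-field validation; the case-number
# format and the open-case list are hypothetical.
import re

OPEN_CASES = {"22-104", "22-311"}  # stand-in for a case-management lookup


def validate_search(case_number, approving_supervisor, justification):
    """Return the reasons a search request must be refused, if any."""
    errors = []
    if not re.fullmatch(r"\d{2}-\d{3}", case_number):
        errors.append("case number malformed")
    elif case_number not in OPEN_CASES:
        errors.append("case number not found in case-management system")
    if not approving_supervisor:
        errors.append("approving supervisor required")
    if len(justification.split()) < 5:
        errors.append("justification too brief")
    return errors


print(validate_search("99-999", "", "plate check"))
# ['case number not found in case-management system',
#  'approving supervisor required', 'justification too brief']
```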

Of course, not all unauthorized or improper actions can be prevented with technology, and effective data governance requires an investment of personnel and resources to use this newly available functionality and ensure enforcement of penalties against those who, notwithstanding these controls, manage to violate applicable law, policies or procedures.

Conclusion

The Supreme Court’s decision in Van Buren requires lawyers and policymakers to revisit the roles that law, policy and technology play in deterring, detecting, preventing and punishing misuse of sensitive data. Free societies must find ways to effectively harness increasingly large datasets, and advanced powers to analyze and use such data, to enhance people’s security and improve their lives while protecting individual privacy, intellectual property and other key interests. Lawyers and legislators need to fully appreciate the role that nudges, requirements for purpose justification, and management checks on users can play, not just in detecting and deterring abuse but also in supporting future legal action, whether under current laws or future ones.

As policymakers reconsider insider abuse challenges, they should be knowledgeable about emerging privacy and civil liberties-protective technologies and require their use whenever reasonable. The dueling “parades of horribles” posed by Van Buren’s majority and dissent can be addressed through such technology. Employee personal use of a computer system is routinely controlled through blocking websites, locking down the games folder or allowing supervisor review of employee activity to deter misuse. Requiring users to designate a purpose for their activity within the system (such as providing an in-system, audited reason for a search query or for opening a particular analysis tool) would address Justice Clarence Thomas’s hypothetical of a law enforcement officer logging on to a system in the morning for an authorized use and then later conducting an unauthorized search during the same session. Thomas’s credit card company employee looking up his ex-wife’s records could be deterred or prevented through an automated system that scans audit logs for employee searches of prohibited individuals (such as current and former family members, celebrities and more) and then flags such searches to supervisors for review.
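A sketch of that last idea, scanning an audit log against a per-employee prohibited list, follows; the employee, names and log format are entirely hypothetical:

```python
# A minimal sketch of scanning audit logs for prohibited lookups; the
# employee, names and log format are entirely hypothetical.
PROHIBITED = {"jsmith": {"a. smith", "b. smith"}}  # e.g., relatives

AUDIT_LOG = [
    {"user": "jsmith", "subject": "A. Smith"},  # lookup of his ex-wife
    {"user": "jsmith", "subject": "C. Jones"},  # ordinary customer lookup
]


def flag_personal_lookups(log):
    """Return (employee, subject) pairs needing supervisor review."""
    return [(e["user"], e["subject"]) for e in log
            if e["subject"].lower() in PROHIBITED.get(e["user"], set())]


print(flag_personal_lookups(AUDIT_LOG))  # [('jsmith', 'A. Smith')]
```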

While no single approach will prevent all possible misuse, concerns that flow from either an overbroad or an overly narrow interpretation of the CFAA can be addressed through technological audit and access controls or through other civil and criminal enforcement options without—quite literally—making a federal case out of it.

No institution should be able to adopt powerful systems without also investing in available technological controls. The financial and technological burden of implementing such capabilities should rest with data and system holders, the institutions that benefit most directly from data access and use that can, if mismanaged, threaten individual liberties.

All three authors are longtime advisers to or employees of Palantir Technologies but write this post in their personal capacities.


Bryan Cunningham, practicing law at Zweiback, Fiset, & Coleman, LLP, is an international expert in privacy and data protection law, cyber security, trade secrets, employee monitoring, and government surveillance issues. Bryan developed this unique practice through extensive experience in senior U.S. Government intelligence and law enforcement positions. He served as Deputy Legal Adviser to then-National Security Advisor Condoleezza Rice. He also served six years in the Clinton Administration, as a senior CIA officer and federal prosecutor. He drafted significant portions of the Homeland Security Act and related legislation, helping to shepherd them through Congress. He was a principal contributor to the National Strategy to Secure Cyberspace, worked closely with the 9/11 Commission and has provided legal advice to Presidents, National Security Advisors, the National Security Council, and other senior government officials on intelligence, terrorism, cyber security and other related matters. Bryan’s practice has included assisting Fortune 500 and multinational companies to comply with complex, and often conflicting, data protection requirements under U.S. federal law, U.S. state laws, and the numerous specific requirements in the European Union and other overseas jurisdictions. He also has counseled start-ups and other companies in general matters and created and directs a privacy advisory committee for a multi-billion dollar company. As the principal author of legal and ethics chapters in authoritative cyber security textbooks, Bryan is also a frequent media commentator on privacy, cyber security, electronic surveillance, intelligence, and other national security issues. He has appeared on CNN, Bloomberg, ABC, Fox, CNBC, NPR, and PBS, and has been published in numerous national newspapers and overseas publications. Bryan is the current Executive Director of the University of California, Irvine’s Cybersecurity Policy & Research Institute. He was founding vice-chair of the American Bar Association Cyber Security Privacy Task Force and was awarded the National Intelligence Medal of Achievement for his work on information issues. He has served on the National Academy of Sciences Committee on Biodefense Analysis, the Markle Foundation Task Force on National Security in the Information Age, and the Bipartisan Policy Center Cyber Security Task Force.
John Grant joined Palantir Technologies in September 2010 as the company’s first Civil Liberties Engineer. Previously, John served for nearly a decade as an advisor in the United States Senate. He began his career in the Senate as an aide to Senator Peter Fitzgerald before joining the staff of former presidential candidate and member of the Senate Republican leadership, Senator Lamar Alexander. While working for Senator Alexander on issues ranging from the federal budget to homeland security, John attended law school at Georgetown University. He earned his law degree shortly after joining the staff of the Senate Homeland Security and Governmental Affairs Committee. As a Civil Liberties Engineer at Palantir, John has worked with customers all over the world, helping them to develop data protection practices that make the best use of Palantir’s privacy and civil liberties protective capabilities. John is a co-author of “The Architecture of Privacy” (O’Reilly, 2015) and co-authored a chapter, “A Marketplace for Privacy: Incentives for Privacy Engineering and Innovation,” in The Cambridge Handbook of Consumer Privacy (Cambridge UP, 2018). John is currently working with the Palantir Learning and Development team on Palantir’s Ethics Education Program.
Chris Jay Hoofnagle is Professor of Law in Residence at the University of California, Berkeley, School of Law, where he teaches cybersecurity, programming for lawyers, and torts. He is affiliated faculty with the Simons Institute for the Theory of Computing, an adjunct professor in the School of Information, and a faculty director of the Berkeley Center for Law & Technology. Hoofnagle’s new book, “Law and Policy for the Quantum Age” (with Simson Garfinkel), is forthcoming in 2021 from Cambridge University Press, which also published his first book, “Federal Trade Commission Privacy Law and Policy” (2016). He is an elected member of the American Law Institute. Hoofnagle is of counsel to Gunderson Dettmer LLP and serves on boards for Constella Intelligence and Palantir Technologies.
