
A Method for Establishing Liability for Data Breaches

Herb Lin
Tuesday, June 18, 2019, 11:27 AM

Last month, the First American Financial Corporation—which provides title insurance for millions of Americans—acknowledged a cybersecurity vulnerability that potentially exposed 885 million private financial records related to mortgage deals to unauthorized viewers. These records might have revealed bank account numbers and statements, mortgage and tax records, Social Security numbers, wire transaction receipts, and driver’s license images to such viewers. If history is any guide, not much will happen and companies holding sensitive personal information on individuals will have little incentive to improve their cybersecurity postures. Congress needs to act to provide such incentives.

The story is all too familiar, as news reports of data breaches exposing the personal information of tens of millions, or even a hundred million, Americans have become routine. A company (or a government agency) pays insufficient attention to cybersecurity despite warnings that its security measures are inadequate, and so it fails to prevent a breach that proper attention to those warnings could have averted. In the aftermath of such incidents, errant companies are required by law to notify the individuals whose personal information has been potentially compromised. Frequently, these companies also offer affected individuals free credit monitoring services for a year or two.

What happens afterward? The companies incur some expenses in the notification of the breach and in providing credit monitoring for the fraction of individuals who sign up, and they usually approve spending money to take additional cybersecurity measures. Their insurance rates may increase as well. However, those whose sensitive personal information was compromised are still left out in the cold. Companies responsible for breaches often assert that the mere compromise of personal information is not an injury—the individual must suffer actual loss, financial or otherwise, for the compromise to be considered an injury. Indeed, in a January 2019 motion to dismiss a consumer lawsuit against Equifax for a massive 2017 data breach, the company made exactly this claim.

This argument is, on its face, ridiculous. An individual who receives a letter saying that his or her sensitive personal information has been compromised does not jump for joy—worry and concern are far more likely reactions. And such concerns are likely to be magnified when he or she is informed that credit monitoring may be necessary to forestall any harm that might occur as a result of the compromise. Worry and concern are real harms, even if intangible ones.

Still, the law has difficulty assigning dollar values to intangible harms. One frequently useful technique for ascertaining the value of a quantity about which little is known is to start with the extremes and see if it is possible to narrow the range within which that value is likely to be found. This technique does not yield precise values, but having a range within which the quantity’s value is found is better than assigning a value of zero to that quantity because “nothing is known.”

In the case at hand, we have established that the worried individual suffers a harm that an unworried individual does not. If the worry is caused by the data breach, then the party whose inadequate security led to the breach has some responsibility for that harm. I propose that the appropriate valuation of such harm be assessed by a determination of how much a reasonable individual of reasonable means would pay not to have to worry about the consequences of a data breach.

How should that amount be estimated? Given that it is more than zero, what might it be? A penny or a dime seems very low. Pennies and dimes are worth little enough that people often don’t bother to bend over to pick them up on the street, but if picking up a coin off the street would forestall worry over identity theft, most people would probably be happy to do so. How about a cup of coffee? That is, would people forgo one cup of coffee, which might cost a few dollars, to avoid worrying? Almost certainly, so a few dollars also seems a bit low.

Now let’s consider the high end. I suggest that for most people a few hundred dollars is a large amount of money; that is, most people would think twice before spending a few hundred dollars. Some might well pay it, but most of those affected will not actually suffer identity theft, and for them a few hundred dollars might well be too much to pay to relieve themselves of worry over a hypothetical harm. The same reasoning applies with even greater force to an amount of a few thousand dollars.

According to this analysis, the value of not worrying is probably higher than a few dollars and lower than a few hundred dollars. Thus, the analysis suggests that an amount of a few tens of dollars is about right. I’ll peg it at $30, and if someone wants to argue for $10 or $50, I won’t argue very much.
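To make the interval-narrowing explicit, here is a minimal sketch assuming the bounds suggested above (a few dollars on the low end, a few hundred dollars on the high end). It uses the geometric mean, one conventional way to pick a midpoint between order-of-magnitude bounds; this is an illustration of the reasoning, not the article’s own calculation, and the specific bound values are assumptions.

```python
import math

# Illustrative sketch only: bound values are taken from the article's
# reasoning ("a few dollars" low, "a few hundred dollars" high),
# not from any formal valuation methodology.
lower_bound = 3.0    # roughly the price of a cup of coffee
upper_bound = 300.0  # an amount most people would think twice about spending

# The geometric mean is one common way to choose a midpoint between
# order-of-magnitude bounds; here it lands at $30.
midpoint = math.sqrt(lower_bound * upper_bound)
print(f"Estimated per-person value of not worrying: ${midpoint:.0f}")
```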

In the future, appropriate legislation could require that with every data breach notification letter sent to individuals, the responsible company include in the envelope a $30 check payable to the addressee; as a quid pro quo, the legislation could also explicitly rule out class-action lawsuits. Multiplying $30 per person by a hundred million people implies billion-dollar penalties, a threat that would give companies handling sensitive personal information a real and substantial incentive to pay better attention to cybersecurity.
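To put that scale in concrete terms, the arithmetic can be sketched as below. The breach sizes are illustrative figures drawn from this piece (a round hundred million people, and First American’s 885 million exposed records, which count records rather than distinct individuals), not precise tallies of affected persons.

```python
# Rough sketch of the aggregate penalty implied by a $30-per-letter check.
# Breach sizes are illustrative; the First American figure counts exposed
# records, not necessarily distinct affected individuals.
PER_PERSON_PAYMENT = 30  # dollars per breach-notification letter

illustrative_breaches = {
    "hundred-million-person breach": 100_000_000,
    "First American (885M exposed records)": 885_000_000,
}

for name, count in illustrative_breaches.items():
    total = count * PER_PERSON_PAYMENT
    print(f"{name}: ${total:,}")
```

At a hundred million recipients, a $30 check per letter works out to $3 billion.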


Dr. Herb Lin is senior research scholar for cyber policy and security at the Center for International Security and Cooperation and Hank J. Holland Fellow in Cyber Policy and Security at the Hoover Institution, both at Stanford University. His research interests relate broadly to policy-related dimensions of cybersecurity and cyberspace, and he is particularly interested in and knowledgeable about the use of offensive operations in cyberspace, especially as instruments of national policy. In addition to his positions at Stanford University, he is Chief Scientist, Emeritus for the Computer Science and Telecommunications Board, National Research Council (NRC) of the National Academies, where he served from 1990 through 2014 as study director of major projects on public policy and information technology, and Adjunct Senior Research Scholar and Senior Fellow in Cybersecurity (not in residence) at the Saltzman Institute for War and Peace Studies in the School for International and Public Affairs at Columbia University. Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990), where his portfolio included defense policy and arms control issues. He received his doctorate in physics from MIT.
