
Civil Liability for End-to-End Encryption: Threat or Fantasy? Part I

Zoe Bedell, Benjamin Wittes
Tuesday, July 21, 2015, 7:07 PM

Last week, one of us noted Senator Sheldon Whitehouse’s question to Deputy Attorney General Sally Yates asking whether the manufacturers of encrypted devices might be liable civilly if FBI Director James Comey’s “going-dark” warnings were to come true and public safety were to be harmed as a result. If law enforcement can’t decrypt and monitor communications, and that inability contributes to terrorist attacks or crimes, might device makers be liable for the damages suffered by the victims of those attacks or crimes, Whitehouse asked?

Yates didn’t know the answer; indeed, she said the Justice Department had not yet undertaken an analysis of the question, the answer to which is not immediately obvious. Given the possible power of civil liability to push manufacturers into addressing law enforcement’s concerns, we thought, as a public service, we would do some preliminary research on the possible exposure manufacturers and service providers might have under current law if they do not retain some ability to assist law enforcement and intelligence agencies by providing decrypted communications in response to lawful investigative requests. We thought we would look as well at what sort of tweaks Congress would need to make if it wished—as Whitehouse contemplated—to hold the threat of liability over the companies’ heads. The answer turns out to be complicated, and we will treat it in a series of posts as we try to think through various possible theories of liability.

To be clear, we are not endorsing any of these theories either for adoption by the courts or for congressional imposition. Rather, thinking through liability can be a useful way of thinking through how society wants to allocate risk. And one way of thinking about the regulation (or lack thereof) of end-to-end encryption is to ask who, if anyone, should pay when things go horribly wrong.

For purposes of these posts, we are going to focus on Apple, for a number of distinct reasons. First, it is the only one of the major companies that is both a provider of encrypted hardware and a provider of a fully encrypted messaging app and video-chat app. In other words, Comey’s nightmare can already take place using Apple products alone. That is not true of Facebook (which is not a device-maker) or Google (which retains the ability to decrypt communications so as to target ads at customers).

Moreover, Apple has leaned very far forward in the marketing of its encryption. The company, in its statements about privacy, positively boasts of being law enforcement-proof: "Apple has no way to decrypt iMessage and FaceTime data when it’s in transit between devices. So unlike other companies’ messaging services, Apple doesn’t scan your communications, and we wouldn’t be able to comply with a wiretap order even if we wanted to. While we do back up iMessage and SMS messages for your convenience using iCloud Backup, you can turn it off whenever you want. And we don’t store FaceTime calls on any servers." In a letter to consumers, Apple chief Tim Cook declared flatly that “I want to be absolutely clear that we have never worked with any government agency from any country to create a backdoor in any of our products or services. We have also never allowed access to our servers. And we never will.”
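
To see why Apple says it “wouldn’t be able to comply with a wiretap order even if we wanted to,” it helps to recall what end-to-end encryption means in practice: the service relays only ciphertext, and the private keys needed to decrypt it never leave users’ devices. The sketch below is emphatically not Apple’s actual iMessage protocol; it is a minimal, generic illustration using the open-source PyNaCl library, with hypothetical users Alice and Bob standing in for two iPhone owners.

```python
# A minimal sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# This is NOT Apple's iMessage implementation -- just a generic illustration
# of why the relaying service cannot decrypt the traffic it carries.
from nacl.public import PrivateKey, Box

# Each device generates its own keypair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only public keys are shared (in practice, via the provider's key directory).
alice_public = alice_private.public_key
bob_public = bob_private.public_key

# Alice encrypts for Bob using her private key and his public key.
sending_box = Box(alice_private, bob_public)
ciphertext = sending_box.encrypt(b"meet at noon")

# The relay server sees only this ciphertext plus routing metadata (who is
# talking to whom, and when) -- which is why, in the hypothetical below, the
# FBI retains metadata and location information but loses content.
print(ciphertext.hex())

# Bob decrypts with his private key and Alice's public key. Without one of
# the two private keys, the provider -- even when served with legal
# process -- cannot recover the plaintext.
receiving_box = Box(bob_private, alice_public)
print(receiving_box.decrypt(ciphertext))  # b'meet at noon'
```

The design point is the legal point: so long as the provider never holds the private keys, there is nothing useful for it to hand over in response to a wiretap order.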

For purposes of these posts, we are also going to change Senator Whitehouse’s hypothetical a bit. His question involved distraught parents looking for a missing child, with authorities unable to unlock the child’s phone to see what she had been saying to whom. The trouble with this hypothetical is that those parents, in buying her an iPhone, had specifically agreed to Apple’s Privacy Policy, which promises “we wouldn’t be able to comply with a wiretap order even if we wanted to.” And depending on where they live, they may also be barred from recovery as a result of having agreed to Apple’s Terms of Service, which provides: "TO THE EXTENT NOT PROHIBITED BY APPLICABLE LAW, IN NO EVENT SHALL APPLE, ITS AFFILIATES, AGENTS OR PRINCIPALS BE LIABLE FOR PERSONAL INJURY . . . EVEN IF APPLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF LIABILITY FOR PERSONAL INJURY, OR OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THIS LIMITATION MAY NOT APPLY TO YOU."

So let’s consider a different set of facts, facts worse for Apple but better for a hand-rubbing ambulance chaser: After Apple has been publicly and repeatedly warned by law enforcement at the very highest levels that ISIS is recruiting Americans on Twitter and routing them to end-to-end encrypted messaging apps for terrorist tasking, it continues to move aggressively to implement end-to-end encrypted systems, and indeed to boast about them. In other words, in the face of specific warnings that the consequences of its engineering decisions will be dead Americans, Apple has stayed the course. And let’s hypothesize as well that Comey’s nightmare then comes true: An ISIS recruit uses exactly this pattern to kill some Americans. Specifically, the person is recruited on Twitter and then communicates with ISIS only using an encrypted iPhone and Apple messaging services. To make the facts as bad for Apple as possible, let’s imagine that this person is on the FBI’s radar screen, but when he starts using this system, he “goes dark.” The Bureau now has access only to his metadata and location information. It knows he is still in touch with ISIS, because the metadata confirms that he is still exchanging communications with ISIS targets overseas. But the Bureau cannot discern what he’s saying, or what’s being said to him. And then he kills people, and their families want redress from Apple.

Could they hold the company liable in court?

There are a variety of theories of liability under which plaintiffs might proceed, but the natural starting point is a negligence tort, under which plaintiffs would have to establish that Apple had a duty to prevent its products from being used in an attack, that Apple then breached that duty by failing to take steps to prevent “reasonably foreseeable” harm, and that this breach was the proximate cause of the victims’ injuries. Of course, each state’s laws and standards for these elements will be somewhat different, but there are enough similarities across jurisdictions to be able to generalize.

The simple answer to Whitehouse’s question is that the plaintiffs would not have an easy case, but success is not inconceivable either.

The first challenge for plaintiffs will be to establish that Apple even had a duty, or an obligation, to take steps to prevent its products from being used in an attack in the first place. Plaintiffs might first argue that Apple already has a statutory duty to provide communications to the government under a variety of laws. While Apple has no express statutory obligation to maintain the ability to provide decrypted information to the FBI, plaintiffs could argue that the legal obligations it clearly does have would be meaningless if the communications remained encrypted.

There are a number of these statutes. The Stored Communications Act, for example, allows a "governmental entity [to] require the disclosure by a provider of electronic communication service of the contents of a wire or electronic communication” held on company servers, with appropriate legal process. Section 702 of FISA allows the intelligence community to target for interception the communications of "persons reasonably believed to be located outside the United States” by “direct[ing], in writing, an electronic communication service provider to . . . immediately provide the Government with all information, facilities, or assistance necessary to accomplish the acquisition. . . .” FISA has other provisions for domestic acquisition in national security cases. And CALEA generally requires telecommunications providers—which Apple is not—to help authorities in “expeditiously isolating and enabling the government, pursuant to a court order or other lawful authorization, to intercept, to the exclusion of any other communications, all wire and electronic communications carried by the carrier,” though it specifically relieves the carrier of any burden with respect to encryption: the carrier “shall not be responsible for decrypting, or ensuring the government’s ability to decrypt, any communication encrypted by a subscriber or customer, unless the encryption was provided by the carrier and the carrier possesses the information necessary to decrypt the communication.”

If plaintiffs could not convince a judge that these statutes collectively imply an obligation to provide intelligible communications to law enforcement, they would need to convince the court to create such a duty on its own. The standard for creating such a duty varies significantly across jurisdictions, but often involves such factors as the foreseeability of the harm, the relationship between the defendant and plaintiff, and the public policy considerations and consequences of imposing a new duty. (Professor Jonathan Cardi’s article provides a comprehensive and helpful overview of the different ways the 50 states have approached the duty analysis. He also notes that the foreseeability discussion may sometimes appear in different parts of the negligence claim, particularly in the proximate cause analysis.)

The terrorist attack—an intervening criminal act—also complicates this analysis, as many courts have “recognized [that] there is no duty to act affirmatively to protect another from criminal attack by a third person unless there is some special relationship between the parties.” That quote is from a Colorado district court that dismissed a negligence action against the retailers who sold the Aurora theater shooter his ammunition, high-capacity magazines, and body armor without taking any measures to ensure he was an appropriate or safe buyer. While the plaintiffs claimed the retailers violated their general duty not to expose others to foreseeable risks, the judge found there was no foreseeable risk because the defendants knew nothing about the shooter and didn’t have any reason to believe he would be dangerous. There was no obligation to do any particular research, and the judge declined to create one.

The suits against gun makers and dealers would surely be a major arrow in Apple’s quiver if this issue were ever litigated. The gun market is already relatively heavily regulated, after all, with legislatures having imposed obligations on retailers to screen buyers and eliminate unsuitable candidates. (Congress has also enacted a statute limiting gun makers’ and sellers’ liability, though that statute still contains liability exceptions for negligence and other misconduct.) The computer and cell phone businesses, by contrast, face no such regulation; no law today restricts to whom Apple can sell its iPhones. If a gun maker or dealer has no liability after putting a dangerous weapon in a killer’s hands, how can a service provider have liability for making valuable security tools like encryption available to all?

Other courts, however, have been willing to impose liability (or at least let a lawsuit survive a motion to dismiss) in situations seemingly very similar to Aurora. For example, after the Columbine shooting, the parents of a victim sued the retailer who sold the shooters one of their shotguns and even taught the shooters how to saw down the gun’s barrel. In refusing to dismiss the case, the court stated that “[t]he intervening or superseding act of a third party, . . . including a third-party's intentionally tortious or criminal conduct[,] does not absolve a defendant from responsibility if the third-party's conduct is reasonably and generally foreseeable.” The facts were different in some respects—the Columbine shooters were underage, and notably, they bought their supplies in person rather than online. But those differences do not explain how two federal district courts in Colorado ended up selecting and applying two different standards for evaluating the defendant’s duty.

In other words, the question of whether a duty exists may be informed by, but is not entirely controlled by, the underlying regulatory environment. For a legislator like Whitehouse, who is clearly interested in the possibility of using civil liability as a wedge here, creating a clear statutory obligation on the part of the companies to maintain the ability to decrypt communications would probably do a lot to convince judges that Apple has a duty in tort. These sorts of statutory obligations have helped plaintiffs meet the initial thresholds to maintain a suit in cases considering wrongful gun sales. Without one, plaintiffs may have trouble clearing this first hurdle, though it is certainly possible a court could find that Apple has a duty anyway. But of course, if Congress were to do that, it would have regulated directly what Whitehouse was clearly contemplating using civil liability to regulate indirectly—which rather moots the liability question.

In any event, liability hinges, as an initial matter, on what standards courts would end up applying with respect to the question of duty.

Imagine now that our plaintiffs were to establish Apple’s duty to protect the public from reasonably foreseeable risks. The question would then become what the reasonably foreseeable risks are of Apple’s selling iPhones with end-to-end encryption on iMessage and FaceTime. It would also raise the question of what steps are reasonable to help protect against the use of these devices by members of ISIS and other dangerous folks.

Lawfare has seen a lot of debate over the risks and benefits of strong encryption: everyone agrees that strong encryption is essential to cybersecurity, but Director Comey has weighed in on the risks, and Susan Landau has responded that those dangers are overblown, particularly as compared to the security benefits. Tragic events tend to look more foreseeable in hindsight—exactly the posture in which a jury would be considering the situation we’ve sketched out. And to be sure, we have engineered the hypothetical to be consistent with what Comey has, in fact, foreseen. So the argument that an attack of this sort was reasonably foreseeable would almost by definition be pretty good.

Still, Apple would have strong arguments. Its lawyers would likely point out the low number of injuries caused by terrorism in the United States, the huge number of people who use encryption, the many different investigative methods that authorities have available, and the number of successful investigations completed without having to decrypt cell phones. It would use all this to suggest that the loss of one investigative mechanism doesn’t make death imminent—or, for that matter, reasonably foreseeable.

Moreover, as the judge in the Aurora theater shooting case pointed out, it’s not just a question of the harm’s being reasonably foreseeable in the abstract from the aggregated sales of all such products to all people; the question is whether there was reasonably foreseeable harm when the item was sold, or the service made available, to a specific person to whom the company offered it. The vast majority of people don’t want to use encryption to facilitate terrorist or criminal victimizations of third parties. They use it, rather, to protect their own data and communications from interception by hackers and identity thieves. In many parts of the world, it’s also an important protection against repressive governments. And nobody disputes the critical role it plays in these areas, or the legitimacy, therefore, of the average person’s wanting strong encryption systems.

While a plaintiff could plausibly argue that Apple has some obligation to take care to whom it sells its phones, this argument would cover so few situations as to be nearly useless. It might be useful if our perp had walked into an Apple store, asked only about the encryption, and then refused to buy a two-year plan because, as he would helpfully explain, he wouldn’t be around that long to use it. But we live in a world in which we can’t even assume our terrorists prefer brick-and-mortar establishments to ordering online, and smartphones are so ubiquitous that it would be entirely unreasonable to expect Apple to pick and choose among its would-be customers.

Rather, the more useful argument would have to be that making devices available with end-to-end encryption in general creates a foreseeable risk in general that some consumers will use secure communications to facilitate acts that will kill or injure people in general—particularly after law enforcement began warning about the concern. A jury could conceivably come to that conclusion (as standard of care is a question for a jury). Product liability principles may be informative here, though they are not directly on point. Products don’t have to be totally safe; guns, chainsaws, and medications with horrendous side effects are sold every day. But manufacturers may be liable if a product’s risk outweighs its benefits when considered in light of design alternatives. A jury might conceivably conclude that it was negligent, particularly in light of Comey’s repeated warnings, for Apple not only not to include a workaround—even if that workaround also created risks—but to bombastically eliminate the ones it had used for years in a specific attempt to make itself law enforcement-proof.
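
One classic, if stylized, way to frame that risk-utility balancing is Judge Learned Hand’s formula from United States v. Carroll Towing (1947). It is our illustration rather than a test any court in our hypothetical would be bound to apply, but it captures the intuition a jury would be asked to bring to bear:

```latex
% Learned Hand formula: a defendant is negligent when the burden (B) of
% taking a precaution is less than the probability (P) of the harm times
% the gravity (L) of the loss if it occurs.
B < P \cdot L
```

On this framing, B would be the cost to Apple of preserving some lawful-access mechanism (including the real security risks such a mechanism itself creates), P the probability that end-to-end encryption enables an otherwise preventable attack, and L the magnitude of the resulting harm. The parties would fight over every term.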

Even if plaintiffs were to get this far, they would still need to establish that this breach was the proximate cause of their injuries. In the case against the retailers who sold the Aurora theater shooter his supplies, the judge stated that “[t]o constitute proximate cause, alleged tortious conduct must be a ‘substantial factor in producing the injury.’ If an event other than the defendant's alleged conduct appears predominant, the defendant's conduct cannot be a substantial factor.”

The decision on proximate cause is often more about “gut feel” than “legal reasoning,” but the court’s subsequent conclusion that the shooter’s “deliberate, premeditated criminal acts were the proximate cause” of the deaths is hard to dispute. And it also reflects the reality discussed earlier: courts are often reluctant to hold manufacturers or retailers of legal products responsible when a criminal uses those products to cause harm. If that’s the case for obviously dangerous products like guns and ammunition, it’s difficult to imagine a different outcome for more innocuous products like iPhones. Plaintiffs will thus likely struggle to establish that the causal relationship between the sale of an encrypted cell phone and the death of a victim of a terror attack is not too attenuated to support liability.

Doing so would likely require a highly fact-intensive inquiry into the particular case: How close was the FBI to catching the would-be terrorist before he went dark? Was the surveillance it then lost proving fruitful? Were there really no other investigative tools available? Indeed, plaintiffs are likely to struggle to establish that encrypted communications were even the "but-for" cause of the attack; it is hard to say that without the iPhone, the attack could not have happened. Is that encrypted cell phone really going to be a "substantial factor" in a terrorist victim’s death?

To put it simply, a plaintiff in this situation would have a high bar to clear, and Apple would have strong arguments on a motion to dismiss. But the specific facts would matter, as would how a court interpreted duty, foreseeability, and proximate cause in the context of those facts. The optics of the case at the time would matter too. Does it look like, and does the zeitgeist of the moment suggest, Apple was being irresponsible and uncooperative with law enforcement at a time of real danger? Or does it look like Apple was reasonably trying to offer its customers security products they want and need to protect themselves?

There are also other possible theories of liability available to this plaintiff, including civil liability under federal antiterrorism laws. We will address some of these in a later post.


Zoe Bedell is an attorney in the Washington, D.C., office of the law firm Munger, Tolles & Olson LLP. Her practice focuses on complex commercial litigation, as well as privacy and technology issues. Before joining the firm, Zoe clerked for Justice Elena Kagan of the U.S. Supreme Court and for then-Judge Brett Kavanaugh of the U.S. Court of Appeals for the District of Columbia Circuit. Zoe received her J.D. from Harvard Law School, magna cum laude. Prior to law school, Zoe served as an officer in the U.S. Marine Corps, deploying twice to Afghanistan, and worked at an investment bank for two years.
Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.
