Civil Liability for End-to-End Encryption: Threat or Fantasy? Part II
In the first part of this series, we looked at the question of whether Apple could be held liable in a negligence tort for refusing to retain the ability to provide law enforcement with decrypted communications in response to legal process. In this post, we take a look at a different possible theory of liability: the civil terrorism remedies provision of the Antiterrorism Act (18 USC § 2333).
The bottom line, as we shall explain, is that Apple may have more serious potential exposure under this theory of liability than in a negligence tort. The reason is surprising: On the facts we considered in our previous post, a court might—believe it or not—conclude that Apple had violated the criminal prohibition against material support for terrorism.
Let’s take as our facts the same hypothetical case we posited in the first post, wherein we also explained why we’re picking on Apple in these posts:
After Apple has been publicly and repeatedly warned by law enforcement at the very highest levels that ISIS is recruiting Americans on Twitter and routing them to end-to-end encrypted messaging apps for terrorist tasking, it continues to move aggressively to implement end-to-end encrypted systems, and indeed to boast about them. In other words, in the face of specific warnings that the consequences of its engineering decisions will be dead Americans, Apple has stayed the course. And let’s hypothesize as well that Comey’s nightmare then comes true: An ISIS recruit uses exactly this pattern to kill some Americans. Specifically, the person is recruited on Twitter and then communicates with ISIS only using an encrypted iPhone and Apple messaging services. To make the facts as bad for Apple as possible, let’s imagine that this person is on the FBI’s radar screen, but when he starts using this system, he “goes dark.” The Bureau now has access only to his metadata and location information. It knows he is still in touch with ISIS, because the metadata confirms that he is still sending communications to and receiving communications from ISIS targets overseas. But the Bureau cannot discern what he’s saying, or what’s being said to him. And then he kills people, and their families want redress from Apple.
Could they hold the company liable in court?
The theory of liability here is a bit complicated and still developing. However, certain principles seem to be generally accepted. Section 2333 allows victims of international terrorism to sue. International terrorism is defined in 18 USC § 2331 and requires an act that violates a criminal law. Courts have held that material support crimes can satisfy that requirement. So the question ultimately turns on whether Apple’s conduct in providing encryption services could, under any circumstances, be construed as material support, and if so, whether Apple meets the necessary mens rea for providing that support. The answer, as we explain below, may be unnerving to executives at Apple. Their response to this potential vulnerability, however, may not be quite what FBI Director James Comey wants the tech companies to do.
Section 2333 reads, in relevant part, as follows:
Any national of the United States injured in his or her person, property, or business by reason of an act of international terrorism, or his or her estate, survivors, or heirs, may sue therefor in any appropriate district court of the United States and shall recover threefold the damages he or she sustains and the cost of the suit, including attorney’s fees.
Section 2333 is a strange statute in that it identifies rather precisely who may file suit (a U.S. national injured by an act of international terrorism) but is silent as to who can be made to answer that suit.
Apple seems, at least at first, an unlikely answer to that question. International terrorism is defined in § 2331 as activities that (1) involve violent acts or acts dangerous to human life that violate a criminal law, (2) appear to be intended to intimidate a civilian population or influence government policy through threats, and (3) occur primarily outside the US or otherwise “transcend national boundaries.” Given that Apple is selling encrypted devices, an act that is neither violent nor illegal, it seems as if a plaintiff would be unable to satisfy even the first prong of this definition of terrorism.
But courts have actually heard several similar cases in which the defendant was involved in acts that are not in and of themselves illegal and are, in any event, non-violent. For example, a jury in New York found Arab Bank liable under § 2333 for transferring money from a charity to the family members of Hamas suicide bombers, even though none of the parties to the transactions were listed on terrorist watch lists. And in another case, discussed more below, the Seventh Circuit upheld liability for charities supporting Hamas, even when the charities were only providing humanitarian assistance.
The reason is that the courts have regarded the definition of terrorism in § 2331 as reaching not just the people committing the acts of terror themselves, but also those who materially support them, and material support for terrorism is illegal under federal law. In an en banc decision from the Seventh Circuit in Boim v. Holy Land Foundation, in which the court considered a charity’s liability for providing money to Hamas, Judge Richard Posner concluded that while giving money is not a violent act, “[g]iving money to Hamas, like giving a loaded gun to a child (which also is not a violent act), is an ‘act dangerous to human life.’”
This is where Apple could conceivably have a problem in our hypothetical. Under the facts we imagined, Apple is most likely to be vulnerable to a suit alleging liability under § 2333 as a result of having allegedly violated the statute prohibiting the provision of material support to terrorists (18 USC § 2339A). This provision (as well as its sister provision prohibiting the provision of material support to foreign terrorist organizations) makes it a crime to “provide[] material support or resources . . . knowing or intending that they are to be used in preparation for, or in carrying out” a terrorist attack (or one of the listed crimes). The term “material support” includes both tangible and intangible property and services; the statute even specifically mentions communications equipment.
In our scenario, a plaintiff might argue that the material support was either the provision of the cell phone itself or the provision of the encrypted messaging services that are native to it. Thus, if a jury could find that providing terrorists with encrypted communications services is, like giving money to Hamas, an act “dangerous to human life,” then plaintiffs would have satisfied the first element of the definition of international terrorism in § 2331, a necessary step in making a case for liability under § 2333.
Meeting the second element of the § 2331 terrorism definition would also seem to pose a challenge for plaintiffs suing Apple, as it would appear to be difficult, if not impossible, to prove that Apple intended to intimidate civilians or threaten governments by selling someone an iPhone or making FaceTime available on it.
But again, courts have handled this question in ways that make it feasible for a plaintiff to succeed on this point against Apple. For example, when the judge presiding over the Arab Bank case considered and denied the bank’s motion to dismiss, he shifted the analysis of intimidation and coercion (as well as the question of the violent act and the broken criminal law) from the defendant in the case to the group receiving the assistance. The question for the jury was thus whether the bank was secondarily, rather than primarily, liable for the injuries. The issue was not whether Arab Bank was trying to intimidate civilians or threaten governments. It was whether Hamas was trying to do this, and whether Arab Bank was knowingly helping Hamas.
Judge Posner’s opinion in Boim takes a different route to the same result. Instead of requiring a demonstration of actual intent to coerce or intimidate civilians or a government, Judge Posner essentially permits the inference that when terrorist attacks are a “foreseeable consequence” of providing support, an organization or individual knowingly providing that support can be understood to have intended those consequences. Because Judge Posner concludes that Congress created an intentional tort, § 2333 in his reading requires the plaintiff to prove that the defendant knew it was supporting a terrorist or terrorist organization, or at least that it was deliberately indifferent to that fact. In other words, the terrorist attack must be a foreseeable consequence of the specific act of support, rather than just a general risk of providing a good or service.
Whichever route a court takes, the defense is essentially the same. At least as an initial matter, at the time it sold the phone, Apple could quite plausibly argue that it wasn’t knowingly providing encrypted devices or encryption services to terrorists because, as we discussed in our last post, the company had no way of knowing it was selling to someone potentially dangerous. It was, rather, providing services to anyone who bought its phones. And its intent was certainly not to facilitate violence or coercion or intimidation, but to provide users with good cybersecurity. That would seem to be an adequate defense, at least as to the sale itself, whether under Posner’s test or the Arab Bank approach.
This is another way of saying that the sale of an encrypted phone by a major company to the general public cannot plausibly constitute material support for terrorism. Unlike a charity donating to Hamas, Apple in this situation has no intention of supporting or in any way contributing to violent activity. It does not know—and likely could not possibly know—that the person buying the phone intends to use it for violence or coercion. And the product in question has a million legitimate uses. This is not, in Posner’s words, putting a loaded gun in the hands of a child. Rather, it is putting a safety device in the hands of anyone who can buy one, knowing that some small fraction of those people will misuse the product. Holding Apple liable here would make no more sense than holding a car maker liable if one of its vehicles ended up being used in a car bombing.
The trouble for Apple is that our story does not end with the sale of the phone to the person who turns out later to be an ISIS recruit. There is an intermediate step in the story, a step at which Apple’s knowledge dramatically increases, and its conduct arguably comes to look much more like that of someone who—as Posner explains—is recklessly indifferent to the consequences of his actions and thus carries liability for the foreseeable consequences of the aid he gives a bad guy.
That is the point at which the government serves Apple with a warrant—either a Title III warrant or a FISA warrant. In either case, the warrant is issued by a judge and puts Apple on notice that there is probable cause to believe the individual under investigation is engaged in criminal activity or activity of interest for national security reasons and is using Apple’s services and products to help further his aims. Apple, quite reasonably given its technical architecture, informs the FBI at this point that it cannot comply in any useful way with the warrant as to communications content. It can only provide the metadata associated with the communications. But it continues to provide service to the individual in question.
Now let’s return, for a moment, to the material support statute, which criminalizes (1) providing material support or resources, (2) knowing or intending that they are to be used in preparation for, or in carrying out, an act of terrorism. Continuing to provide encryption services to a user whom the FBI has specifically warned you is engaged in criminal activity or is operating as an agent of a foreign power comes uncomfortably close to meeting the terms of the statute—particularly if the FBI has made clear that the investigation involves a suspected terrorist plot. Moreover, the provision of this service looks a bit more like a donation than does the sale of a phone, which is a financial transaction between two parties exchanging value. The use of iMessage and FaceTime, after all, requires no payment. So the provision of this service is at least a little bit closer to the donor in the Hamas cases than to the very defensible situation in which Apple finds itself when it sells the phone as an initial matter to the public at large. What’s more, the use of the service for unlawful activity violates Apple’s terms of service, which prohibit the use of its services for unlawful purposes. So by continuing to provide service at this point, Apple would be giving a thing of value (encryption services) to someone it specifically knows is likely engaged in illegal activity that endangers others and that violates its own terms of service.
Apple may now also satisfy the requirement in the international terrorism definition that it knows it is providing support to a terrorist. A jury would then have to decide whether an attack is the foreseeable consequence of this particular type of support. We discussed this in our last post, but keep in mind here that Apple would now have been served with a specific warning that it is providing services to a terrorist—a level of notice it never had in our previous hypothetical.
Apple would certainly be situated uncomfortably close to some of Judge Posner’s language in Boim, where he writes, for example, that:
To give money to an organization that commits terrorist acts is not intentional misconduct unless one either knows that the organization engages in such acts or is deliberately indifferent to whether it does or not, meaning that one knows there is a substantial probability that the organization engages in terrorism but one does not care. “When the facts known to a person place him on notice of a risk, he cannot ignore the facts and plead ignorance of that risk.” (internal citations omitted)
A court might rephrase Posner’s discussion in this situation along the following lines:
To provide encryption services to a would-be terrorist is not intentional misconduct unless one either knows that the person is engaging in terrorist acts or is deliberately indifferent to whether he is or not—meaning that one knows that there is a substantial probability that he is engaging in terrorism but one does not care. When the facts brought to the attention of a service provider place it on notice of a risk, it cannot ignore those facts and then plead ignorance of that risk.
The irony is that the logical consequence of this analysis is not necessarily that Apple should design its systems so as to facilitate law enforcement access to encrypted communications when presented with a warrant. It may well be, rather, that it should deny service to individuals once it has been put on notice that the government has probable cause to believe that those individuals are engaged in criminal or terrorist activity. That presents a weird kind of due process issue, of course. Those individuals have not yet been charged with any crime. Some may be innocent. And from the Bureau’s point of view, cutting off service may be the last thing investigators want, as it would tip off the suspect that his activity had been noticed. All that said, it’s a bit of a puzzle how a company that knowingly provides encrypted communications services to a specific person identified to it as engaged in terrorist activity escapes liability if and when that person then kills an American in a terrorist incident that relies on that encryption.
Even having creatively cleared these hurdles, a plaintiff is not yet done. The third element in the § 2331 definition of international terrorism is that the activities occur “primarily outside” the United States or “transcend national boundaries in terms of the means by which they are accomplished, the persons they appear intended to intimidate or coerce, or the locale in which their perpetrators operate or seek asylum.” So far, courts have applied the statute primarily to overseas terrorist attacks, and they do not appear to have had to flesh out this element with respect to a domestic attack. However, even if the attack occurs inside the US, a plaintiff may well be able to establish that ISIS transcended national boundaries by operating primarily in the Middle East and then reaching out to individuals in the US to recruit the attacker. A US victim overseas, however, would likely have the stronger (or at least more straightforward) case under this ATA theory.
Finally, plaintiffs would have to establish some degree of proximate causation. Here too, however, Judge Posner has helped make this an attainable standard, as it might not be in a pure negligence world. He notes that a plaintiff need not establish “but for” causation of the injury—in other words, it’s okay if there were other contributing factors. Moreover, this causation standard becomes even more relaxed because if plaintiffs can’t prove exactly which defendant or other contributing party was responsible for the injuries, they can instead hold the defendants jointly and severally liable. Assuming Apple’s conduct could be regarded as material support, a plaintiff would have to prove only that there was a substantial probability that Apple’s provision of encryption services was a—not the—cause of the injuries. If Apple “helped create a danger,” that would be sufficient for liability under § 2333.