Twitter, ISIS, Civil Liability, and Immunity: An Update

Benjamin Wittes
Thursday, May 5, 2016, 2:59 PM

Published by The Lawfare Institute
in Cooperation With
Brookings

A few months ago, Zoe Bedell and I wrote a series of posts about Twitter and potential liability—both criminal and civil—for material support for terrorist groups arising out of its provision of service to ISIS members and supporters. The series was prompted by litigation against Twitter filed by the families of people killed in ISIS attacks. In these posts, Zoe and I made a number of judgments, both predictive and normative. These included:

  • We predicted that Twitter would claim immunity from suit under § 230 of the Communications Decency Act;
  • We argued that it should not prevail on those grounds;
  • We argued, however, that it would have a strong defense on the ground that the complaint alleges insufficient causal links between ISIS's use of Twitter's services and the terrorist act in the case at hand to support liability.

Twitter has now moved to dismiss the case, and the plaintiffs have responded. The arguments on both sides are very much as Zoe and I anticipated. In particular, the plaintiffs have adopted a theory on § 230 immunity that closely tracks the theory Zoe and I outlined as to why Twitter should not be granted blanket immunity. Twitter argues:

Plaintiffs’ claims seek to hold Twitter liable for the content of messages posted to its platform by third parties and are thus barred by Section 230 of the Telecommunications Act of 1996, 47 U.S.C. § 230 (“Section 230”). In enacting Section 230, Congress unequivocally resolved the question whether computer service providers may be held liable for harms arising from content created by third parties. Announcing the policy of the United States to preserve the “free market that presently exists for the Internet . . . unfettered by Federal or State regulation,” 47 U.S.C. § 230(b)(2), Congress broadly immunized entities like Twitter against lawsuits that seek to hold them liable for harmful or unlawful third-party content, including suits alleging that such entities failed to block, remove, or alter such content, id. § 230(c)(1). In the two decades since, courts in the Ninth Circuit and across the country have consistently recognized and broadly construed Section 230’s protections. Indeed, in line with this uniform precedent, the U.S. Court of Appeals for the District of Columbia Circuit recently affirmed the dismissal under Rule 12(b)(6) of a complaint brought against Facebook based on allegations strikingly similar to those alleged here. See Klayman v. Zuckerberg, 753 F.3d 1354 (D.C. Cir. 2014).

Back in January, Zoe and I argued that:

There are many reasons to believe that Twitter has not violated [the material support] law by providing service to ISIS users . . . , but note that if it has violated the law, that offense was completed the moment Twitter knowingly provided service to ISIS. The offense does not depend in any way on what ISIS may have tweeted, or even if ISIS used the service in question. If ISIS operatives tweeted cat videos or they tweeted nothing at all, Twitter still would have violated the statute (assuming it did) the moment it knowingly provided “any property, tangible or intangible, or service, including . . . communications equipment” to operatives of a designated foreign terrorist organization. In other words, one is not imposing liability under the material support laws based on any allegedly offending content. One is imposing liability based on the provision of service as an antecedent matter to a terrorist organization.

Similarly, in their opposition to Twitter's motion to dismiss, the plaintiffs argue:

[T]he CDA’s protections do not apply to this case because Plaintiffs’ cause of action does not seek to treat Defendant as a publisher or speaker of ISIS’s harmful messages, but rather as the provider of the high-tech platform through which ISIS delivered those messages. Plaintiffs’ theory of liability is thus not premised on the content of tweets and does not require that Twitter be held responsible for content created by others, or that it otherwise be treated as a publisher or speaker. Indeed, Twitter’s violation of the ATA occurred even before ISIS issued even a single tweet.

. . .

[T]his case is not about the contents of tweets, the issuing of tweets or the failure to remove tweets. It is about Defendant’s provision of Twitter accounts to ISIS in the first place. In knowingly permitting ISIS to sign up for accounts on its social network, Twitter provided ISIS with material support in violation of the ATA. This violation is not based on the publishing of offensive content. Accordingly, Plaintiffs do not seek to hold Twitter liable as a publisher or speaker and their claims are not barred by the CDA.

Twitter's second argument seems to me far stronger:

The Amended Complaint does not allege that ISIS recruited the attacker, Abu Zaid, through communications via Twitter’s platform. Nor does it allege that Abu Zaid or ISIS used the Twitter platform to plan, carry out, or raise money for that attack. It does not even allege that Abu Zaid ever had a Twitter account or ever accessed the Twitter platform. And certainly the Amended Complaint does not say that Twitter knew of any Tweets or other messages that were in any way connected to the attack and declined to remove or block them. Indeed, the sole message of any sort that the Amended Complaint alleges was sent or received by Abu Zaid was transmitted not through Twitter’s platform, but through a different Internet-based platform—the WhatsApp mobile messaging application. ¶ 77.

As Zoe and I noted back in January, there is "no particular reason in the complaint itself to connect the specific attack with anything that happened on Twitter." Twitter's brief is persuasive that this is a real gap, one that may well prove fatal to the litigation. Even under the somewhat relaxed standard of proximate causation laid out by Judge Posner in Boim, the plaintiffs still bear the burden of showing a substantial probability that the provision of material support to a terrorist group bears some relationship to the injury. I agree with Twitter that nothing in the complaint even alleges this much.

For this reason, I suspect this is not the case that will ultimately test whether and under what circumstances companies like Twitter have exposure under the Antiterrorism Act's civil liability provisions. It is, however, a case that cleanly tests whether § 230 precludes such litigation entirely, and it is well worth watching for that reason.


Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.