Reckless Associations: A New Tort for a New Information Ecosystem

Jane Bambauer
Tuesday, March 15, 2022, 11:47 AM

Courts should craft a narrow form of tort liability that would apply to leaders of online radicalized networks when their persistent communications cause a member of the group to commit an act of violence.

A QAnon flag. (Anthony Crider, https://flic.kr/p/2ihKEzA; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)

On Feb. 24, a poll conducted by the Public Religion Research Institute found that 16 percent of Americans believe the central tenets of the QAnon conspiracy. That figure is larger than it was shortly after the Jan. 6 attack on the U.S. Capitol. But lawmakers are still struggling to figure out how to handle the conspiracy theory problem. Neither policymakers nor large tech companies have figured out how to temper the extremism that breeds on social media without messing up social media. 

Most political leaders want to regulate the largest social media platforms to deal with the current misinformation and bias problems, but a veritable who’s who of internet law scholars (such as Jack Balkin, Daphne Keller, Mark Lemley and Ashutosh Bhagwat) has pointed out that politicians want contradictory things: more content removal and less content removal. This is a perfectly rational and predictable reaction for politicians: They are betting on rules that may give their political tribe an advantage in the information ecosystem. Thus, federal law is unlikely to move past its logjam any time soon, and even if it did, the law coming out of the process might not actually serve the public.

The problem is that laws targeting social media cannot be well tailored to address only harm. The features that make social media so dangerous—the virality of content, the sudden shift in a political narrative and the ability to satisfy obsessive interests in certain topics—are also the features that create the greatest value from social media. Think about this the next time you are doomscrolling to get a handle on what’s going on in Ukraine. 

What can be done, then, about the looming threat of conspiracy theories and the potentially violent actions of the people who believe them? Plaintiffs’ bar, take note: I believe the best way to deter the dysfunction of radical online networks is to convince common law courts to recognize and apply secondary liability to the ringleaders of a radical network that produces violence. That is, the central figures who are the most active, trusted and influential nodes in a radicalized network should be held civilly responsible for the physical harm foreseeably caused by other individuals in their social group. Ideally, courts could develop this form of tort liability before there’s another Jan. 6-type incident. The new tort could mature through the process of holding ringleaders responsible for the growing number of “lone-wolf” incidents—kidnappings, murders and train derailments—that stem from online radicalized networks. And although they are less influential (and, thus, less consequential) in mainstream politics, there are subcultures within the progressive movement (such as the boogaloo movement) that have also led to lone-wolf incidents of murder and violence.

Here is how liability could work in practical terms: Suppose a man were physically attacked and his child were kidnapped by an estranged spouse whose beliefs had been fully warped by conspiracy theories. The victim could file a civil suit against the estranged spouse, of course. But, in addition, he could use the discovery process to access the attacker’s communications metadata, extended out two or three “hops,” to produce a network graph (probably in a form that is initially redacted to protect identities).
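
To make the mechanics concrete, here is a minimal sketch in Python (using the networkx library) of how such a hop-limited network graph could be assembled, assuming the metadata produced in discovery can be reduced to a simple sender/recipient edge list. The file layout, column names and account identifiers are hypothetical stand-ins, not a description of any actual platform’s records.

```python
# Hypothetical sketch: building a hop-limited communications graph from
# discovery metadata. All field names and identifiers are assumptions.
import csv

import networkx as nx


def load_contact_graph(path):
    """Read sender/recipient pairs from a metadata export into a weighted graph."""
    g = nx.Graph()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: sender, recipient
            a, b = row["sender"], row["recipient"]
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1  # weight = number of messages between the pair
            else:
                g.add_edge(a, b, weight=1)
    return g


def hop_neighborhood(g, attacker_id, hops=3):
    """Subgraph of every account within `hops` communication links of the attacker."""
    return nx.ego_graph(g, attacker_id, radius=hops)


# Usage (hypothetical file and account identifier):
# g = load_contact_graph("metadata_export.csv")
# network = hop_neighborhood(g, "attacker_account", hops=3)
```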

A network analysis of this data would reveal the central nodes of the attacker’s radicalizing network—that is, the individuals in the network who are the most connected and the most frequent contributors to the swarm of crazy-making content. There are a number of ways to measure centrality in network and graph theory (for example, degree centrality, eigenvector centrality and closeness centrality), but they may all point to the same one or two ringleaders who persistently share or create content for the network. If different versions of “centrality” diverge in their identification of a network’s de facto leader, then courts and communications experts will have to do more theory work to figure out which measures are best. For now, I will assume it won’t come to that and that all network analyses will find the same one or two hyperactive users of social networks who are proactively keeping grievances and conspiracy theories alive and circulating.
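
To illustrate how an expert could check whether the different centrality measures converge on the same candidates, here is a short sketch that builds on the hop-limited subgraph from the previous example. The function name and the top-two cutoff are illustrative assumptions, not a prescribed methodology.

```python
# Hypothetical sketch: comparing three standard centrality measures over the
# hop-limited subgraph to see whether they agree on the most central accounts.
import networkx as nx


def top_nodes(network, k=2):
    """Return the k highest-ranked accounts under three common centrality measures."""
    measures = {
        "degree": nx.degree_centrality(network),
        "eigenvector": nx.eigenvector_centrality(network, max_iter=1000),
        "closeness": nx.closeness_centrality(network),
    }
    return {
        name: sorted(scores, key=scores.get, reverse=True)[:k]
        for name, scores in measures.items()
    }


# If the three lists overlap heavily, the measures agree on the network's
# de facto leaders; if they diverge, more theory work is needed.
# print(top_nodes(network))
```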

Note that the de facto leaders of a radicalized network may or may not be the individuals who are best known for producing content. For example, there’s been a national obsession with figuring out the identity of “Q,” but that individual or group of individuals is unlikely to be a central node. The downstream QAnon interpreters who obsessively share and redirect attention to the Q-drops are more responsible for harm. The same is true of individuals who pick up fake news stories generated by a Russian disinformation campaign; typically, the fake accounts supply content but are not central to the ideological echo-chamber network in which the content circulates.

The same network data that can find the central nodes of a radical network can also be used to make the case that the de facto leaders of the network should be held legally responsible for any physical attacks committed by the members of the network, just as bars or social hosts are (sometimes) held responsible for overserving drunk drivers, and just as gun owners are (sometimes) held responsible under a theory of negligent entrustment for giving their guns to incompetent users. But liability should pass to the de facto leader only if the plaintiff can prove, by a preponderance of the evidence, that the leader’s communications and activities in the social network had a true causal connection to the attack. A Restatement version of the tort might look something like this:

A defendant is subject to liability to a plaintiff if the defendant assumed a position of leadership within an association that recklessly caused a member of the association to intentionally harm the person of the plaintiff.

Each of the operative phrases in that formulation represents a limiting principle. (For example, de facto leaders who use their influence to try to temper emotions and violent reactions should not be considered “reckless,” even if their network engagement seems to be a factual cause of the attack.) I’ve done some of the groundwork to lay out their meaning in a new law review article, but full definitions will require the accretive process of the common law. The larger point, though, is that network activity plus some corroborating evidence based on the content of the speech should allow plaintiffs to succeed in many cases where an attacker was playing out the paranoid fantasies of a larger radical network that was effectively under the control of a few key individuals.

Secondary tort liability of this sort is crucial because right now there is severe under-deterrence. Donald Trump, Alex Jones and the most active influencers in the QAnon community are recklessly indifferent to the cumulative impact of their online activity. They produce and retweet inflammatory content with abandon, even as their followers slip slowly but surely into irreversible paranoia. Their centrality in a radicalizing network ensures that they know about serious risks of violence by other members of the group, yet for the most part these leaders stay safe from liability. It is easy, for those who want to, to stop short of speech that meets the high standards for “incitement” or “conspiracy.” (By contrast, leaders of the Proud Boys and Oath Keepers were true believers, apparently willing to go down with their ships.) This means some of the people with the best knowledge of and influence over impending violence—the “cheapest cost avoiders,” to use the Calabresian term—have no legal incentive to prevent it. While the earnest, true believers of conspiracy theories go to jail for action taken in the real world, the self-serving leaders of their movements cloak themselves in plausible deniability, protected from the justice system.

So why doesn’t this sort of liability exist already? I think there are two explanations: one practical and one constitutional. 

First, the practical explanation: While liability of this sort would have been nearly impossible to prove in earlier eras, today, with the benefit of communications metadata, a person harmed by an adherent of a radicalized group would be able to use network analysis to find the de facto leaders and prove that the leaders cultivated an incessant parade of bullshit (to use the Benklerian term). Thus, what I’m proposing here is a modern form of liability that depends on evidence that would have been impossible to find, let alone analyze, a generation ago. Relatedly, the real-world risks of casual self-radicalizing networks are also much greater today than they were a generation ago, when conspiracy theorists were essentially outcasts in their communities and had more difficulty finding and encouraging each other. Thus, a network theory-based tort of reckless association is a contemporary solution to a contemporary problem.

The second explanation relates to constitutional avoidance. Liability under this tort turns entirely on either speech or associations—both fully protected under the First Amendment. It is, therefore, a tort that must exist entirely within the constraints of constitutional scrutiny, like the tort of defamation or the tort of public disclosure of private facts. Some observers may believe that courts cannot reach central nodes of a radicalized network because doing so would cause a chilling effect that would inhibit speech and free association. This is true—liability will cause individuals to avoid becoming authority figures in groups that traffic in paranoid theories. But the First Amendment allows the law to penalize expression when the penalty is narrowly tailored to harm. As long as liability puts the burden on a plaintiff (who, by supposition, has already suffered physical injury) to prove causation and a reckless mental state, a new law of this sort can be reconciled with free speech precedent. By analogy to incitement law, this tort would trade the “imminence” requirement (which is the part of the Brandenburg test that is hardest to prove) for two other heightened requirements: evidence of persistence (the sort of dominating and constant stream of messaging that would make a defendant “central” to the network) and physical harm (ex post enforcement only, to avoid the effects of inflated predictions of risk). Thus, reckless association is different from, but nearly as narrow as, the law of incitement.

Besides, if this form of liability violates the First Amendment, then certainly the popular alternative (imposing legal requirements on major platforms to purge content and users) is even more censorial and offensive to First Amendment values. After all, platforms are guaranteed to overcensor. They will not be highly motivated to keep censorship to a minimum if there’s a significant risk of liability for making content available and no similar consequences for wrongly or mistakenly removing content. Since it’s difficult for platforms to predict which content and users pose a risk—no one platform can see coordination and communications that are taking place in code or over other online and offline fora—there will certainly be errors. A de facto ringleader of a radicalized network, by contrast, has much better information about the intensity and meaning of in-group communications and is also in a much better position to help the group simmer down than the mistrusted and universally loathed Big Tech platforms are.

That said, introducing secondary liability of this sort presents some risks. First, courts could create liability with an overbroad scope so that socially valuable expressive activities are deterred. Or even if the tort is narrow and well tailored within the cases that are litigated on the merits, political adversaries and participants in culture wars could harass groups of dissenters by filing frivolous claims and forcing social media companies or activist leaders to incur litigation costs related to motions to dismiss or discovery orders. And even if courts and anti-SLAPP laws effectively guard against this sort of mischief, the mere hypothetical threat of liability could cause people to self-censor and avoid authentic public discourse. 

If these effects are large enough, the tort could cause more harm than benefit. The tort’s elements and burdens of proof need to be demanding enough that mere awareness that somebody in the network might do something violent does not, by itself, establish liability. Just as public criticism is likely to get some facts wrong but should not easily fall within the scope of defamation, this tort, too, would need to avoid having a scope that would capture, for example, a civil rights leader whose movement produces occasional acts of violence. Indeed, under case law like NAACP v. Claiborne Hardware, courts would have to ensure that leaders of a group are not punished for acts of violence committed by others within their protest movement.

But it’s possible for courts to construct a tort that would be strong and clear enough to avoid negative chilling effects and First Amendment conflict. First, the elements of the tort will ensure that it is not sufficient for a plaintiff to show that a defendant has many followers or creates incendiary content that circulates widely. The causation and mental state requirements would reach only individuals who are persistent—who are in the ear, so to speak, of the members of their informal groups on a daily or hourly basis. This is the behavior that keeps members of a loosely affiliated group in a cycle of grievance, and it is a phenomenon unique to, or at least uniquely trackable in, networked communications. And then, even those who are circulating criticism on a daily or hourly basis can keep up their activity as long as they affirmatively dissociate themselves from law-breaking or unprovoked violence.

In other words, tort law should impose a duty on informal leaders only when the leaders have enough information to recognize a tinderbox situation—when discourse goes well beyond criticism to the sort of rhetoric that channels focus into rage. When there is such a tinderbox, leaders can avoid liability by refraining from adding to the grievance rhetoric for a while or by proactively reminding their group members to avoid unnecessary violence. 

To be clear, I make this recommendation and argument with a good dose of trepidation. I am nervous about the impact that tort liability of this sort could have on uninhibited debate. But given the current state of affairs, and the threat that conspiracy theorists pose to individuals and institutions everywhere, the experiment is worth trying.


Jane Bambauer is a professor of law at the University of Arizona, where she teaches and researches about free speech, privacy and technology policy.
