
The Christchurch Report Points to Better Avenues for Internet Reform

Jacob Schulz, Justin Sherman
Friday, March 26, 2021, 9:01 AM

What does the report reveal about online extremism and the efforts to counter it?

YouTube's website (https://www.piqsels.com/en/public-domain-photo-jbvfe/; Creative Commons)


Two years ago last week, a white supremacist walked into two mosques in Christchurch, New Zealand, and opened fire on worshippers, killing 51 people. The attack brought into public view a particular brand of 21st century extremism, making it tough to ignore the enduring strength of white extremist ideas and the ineffectiveness of Western governments in stamping them out. And the shooter—whose manifesto overflowed with internet jokes, and who livestreamed the shooting “as if it were a first-person shooter video game”—forced the general public to come face-to-face with an idea long established among extremism researchers: There’s a whole lot of dangerous stuff online, and it will seep out of the four corners of the internet and into the real world.

The most comprehensive look at the shooter, the attack and the bureaucratic failures that preceded it comes from a 792-page report released by the New Zealand government in December 2020. The document charts the brushes the shooter had with Kiwi and Australian authorities, pieces together his financial situation, and chronicles the attack itself. But its most valuable contribution is perhaps that it stitches together, in great detail and for public viewing, the shooter’s online activities in the months and years that preceded the shooting. The report paints a picture of a man enthralled by extremist corners of the internet. He posted in far-right Facebook groups. He inhaled white supremacist YouTube videos. He donated money to his favorite racist internet celebrities. He read manifestos. He wrote one of his own.

A lot has changed in two years. YouTube has become relatively more aggressive in trying to prevent a growing collection of racist creators from making money directly on the platform and now claims it removes five times as many “hate videos” as it did at the time of the attack. Facebook has more rules. The New York Times made a whole podcast series about the YouTube “Rabbit Hole”—people start with videos about manliness or political correctness and end on a treadmill of videos about how white people are being replaced by minorities. But reading through the Christchurch report, what’s most striking is just how many of the problems it documents continue to vex governments and platforms.

While some right-wing influencers have been demonetized on YouTube or kicked off altogether, many others remain, and such content is still a significant problem on the platform. Facebook has made a fuss about its Oversight Board and some new rules for platform content, but Facebook groups remain a breeding ground for hate-filled vitriol, extremism and even celebration of violence. Above all, critics’ focus on the big players leads to far less scrutiny of the smaller internet companies and the ones that don’t deal exclusively with user-posted speech. Extremists and their ideas migrate from platform to platform, implicating not just the social media platforms in content moderation but file-sharing websites and payment processors as well.

Facebook Groups and Beyond

The report retreads familiar ground for people who have tracked the content moderation space: Facebook, specifically Facebook’s Groups feature, played an important role in incubating the individual’s extreme ideas.

The individual’s foray into extremist internet communities began well before his use of Facebook; according to the report, “[i]n 2017, he told his mother that he had started using the 4chan internet message board”—a site notorious for hosting hate speech—“when he was 14 years old.” He would also play video games and “openly express racist and far right views” in chats during those sessions. Yet Facebook was a key part of the individual’s online activity, and the report’s account of his Facebook activity charts a trail of overt racism and Islamophobia.

He was one of more than 120,000 followers of the Facebook page for the United Patriots Front, a far-right Australian group, and commented about 30 times on the page between April 2016 and early 2017. He praised the United Patriots Front’s leader on the organization’s page, as well as the leader of the True Blue Crew, another far-right group in Australia. He used Facebook to direct-message a threat to an Australian critic of the United Patriots Front’s leader (a threat allegedly reported to authorities, seemingly with no action taken). One message to this person read, “I hope one day you meet the rope.”

After Facebook removed the United Patriots Front group in May 2017, members of the United Patriots Front created another far-right group, The Lads Society, to which the individual was invited. Though he declined to join the real-world club, he joined and became an active member of The Lads Society’s Facebook group. The pattern is bleakly familiar: Facebook axes one group, and the group members just create a new one. Over the next several months, according to the report, the individual became “an active contributor [to the new group], posting on topics related to issues occurring in Europe, New Zealand and his own life, far right memes, media articles, YouTube links (many of which have since been removed for breaching YouTube’s content agreements), and posts about people who were either for or against his views.” He also encouraged donations to a far-right politician and cautioned that “[o]ur greatest threat is the non-violent, high fertility, high social cohesion immigrants ... without violence I am not certain if there will be any victory possible at all.” Later, in the months before the 2019 terrorist attack, the individual posted links to extremist content on Facebook and Twitter. He also made a Facebook album titled “Open in case of Saracens,” which included two videos with extreme right-wing views and calls for violence—and a digitally altered image that depicted Masjid an-Nur, one of the ultimate targets in the terrorist attack, on fire.

Facebook Groups in many cases remain cesspools of hate and extremism, and even outright incitements and plotting of violence. The Jan. 6 terror attack on the U.S. Capitol was planned in part in Facebook Groups; immediately following the November 2020 election, Facebook removed a “Stop the Steal” group that had several hundred thousand members, only to let hundreds of other such groups grow in the following weeks. Content on the platform’s News Feed itself, as well as in user photo albums and on users’ “walls,” likewise remains a problem vis-a-vis hate and extremism. Facebook, meanwhile, continues issuing vaguely worded rules about the acceptable parameters of group behavior.

Online Fandom

Backlash after the release of the report centered on one internet platform in particular: YouTube. The report itself made clear that the video-sharing platform played a central role in motivating the shooter, writing that “[t]he individual claimed that he was not a frequent commenter on extreme right-wing sites and that YouTube was, for him, a far more significant source of information and inspiration.” And after the report’s release, extremism researchers and reporters took YouTube to task on familiar grounds: its lax content rules, its aversion to transparency and, of course, its algorithm. YouTube has taken some steps in the right direction on these matters but is nowhere near patching up all the problems.

One of the report’s biggest contributions to the public record about extremism on YouTube and far-right online culture mostly flew under the radar in the weeks that followed the report’s release. The report details that the shooter didn’t just passively watch a lot of bad YouTube videos. He attached himself to a collection of far-right internet sensations.

Prominent racists never limited themselves just to YouTube, but the platform has provided particularly fertile ground for white nationalist “microcelebrities” to attract devoted fans. Stanford researcher Becca Lewis wrote a 2018 Data & Society report documenting the phenomenon. She argues that YouTube is an ideal fit for “niche celebrities who are well-known within specific communities.” This helps knitting YouTubers or ankle-rehabilitation YouTubers gain loyal followings, but it also boosts George Soros conspiracy YouTubers or “scientific racism” YouTubers. The platform offers certain accounts direct monetary support, and the (often very long) video format lets high-profile creators “develop highly intimate and transparent relationships with their audiences.” YouTube influencers who make money from makeup or branded exercise equipment can leverage user-to-poster relationships to sell cosmetics or jump ropes; white nationalist influencers can similarly leverage the parasocial relationships that individuals have with them as a tool to “sell” a “far-right ideology”—and also some T-shirts along the way. It’s a cycle of ideological attraction, the development of audience-to-creator relationships and the deepening of those relationships.

The right-wing internet celebrity who gets the most airtime in the Christchurch report is Blair Cottrell. Two Australian far-right groups—first the United Patriots Front and then the Lads Society—count Cottrell as their founder. Australian courts convicted him in November 2019 on criminal charges related to his decapitating a dummy in protest of a proposed mosque and then uploading that video online. The report details how the Christchurch shooter spent a whole lot of time in Cottrell’s online orbit. From April 2016 to January 2017, for example, the shooter “made approximately 30 comments” on the United Patriots Front Facebook page.

But his adherence to Cottrell ran deeper than just involvement in the organization’s Facebook group. He harassed Cottrell’s detractors. He evangelized for Cottrell in a Facebook group for another Aussie far-right group. He posted on Facebook after Donald Trump won the 2016 U.S. presidential election that “globalists and Marxists on suicide watch, patriots and nationalists triumphant—looking forward to Emperor Blair Cottrell coming soon.” And most strikingly of all, he made a 50 Australian dollar donation to the United Patriots Front.

What is striking here is not the magnitude of the donation but the fact that the shooter was literally willing to put his money where his racist mouth was. He donated even more money to other right-wing agitators. The shooter gave $138.06 to the National Policy Institute, which, despite its serious-sounding name, is a U.S. white nationalist think tank run by Richard Spencer, described by Emma Grey Ellis in Wired as “an academic version of 4Chan.” He forked over 83 cents more to Freedomain Radio, an enormously popular right-wing (formerly YouTube) channel run by the bloviating Canadian Stefan Molyneux. Lewis tweeted after the report’s release that the donations “reveal how important YouTube *celebrities* were to [the shooter’s] radicalization.” Donations don’t inherently reflect parasocial attachment on the part of the giver (giving money to Doctors Without Borders doesn’t mean someone is enthralled by the charity’s board of directors), but in each of these cases a charismatic internet attention magnet embodied the group. Giving money to the United Patriots Front was giving money to “Emperor Blair Cottrell”; giving money to the National Policy Institute was cutting a check for Richard Spencer; and contributing to Freedomain Radio was contributing to “Stef.” The shooter didn’t just tether himself to bad ideas; he tethered himself to bad people whom he trusted as his Sherpas to a dark world of ideas.

YouTube has tried to depict itself as cleaning up its act vis-a-vis white nationalist “microcelebrities” in the years since the Christchurch attack. It has “demonetized” certain racist creators, preventing them from dipping directly into the pot of YouTube ad revenue their videos create. It unveiled new hate speech rules in June 2019 and unleashed what NPR referred to as a “purge” of white supremacist videos. Stefan Molyneux and the National Policy Institute—two of the beneficiaries of the shooter’s PayPal donations—have gotten the boot from the platform, and so has Richard Spencer.

But tons of far-right and white nationalist influencers still use YouTube as a home base. Take Steven Crowder, for example. Crowder, a right-wing YouTuber whose most popular upload is titled “There Are Only 2 Genders | Change My Mind” (40 million views), was “demonetized” in 2019 but kept making money by directing YouTube viewers to buy his tees and other merchandise. In any case, Crowder applied in August 2020 to get back into the monetization program, and YouTube acquiesced. Just this month, Crowder released a video in which his guest makes jokes about slavery and Crowder himself mocks Black farmers.

The investigative news outlet Bellingcat published a long report last month on the continued success of British far-right creators in parlaying YouTube traffic into financial gain. These micro-influencers don’t need direct monetary support from YouTube and often use their wildly popular videos to funnel potential donors to third-party payment services immune from the whims of YouTube’s erratic demonetization campaigns. The Bellingcat report cites, for example, the techniques used by one such YouTuber: “At the end of a YouTube video with nearly 1m views fueling racist caricatures of Muslim refugees as rapists, Paul Joseph Watson, known for his work with conspiracy theorist Alex Jones, implores viewers to send him money via SubscribeStar, a low-moderation fundraising platform favoured by the far-right. He links to his page in the video description, along with cryptocurrency keys, merchandise and alt-tech accounts.” Watson has 1.87 million YouTube subscribers. Two years after the attack, in other words, right-wing extremism continues to inspire fervent online connections between posters and devoted viewers, and YouTube continues to be the hub.

Beyond the Facebooks and the YouTubes

But the report also serves as an important reminder of how content moderation decisions extend far beyond the Facebooks and the Twitters of the internet. As Daphne Keller wrote on Twitter last month, there’s a tendency to “write about the Internet as if it were only Facebook, Google, and Twitter.” Yet this reductive view overlooks the huge range of companies across the global internet infrastructure that have various levers of control over internet content delivery—what tech and content moderation scholars often call the “stack” of companies that make the internet what it is. And the Christchurch report highlights the big role that smaller companies play in the content delivery ecosystem.

The report details a number of occasions on which the individual engaged in objectionable behavior on lesser-known sites. The individual sold two firearms on Trade Me, a Craigslist-style classifieds website in New Zealand, at an unspecified time after December 2017. Trade Me’s banned and restricted items list places firearms and ammunition in the “restricted” category, with several paragraphs of qualifications beneath; it does not ban the sale of firearms altogether.

On March 14, 2019, the individual also uploaded his manifesto to MediaFire, a file-hosting service incorporated in the United States. According to the report, “[t]here was no public access to the manifesto on this site until he posted links to it immediately before the terrorist attack.” The following day, he uploaded his manifesto in .pdf and .docx formats to Zippyshare, another file-sharing website. Content-sharing websites are quick to claim that upload filters violate privacy or add excessive latency, yet the fact remains that these websites were means by which the individual preserved his terroristic beliefs for the internet to see. Even though the documents were removed after the terror attack, many individuals had already copied them. Indeed, neo-Nazis the world over have continued disseminating the individual’s manifesto via services like Telegram. The report—with all of its detail about objectionable activity on lesser-known file-sharing and e-commerce websites—offers a stark reminder of the need to broaden the internet reform discourse beyond a few select companies like Facebook and YouTube.

Conclusion

It is disturbing how many of the problems in curbing extremism online persist even years after the Christchurch attack. But the report offers much more than just cause for nihilism. Its findings in the technology sphere point to avenues of reform (many long advocated for elsewhere) that might be most fruitful in stamping out the type of online activity that can hasten racist terrorism.

First, many more companies need to issue much better public reports about their content moderation processes and their content removal decisions. Despite public relations campaigns by many American social media platforms to highlight their transparency reporting, the reports don’t provide anything close to a full picture of how often a company like Facebook chooses to take or not take various enforcement actions against, say, groups of extremists. Further, a range of companies typically not considered part of the broader online content delivery ecosystem, like payment processors and classifieds-style websites, also make decisions that influence information availability and delivery. But these companies rarely release information detailing their policies on harmful content disseminated on or through their systems, or explaining how they implement those policies. Better disclosure would clarify the role these companies play in the content moderation ecosystem and help lay the foundation for better public policymaking.

And few platforms are in more need of transparency improvements than YouTube. Much of the platform remains a black box, and as Evelyn Douek and others have noted, the public reports it releases tend to rely on dubious methodology and generally don’t inspire much confidence in their accuracy. Clearer and more reliable public reporting would help answer nagging questions about what YouTube actually polices. Who exactly is getting demonetized, and for how long? What parts of the platform’s terms of service are used to boot people off?

There’s one other important area in which YouTube escapes scrutiny. It has become a recurring event for Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey to get hauled over the coals before congressional committees, but YouTube CEO Susan Wojcicki’s invitation always tends to get lost in the mail. Yesterday, for example, the CEOs of Facebook, Twitter and Google appeared before the House Energy and Commerce Committee, with Wojcicki conspicuously absent. As Douek has described, Congress’s failure to summon Wojcicki with any regularity plays into YouTube’s public relations strategy to “be more opaque, keep its head down, keep quiet, and let the other platforms take the heat.” Congress should make sure Wojcicki doesn’t get left out next time.

Both Facebook and YouTube can also make discrete policy changes that would ameliorate the situation, though these changes certainly wouldn’t fix it entirely. YouTube, for example, might consider lowering its bar for what counts as hate speech and making it clearer when that bar is crossed. As Will Oremus wrote in OneZero, YouTube has set the standard for hate speech so high that “[y]ou’re free to mock, caricature, and belittle people based on their race, just as long as you don’t come right out and say you literally hate them.” This isn’t to suggest that it’s easy to set clear hate speech rules or to decide when to enforce them, but YouTube has significant room for improvement in this regard. Short of that, YouTube ought to be less charitable in letting once-demonetized creators back into its ad-revenue partner program. It’s important to allow for appeal, but it’s hard to see how certain offenders merit clemency. Crowder, for example, was let back into the program in 2020 and has continued to post reprehensible things on the platform. When YouTube let Crowder back in, it said it would “take appropriate action” if he broke the rules again. No signs of “appropriate action” so far in response to the racist video about Black farmers.

Facebook also can and should get a handle on groups. Despite whatever hand-waving claims its executives make about the difficulty of balancing various speech and rights interests on their platform, the fact remains that removing hate-filled and extremist content, especially that which is conducive to violence, is visibly not a priority for the company. The Jan. 6 coup attempt and its run-up made that fact patently clear. Facebook recently claimed it will begin a more comprehensive crackdown on its groups, but as leadership continues to deny the root problems and systemic problems abound at the company, there is plenty of historical reason for skepticism.

None of these fixes is simple. None will solve all the problems spelled out in the report. None can guarantee that there won’t be another Christchurch shooter; racist mass shootings, after all, predate the internet. But at a time when policymakers and platforms are scrounging around for spots to improve at the margins, the report offers a genuinely helpful index of places to look.


Jacob Schulz is a law student at the University of Chicago Law School. He was previously the Managing Editor of Lawfare and a legal intern with the National Security Division in the U.S. Department of Justice. All views are his own.
Justin Sherman is a contributing editor at Lawfare. He is also the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm; a senior fellow at Duke University’s Sanford School of Public Policy, where he runs its research project on data brokerage; and a nonresident fellow at the Atlantic Council.
