
U.K.’s Southport Riots Show Extremism Is Evolving—Policy Should Too

Milo Comerford, Jacob Davey
Friday, August 23, 2024, 12:00 PM
The riots demonstrate shifting trends in extremist activity, online and offline. U.K. policymakers need to adjust regulation accordingly.
Southport riot. July 30, 2024. (StreetMic LiveStream, https://commons.wikimedia.org/wiki/File:Southport_riot.jpg, CC BY 3.0)


The beginning of August 2024 saw some of the worst far-right rioting on the streets of the U.K. in decades. After a mass stabbing at a Taylor Swift-themed dance class left three young girls dead and 10 others wounded, a broad range of actors capitalized on the information void that immediately followed the attack. The dissemination of falsehoods led to violence targeting minority communities and mass disorder on the streets.

While the speed and scale of the Southport riots were shocking, the violence and its broader drivers were unsurprising to those who analyze extremism, consistent with trends observed recently in the U.K. and globally. These patterns challenge traditional policy responses to far-right extremist activity and warrant a reexamination of social media regulation strategies. Far-right extremists are exhibiting increasing confidence in their activities, drawing on a wider, more diverse base and empowered by growing disinformation. These shifts require an all-of-government policy response as well as adjustments to the U.K.’s current platform regulation mechanism, the Online Safety Act (OSA).

Trends in Far-Right Extremism

The first trend driving the violence is the greater confidence shown by the U.K.’s far right, both online and offline, in recent years. For example, recent rallies by Stephen Yaxley-Lennon (aka Tommy Robinson)—one of the U.K.’s most prominent far-right influencers, and the former leader of the now-defunct far-right street protest movement, the English Defence League (EDL)—attracted thousands of individuals to central London. A range of individuals had forewarned of an increasingly active U.K. far right: As early as 2020, Metropolitan Police Assistant Commissioner Neil Basu gave a speech about the increased threat posed by far-right terrorism. While the vast majority of the activity seen in the riots this August would not meet the threshold of the terrorism that Basu warned of, his concerns nevertheless indicate that broader mobilization by the far right did not come out of nowhere.

This offline confidence matches shifts in the ways extremists operate online.  

In particular, changes in policies at X following Elon Musk’s takeover of the platform led to the reinstatement of many individuals who had previously been banned for violating the platform’s terms of service on hateful activity. These individuals—including Yaxley-Lennon—were suddenly able to reach far greater audiences than they had in recent years.

The second trend driving escalating violence is the growing diversity of the online platforms and communities involved and, subsequently, of the offline actors they mobilize. An increasingly diverse range of platforms allows networking between different sub-movements and communities, with encrypted apps like Telegram enabling extremist movements and conspiracy theorists to organize, and platforms like TikTok acting as essential venues for propaganda. This multifaceted platform dynamic manifested itself in the violence in Southport; although much of the attention focused on the role of influencers and disinformation drivers on X in stoking violence, Telegram and TikTok also had an undeniable impact.

The loose and amorphous nature of these digital ecosystems is reflected in the wide range of actors who were seen on the streets. A number of individuals spotted at the riots, and agitating online, belonged to different and at times competing movements: for example, former EDL-affiliated individuals, football hooligans, disparate white supremacist movements, and even groups that grew out of coronavirus conspiracy theorist communities. These were joined by a range of individuals from within local communities who—while not affiliated with extremist movements—were seeking redress for a range of grievances, as well as those simply looking to take part in violence.

The opportunistic nature of this coalition is another phenomenon that has recurred in different global contexts in recent years. The 2017 riots in Charlottesville, Virginia, saw a relatively disparate group of far-right actors put aside their differences and come together to maximize their impact. Similar dynamics were at play during the Jan. 6 U.S. Capitol insurrection. In the U.K. this was perhaps observed most starkly during the disorder in Belfast—part of Northern Ireland—where Unionists marched alongside individuals carrying the Irish flag, united by anti-immigrant sentiment.

The final key trend driving violence is the spread of online disinformation. The Southport riots provide a case study in how the proliferation and amplification of harmful falsehoods on social media platforms can quickly translate into targeted online hate and, ultimately, offline violence. While prominent far-right activists such as Yaxley-Lennon looked to stir the pot online, helping to create a permissive environment for hostility toward migrants and Muslims, the spark that triggered the mass disorder seen offline was arguably viral disinformation. Shortly after the attack, content that misidentified the attacker as a Muslim, a recently arrived refugee, and someone on MI6’s watchlist was shared tens of thousands of times, filling an information void at a crucial moment. ISD analysis showed harmful narratives around the false identity of the attacker being algorithmically amplified, including through TikTok’s search recommendations and X’s “What’s Happening” sidebar.

One of the core outlets responsible for amplifying these claims was an online channel called Channel3Now—a site that acts as a news aggregator, often hosting false or unverified information. Although there was early speculation that this page had ties to the Russian state, an investigation by the BBC suggests that it is more likely a profit-making scheme than an ideologically motivated platform. Such profit-driven amplification has become a more frequent feature of online disinformation in recent years, highlighting the fact that extremism can be driven by cynical online grifting.

Whatever the ideology behind the site, the claims were nevertheless spread by far-right agitators and in time helped inspire calls for violence. This included amplification by reactionary influencers, as well as by more established movements. The unrest quickly spilled over into a range of different grievances, including sexual abuse scandals in the U.K., changing demographics, lack of resources in deprived areas, and challenges in accessing essential services, with the result that the riots became about more than the Southport attack. These socioeconomic issues are important to recognize: Although far-right extremism had a clear role in driving the riots, many of the rioters were unaffiliated with far-right movements.

This dynamic again matches a familiar pattern seen not just in the U.K. but globally. In November 2023, similar events unfolded in Dublin, Ireland: Following the stabbing of three children, unverified online content blamed the attack on an illegal immigrant. Drawing on simmering anti-immigrant sentiment, violence erupted quickly, bringing a range of actors onto the streets to riot. A similar dynamic unfolded in Chemnitz, Germany, in 2018 following a fatal stabbing. As in the U.K., a transnational set of far-right agitators sought to stoke tensions around a local tragedy.

How This Wave of Far-Right Violence Challenges Policy Responses

While the far-right extremist violence that swept the U.K. in the wake of the Southport stabbing attack shocked the country, it did not occur in a vacuum. An ISD policy paper on the shifting U.K. extremism threat landscape, released in April 2024, warned that an “increasingly chaotic online environment can catalyze real world threats to public safety, social cohesion and democracy.” Of particular concern was a major rise in organized campaigns violently targeting the accommodation of asylum-seekers, as well as a surge in anti-Muslim hate online and offline in the wake of the Oct. 7 Hamas attack on Israel—factors that helped lay the groundwork for this wave of violence.

Law enforcement responded to the riots rapidly, with perpetrators quickly charged and sentenced. While public order policing is a necessary response to the imminent threat of violence and disorder, these police-led responses cannot be the sole strategy for tackling extremism. Instead, the tensions revealed by these riots, and the unprecedented scale of violence, show the urgent need for updated, cross-government policy responses to an interconnected set of threats. While the U.K. government has banned half a dozen violent far-right groups over the past decade, most recently the Terrorgram Collective—a network of violent far-right Telegram channels—the recent violence speaks to a post-organizational dynamic, characterized by more amorphous online communities rather than clearly identifiable violent extremist groups. This poses a challenge for a system of prevention, prosecution, and disruption fundamentally rooted in a post-9/11 counterterrorism apparatus originally designed to manage organization-based threats.

Rather than treating the threat as monolithic, policy responses must reflect the increasingly hybridized nature of contemporary far-right extremism. The street movements at the heart of recent violence represented a broad spectrum of individuals, including ideologically committed white supremacists, anti-migration “activists,” and opportunistic football hooligan networks. These diverse coalitions were brought together within ephemeral Telegram channels, echoing the highly heterogeneous conspiracy movements mobilized online during the coronavirus pandemic. Targeted responses would involve close analysis of the diverse motivations and pathways that underpinned these outbreaks of mass violence.

In general, the U.K. counterterrorism system—both its legal thresholds and its referral pathways for those at risk—is rooted in a context that associates risk with a core ideological commitment to extremism. However, recent evidence has pointed to an increasingly ambivalent role played by ideology: 2023 data from the U.K. government’s counterterrorism Prevent scheme showed so-called “mixed, unclear, and unstable” cases (a catch-all category for less clear vulnerabilities) accounting for the highest proportion of referrals, ahead of Islamist and far-right categories. While the U.K. government has since scrapped this category, it speaks to a dynamic whereby individuals are increasingly likely to be motivated by more generalized violent ideation, a blend of diverse online extremist ecosystems, or non-ideological influences such as school shooter fandoms. In this context, policy responses will need to respond to the broader range of factors and grievances that might motivate violent engagement—from poverty to social isolation to mental health challenges—beyond simply tackling extremist ideology. 

Revisiting Emerging Online Regulation

Beyond the need to revisit policy responses geared toward the prevention of future violence, the central role played by social media platforms in inspiring the recent far-right riots has reopened a national conversation about the regulation of these platforms.

The U.K. passed sweeping legislation in October 2023 enshrining a new social media regulation regime. The Online Safety Act articulates legal duties for platforms to reduce the risks of illegal content proliferating on their services, as well as broader safeguards for children who use them. The regulations are yet to come into force; however, following the riots, the regulator Ofcom urged platforms to “act now,” saying “there is no need to wait for the new laws to come into force before sites and apps are made safer for users.”

At first glance, the applicability of the legislation to the Southport riots is ambiguous. Much of the viral disinformation that helped lay the groundwork for far-right violence would not be in scope of the illegal content provisions in the regulation, which are geared primarily toward terrorist content, illegal hate speech, and child sexual abuse material (CSAM). This is in contrast to parallel online safety legislation introduced by the EU—the Digital Services Act (DSA)—which explicitly requires platforms to assess and mitigate “systemic risks to civic discourse,” even if such content is not illegal. For example, European Commissioner Thierry Breton—responsible for implementing the DSA—issued a letter of warning to Elon Musk over the potential amplification of harmful content around the riots on his platform X in the EU.

However, the Online Safety Act also contains provisions to ensure platforms are consistently enforcing their declared terms of service, many of which include specific rules around mis- and disinformation. This will provide an opportunity for the regulator to ensure consistent enforcement in crisis events like the aftermath of Southport, while also ensuring that platforms are not over-moderating content that is neither illegal nor in violation of their own rules.  

Relitigating the fundamentals of the OSA risks further disrupting its implementation—something the new government seems to recognize. However, there are some important considerations for policymakers if they choose to retain the existing legislation.

The first is ensuring that the regulator is effectively monitoring not only the larger social media platforms but also high-risk smaller and medium-sized services with a demonstrable link to hate and extremism, such as Telegram. That platform, which was at the heart of the recent violent mobilization, has consistently failed to address the proliferation of violent extremist content on its service. This shows the importance of prioritizing enforcement based on a platform’s risks, rather than simply the size of its user base, especially when such platforms refuse to engage with regulators.

Second, there is an urgent need for mandated data access from social media platforms for independent researchers, which is essential for creating an ecosystem of scrutiny to systematically hold platforms—and regulators—accountable. While the EU’s Digital Services Act mandates transparency from large platforms, outside the EU the landscape of data access is rapidly diminishing, with the shuttering of tools and the introduction of prohibitive fees. Failing to enshrine this in regulation risks creating a two-tiered system of data access across different country contexts, with knock-on effects for the safety of users.

Finally, the aftermath of the Southport attack shows the importance of including crisis response in social media regulation, to ensure that platforms have clear and proportionate provisions in place to respond with urgency in situations where online activity is translating into offline violence. The EU’s DSA contains specific provisions around crisis response, but these are largely absent from the OSA. Existing protocols from the Global Internet Forum to Counter-Terrorism (GIFCT) and Christchurch Call tend to be more narrowly focused on the aftermath of terrorist attacks, rather than instances of violence such as the Southport riots.

Such policy shifts, rooted in addressing both online and offline sources of violence, will be essential in preventing future extremism. The U.K. government’s announcement of a rapid internal review to inform a renewed counterextremism strategy will be an important step toward developing a comprehensive policy response. Unless policymakers rise to meet this evolving challenge, the extremist violence that followed the Southport attack could shift from a warning to the new normal.


Milo Comerford is Director of Policy & Research, Counter Extremism, leading ISD’s work developing innovative research approaches and policy responses to extremism. Milo regularly briefs senior decision-makers around the world on the challenge posed by extremist ideologies, and advises governments and international agencies on building effective strategies for countering extremism. His writing and research feature frequently in international media, and he has made recent appearances on BBC News, CNN, and Sky News.
Jacob Davey is the Director of Policy & Research for Counter-Hate at ISD. Jacob leads projects focusing on online hate speech, the international far right, and political violence. He regularly advises national and local policymakers on right-wing extremism and has testified before the Home Affairs Select Committee and the Intelligence and Security Committee of Parliament.
