Systemic Risk Assessments Hold Clues for EU Platform Enforcement
The future of online content regulation might rest on largely overlooked EU-required digital risk assessments.
In late November 2024, 19 of the EU’s largest internet platforms and search engines—each with more than 45 million EU-based users—began satisfying some of the most innovative and far-reaching elements of the European Union’s flagship platform regulation, the Digital Services Act (DSA). The DSA regulates online intermediaries, seeking to reduce illegal and harmful content. As part of these regulations, companies must report on their assessment and mitigation of “systemic” risks, provide an independent audit of the service’s compliance with its DSA obligations, and develop a response to the audit’s findings.
The first round of required reporting resulted in thousands of pages that, outside expert circles, have been largely ignored. These reports, which include risk assessments, mitigation measures, and audits, will form the basis of future high-profile enforcement actions by the EU.
Meanwhile, a war of words between tech CEOs and EU officials has intensified discussion about whether the DSA is actually a censorship tool or a belated form of accountability for online platforms. Meta CEO Mark Zuckerberg went so far as to suggest that it is “institutionalizing censorship.” In response, the European Commission stated, “We firmly reject any claims of censorship on social media platforms … Nothing in the DSA obliges online intermediaries to remove lawful content.” These reports bring much needed transparency to DSA enforcement, providing the evidence base that could help counter such accusations.
The reports are long, dense, and complicated. Little guidance was provided to the companies or their auditors, who largely had to devise the process from scratch. As a result, even after publication there is still little uniform understanding of what constitutes a systemic risk or what a DSA audit should actually entail. This has led observers to ignore the substance of these reports. Doing so is a missed opportunity to improve the process.
How these reports are received, evaluated, and improved upon matters, not just for regulators and the regulated, but for billions of users of these services in Europe and around the world. Without greater understanding of what a good risk assessment entails, and where shortcomings amount to a breach of the DSA, the EU’s regulatory capabilities will be vulnerable to legal challenge, even as good-faith company efforts to comply remain perpetually at risk of selective or subjective enforcement.
The first step to improvement is to establish what to expect from these reports in concrete rather than abstract terms.
What to Expect From a Systemic Risk Assessment
The risk assessments are the first component in the system of risk management created under the DSA. Platforms assess their exposure to four overlapping areas of systemic risk: (a) illegal content, and negative effects on (b) fundamental rights, (c) civic discourse and electoral processes, and (d) gender-based violence, the protection of public health and minors, and negative effects on physical and mental well-being. The platforms are then obliged to adopt proportionate and effective mitigation measures. The independent audits verify that companies have taken these steps, along with their other compliance obligations.
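To make this cycle concrete, the sketch below shows one way a compliance team might structure a risk register around the DSA's four categories, with inherent risk, mitigations, and residual risk tracked per entry. It is a minimal illustration only: the class names, fields, and rating labels are assumptions for exposition and are not drawn from the DSA text or from any company's actual tooling.

```python
# Illustrative sketch of a DSA-style risk register. All names and labels
# here are hypothetical; the DSA does not prescribe a data model.
from dataclasses import dataclass, field
from enum import Enum


class RiskCategory(Enum):
    ILLEGAL_CONTENT = "dissemination of illegal content"
    FUNDAMENTAL_RIGHTS = "negative effects on fundamental rights"
    CIVIC_DISCOURSE = "negative effects on civic discourse and elections"
    HEALTH_MINORS_WELLBEING = "gender-based violence, public health, minors, well-being"


@dataclass
class MitigationMeasure:
    name: str
    description: str


@dataclass
class RiskEntry:
    category: RiskCategory
    description: str
    inherent_risk: str  # e.g., "low", "medium", "high" before mitigation
    mitigations: list[MitigationMeasure] = field(default_factory=list)
    residual_risk: str = "unassessed"  # rating after mitigations are applied


@dataclass
class RiskAssessment:
    service_name: str
    reporting_year: int
    entries: list[RiskEntry] = field(default_factory=list)

    def high_residual_risks(self) -> list[RiskEntry]:
        """Entries where mitigations still leave a 'high' residual risk."""
        return [e for e in self.entries if e.residual_risk == "high"]


# Hypothetical usage: register one electoral-risk entry and query it.
assessment = RiskAssessment("ExampleService", 2024)
assessment.entries.append(RiskEntry(
    category=RiskCategory.CIVIC_DISCOURSE,
    description="Coordinated inauthentic behavior around elections",
    inherent_risk="high",
    mitigations=[MitigationMeasure("labeling", "Label state-affiliated media")],
    residual_risk="high",
))
print([e.category.name for e in assessment.high_residual_risks()])
```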
In practice, the parameters of a systemic risk assessment are far from clear. First, the reports lack a clear definition of systemic risk. Most people grasp intuitively that social media and other digital services, especially when misused, can cause harm to individuals and communities. But the concept of risks rising to a systemic level was adapted from financial regulation, where it has a clear meaning: the risk that an individual bank failure threatens the entire financial system. In the digital world, no equivalent threshold is obvious.
There are several ways that the risks of digital services could be considered systemic. This could entail harmful content and conduct occurring across services, such as terrorist organizations recruiting via social media and encrypted communications, or when a service has an impact on a societal rather than individual level, such as electoral interference in a contested election, as is currently under investigation in Romania. But the DSA does not offer a clear definition of systemic risk, and the European Commission has failed to elaborate on the subject.
Furthermore, the participating companies declined to offer their own definitions of systemic risk, an understandable compliance decision that nonetheless represents a missed opportunity to further develop the concept. While LinkedIn stops short of providing a definition, it is one of the few platforms to explicitly wrestle with the question of systemic risk in its public report. LinkedIn used a model from environmental impact assessments to examine “the systems impacted by the systemic risk area (geographic, political, security, environmental, societal, and wellbeing)” as part of determining the severity of a risk. How useful this model will be remains to be seen.
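LinkedIn’s report does not publish a formula, so the sketch below is only one plausible reading of how a breadth-of-impact factor of this kind could feed into a severity rating: a risk touching many of the listed systems is treated as more severe than one confined to a single system, even at the same intensity. The scale and weighting are assumptions for illustration, not LinkedIn’s actual methodology.

```python
# Hypothetical breadth-of-impact scoring inspired by the environmental
# impact assessment approach described in LinkedIn's report. The formula
# and weights are assumptions, not LinkedIn's actual methodology.
IMPACTED_SYSTEMS = ("geographic", "political", "security",
                    "environmental", "societal", "wellbeing")


def breadth_of_impact(impacted: set[str]) -> float:
    """Fraction of the assessed systems a risk area touches (0.0 to 1.0)."""
    return len(impacted & set(IMPACTED_SYSTEMS)) / len(IMPACTED_SYSTEMS)


def severity_score(intensity: int, impacted: set[str]) -> float:
    """Scale a 1-5 intensity rating up by how many systems are affected."""
    return intensity * (1 + breadth_of_impact(impacted))


# Example: electoral interference rated 4/5 in intensity and touching the
# political, societal, and security systems scores 4 * 1.5 = 6.0.
print(severity_score(4, {"political", "societal", "security"}))
```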
Second, systemic risk assessments are an EU compliance exercise, not a global, holistic view of how platforms assess risk. These reports were written for European regulators, with the knowledge that they would eventually be scrutinized by other governments, litigants, and campaigners. They are written defensively, rather than as a no-holds-barred examination of the dimensions of risk emanating from each service. This matters for both substance and process. Substantively, the most significant risks for a service may not be related to Europe, as evidenced by the Wikipedia assessment, which pointedly excluded what the platform sees as its biggest risk—country-wide blocking of Wikipedia—because it “do[es] not currently consider this an EU systemic risk, nor is it clearly linked to Wikipedia’s design/functioning/use.” Procedurally, it is important to distinguish between activities pertaining to risk assessment that companies undertake voluntarily and those mandated by the DSA. Although it’s understandable that civil society organizations would push companies to report everything that went into the risk assessment, risk-averse legal departments will seek to limit public reporting to what is necessary.
Finally, the DSA suffers from a timing problem. The statute does not require companies to publish information about their risk assessments until “at the latest three months after the receipt of each audit report.” This means that public risk assessments could be perennially out of date by the time of their publication. That said, companies took a variety of approaches to publishing their reports, with some publishing their 2024 risk assessments even though these were not expected until 2025. Others published only the 2023 risk assessments, and still others published both sets of reports. Reportedly, the early publication was the result of private pressure from the European Commission, which on Nov. 26 published a Q&A document asserting that the reports that companies “must publish, including the risk assessment report, are those of the ongoing year.” The commission effectively jawboned the companies into publishing risk assessments and audits a year before the text of the law requires.
The result is a short-term problem—with companies making public reports covering divergent time periods—that may provide a long-term solution. Some companies that initially published only their 2023 reports—including Google—have now published 2024 reports. This suggests that going forward, teams responsible for DSA implementation may be able to secure the necessary internal approvals to publish risk assessments three months after their completion, so that the public is not a year behind the regulatory conversation.
What Do the Risk Assessments Do Well?
Despite these shortcomings, the risk assessments provide an important starting point for understanding how platforms think about trust and safety, because—quite simply—they articulate how their products work.
Digital users bring personal experience to bear when thinking about what’s risky on internet searches, social media, or marketplaces. But personal experience is rarely enough to understand the specifics of these products and services. Although the DSA does not specifically require it, most of the risk assessments start with a product overview. It may be tempting to skip this section, but as demonstrated by Apple’s report on the App Store, it can provide insight into the product that most users would not otherwise have. By walking the reader through both the user and developer sides of the App Store, the report provides a comprehensive view of how specific risks manifest.
The risk assessments also explain how the four categories of risk specified in the DSA apply to a particular set of products, not just social media. The DSA is the most significant and far-reaching of the numerous online safety laws enacted in the past few years. Like many of them, it encompasses various categories of services, with some provisions specific to advertising, marketplaces, or search engines. It was clearly written with social media top of mind, but because it takes this broad approach and sets the threshold for its most intensive requirements purely by the number of European users, it has produced risk assessments for products and services that were previously overlooked.
How to Enforce Risk Assessment and Mitigation Absent Guidance
Despite the urging of industry and civil society, the European Commission declined to provide guidance on how companies should undertake systemic risk assessments, aside from the text of the DSA. The result is a heterogeneous set of reports that all generally cover the same ground, but whose methods and final forms differ substantially.
For more than a year, the European Commission has been enforcing the DSA through Requests for Information (RFIs) and investigations into potential breaches. This has resulted in numerous press releases with brief descriptions of the substance of its requests, followed by announcements of investigations that sometimes include a detailed letter but in other cases only a short public summary. The majority of these requests and investigations reference the risk assessments and mitigations, which before November were inaccessible to the public. While the details of those RFIs remain between the commission and the platforms, the public descriptions provide a road map to the parts of the risk assessments that have drawn the regulator’s attention, and they merit a closer read.
So what do the risk assessments reveal about the areas of greatest enforcement focus?
Unsurprisingly, given the large number of high-profile elections held in 2024, RFIs that referenced risk assessment and mitigation focused heavily on electoral processes. RFIs mentioning risk assessment and mitigation for elections were sent to X, Meta, and TikTok, with formal proceedings opened against all three companies. In December 2023, the European Commission suggested that X came up short on both risk assessment and mitigation, particularly with regard to insufficient assessment of “regional or linguistic aspects” of risks to elections and civic discourse. In other words, X did not sufficiently consider potential electoral manipulation of its platform across all EU languages, and the Community Notes system was not an acceptably robust mitigation of these risks. The commission also raised concerns about insufficient assessment of terrorist and violent content and hate speech, and the design and functioning of what X calls the “Freedom of Speech, Not Freedom of Reach” system, through which violative content is reduced in visibility rather than removed.
A review of X’s 2023 systemic risk assessment report reveals some, but not all, of the factors that led the commission to initiate proceedings. For example, taking the risk assessment methodology at face value, the report concludes that even after applying mitigations, residual risks remain “high” both for elections and civic discourse and for terrorist content, which may have been viewed as unacceptable—particularly in light of the few short bullet points describing the mitigations in place. Although X redacted the entirety of a table on “EU action rates by language,” its content was likely deemed noncompliant with the DSA given the commission’s multiple references to insufficient focus on regional and linguistic aspects of both risk assessment and mitigation.
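“Residual risk” here means the risk judged to remain after mitigations are applied. X does not disclose how it scores risk, so the sketch below uses a generic likelihood-times-severity matrix with an invented mitigation discount, purely to illustrate how a risk can be rated “high” both before and after mitigation.

```python
# Generic inherent-versus-residual risk scoring. The 1-5 scales, the label
# thresholds, and the 30 percent mitigation discount are all assumptions;
# X's report does not publish its methodology.
def risk_level(likelihood: int, severity: int) -> str:
    """Map a likelihood x severity product (each rated 1-5) onto a label."""
    score = likelihood * severity
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"


def residual_level(likelihood: int, severity: int, mitigation_effect: float) -> str:
    """Re-score after discounting likelihood by mitigation effectiveness."""
    reduced_likelihood = max(1, round(likelihood * (1 - mitigation_effect)))
    return risk_level(reduced_likelihood, severity)


# A risk rated 5 for likelihood and 5 for severity is "high" (score 25);
# even if mitigations cut likelihood by 30 percent, it stays "high" (4 * 5 = 20).
print(risk_level(5, 5))            # high (inherent)
print(residual_level(5, 5, 0.30))  # high (residual)
```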
In July 2024, the commission announced preliminary findings that X was in breach of the DSA, but on matters unrelated to risk assessment or mitigation. Instead, the findings concerned verified accounts, advertising transparency, and access to data for researchers. Although the related investigations for Meta and TikTok are still pending, this initial result suggests that while risk assessments provide a mechanism for deep investigation into platform policies and procedures to assess and mitigate risks, the commission appears to be relying on more objectively verifiable matters when finding violations, rather than subjective assessments of whether a company did enough to assess or mitigate a given risk area.
Elections are the only issue area where the commission has published guidance, specifically on risk mitigation, issuing a report in April 2024. Notably, the commission has referred to this guidance in the public summary of its investigation into Meta to bolster its view that the deprecation of CrowdTangle—a popular tool for tracking public data and monitoring trends on Facebook and Instagram—constituted a failure to diligently assess and adequately mitigate electoral risks.
Because Meta has until now published only its 2024 rather than 2023 risk assessment report, we cannot compare what was prepared in 2023 to the commission’s queries. However, although Facebook’s and Instagram’s risk assessments provide many examples of mitigation measures specific to elections—including work with various third parties—they omit mention of the Meta Content Library, which is the successor to CrowdTangle. Whether the commission proceeds and determines that deprecating CrowdTangle amounted to a breach of the risk assessment and mitigation requirements of the DSA will be telling.
The other areas of heightened enforcement attention relate to generative artificial intelligence and the protection of minors, including their mental health and well-being. The quantity of content pertaining to these risks, the quality of analysis prepared by companies in their risk assessments, and the reaction from the commission in the form of RFIs and enforcement actions will provide a sense of whether efforts in these domains are viewed as sufficient. Without authoritative guidance that could provide the basis for noncompliance findings that would hold up in court, it may be that the commission instead uses investigations around risk assessment and mitigation to press companies to enter into binding commitments (as it did in the case of TikTok). This could lead to a whack-a-mole enforcement approach that generates further confusion about what is “good enough” when it comes to compliance.
About the Audits
The risk assessments are paired with the release of the independent audits of each platform, along with the company’s response to the audit. A full analysis of the audits is well outside the scope of this article, but there are a few aspects of the audits that are key to understanding how the components of the DSA interact.
The scope of the audits goes well beyond risk assessment and covers full company compliance with the DSA. This is a good thing, because an audit may be better suited to examining whether companies are objectively and measurably complying with transparency reporting obligations than to opining on the risk assessments. For instance, each audit evaluated the service’s compliance with transparency reporting requirements, in several cases recommending process improvements to ensure that transparency reports include all the information required by the DSA and are comprehensive and reliable.
Compliance with the DSA, especially for very large services, requires changes to process documentation, data tracking for transparency, and other business processes. An audit can be a useful, albeit expensive, tool for ensuring these matters are attended to. Auditors normally apply independently created objective criteria as part of their job. Unfortunately, in the case of the DSA generally, and its risk assessment and mitigation requirements in particular, no such criteria exist. The European Commission declined to incorporate such criteria into the delegated act that governs the audits. Companies, auditors, and civil society had been warning the commission about the need for such criteria since long before the audit regulations were issued, but in their absence, companies and auditors effectively had to improvise a workable approach. This often meant that the company being audited was responsible for articulating the specific requirements of implementing the DSA, which were then agreed upon and evaluated by the auditor.
Although observers have been quick to use the audits to contrast company compliance, differences in how each auditor approached their task make easy comparisons misleading. This is particularly apparent from the auditing of compliance with the Risk Assessment (Article 34) and Mitigation (Article 35) requirements. Wikipedia, the only service that avoided any “negative” conclusions, nonetheless received extensive comments and substantial recommendations from its auditor, who went so far as to say “the uncertainties and formal misalignments noted for the purposes of Article 35 may have been substantial enough to turn the audit conclusion to negative if the audit practice was established by the Commission.” For others, such as Bing, the auditors also provided a “positive with comments” conclusion (in a delegated act on the audits, the European Commission states that such comments “should not concern the assessment of compliance itself”) but noted a lack of “formal mapping of risk mitigants to controls,” which appears to emphasize that the company should incorporate all of the categories of risk mitigations listed in the DSA. It is not clear that other auditors adopted the same approach, illustrating the need for objective criteria that can be applied evenly by all auditors.
One might also think that audit results for the services whose risk assessment and mitigations are under active investigation by the commission would be particularly important. But here, most of the auditors, with the exception of X’s auditor, decided that the presence of these investigations rendered them incapable of providing an opinion. So there is relatively little information in the audit reports about the risk assessments and mitigations that have been flagged for enforcement by the commission.
The Path Forward
A minor piece of news was nearly lost in the tidal wave of responses to Meta’s content moderation changes and the escalation of transatlantic tensions over tech regulation. Meta prepared and shared a risk assessment with the European Commission about its planned shift away from fact checking and toward a “Community Notes”-style system. Ad hoc risk assessments are required by the DSA “prior to deploying functionalities that are likely to have a critical impact” on systemic risks.
That Meta promptly produced a Community Notes risk assessment indicates that, notwithstanding Zuckerberg’s fighting words, compliance with risk assessment requirements remains a fact of life for very large platforms.
The first round of DSA risk reports underscores the need for guidance to ensure greater consistency of approach to risk, mitigation, and audit. But overly prescriptive guidance could create as many challenges as it solves. The commission, having had the opportunity to compare the approaches taken by services ranging from Instagram to Wikipedia, should take care not to directly or indirectly incentivize the costliest forms of compliance as standard operating procedure.
Today, 20 companies with “very large” designated services are preparing for 2025 risk assessments and audits. This group has expanded to include more e-commerce platforms as well as adult sites, all of which face diverse risks. Meanwhile, an estimated 100,000 service providers are in scope for the U.K.’s Online Safety Act and must complete a risk assessment for that law by March 16. In contrast with the EU, the U.K. regulator Ofcom has published lengthy guidance based on extensive consultations, so it is likely that companies will adapt guidance from the U.K. as they iterate on EU assessments in the future.
The next two years will prove crucial to transforming reams of risk assessments and audits from costly paperwork exercises into a meaningful system of continuous improvement, and preventing them from becoming a means of indirect government censorship. Although important analysis has been published, a broader swathe of activists, investors, academics, and policymakers should pay attention. After all, the only way to improve the process is to engage with it.