A French Perspective on Elon Musk’s Twitter
Musk assured European leaders that Twitter will comply with European regulations. However, it is doubtful both that he will accept all the constraints of the European rules and that Twitter will have the concrete means to comply with them.
A lot has happened since Elon Musk made a sensational entrance into Twitter headquarters carrying a sink. As soon as Musk took control of Twitter on Oct. 28, he fired a large part of the company’s workforce and launched a $7.99 pay-for-play blue check system that led to a profusion of “verified” impostor accounts and was later suspended. Musk also announced that major content decisions or account reinstatements would be made by a content moderation council, then abandoned this idea and posted polls asking users to vote on sensitive issues, such as whether Donald Trump’s account should be restored. He subsequently restored Trump’s account along with those of many extremists, including far-right influencers and people associated with the QAnon ideology. Musk also decided to stop enforcing any policy against coronavirus misinformation and made a number of “Twitter files” public in an attempt to demonstrate that the former Twitter management protected Democrats, including President Joe Biden’s son Hunter, and engaged in censorship against conservatives. And just recently, Musk abruptly suspended the accounts of half a dozen journalists from CNN, the New York Times, the Washington Post, and other news outlets without warning or explanation, before reinstating those accounts following a Twitter poll. Finally, Musk has lost a significant number of advertisers, while many experts have observed a rise in hate speech on the platform.
Musk’s erratic and unpredictable behavior, which is increasingly drawing outrage, makes it very difficult to foresee what might happen in the coming months or to predict what Twitter’s content policy might become. In response to Musk’s takeover, European authorities have expressed concerns and insisted that Musk’s Twitter comply with the Digital Services Act (DSA)—a regulation adopted recently by the European Union to ensure an effective fight against illegal and objectionable online content. Recently, Thierry Breton, the European commissioner for internal market, warned Musk in a video call that Twitter could face fines of up to 6 percent of global turnover, or even a Europe-wide ban, if it breached the law.
What likely worries Europeans the most is that Musk describes himself as a “free speech absolutist” who intends to fundamentally change Twitter’s moderation policy, which he has previously accused of having a “strong left-wing bias.” From the moment he announced his acquisition of Twitter, Musk said he wanted to make the platform a “politically neutral” space for free expression and a “de facto public town square” where the only limits on online speech would be derived from the law. According to a tweet from Musk on April 26, free speech is language that “matches the law.” He continued, “I am against censorship that goes far beyond the law. If people want less free speech, they will ask government to pass laws to that effect. Therefore, going beyond the law is contrary to the will of the people.” In October, Musk reiterated and qualified his position, writing, “Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences! In addition to adhering to the laws of the land, our platform must be warm and welcoming to all.” In recent weeks, he insisted that the site’s new policy would be “freedom of speech, but not freedom of reach” and announced that negative and hateful tweets would be “max deboosted and demonetized, so no ads or other revenue to Twitter.”
Even when nuanced, Musk’s free speech ideas are of great concern to Europeans. In France, many have expressed worry about Musk taking control of Twitter. For example, an op-ed published in May in the French newspaper Le Monde stressed that Musk would have to comply with French law, regardless of his ideas on freedom of expression. The article recalled that French law penalizes racist, anti-semitic, sexist, and anti-LGBTQ speech, as well as anything that is, in general, an affront to human dignity. More recently, French President Emmanuel Macron called Musk’s decision to end efforts to combat coronavirus-related misinformation on Twitter a “big problem.” Macron then met with Musk in New Orleans to express his concerns in what he called a “clear and honest discussion.” After the meeting, Macron tweeted, “Transparent user policies, significant reinforcement of content moderation and protection of freedom of speech: efforts have to be made by Twitter to comply with European regulations.” For his part, Musk confirmed Twitter’s commitment to uphold the goals of the Christchurch Call to eliminate terrorist and violent extremist content online and pledged to improve protections for children on the platform.
Musk’s approach of having the platform ensure strict compliance with the law is certainly conducive to free speech in a U.S. context that overwhelmingly protects freedom of expression and reduces platform constraints to their simplest form. However, the same is not true in the European Union, where the law imposes numerous obligations on platforms. In the spirit of prioritizing “free speech”—and as described above—Musk has reinstated many extremist and conspiracy accounts, given up on fighting misinformation about the coronavirus outbreak, decimated Twitter’s staff, and laid off thousands of contractors who worked on content moderation. The combination of these factors therefore raises two questions: Will Musk be inclined to have Twitter follow Europe’s content moderation rules? And will Twitter even possess the means to do so?
Will Musk Be Inclined to Make Twitter Comply With the Digital Services Act?
As mentioned above, the Digital Services Act is a new EU regulation designed to supplement and update the rules of the 2000 E-Commerce Directive, which for the past 20 years has shielded platforms from liability for the unlawful content they host. The DSA enshrines the principle that “what is illegal offline is also illegal online” and includes provisions designed to ensure an effective fight against unlawful content. In a way, the DSA’s purpose aligns precisely with what Musk announced he wanted to do with Twitter’s content moderation policies: follow the law and nothing but the law. Yet the DSA also includes obligations that go far beyond the moderation of illegal content, which will likely raise objections from someone like Musk. Admittedly, Musk said in a video in April that he was aligned with Europe’s regulation of online speech. Despite this claim, however, Musk’s capricious character makes his decision-making fairly unpredictable—especially with respect to Twitter’s content moderation policy. But for Twitter to operate legally in Europe, Musk must comply with European law, which he is unlikely to do without some friction. What follows is an effort to gauge which aspects of the DSA Musk is likely to comply with without objection, and which aspects the new “chief Twit” will be especially reluctant to follow.
What parts of the DSA would Musk likely easily comply with?
The DSA maintains the principle of exemption from liability that the E-Commerce Directive has provided for more than 20 years. This exemption benefits all internet service providers (ISPs) that host illegal content or activities without knowledge of their existence or their illegal nature. The exemption applies as long as such providers act “expeditiously” to remove or disable access to unlawful content when they learn of it. In this respect, the exemption is somewhat narrower than the general immunity provided by Section 230 of the Communications Act of 1934 (47 U.S.C. § 230), which largely governs online content moderation in the United States. Under § 230, ISPs cannot be held responsible for content posted by users of the service (§ 230(c)(1)), regardless of their knowledge of its illegality.
In addition, the DSA imposes obligations on platforms that are intended to ensure that illegal content is combated effectively, a goal that Musk claims to share. The fight against illegal content relies on effective reporting: hosting providers must offer accessible mechanisms for flagging suspicious content (as specified in Article 16), and reports from trusted flaggers must be given priority (as detailed in Article 22). In this regard, the DSA provides that an ISP is deemed to have “actual knowledge” of the unlawful nature of an activity or information when it has been reported and its unlawfulness is clear without the need for detailed legal examination. This was already the case in France, where the Constitutional Council has ruled that hosting providers can be held liable only when they remain inactive despite being aware of the presence of “manifestly illicit” content. It is also likely that Musk will welcome the new “Good Samaritan” rule (as specified in Article 7), which provides that platforms do not lose the benefit of the exemption from liability when they engage, “in good faith and in a diligent manner,” in investigations to identify and remove illegal content. Additionally, because of the announced focus on legality, EU authorities and Twitter users can hope that the now Musk-owned Twitter will follow the DSA’s rules requiring that platforms cooperate with the authorities and promptly inform them of content that may give rise to suspicions of criminal offenses posing a threat to the life or security of persons (as required by Article 18).
If Musk remains consistent with his ideas on freedom of speech, he should have no objection to ensuring that Twitter’s moderation practices respect the fundamental rights of users, in particular freedom of expression. He should have no difficulty requiring Twitter teams to take into account the rights and legitimate interests of all parties concerned and to act in a “diligent, objective and proportionate manner,” as specified in Article 14. In addition, EU Twitter users could hope that a “free speech advocate” like Musk would provide them with a “clear and specific explanation” of moderation decisions (as outlined in Article 17) and the possibility to appeal decisions made about their individual accounts (as outlined in Article 20). In fact, Musk announced recently that Twitter is working on a software update that will allow users to view what he calls their “true account status.” The update will reportedly inform users of whether they have been “shadowbanned,” the reason for this action, and the options for appealing the decision affecting their account. In the same vein, Musk should not object to ensuring that Twitter complies with the transparency requirements of the DSA, which require, among other things, that platforms publish clear and precise information about their content moderation. The DSA also dictates that Twitter, as a very large platform, must release information on the parameters of the algorithms used to moderate content and even give EU regulators and vetted researchers access to data so that they can assess the risks generated by the platform. Within this framework, Musk should be inclined to inform users of the automated nature of certain moderation methods and to adopt reasonable measures to ensure that the technology is “sufficiently reliable to limit the rate of errors” so that content is not removed without good reason, as specified by Recital 26 of the DSA.
What parts of the DSA would Musk likely object to?
Musk’s Twitter will almost certainly challenge various aspects of the Digital Services Act. First, the regulation does not explicitly define illegal content and must be read in combination with the laws of EU member states, whose definitions of what is illegal often vary. Of course, Twitter and other online platforms already have to comply with this patchwork of legislation, which is no easy task for international platforms trying to implement content policies on a global scale.
Second, Musk, who has repeatedly said that he does not want to go beyond the strict fight against illegal content, will likely not be comfortable with the notion of “systemic risks.” Under Article 34, the DSA requires very large platforms like Twitter—those with more than 45 million active users—to assess systemic risks so that they can create and apply appropriate policies to reduce them. These systemic risks stem not only from the dissemination of illicit content but also from the presence of content that generates “any actual or foreseeable negative effects” on the exercise of fundamental rights (human dignity, private and family life, personal data protection, freedom of expression and information, non-discrimination, rights of the child, consumer protection); on “civic discourse, electoral processes and public security”; in relation to “gender-based violence, the protection of public health, and minors”; and on a person’s “physical and mental well-being.” This very broad definition of systemic risks covers all kinds of content that may have problematic effects—such as the promotion of ineffective alternative treatments—without being, strictly speaking, unlawful, which is often the case with misinformation about public health, climate change, or politics. It is therefore unlikely that Musk, who, remember, has just given up the fight against coronavirus misinformation on Twitter and has recently ordered the removal of suicide prevention features, would be eager to put in place “reasonable, proportionate and effective mitigation measures” to combat systemic risks. It is also doubtful that he would welcome the directions of the European Commission and the national coordinators, which are empowered by the DSA to issue guidelines on how to reduce systemic risks, including best practices and possible mitigation measures. His reluctance is all the more probable considering that European regulators will almost certainly favor content moderation practices that extend far beyond the mere removal of illegal content. Moreover, while the DSA often leads to increased dialogue between regulators and providers, it offers little means to compel recalcitrant platforms, which, under Musk, Twitter might become.
Third, will Musk, as a self-proclaimed defender of freedom of expression, be inclined to comply with the “crisis response mechanism” that allows the European Commission to require the largest platforms to act in accordance with its instructions in serious circumstances such as war or a pandemic? This mechanism was incorporated into the DSA while the legislation was being negotiated, after the EU suspended the broadcasting activities of five Russian state-owned outlets that allegedly spread propaganda and conducted disinformation campaigns about Russian aggression against Ukraine. In November 2022, Rumble—the conservative video network backed by billionaire Peter Thiel—blocked access for its users in France rather than comply with the French government’s demand, based on Council Regulation (EU) 2022/350, that Russian news sources be removed from the platform. For the moment, Twitter has blocked European access to the accounts held by these Russian media outlets at the Commission’s request. Who is to say that Musk’s Twitter will continue to comply with such requests, which could be seen as censorship or as a violation of freedom of expression? After all, from a strict legal standpoint, such a request could be seen as being in tension with both Article 15 of the E-Commerce Directive and Article 8 of the DSA, under which providers of online intermediary services have no general obligation to monitor the information they store.
What About Compliance With French Law?
Musk will likely realize quickly that the legislation of foreign countries sometimes includes obligations, prohibitions, and criminal offenses that are much stricter than those in the United States. More specifically, Twitter must follow the various laws of EU member states, some of which impose strict restrictions on online hate speech and disinformation. France is one of these countries.
In August 2021, the French parliament adopted Law No. 2021-1109 “reinforcing the respect of the principles of the Republic,” which inserted a large number of provisions—in order to transpose the DSA “in advance”—into Law No. 2004-575 “for confidence in the digital economy.” These provisions are currently in force in France pending the effective implementation of the DSA and will apply until Dec. 31, 2023, at the latest. They include specific obligations for social networking platforms, such as Twitter, that have more than 10 million unique users per month in France. Under this law, Twitter is therefore already obliged to facilitate the reporting of illegal content by users, to process these reports using “proportionate human and technological resources,” and to publish information on the measures adopted to combat illegal content, as required by Article 6-4 of the law “for confidence in the digital economy.”
French law also provides that social networks must refrain from taking arbitrary, disproportionate, or unappealable measures to remove content or suspend accounts. Twitter’s recent, seemingly random suspension of several American journalists violates this standard. In particular, Article 6-4 of the French law “for confidence in the digital economy” requires that decisions to suspend or terminate user accounts be proportionate and justified by repeated and serious abuses, as specified in Article 6-4 I 9°. Suspensions must be preceded by a warning to the user and must last for a “reasonable duration,” and users must always have the option to appeal the platform’s decisions internally. Furthermore, Twitter’s decisions to remove content must be grounded in violations of established user policies, as specified in Article 6-4 I 7° of the same law. It is worth noting that, for now, French law requires platforms to issue an explanation only when they decide to remove or block access to unlawful content, but not in cases of “restriction of visibility” such as shadowbanning. Conversely, Article 17 and Recital 55 of the DSA provide that if a platform decides to restrict the visibility of a specific user’s content (via demotion, shadowbanning, or other practices), it must issue a statement of reasons to the affected user explaining why the decision was made and how the content is illegal or incompatible with the platform’s terms and conditions. The current state of French law thus preserves, for the time being, the ability of Twitter and other platforms to practice shadowbanning without informing users.
Additionally, the French law already provides that the major ISPs must annually assess systemic risks and implement reasonable, effective, and proportionate measures to mitigate such risks while avoiding unjustified content removal. Finally, ARCOM, the French supervisor of social media, has the right to access “the operating principles of automated tools, the parameters used by these tools, the methods and data used to evaluate and improve their performance, and any other information or data to evaluate their effectiveness,” as provided by Article 62 of the French Law No. 86-1067 “related to freedom of communication.”
Within this framework, Musk and his legal team must account for the many criminal provisions included in this area of French law. For example, the Law on the Freedom of the Press of 1881 punishes defamation, insult, incitement to discrimination, hatred, or violence, and denial of the existence of crimes against humanity. Moreover, the French penal code punishes death threats, “revenge porn,” the dissemination of videos showing physical violence against a person, and cyberbullying. Musk and the Twitter team should also understand that French law criminalizes incitement to terrorism and the glorification of terrorism, and that European Regulation 2021/784 requires hosts to remove terrorist content within one hour of a request from the competent authorities. Twitter must also take into account the fact that, in France, making pornographic content available to minors is punishable by up to five years of imprisonment and a fine of 75,000 euros. Notably, Twitter is currently the target of several French child protection associations that have referred the matter to ARCOM, arguing that Twitter should be sanctioned on the grounds that it leaves too much pornographic content online.
Twitter will also have to take into consideration the fact that, under French law, false information online can be sanctioned. Admittedly, criminal law punishes only the dissemination, in bad faith, of false news “having disturbed the public peace or being likely to disturb it” or “of a nature to undermine the discipline or morale of the armed forces or to hinder the war effort of the Nation,” according to Article 27 of the Law of 1881. But if false information is disseminated “artificially or automatically” and “massively,” and if it is clearly inaccurate or misleading while creating a clear risk of altering the sincerity of an election, then a judge ruling in summary proceedings may order platforms to remove the content. This possibility, introduced into French law by Law No. 2018-1202 “combating the manipulation of information,” applies only in the three months preceding an election.
Ironically, there is one aspect of French law that Musk seems to have taken into account, perhaps unwittingly. The French penal code punishes revealing, disseminating, or transmitting information that makes it possible to locate a person “for the purpose of exposing him or her or the members of his or her family to a direct risk of harm to the person or to property that the perpetrator could not have been unaware of.” This provision could be used to justify the recent update to Twitter’s terms of use, which now prohibit any publication aimed at disseminating location data in real time. On this basis, Twitter first suspended the @ElonJet account, which was using public flight data to share the location of Musk’s private plane. Twitter then suspended a number of journalists covering the story for their news organizations, claiming that these accounts had contributed to the dissemination of location information about Musk and had therefore put him in danger. Musk cited safety concerns, particularly for his family, following an incident involving his two-year-old son. These claims are highly contested. Moreover, while the new rule adopted by Twitter is understandable, applying it arbitrarily, at the expense of the freedom of expression that Musk has always committed to defend, is questionable.
Will Twitter Have the Means to Comply With EU and French Rules?
Not much is left of Twitter’s teams in Europe, as is the case in other regions of the world, such as Japan, Singapore, and India. According to the Financial Times, Twitter has closed its entire office in Brussels after Musk imposed “massive layoffs” on its workforce. Many employees have also left voluntarily in response to Musk’s ultimatum demanding that they commit to an “extremely hardcore” working culture or quit. This situation is a cause for concern, especially since the executives who left the company—such as Julia Mozer and Dario La Nasa—led Twitter’s efforts to comply with the EU’s Code of Practice on Disinformation and to anticipate the upcoming regulations. However, because Twitter’s main establishment (Twitter International Company) is in Ireland, it is above all the Irish office that matters for compliance with EU regulations. These regulations set up a “one-stop-shop” system, under which the competent European regulators for a platform are those of the EU member state where the company is “mainly established.” In the case of Twitter, it is therefore the Irish regulators that have jurisdiction. Yet Twitter’s Dublin office, which had a staff of about 500 people, was heavily affected by the wave of layoffs, including that of Twitter’s global vice president for public policy, who was ultimately reinstated following a court order.
In fact, the Irish Data Protection Commission (DPC) expressed concern after it learned that Twitter no longer had a data protection officer (DPO) responsible for ensuring compliance with the EU’s General Data Protection Regulation (GDPR). The company then disclosed the name of an “acting DPO” chosen from among existing employees. But the issue of privacy is now crucial in light of Musk’s decision to give people outside the company who are investigating the “Twitter files” broad access to Twitter data. That decision may violate the GDPR if Twitter is sharing users’ data without explicit permission from the individuals linked to the data in question. Since then, the Irish DPC has announced the start of an investigation into Twitter’s privacy practices, including the period before Musk’s takeover.
As for content moderation, the French supervisor of social media, ARCOM, has expressed its deep concern about the “ability of Twitter to maintain a safe environment for users of its service.” While some might argue that ARCOM does not have jurisdiction over Twitter (since the company’s main establishment is located in Ireland), the aforementioned French law adopted in 2021 made a temporary exception to the European one-stop-shop rule and gives jurisdiction to ARCOM until the DSA is effectively implemented. Within this framework, ARCOM has recently issued guidelines for all major platforms and sent Twitter a formal notice requesting detailed information about its moderation resources. Insofar as Twitter has terminated its contracts with approximately 75 percent of its moderation contractors worldwide (according to ARCOM’s estimate), French authorities likely fear that these massive layoffs—and the resulting lack of content moderation staff—may jeopardize the proper moderation of illegal, hateful, discriminatory, and misleading content. In response to these concerns, Twitter published an official statement entitled “Twitter 2.0,” which reiterated its “ongoing commitment to public debate” and insisted that none of its previous policies had changed. The statement added that Twitter’s “trust and safety” team—which lost its manager and approximately 15 percent of its workforce after Musk purchased the platform—remained “strong and well-resourced,” and highlighted that “automated detection plays an increasingly important role in eliminating abuse.” Since this statement was released, Twitter’s Trust and Safety Council, which had advised Twitter on content moderation strategy since 2016, has been dissolved.
It is doubtful that a company suddenly deprived of a substantial part of its personnel—and reportedly in the process of reinstating roughly 62,000 previously suspended accounts, each with more than 10,000 followers—can meet the obligations outlined in French and EU law. Indeed, not only does Twitter have to moderate illegal speech, but it must also ensure that all reports are properly handled, that users are effectively informed of decisions concerning them and given a statement of reasons for why their account was affected, that appeals are processed within a reasonable period of time, and that compliance obligations, including writing transparency reports and assessing systemic risks, are respected. How can Twitter hope to comply with these obligations with so few employees and a particularly tense atmosphere within the company? This seems all the more unlikely considering Twitter’s past failures to comply. In France, for example, Twitter has been sued for refusing to cooperate with judicial authorities to moderate hateful speech online. On July 6, 2021, the Paris Court ordered Twitter to disclose information about the material and human resources it devotes to combating hate speech. In this case, the claimants had produced evidence of numerous racist, homophobic, and anti-semitic tweets for which account suspension and deletion requests were not met promptly. According to the evidence provided in the judicial proceedings, out of 1,100 “hateful” tweets reported during a period of about six weeks, Twitter had expeditiously removed only 12 percent, while reports sent to Facebook resulted in a removal rate of 67.9 percent over the same time frame. In addition, a monthlong monitoring operation carried out by the Specialized Cyber-Activists Network on a sample of 484 instances of hateful content identified on four platforms (YouTube, Facebook, Instagram, and Twitter) revealed that Twitter removed only 9 percent of the objectionable tweets reported by users and made 5 percent invisible to users in the same geographic area, while the other platforms removed 58 percent of the objectionable content over the same period.
Could Twitter Be Sanctioned by French or EU Authorities?
The above-mentioned French rules are currently in force, and ARCOM is entitled to take action to ensure compliance. In any case of suspected noncompliance, ARCOM can request information from platforms, as it recently did with Twitter. The financial penalty for refusing to provide the information requested by ARCOM may not exceed 1 percent of the platform’s total annual revenues for the previous financial year. If a platform continues to refuse to comply, the regulator can issue a formal notice to comply with the law and then impose an additional penalty of up to 20 million euros or 6 percent of total annual revenues from the previous financial year, whichever amount is higher. Therefore, if Twitter repeatedly violates French law to the detriment of French users, ARCOM will almost certainly take action against the company.
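To make these ceilings concrete, here is a minimal illustrative sketch, in Python, of how the two caps described above would be computed. The revenue figure is hypothetical (Twitter’s relevant annual revenues are not assumed here); only the cap logic tracks the rules described in the preceding paragraph.

```python
# Illustrative sketch of the ARCOM penalty ceilings described above.
# The revenue figure below is hypothetical; only the cap logic tracks the law.

def information_refusal_cap(annual_revenue_eur: float) -> float:
    """Cap on the fine for refusing to provide requested information:
    1 percent of the previous financial year's total annual revenues."""
    return 0.01 * annual_revenue_eur

def continued_noncompliance_cap(annual_revenue_eur: float) -> float:
    """Cap on the further fine after a formal notice: 20 million euros
    or 6 percent of annual revenues, whichever amount is higher."""
    return max(20_000_000, 0.06 * annual_revenue_eur)

# Hypothetical example: a platform with 4 billion euros in annual revenues.
revenue = 4_000_000_000
print(f"{information_refusal_cap(revenue):,.0f} euros")      # 40,000,000 euros
print(f"{continued_noncompliance_cap(revenue):,.0f} euros")  # 240,000,000 euros
```

For any platform with annual revenues above roughly 333 million euros, the 6 percent ceiling exceeds the 20 million euro floor, so the percentage-based cap is the one that bites for a company of Twitter’s size.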
But it is above all once the DSA is fully enforceable that Twitter will risk being heavily sanctioned. The DSA grants EU member states the authority to specify the penalties for violations of the regulation in their territories; these penalties must be “effective, proportionate and dissuasive.” As already provided by the French law that introduced these sanctions in advance, the maximum fine for failure to comply with an obligation specified in the DSA is 6 percent of the provider’s annual worldwide revenues from the previous financial year. These rules can lead to extremely high fines, creating a strong incentive for platforms to comply with the law. Member states will designate national coordinators of digital services responsible for enforcing the DSA and its penalties. In parallel, the European Commission will have the power to enforce the DSA against very large online platforms (VLOPs) such as Twitter, acting in collaboration with the national coordinators to investigate possible infringements and determine whether to issue fines. The national coordinators will also sit on a European Board for Digital Services, which will advise both the coordinators and the commission.
There has been discussion of a possible ban on Twitter in Europe if it does not comply with DSA rules. Thierry Breton, the European commissioner for internal market, declared that Twitter’s sudden reactivation of accounts that spread disinformation and hate messages contradicted the DSA. In a video call with Musk, Breton allegedly threatened to ban Twitter from operating in Europe. To put Breton’s warning in context, Article 82 of the DSA provides that the European Commission may, in the event of a serious and persistent breach, ask the relevant regulator (in Twitter’s case, the Irish regulator) to request that the judicial authority order a temporary restriction of users’ access to the service or its online interface. While possible, this can happen only under certain conditions, such as if “the infringement has not been remedied or is continuing and is causing serious harm, and that that infringement entails a criminal offence involving a threat to the life or safety of persons,” as specified in Article 51(3)(b) of the DSA. A total ban is therefore unlikely; in the worst case, noncompliance would result only in a temporary restriction of access and a fine.
In any case, the DSA is not yet effectively enforced. Online platforms have until Feb. 17, 2023, to report the total number of their active end users to the European Commission so that it can determine which platforms qualify as VLOPs. A platform is classified as a VLOP if it has 45 million or more active users, which is the case for Twitter. Once a platform receives a formal VLOP designation from the European Commission, it has four months to bring its policies into compliance with the DSA. Therefore, since the designation process will not begin until Feb. 17, 2023, and the commission may take some time to make its decisions, the largest platforms will likely have to comply with the DSA by the summer of 2023 at the earliest and the end of 2023 at the latest.
For the time being, Musk has agreed with Breton that the commission’s services will conduct a stress test at Twitter’s headquarters in early 2023, allowing Twitter to assess its compliance with the DSA ahead of the legal deadlines. Musk, however, will probably no longer be CEO by then: A Twitter poll that he committed to abide by revealed that a majority of voters want him to leave his position. Following this poll, Musk announced that he will resign as Twitter’s chief executive as soon as he finds “someone foolish enough to take the job,” but added that he will continue to run the software and servers teams.