Analysis

What the EU’s X Decision Reveals About How the DSA Is Enforced

Matteo Fabbri / Feb 11, 2026

A hard rain falls on Brussels. Shutterstock

The European Commission’s decision to impose a €120 million fine on X, the first imposed on a very large online platform under the Digital Services Act (DSA), was released by the Republican-led US House Judiciary Committee at the end of January. Although the Commission did not publish the decision or comment on its release, the document provides a rare look into how the Commission’s DSA enforcement team builds cases against major platforms.

The Commission has not published the legal records underlying its DSA investigations and applies a general presumption of nondisclosure to the evidence it collects, including requests for information. This analysis examines the decision’s key findings on three alleged violations by X: the deceptive design of blue checkmarks under Article 25, the inaccessibility of its ad repository under Article 39, and restrictions on researchers’ access to public data under Article 40.12. In-text citations refer to the paragraphs of the decision.

Who was fined by the Commission and why it matters

First, the decision is addressed to four entities: X Internet Unlimited Company (XIUC), the main establishment of X in the EU; X Holdings Corp., which owns 100% of XIUC; X.AI Holdings Corp., which owns 100% of X Holdings Corp.; and finally Elon Musk, the natural person who, as majority shareholder, exercises decisive influence and effective control over X.AI Holdings Corp., X Holdings Corp. and, consequently, XIUC.

The Commission adopted a functional approach to identify the provider, “which may consist of several legal entities constituting a single economic unit or even a controlling shareholder and those entities” (49). The decision to define the provider of X as a collective entity is based on “the case-law of the CJEU in competition cases, where one company has a 100% shareholding in another company, the former is in a position to exercise decisive influence over the latter and there is a rebuttable presumption that the former does in fact exercise such influence” (50). In doing so, the Commission ensures that all the relevant stakeholders are targeted by its fine.

Key DSA violations cited in the X decision

As regards the deceptive use of the blue checkmark, the Commission established that “the provider of X departed from a system of pro-active and ex ante confirmation of identity towards a system under which the ‘verified’ status is distributed to anonymous paying subscribers, with an at least partially post-hoc reactive approach to impersonation abuses of the ‘verified’ status” (90). Through this policy, “the provider of X materially changed the verification process as compared to cross-industry standards, while at the same time maintaining the visual and textual online interface design for ‘verified’ accounts under Twitter’s Verified Accounts program, which was in line with those Standards” (50).

According to the regulator, this policy misappropriates the historical “significance and assurance value of a cross-industry standard for representing accounts whose authenticity and identity have been verified” (50). The Commission calculated that X’s human review process for applications to obtain the verified status averaged only 53-79 seconds per account, a duration insufficient to meaningfully verify identity.

At the same time, X provides premium subscribers with “a ‘reply prioritization’ — i.e., an algorithmically curated higher visibility of their replies to posts on X’s online interface — that increases in magnitude as a function of higher tier subscriptions” (92). This means that “replies to other users’ posts by accounts having the ‘verified’ status in X’s interface design are statistically more visible [...] effectively allow[ing] any subscriber to ‘pay for reach’ or pay for higher visibility” (92). However, to find the true meaning of the blue checkmark, a user must navigate “three clicks, a pop-up window, and a separate help page” (103) away from the timeline. The perception of authoritativeness conveyed by the checkmark, the difficulty of finding information about its meaning, and the paid algorithmic promotion of ‘verified’ accounts led the Commission to conclude that this feature violated Article 25.

Regarding the infringement of Article 39 on ad transparency, the Commission found X’s ad repository neither adequately searchable nor reliable because of intentional design barriers. First, to start a search and obtain the ad report file, users must fill in three fields corresponding to: “(i) the Member States where the advertisement was presented (a user must select one Member State, with no possibility to choose the whole of the Union); (ii) the X account of an advertiser (a user must select one advertiser from a drop-down list displayed); and (iii) the time frame within which the advertisement was presented” (179). Users cannot query the tool based on the ad content, the person who paid for it, or the parameters used to target recipients, which hinders the discovery of previously unknown advertising campaigns.
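
To make the constraint concrete, the only query shape the repository accepts can be rendered as a minimal, purely illustrative sketch; the class and field names below are hypothetical and do not come from X’s documentation, which describes a web form rather than a public API.

```python
from dataclasses import dataclass
from datetime import date


# Illustrative only: hypothetical field names mirroring the three mandatory
# search criteria described in paragraph 179 of the decision.
@dataclass
class AdRepositoryQuery:
    member_state: str        # exactly one EU Member State; no Union-wide option
    advertiser_account: str  # one advertiser, selected from a drop-down list
    date_from: date          # start of the time frame
    date_to: date            # end of the time frame
    # Notably absent: fields for the ad's content, the entity that paid for it,
    # or the targeting parameters used -- which is why previously unknown
    # campaigns cannot be discovered through search.
```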

Obtaining the ad reports is also time-consuming: the Commission’s trials revealed that X “artificially increased” (189) the response time to 3 minutes and 20 seconds per report, as the search tool waits until the user’s browser has checked for updates exactly 100 times, with each check lasting two seconds (100 × 2 s = 200 s). This delay applies universally, regardless of the report's size: even reports that are entirely empty or contain only a few ads are subject to the same waiting time.
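
A minimal sketch, assuming nothing about X’s actual client code beyond what the decision describes, shows how a fixed polling schedule produces the same wait for every report, however small:

```python
import time

POLL_INTERVAL_SECONDS = 2  # each browser update check lasts two seconds
POLL_CYCLES = 100          # the tool waits for exactly 100 such checks


def download_ad_report(report: bytes) -> bytes:
    """Illustrative reconstruction of the delay described in the decision:
    the report is released only after a fixed number of polling cycles,
    so even an empty report takes 100 x 2 s = 200 s (3 min 20 s)."""
    for _ in range(POLL_CYCLES):
        time.sleep(POLL_INTERVAL_SECONDS)  # wait for the next update check
    return report  # the content was available from the start
```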

Moreover, a study by the Institut des Systèmes Complexes de Paris Île-de-France (cited in footnote 191 of the decision), later corroborated by the Commission’s own research, found that only 58% of ads shown to a sample of French users between 2023 and 2024 were actually present in X’s repository, likely “due to the inability to retrieve information on advertisements from suspended or deleted accounts” (197). If an advertiser deletes a post, the content becomes permanently unavailable in the repository, violating the requirement to retain the information for one year. The Commission also remarks that API access to the repository is de facto prevented, as the system returns an error message even when the 100 USD paid version is used.

As for compliance with Article 40.12, between August and November 2023 X did not put in place any dedicated mechanism for researchers’ data access apart from an expensive (up to 5,000 USD) commercial API and an unpublicized email address (EU-Questions@x.com) through which not a single request was approved during that period. The free Basic API, capped at 1,500 posts per month, did not allow researchers to retrieve enough data for research purposes. Even after X launched a dedicated application form, “not a single application was approved before 26 January 2024” (333), and 95.8% of applications had been rejected as of May 2024.

X rejected some applications “on the unique ground that the applicant’s proposed use of X data was allegedly not solely for ‘performing research that contributes to the detection, identification and understanding of systemic risks in the EU as described by Article 34’, which in fact appeared to fulfil this requirement” (319). Applicants were also rejected solely for being established outside the Union, even when their projects focused on risks within the Union.

Successful applicants, for their part, are assigned by default to a ‘Pro’ tier (equivalent to the 5,000 USD paid version) that allows six months of access to 1 million tweets per month, a tenfold reduction compared to Twitter’s former academic program. Meanwhile, X’s Terms of Service continue to ban “independent access techniques [...] such as scraping and crawling”, even though such access is “necessary to perform research into the design and functioning of X’s recommender systems” (361).

The alleged violations of Articles 25, 39 and 40.12 discussed above offer a glimpse of the type of investigative work carried out in almost three years of DSA enforcement. To build its case, the Commission relied not only on requests for information to X, whose responses are widely cited in the 183-page decision, but also on independent studies, interviews with experts, and “its own evidence, as the alleged infringements were self-explanatory” (566). This statement suggests that more complex endeavors, such as proving the addictive features of TikTok’s feed, may require additional investigative tools that have not been used in this case.

Key takeaways

Taken together, these aspects of the decision to fine X indicate the current direction of DSA enforcement. First, the decision shows that enforcing the DSA is not only a matter of platform governance but also of corporate governance: by relying on the concept of “single economic unit,” which extends liability up the ownership chain to X Holdings, X.AI Holdings, and Musk personally, the Commission uses the tools of EU competition law to reach the actors exercising “decisive influence” over platform conduct.

Second, the regulator focuses on the sociotechnical context in which the design of interfaces and systems generates dark patterns: by contesting the misappropriation of “the historical meaning” of the blue checkmark, the Commission links the infringement of Article 25 to the erosion of a trust signal established in the 2010s, the foundational decade of social media, that has since become just another means of increasing profit.

A complementary perspective emerges from the alleged violations of Articles 39 and 40.12, which underline that transparency-washing does not amount to compliance with the DSA. Access to usable and reliable information has to be guaranteed so that platforms can be held publicly accountable through external evidence. Indeed, large providers arguably worry more about exposing their internal dynamics to users than about bearing the economic burden of (so far small-scale) fines.

The latter aspect is also highlighted by the recent Bits of Freedom v. Meta case, which led to a new interface for Instagram and Facebook being implemented in the Netherlands to comply with the requirements of Article 38 (on the option of a recommender feed not based on profiling): in the first court hearing, Meta did not provide any technical explanation or justification for the design of its interface, and on appeal it withdrew its objections to the initial ruling requiring the company to change its platform interfaces. Judicial and executive enforcement therefore align in requiring providers to intervene in the sociotechnical infrastructure of their services to enable users’ informed choices and researchers’ scrutiny.

Finally, the conclusions of the Commission decision on X, grounded in a functional reading of the relevant DSA provisions, were reached through evidence that is largely scientifically reproducible: researchers without access to confidential corporate information would, in principle, be able to reach the same results. The opacity surrounding the DSA enforcement process, often justified by the confidentiality of the material handled by the regulator, might therefore be rooted more in the antitrust approach framing the investigations than in any effective risk of disclosing business secrets or other IP-protected information.

The Commission has yet to explain the legal choices behind the secrecy of its enforcement process; the unexpected disclosure of the X decision may prove a milestone for future accountability in this regard.

Authors

Matteo Fabbri
Matteo Fabbri is a postdoctoral researcher at the Institute for Logic, Language and Computation (ILLC) and a fellow at the Institute for Information Law (IViR), University of Amsterdam, where he collaborates with the DSA Observatory. His research concerns platform regulation, AI ethics and cybersecu...
