How Has the DSA Performed in Protecting Election Integrity?
Luise Quaritsch / Feb 19, 2026

This piece is part of a series with the DSA Observatory, featuring articles adapted from selected papers presented at the second DSA and Platform Regulations Conference, marking two years since the Digital Services Act came into full effect.

Tech billionaire Elon Musk speaks live via a video transmission during a speech by Alice Weidel, chancellor candidate of the far-right Alternative for Germany (AfD) political party, at the AfD election campaign launch rally on January 25, 2025 in Halle, Germany. (Photo by Sean Gallup/Getty Images)
Social media platforms continue to be perceived as a threat to the integrity of democratic elections in Europe. Around every major vote, there are reports of disinformation and foreign interference campaigns. There is also a growing awareness that while online platforms have become a central arena for public discourse, the rules of the game are set by a few private companies. And while Europe remains dependent on US Big Tech, there is a widening transatlantic rift over how to govern those platforms.
Much hope was placed on the EU’s Digital Services Act (DSA) and its systemic risk framework as Europe’s most potent response to platform-driven harm to election integrity. Some commentators even asked whether the DSA could “save democracy.” A look back at three recent elections (Romania’s presidential election in November 2024, Germany’s federal election in February 2025 and Poland’s presidential election in May 2025) offers a sobering picture of the DSA’s effectiveness in safeguarding civic discourse and electoral processes online.
The systemic risk framework is a collaborative process, based on self-assessment by platforms and complemented by input from researchers, civil society and others. To work, it depends crucially on transparency about risks between platforms and the public. There is still insufficient insight into what happens on platforms around elections to judge whether platforms’ mitigation measures are adequate. The effectiveness of the DSA also hinges on platforms’ willingness to cooperate. US Big Tech platforms have been reluctant to do so, and their incentive to comply may be weakening as the Trump administration signals political backing and threatens trade retaliation should platforms be fined. This could undermine the functioning of the systemic risks framework.
What the DSA requires from platforms on elections
The DSA obligates platforms, under Articles 34 and 35, to identify, assess and mitigate ‘systemic risks’ stemming from their services. This self-assessment has to be carried out and published in an annual report, and is complemented by independent audits and transparency obligations. This process is intended as a collaborative model of knowledge generation: it aims to reduce information asymmetries between platforms, regulators, and the public, and to integrate civil society expertise into public and official oversight.
While the DSA does not provide a general definition of the concept of systemic risks, it does operationalise the concept through a set of risk categories laid out in Article 34. These include “any actual or foreseeable negative effects on civic discourse and electoral processes […]” as one of the risks platforms have to assess. Article 35 obligates platforms to put in place “reasonable, proportionate and effective” mitigation measures against the identified risks. The Commission can publish guidelines on risk mitigation measures, which are meant to help platform providers comply with their obligations under the DSA and to establish what the Commission considers “reasonable, proportionate and effective.” It published such guidelines in March 2024, in the run-up to the European Parliament elections, recommending measures to mitigate systemic risks that could impact the integrity of elections.
The Romanian, German and Polish elections from a systemic risk perspective
Across the three elections, “behavior”-related risks such as covert influence operations and deceptive behavior were widely reported. For example, Romanian intelligence services uncovered a coordinated influence campaign on TikTok involving over 25,000 accounts. Platforms clearly recognized this threat and claimed to have effective mitigation measures against the inauthentic use of their services in place. In the Romanian elections, TikTok stated that it had banned over 26,000 accounts that were part of a network. Another recurring issue was the incorrect labelling or verification of political accounts. In the German elections, there was reportedly a lack of election labels on relevant profiles. In the Polish elections, there were observed attempts to impersonate political candidates. Platforms claimed that they had effective mitigation measures in place to prevent this.
Content-related risks such as electoral misinformation and disinformation were reported during all three elections but did not appear to be the public’s central concern for election integrity. In the German elections, political advertisements on Facebook contained cases of mis- and disinformation and hate speech. In the Polish elections, TikTok videos claiming election fraud garnered millions of views. In the platforms’ risk reports, content-related risks are covered extensively.
Three issues that were observed repeatedly appear to be somewhat underappreciated in platforms’ identification of risks: political advertising, influencers and the asymmetric amplification of political candidates. In the Romanian elections on TikTok, political content was mislabelled as entertainment. In the Polish elections, a foreign interference campaign apparently used advertisements on Facebook. This is not extensively considered in platforms’ annual risk assessment reports for the relevant period. The issue may even have intensified since Meta and Google stopped serving political ads in the EU in October 2025. There could also be an underestimation of the risk of undisclosed advertising by influencers, and of influencers’ potential impact on elections more generally. In the Romanian elections, for example, some influencers received payment for promoting a political candidate and did not disclose it.
An issue reported by civil society organizations and researchers across all three elections is the asymmetric amplification of political candidates and vastly different reach, with algorithmic bias, for example, suspected on X in the case of the German elections. While the Commission does offer some guidance on how to mitigate biases and susceptibility to manipulation in recommender systems, platforms refer mainly to mitigation measures against manipulation, such as fake engagement. The DSA imposes no obligation on platforms to proactively ensure that political candidates have balanced reach, yet the reporting suggests high public interest in this issue and a perception that asymmetric amplification could be a risk to election integrity. This could be exacerbated in the future by the fact that political advertising is now mostly not allowed on the major social media platforms, so political candidates cannot use paid advertising to counter asymmetric organic reach.
The effectiveness of the DSA’s systemic risks framework for elections
For the Romanian, German and Polish elections, online platforms stated in their risk assessments that they had effective mitigation measures in place to address most identified risks. However, external observations by civil society organizations and media often suggested otherwise, pointing to failures by platforms, for example, to detect political advertising or identify election-related misinformation. What remains unclear is whether these apparent failures constitute non-compliance, i.e., insufficient mitigation under the DSA. Because the available data offers only an incomplete picture, it can serve as an indicator but supports only limited conclusions.
In addition, the DSA and the Commission guidelines do not offer operational benchmarks for what “reasonable, proportionate and effective” mitigation measures need to look like in the context of an election. A non-compliance decision could shed light on the Commission’s approach here. Formal proceedings are already ongoing against X, TikTok, Facebook and Instagram related to systemic risks to civic discourse and electoral processes. The case against X concerns information manipulation and the effectiveness of ‘Community Notes’ as a mitigation measure. Facebook and Instagram are being investigated over deceptive advertisements, the visibility of political content, disinformation campaigns and coordinated inauthentic behavior in the EU. The proceedings against TikTok focus on its recommender system, on coordinated inauthentic manipulation or automated exploitation of the service, and on TikTok's policies on political advertisements and paid-for political content. All the ongoing cases concern highly relevant topics. An official benchmark for sufficient mitigation measures would set an important precedent for safeguarding future elections in the EU.
Protecting elections in an increasingly adversarial environment
There is another issue with how the DSA’s systemic risk framework is intended to work that becomes apparent when looking back at recent European elections: its collaborative regulatory approach reaches its limit when platforms are non-cooperative. For example, X continues to reject researchers’ requests for access to platform data to observe elections, despite lost court cases and a €40 million fine by the European Commission. This prevents researchers from contributing to the scientific analysis of systemic risks to electoral processes that the DSA is meant to enable. US Big Tech platforms in particular may have less and less incentive to cooperate, given backing from the Trump administration, which threatens trade retaliation should platforms be fined. This could undermine the functioning of the systemic risks framework.
Where the DSA’s collaborative, self-regulatory approach falls short, the European Commission has to take charge as the principal enforcer of the DSA vis-à-vis Big Tech. Commission Executive Vice-President Henna Virkkunen recently promised more DSA decisions in the coming months. Any such finding is sure to once again trigger false censorship accusations from US Republicans.
Contrary to US claims that it is interfering in elections across Europe, the Commission has yet to publish a decision related to platforms and electoral processes and civic discourse. To keep the systemic risks framework credible and platforms incentivized to engage, the Commission cannot allow predictable US backlash to chill enforcement.