Assessing What an EU Report Says About Systemic Risks Under the Digital Services Act

Mark Scott / Dec 2, 2025

Mark Scott is a contributing editor at Tech Policy Press.

The European Commission in Brussels. Shutterstock

It has been a year since the likes of TikTok, Amazon and Meta published their first-ever risk assessments and external audits under the European Union’s Digital Services Act. The second round of both internal assessments and external audits, covering how tech firms complied with the bloc’s online safety rules, will be published in the coming weeks.

Over the last 12 months, the European Commission and national regulators have scoured these documents to identify the most prominent and recurrent risks — as defined by the DSA — to the 27-country bloc, as well as mitigation measures taken by the companies. That encompasses everything from foreign interference threats to the sale of counterfeit products to the protection of children online.

European regulators also surveyed outside groups for their views before publishing a comprehensive report that outlined their collective understanding.

There is a lot to unpack.

Brussels and EU national capitals stressed the overview did not constitute specific enforcement priorities, and separate cases against Meta, TikTok and X are ongoing. Instead, the document should be viewed as an initial take on how the so-called Very Large Online Platforms and Very Large Online Search Engines assessed external risks to their platforms.

Below are five key takeaways from the report to help you understand so-called systemic risks under the EU’s DSA:

1) This is an inherently political overview of platforms’ performance

Throughout the document, EU regulators made clear they were merely recapping what companies and external groups had said publicly about risk assessments and audits mandated by the DSA.

But the reader should not simply take such statements at face value.

The European Commission is under pressure, both from an internal drive toward deregulation and from public criticism by the US government that the DSA is akin to censorship.

In response, the report explicitly and repeatedly reaffirms that Europe’s online safety rules are not a threat to free speech. In fact, the document stresses how the regulation enables freedom of expression, and it outlines how the platforms, not regulators, are potentially thwarting people’s ability to speak online.

“Over-moderation [by tech companies] may negatively affect civic discourse and create risks of negative effects on the fundamental right to freedom of expression and information.”

This assessment is a tactical one.

Faced with accusations that EU regulators are hampering free speech, the document reads as a public rebuttal. It highlights repeatedly how the DSA does not make specific rulings on individual posts across social media — and frames the digital rulebook as a protector of people’s freedom of expression.

That should be read for what it is: ostensibly apolitical regulators attempting to shape the narrative around the DSA.

2) There’s nasty content everywhere online

In a world where social media giants are pulling back from trust and safety, it’s easy to forget how dangerous the online world can be. The EU regulators’ report combines the internal assessments of these companies to paint a stark picture. Everything from terrorist content to child sexual abuse material to violent attacks against women remains a clear and present danger on almost all online platforms.

“Providers reported risks that their users may be exposed to illegal content that sexualises minors, glorifies or facilitates child abuse, as well as grooming and sextortion,” according to the document.

The agencies broke down the potential harms addressed within the DSA.

When it came to foreign interference, the document said platforms continued to report a significant uptick in covert behavior, especially during national elections, as well as attacks on minority groups and women.

When it came to free speech, the companies collectively raised concerns that ongoing abuse and harassment had often made social media an unwelcoming place for people seeking to express their views online.

In the case of X, that led to “self-censorship from users who experience abuse and harassment on the platform,” based on the company’s regulatory findings.

3) The power of AI-fueled recommender systems

In an era when artificial intelligence has become the digital policymaking topic du jour, the overall assessment of DSA systemic risks made repeated references to the emerging technology.

The focus, from regulators, platforms and outside groups alike, was on the power of so-called recommender systems: automated AI systems that decide which posts end up in people’s social media feeds and that can promote or demote specific content or online trends.

Companies and civil society groups raised concerns that such recommender systems could potentially promote harmful or illegal content in the name of boosting overall engagement. Previous studies have demonstrated that polarizing content is more likely to attract people’s attention online compared to more moderate material.

These groups specifically highlighted how AI-powered decisions had promoted content associated with gender-based violence.

Many of the companies made reference to their use of artificial intelligence in their content moderation practices to weed out problematic content — especially when related to illegality, such as terrorism or child sexual abuse material.

There were advantages to such AI-fueled content moderation, according to the EU regulators’ review of the platforms’ procedures. But such techniques also led to skewed outcomes and a failure by firms to fully understand why their automated systems had removed specific pieces of content.

AI in content moderation “may both represent a risk factor and a mitigation measure,” the document concluded.

4) The offline impact of online content

The DSA is not just focused on social media. It also applies to e-commerce giants like Amazon and Temu, the Chinese online marketplace. Already, some of these companies have faced the full force of Europe’s rulebook.

The EU regulators’ report made a direct link between online and offline harm.

Online marketplaces raised concerns that illegal and counterfeit goods — including cosmetics, toys and health remedies — were shared widely online, with some additionally boosted by online advertising. Others said financial scams and other types of fraud were similarly promoted on their platforms.

A particular area of concern was public health, including online remedies targeted at children.

“Providers of online marketplaces noted related systemic risks stemming from the sale of illegal, non-compliant, or unregulated medical products on their marketplaces,” according to the document.

While these goods were sold online, their effects would likely be felt offline. That included the consumption of illicit drugs, harmful counterfeit toys and medical treatments that had not been approved by national authorities.

5) Combating systemic risks was primarily left to platforms

The European Commission has already started a series of investigations into potential DSA violations, including some specifically related to systemic risks highlighted by companies in their annual reports.

Yet the majority of oversight — in terms of content moderation, removal of counterfeit products and the protection of users online — was still in the hands of the companies, not regulators, based on the document.

The platforms outlined internal mechanisms for combating illegal and problematic content.

To reduce the spread of illicit goods, for instance, online marketplaces limited people’s ability to interact online and restricted the spread of user-generated content. To protect children, social media companies said they had updated their terms of service to stop minors from accessing some of their services and had implemented age verification checks to shield these users from harm.

Yet more than two years after the DSA came into force, regulators were still dependent on companies to flag potential problems, according to the report. EU officials acknowledged that while they had drawn on the most up-to-date information available to review the companies’ risk assessments, future reports would have to rely on additional information not provided directly by the platforms.

“Developments in the implementation and enforcement of other DSA provisions, such as the (mandatory) data access mechanisms, will contribute to the wealth of information that will feed into an ever better understanding of systemic risks,” the regulators concluded.
