Perspective

Europe’s Regulatory Failure on Generative AI and Mental Health

William Burns / Jun 25, 2025

William Burns is a fellow at Tech Policy Press.

A public health crisis of as yet unknowable scale and scope could be brewing in Europe. In some respects, it is similar to the early days of the pandemic in January 2020, particularly the EU bureaucracy’s struggle to fully grasp the emerging risks and respond with coordinated urgency.

This new blind spot, however, stems not from a virus but from the rapid rise of generative AI and the resulting emergence of negative societal and psychological harms associated with its use. As a report from the World Health Organization (WHO) warned last year, “LMMs could augment technology-facilitated gender-based violence, including cyber-bullying, hate speech…[with] serious negative implications for the health and well-being of populations, especially adolescent girls and women.”

AI companions, chatbots, and agentic AI are potentially detrimental to mood, self-esteem, and state of mind, with consequences for both the individual exposed and those around them, while also posing challenges for governments to regulate. The AI systems in question are a varied mix. Some of these products could plausibly be classified as medical devices and, in theory, be subject to more conventional oversight. But many others operate in less defined spaces and therefore appear to fall outside the scope of existing legislation.

Overall, we lack systematic information on the scale of use, the names of the dominant products, the firms involved, and, of course, who is using them and what harm they might do. Adding to this concern are broader geopolitical shifts: President Trump’s attacks on the US Food and Drug Administration (FDA), the cuts to science in US universities, and the withdrawal of American know-how from the World Health Organization potentially compound the problem, given Europe’s traditional reliance on US experts to lead on medical regulation.

In response, the EU could have taken action to gather this information and, if needed, protect the public, as envisaged, for example, by its early warning and response system for cross-border health threats and associated governance mechanisms such as the Health Security Committee. Yet, nothing of that kind has happened so far.

The lack of action in Brussels

How we understand this story, either as an urgent health crisis or some other kind of policy problem, has a big impact on the approach taken and the expertise needed to tackle it. The EU’s most senior officials appear to think of regulation in economic terms, chiefly as an aid to the flow of products in the market, thereby repeating a pattern dating back to the iconic CE Mark. The mark was introduced in 1987 to certify pressure vessels and later rolled out to a wider range of goods, such as toys, to indicate they conformed to technical standards agreed between the member governments and manufacturers selling in the European market.

Relatedly, CEN and CENELEC, private industrial standards-setting organizations historically linked to that procedure, have been tasked with coming up with “standards” under the AI Act. (The organizations are now due to report back in 2026.) This process has already prompted calls for greater “democratic accountability over standard-setting, for instance through more effective civil society participation.” Yet the CE Mark, itself an acronym of the former French name of the EU, emerged from a world in which European industrialists still believed they called the shots. The whole concept might be outdated and fraught with contradictions that will prove difficult for officials to resolve.

What’s needed to address unpredictable and evolving health threats linked to digital technologies, then, is an entirely different approach. The EU has, in fact, already explored such thinking in some depth. A pair of landmark reports published by the EU in 2001 and 2013, “Late lessons from early warnings,” offered a playbook for how to act in such situations. Reviewing 88 contentious historical examples in which regulatory action was later claimed to have been an overreaction, the authors of “Late lessons” substantiated that claim in only four cases. Even then, the impacts of overreaction were found to be relatively minimal. The real danger, in other words, was not overreaction but underreaction. Narrow “cost-benefit” thinking has previously allowed hazards to spiral out of control, a problem that newer ideas, such as the “precautionary principle,” were intended to address.

Psychiatry could, logically, stake a claim in shaping precautionary responses to a mental health crisis exacerbated by AI, while acknowledging that the profession itself has sometimes been rife with pseudoscience and human rights abuses. However, the AI Office in the European Commission does not appear to have involved medical experts in any significant way. The European Medicines Agency, while in theory better positioned, has focused mainly on how AI can be used in healthcare. Likewise, the Commission’s science and research program, “Horizon Europe,” has not been tasked with investigating these scientific unknowns, including those raised by the psychiatrist Katharine A. Smith and colleagues. Instead, EU officials portray the research program as a means of supporting industry rather than a means of independent inquiry or regulatory development.

The Digital Services Act at least gestures toward mental health as a policy concern. But it has not been thought through. Mental illness is barely understood in scientific terms. Nor are there always reliable methods for assessing the degree of ill health and measuring the impact of interventions. Within this contested space, where even credentialed experts struggle, the Act places the onus on tech companies staffed by engineers, as well as users, who are obviously unqualified to address these concerns, even if they have the motivation to do so. An official EU booklet entitled "The Digital Services Act (DSA) explained: Measures to protect children and young people online" instructs “minors…to report and to complain when they discover illegal or other content that should not be online.” It is preposterous to ask children, especially those who are unwell, to serve as the accountability mechanism.

On the lookout for solutions

There are important lessons to be learned from across the Atlantic in the US Food and Drug Administration’s (FDA) efforts to regulate AI in healthcare. The FDA regulates AI as part of its broader responsibility for medical devices. According to an analysis by Vijaytha Muralidharan and colleagues, between 1995 and 2023 the agency approved 950 “medical devices driven by artificial intelligence and machine learning (AI/ML) for potential use in clinical settings.” To be clear, none of these were indicated for mental health. But the authors reported that only 9% of approvals “contained a prospective study for post-market surveillance” and that approval processes were “exacerbating the risk of algorithmic bias and health disparity.” The authors further note that in 2021 the FDA made changes to improve its assessment of algorithmic bias, but “despite these efforts…reporting inconsistencies…may continue to propagate racial health disparities.”

Despite having resources that European regulators could only dream of, the FDA has been, on occasion, intellectually under-equipped and too close to the industries it oversees. In the past, it has been described as “neoliberal” and as a promoter of expensive, patented medicines and devices. Its deficiencies were laid bare in major scandals such as the opioid crisis, where regulatory failure had devastating consequences.

Organizing an integrated response in a health crisis requires more than setting standards; it calls for coordinated regulatory and other policy interventions, and public service-oriented scientific research. The first step, of course, is recognizing we are in a health crisis. Yet, the public’s mental health is, unfortunately, never a central policy metric. The European Commission’s “Comprehensive approach to mental health,” published in 2023, and well-intentioned initiatives like “Better Internet for Kids” may be meaningful steps, but they remain peripheral in AI policy discussions and are seldom referenced by politicians.

Addressing the mental health dimensions of the digital age will require more than programs; it will require a redistribution of power in the “clinical, research, and public policy settings.” Hannah van Kolfschooten, a legal scholar, has proposed an “EU Charter of Digital Patients’ Rights” that would include “traditional” rights such as privacy and the right to informed consent, but also the “novel right…not to be subject to automated medical decision-making” and the right to “meaningful human contact.”

Another imaginative proposal is the creation of a pan-EU “mental health” ombudsman legally independent from health ministries and empowered to raise difficult questions. Encouragingly, the Polish presidency of the EU, which ends in June, referred to the connections between digital technology and mental health as a policy priority. The Danish presidency, which starts in July, has not used the same terms, but Denmark’s digital minister, Caroline Stage Olsen, was reported to be considering “banning social media for kids under 15 years old” while calling for tougher enforcement of the Digital Services Act to protect minors.

Still, officials often seem trapped in a constant cycle of rediscovering what is already known, while failing to grasp the systemic nature of mental health. Healthcare regulation and policy coordination are among the EU’s Achilles’ heels, making it difficult to take concrete steps. The quiet abandonment of the AI Liability Directive is a telling example of both this broader institutional challenge and the lack of progress on AI regulation.

But the tech industry and AI are not standing still, and Europe cannot afford to wait.

Authors

William Burns
William Burns has almost 20 years of experience in science and technology policy at the intersection of health, environment, food, and sustainable energy. His original training was a PhD in malaria biochemistry followed by an MSc in science communication. More recently, he studied the history of sci...
