EU Weighs Regulating OpenAI’s ChatGPT Under the DSA. What Does That Mean?
Ramsha Jahangir / Oct 29, 2025

The European Commission is weighing whether OpenAI’s ChatGPT should be classified as a “Very Large Online Search Engine” (VLOSE) under Europe’s Digital Services Act (DSA)—a move that could set a precedent for the regulation of generative AI chatbots across the bloc.
Last week, OpenAI reported that ChatGPT’s search feature reached an average of 120.4 million monthly users in the EU over the past six months. That figure far exceeds the 45 million-user threshold that triggers extra obligations for “Very Large Online Platforms” and “Very Large Online Search Engines” under the DSA. A Commission spokesperson confirmed to Tech Policy Press that regulators are “currently assessing the information and whether further clarifications are needed” to determine if ChatGPT meets the criteria for designation.
OpenAI’s figures relate solely to ChatGPT’s search feature, which allows users to prompt the chatbot to retrieve live information from the web. But for the Commission, the challenge will be to decide whether to treat that feature as a distinct service or part of the chatbot as a whole. The DSA applies to intermediary services such as search engines, marketplaces, and social networks, categories that don’t easily capture what ChatGPT is or how it works.
What regulators are weighing
The Commission, in its response to Tech Policy Press, noted that, under the DSA, “a Large Language Model, such as ChatGPT, could potentially be in scope of the DSA if it is integrated into a service that is designated under the DSA.” In other words, the assessment hinges on whether ChatGPT’s generative model is considered part of a broader intermediary service, such as its search functionality, or a separate tool altogether.
“Such a designation sounds a bit surprising to me,” said Joan Barata, a Visiting Professor at the School of Law at Católica University in Porto. “A service such as ChatGPT does not seem to fit into any of the three main categories established under the DSA—mere conduit, caching, or hosting—but much depends on this first classification.”
Barata noted that if ChatGPT is treated as a hosting service, regulators could apply a significant number of obligations, but this would raise complex questions. “Terms of service and content policies are relatively straightforward for traditional platforms,” he said. “For an AI chat, they are far more complex, since not all content policies are formulated as rules but implemented at the technical level.”
He added that the Commission was considering designating ChatGPT as a VLOSE rather than a platform. “This may still be problematic, since only some specific features of ChatGPT can be strictly considered as search, unless we completely denaturalize the notion of this kind of service,” Barata said. “We would need to differentiate AI tools that purely assist in search activities from the delivery of elaborated answers, even if originally based on search.”
He warned that such a designation would raise a range of compliance challenges and complicate the interpretation of the DSA and Europe's AI Act. “It would raise questions about transparency, where privacy concerns might be particularly relevant, and about systemic risk requirements, with clear overlaps between the DSA and the AI Act.”
Another key issue is how chatbots like ChatGPT are deployed within broader digital ecosystems. “A different issue would be the deployment of chatbots as specific services provided by platforms or other AI tools that facilitate the creation of content by users,” he added. “In that case, the designation would only apply to the specific chatbot feature independently from other obligations platforms may have under the DSA.”
New obligations for OpenAI
If designated, ChatGPT would become the first AI chatbot formally subject to the DSA’s systemic risk assessment, independent audit, and transparency obligations.
According to Laureline Lemoine, a senior associate at law firm and consultancy AWO, this would mean conducting annual risk assessments examining the design of ChatGPT, its algorithms, content moderation, and data practices, and evaluating their impact on fundamental rights, civic discourse, electoral processes, and mental health.
“OpenAI would have to assess whether ChatGPT risks having a negative impact on civic discourse and electoral processes, as well as on minors’ protection and mental health,” said Lemoine. “As part of mitigation measures, OpenAI may need to adapt ChatGPT’s systems, design, features, and functioning based on the assessed risks.”
Importantly, these assessments would not only be annual but also triggered each time OpenAI deploys a new functionality likely to have a “critical impact” on systemic risks. “This might involve more extensive testing and adjustments to their algorithms, as well as providing users with clearer information about the risks and accuracy of ChatGPT,” she noted. “As a result, compliance with the DSA, if not planned in advance by the VLOPs and VLOSEs, might lead to a slower deployment of new features in Europe,” added Lemoine.
The company would also face new data access obligations under Article 40 of the DSA, which would allow vetted researchers to request access to information about systemic risks and mitigation measures.
For Natali Helberger, professor of information law at the University of Amsterdam, the implications go well beyond content moderation. “Classifying ChatGPT as a VLOSE under the DSA will intensify regulatory oversight and expand the focus of scrutiny to a broader set of issues than is currently the case under the AI Act,” she said.
She pointed out that the DSA granted researchers a right of access to data that does not exist under the AI Act, raising novel questions about how far that right might extend. “It will raise interesting questions regarding the extent of data access rights, and whether the access obligation under Article 40 could potentially also include access to training data or model weights if such access is necessary to identify systemic risk or assess mitigation measures,” Helberger explained.
Under the DSA, according to Helberger, OpenAI could also be required to provide transparency about how model outputs are moderated, another aspect not explicitly covered under the AI Act.
“And while OpenAI already faces risk management provisions under the AI Act, the risk management obligations under the DSA can broaden the scope of investigation from any risks from the model itself to the potential use of the model, for example, to amplify and widely disseminate illegal content and content that is incompatible with OpenAI’s Terms and Conditions,” she added.
Raising the stakes for Gen AI oversight
How Brussels classifies ChatGPT may shape the regulatory framework for all large-scale generative AI systems in Europe. A DSA designation would also set a new benchmark by bringing a chatbot into the scope of intermediary regulation.
“For an industry used to voluntary AI-safety frameworks and self-defined benchmarks,” Mathias Vermeulen, public policy director at AWO, noted, “the DSA’s legally binding due diligence regime might be a tough reality check. Today’s benchmarks remain limited to narrow risks such as bias or deception, but under the DSA, ‘bias tests’ won’t be enough to pass compliance.”
“OpenAI will have to step up its game significantly and won’t be able to get away with a simple copy/paste job of what it is currently doing,” he said.