Will Disagreement Over Foundation Models Put the EU AI Act at Risk?

Dr. Benedikt Kohn, Lennart van Neerven / Nov 29, 2023

This article is drawn from an analysis published at the firm’s website and updated to match recent developments.

Flags in front of the European Commission headquarters in Brussels, Belgium. Shutterstock

Ongoing trilogue negotiations between the European Union Council, Parliament and Commission over the EU AI Act are focused on how the law will be governed and enforced, the use of AI technology by law enforcement and – the topic of this article – the regulation of foundation models and general-purpose AI. After initial progress on a graduated regulatory approach that imposes stricter requirements on more powerful AI models, the Spanish Council Presidency met with resistance shortly before the planned end of the negotiation cycle on December 6, 2023. Seeking to spare their domestic AI companies from burdensome obligations, Germany and France, supported by Italy, now reject comprehensive regulation of foundation models in favor of self-regulation through a code of conduct. This U-turn could jeopardize the entire effort. The European Commission has since proposed a compromise text that retains the tiered approach but weakens the regulation overall.

This text now forms the basis for further deliberations, although resistance is to be expected, particularly from the Parliament. Time is pressing, as a final trilogue is due to take place in December before Belgium takes over the Council Presidency and a new European Parliament is elected in the summer of 2024. Disagreement at this stage could considerably delay the legislative process or even cause it to fail. The EU sees itself as a global pioneer in AI regulation, but is under pressure to find a solution in light of global developments and the upcoming European elections.

Background: Previous consideration of foundation models and general-purpose AI in the AI Act

ChatGPT is now familiar to almost everyone, as particularly powerful foundation models and general-purpose AI – such as GPT-3 and GPT-4 from OpenAI, on which ChatGPT is based – have spread rapidly in recent months. According to the definition recently introduced by the EU Council, a foundation model is:

“a large AI model that is trained on a large amount of data, which is capable to competently perform a wide range of distinctive tasks, including, for example generating video, text, images, conversing in natural language, computing or generating computer code”.

According to the Council, the related term “general-purpose AI” refers to systems that:

“may be based on an AI model, can include additional components such as traditional software and through a user interface has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems”.

Accordingly, a general-purpose AI system can be based on a foundation model as an implementation of it, and is thus downstream of the model. Both terms ultimately refer to AI models or systems that can serve as the basis for a wide range of different applications.

At the time of the European Commission’s original proposal for the AI Act in April 2021, foundation models and general-purpose AI had not yet entered broad public awareness, which is why the proposal contained no provisions on them. By August 2021, however, the first voices were calling for their regulation. The Slovenian Council Presidency added Article 52a to the draft shortly afterwards, though this provision still stated that general-purpose AI – the term “foundation model” had not yet been introduced – was not per se covered by the AI Act. Only the sale or use of a general-purpose AI for a specific application falling under the AI Act would trigger corresponding obligations.

However, under the French Presidency, the Council significantly modified the provisions on general-purpose AI. Certain obligations would now apply to a general-purpose AI system if it may be used as a high-risk AI system or as a component of one. The subsequent Czech Council Presidency went a step further, proposing that high-risk general-purpose AI should fulfill all the obligations of high-risk AI systems.

Finally, the European Parliament (“Parliament”) published its position in June 2023, in which it inserted Article 28b of the AI Act and thus the term “foundation model” into the legal text for the first time. The Parliament provided for a number of obligations for foundation models regardless of their risk category.

Recent developments

The regulation of general-purpose AI and foundation models continues to play a central role in the current trilogue negotiations between the Council, Parliament and the Commission and is the subject of controversial debate. Following the last political trilogue on October 24, 2023, an agreement on a tiered approach to the regulation of foundation models initially appeared to be on the cards. According to this, stricter obligations would apply in particular to the most powerful AI models with a greater impact on society. As a result, these would primarily affect leading – mostly non-European – AI providers. The Parliament thus abandoned its original plan to introduce horizontal rules for all foundation models without exception.

Subsequently, on November 5, 2023, the Council under the Spanish Presidency presented a corresponding draft text setting out a series of obligations – again controversial in their details. Under the draft, providers of foundation models would have to fulfill transparency obligations, for example by providing technical documentation on the performance and limits of their systems and proof of compliance with copyright law. Providers of the most powerful foundation models would additionally have to register their models in the EU’s public database, carry out an assessment of their systemic risks and submit to auditing obligations. Debate broke out in particular over the criteria for identifying the “most powerful” AI models within the tiered approach. While the Council wanted to require the Commission to define these criteria through secondary legislation within 18 months of the AI Act entering into force, the Parliament insisted that they be set out in the AI Act itself, so that such an important decision would remain with the legislature.

The Spanish Council Presidency’s text also laid down obligations for general-purpose AI for the case that the provider of such a system concludes license agreements with downstream economic operators that use the AI system for purposes classified as high-risk. The provider would then have to specify possible high-risk areas of use and provide the information needed for the downstream actor to fulfill the requirements of the AI Act.

Headwind from the Member States

The trilogue negotiations therefore progressed, even though key points remained heatedly contested. On November 9, 2023, however, the negotiations unexpectedly suffered a major setback. At a meeting of the Telecommunications Working Group, a technical committee of the Council, voices were raised against any plans to regulate foundation models. Parliament representatives reportedly ended the meeting two hours earlier than planned as a result; there was nothing left to discuss. The reason: political heavyweights Germany, France and Italy – under pressure from national AI companies – had made a sudden U-turn on the regulation of foundation models.

German company Aleph Alpha (an OpenAI competitor) and French start-up Mistral, in particular, fear that excessive regulation of foundation models in the EU could put them at a massive competitive disadvantage compared to their American and Chinese rivals. It is true that non-European AI giants – including US firms such as OpenAI, Meta and Google – are already far ahead of EU companies in terms of computing resources, funding, data and talent. But European companies are gaining ground: Aleph Alpha, for example, recently received a funding commitment totaling $500 million. In this view, regulating foundation models in the EU would slow the race to catch up at a crucial time and cause the EU to fall further behind the global AI leaders; the tiered approach is a “regulation within regulation” that jeopardizes both innovation and the risk-based approach on which the AI Act rests.

Instead of a binding regulation, corresponding obligations and sanctions, France, Germany and Italy are now in favor of self-regulation based on a code of conduct for foundation models.

Is the AI Act in danger?

What does the sudden change of direction by influential member states regarding the regulation of foundation models mean for the AI Act? Is it possibly in danger?

The outcome is currently difficult to predict. In light of the strong dissenting votes, the Spanish Council Presidency now wants to reconsider the regulatory plans for foundation models and seek an acceptable solution directly with the Member States concerned – for the regulation of foundation models remains a central aspect of the AI Act.

Against this backdrop, the Commission presented a possible compromise text on November 19, 2023. Although it maintains Parliament’s tiered approach, it also significantly softens the regulation. The term “foundation model” no longer appears in the text. Instead, the Commission distinguishes between “general-purpose AI models” and “general-purpose AI systems” – terms that, under the Commission’s definitions, continue to correspond to the Parliament’s “foundation model” and “general-purpose AI.” Under the proposal, providers of general-purpose AI models would, among other things, be obliged to document the functionality of their models by means of so-called “model cards.” If a model poses a systemic risk – initially to be measured by computing power – its provider is subject to additional monitoring obligations. The text also contains an article under which the Commission is to draw up non-binding codes of practice: practical guidelines, for example on implementing model cards, by which actors can ensure their compliance with the AI Act. Possible sanctions, however, are not mentioned.

On November 21, MEPs and representatives of the Council and Commission then met to discuss the Commission’s proposal. Although no agreement has yet been reached, it appears that the text is now the new basis for negotiations. However, resistance from Parliament, which has called for much stricter rules, is to be expected.

It remains to be seen what the outcome of further negotiations will be; as of the end of November 2023, legislators were optimistic about a deal. In any case, if the legislation is to move ahead, an agreement must be reached soon, as time is pressing. The next – and, under the original plan, final – trilogue will take place on December 6. After that, the Spanish Council Presidency will have only a short time left before Belgium takes over in January 2024. Under Belgian leadership, the pressure to reach agreement would be particularly high, since the European elections in June 2024 will seat a new Parliament – and given the time required to formally adopt a Regulation, the effective deadline for an agreement is around February 2024.

A failure of the “AI Act” project would probably be a bitter blow for everyone involved, as the EU has long seen itself as a global pioneer with its plans to regulate artificial intelligence. However, since the Commission’s draft in April 2021, other countries have also taken steps to regulate AI. Just a few weeks ago, US President Joe Biden issued an executive order on AI, the United Kingdom organized the AI Safety Summit and the G7 countries published an AI Code of Conduct. It therefore remains to be seen whether, how and when those responsible in the EU will be able to overcome their differences and fulfill their pioneering role. In any case, the negotiations surrounding a compromise on the regulation of foundation models are continuing at full speed; the outcome may become clear in a week’s time.


Dr. Benedikt Kohn
Benedikt Kohn is an Associate in the Practice Area “Technology, Media and Telecoms” and Certified Information Privacy Professional Europe (CIPP/E). He has particular expertise in legal issues related to digitization and artificial intelligence. His consulting focus includes IT-legal contract draftin...
Lennart van Neerven
Lennart van Neerven is a Paralegal at Taylor Wessing Germany. After finishing his A-levels and a subsequent year abroad, he passed his first state examination in law at the University of Münster in 2021. He then specialised in the LLM “Law and Technology” at Tilburg University, particularly in the a...