A False Confidence in the EU AI Act: Epistemic Gaps and Bureaucratic Traps
Kristina Khutsishvili / Sep 10, 2025

On July 10, 2025, the European Commission released the final draft of the General-Purpose Artificial Intelligence (GPAI) Code of Practice, a “code designed to help industry comply with the AI Act's rules.” The Code had been under development since October 2024, when the iterative drafting process began following a kick-off plenary in September 2024. The Commission had planned to release the final draft by May 2, 2025, and the subsequent delay has sparked widespread speculation – ranging from concerns about industry lobbying to deeper, more ideological tensions between proponents of innovation and proponents of regulation.
However, beyond these narratives, a more fundamental issue emerges: an epistemic and conceptual disconnect at the core of the EU Artificial Intelligence Act (EU AI Act), particularly in its approach to “general-purpose AI” (GPAI). The current version of the Code, which includes three chapters covering “Transparency,” “Copyright,” and “Safety and Security,” does not address this core problem.
The legal invention of general-purpose AI
According to Art. 3(63) of the EU AI Act, a “general-purpose AI model” is:
“an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”
What the Act refers to here aligns with what the AI research community calls a foundation model – a term that, while not perfect, is widely used to describe large-scale models trained on broad datasets to support multiple tasks. Examples of such foundation models include OpenAI’s GPT series (the models underlying ChatGPT), Microsoft’s Magma, and Google’s Gemini and BERT. These models act as bases that can be fine-tuned or adapted for particular use cases.
Crucially, the term “general-purpose AI” (GPAI) did not emerge from within the AI research community. It is a legal construct introduced by the EU AI Act to retroactively define certain types of AI systems. Prior to this, ‘GPAI’ had little to no presence in scholarly discourse. In this sense, the Act not only assigned a regulatory meaning to the term, but also effectively created a new category – one that risks distorting how such systems are actually understood and developed in practice.
The key concepts embedded in the Act reflect a certain epistemic confidence. However, as is the case with GPAI, the terminology does not emerge organically from within the AI field. Instead, GPAI, as defined in the Act, represents an external attempt to impose legal clarity onto a domain that is continuing to evolve. This definitional approach offers a false sense of epistemic certainty and stability, implying that AI systems can be easily classified, analyzed, and understood.
By creating a regulatory label with a largely fixed meaning, the Act constructs a category that may never have been epistemologically coherent to begin with. In doing so, it imposes a rigid legal structure onto a technological landscape characterized by ongoing transformation and ontological and epistemological uncertainty.
The limits of a risk-based framework
The EU AI Act takes a risk-based regulatory approach. Article 3(2) defines risk as “the combination of the probability of an occurrence of harm and the severity of that harm.” This definition draws from classical legal and actuarial traditions, where it is assumed that harms are foreseeable, probabilities can be reasonably assigned, and risks can be assessed accordingly. Yet AI – particularly foundation models – complicates this framework.
Foundation models are characterized by features that are difficult to quantify, including their probabilistic and augmentative nature and their interaction with complex socio-technical environments in which harms cannot be clearly predicted. As a result, traditional risk assessment approaches cannot adequately account for their behavior or impact and can produce a false sense of confidence for regulators.
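To make the mismatch concrete, one common – and admittedly simplified – way to formalize this actuarial reading (an illustrative rendering, not the Act’s own notation) is as expected harm over a set of foreseeable harm scenarios:

$$ R \;=\; \sum_{i} P(h_i)\, S(h_i) $$

where each $h_i$ is a foreseeable harm scenario, $P(h_i)$ its probability, and $S(h_i)$ its severity. For foundation models deployed in open-ended contexts, the set of scenarios cannot be enumerated in advance and the probabilities cannot be reliably estimated – which is precisely where this calculus strains.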
The legal and epistemic tension is immediately apparent. Law requires a level of certainty – but the nature of AI strongly challenges that very prerequisite. Yet the logic of law remains orthodox, producing an epistemic mismatch between the assumptions embedded in legal instruments and the realities of the technologies they seek to govern.
The EU AI Act’s treatment of “systemic risk” also reflects the influence of the contemporary AI Safety discourse. The very existence of a dedicated “Safety and Security” chapter in the GPAI Code of Practice signals an awareness of debates around the so-called “long-term risks” of advanced models. Terms like systemic risk echo concerns raised by AI Safety researchers: worries about uncontrollable systems, cascading failures, and potential large-scale harms. Yet, crucially, the Act stops short of engaging with the more fundamental concepts of this discourse – such as alignment, control, or corrigibility. Instead, systemic risk is invoked as if it were already a stable regulatory concept, when in reality it is still contested in both technical and governance circles.
The Act’s description of systemic risk, as provided in Art. 3(65) and Recital 110, highlights this conceptual ambiguity. The Act’s definition refers to risks stemming from the “high-impact capabilities of general-purpose AI models,” but both the origins and the implications of such risks remain unclear. As presented in the Act, systemic risk seems to be rooted in the AI model’s technical attributes: “Systemic risks should be understood to increase with model capabilities and model reach, can arise along the entire lifecycle of the model, and are influenced by conditions of misuse, model reliability, model fairness and model security, the level of autonomy of the model, its access to tools, novel or combined modalities, release and distribution strategies, the potential to remove guardrails and other factors.”
The passage borrows the vocabulary of AI Safety, but without clarifying how these terms connect to actual deployment and socio-technical contexts. The causal relationship between a model’s “capabilities” and systemic risk is assumed rather than demonstrated. By framing systemic risk primarily as a technical property of models, the Act overlooks the crucial role of deployment environments, institutional oversight, and collective governance in shaping real-world harms. This matters because a regulation that treats systemic risk as an intrinsic property of models will default to technical compliance measures, while overlooking the institutional and societal conditions that actually determine whether risks materialize.
The bureaucratic trap of legal certainty and the need for anticipatory governance of emerging technologies
Max Weber’s analysis of bureaucracy helps to explain why the aforementioned mismatch between the assumptions embedded in legal instruments and the realities of the technologies was to be expected. Weber described bureaucracy as an “iron cage” of rationalization, reliant on formal rules, hierarchies, and categorical clarity. Bureaucracies require clear categorization, otherwise they cannot function effectively.
The EU AI Act’s precise definitions – such as those for “provider” (Art. 3(3)), “deployer” (Art. 3(4)), and especially “general-purpose AI model” (Art. 3(63)) – reflect this bureaucratic logic. Yet, as Weber warned, this form of rationality can lead to overly rigid and formalized patterns of thought. In treating AI categories as scientifically settled, the Act exemplifies a legal formalism that may hinder adaptive governance. The bureaucratic need for clear rules at this stage works against the regulatory clarity it is meant to deliver, instead creating an epistemic gap between the law itself and the state of the art in the area it aims to regulate. For policymakers, the problem is not merely academic: rules that freeze categories too early risk locking Europe into an outdated conceptual framework that will be difficult to revise as AI research advances.
Thomas Kuhn’s theory of scientific revolutions offers further insight. Kuhn described “normal science” as puzzle-solving within “paradigms” – established frameworks that define what counts as a valid question or method. Paradigm shifts occur only when anomalies accumulate and existing frameworks collapse. Today, AI research is undergoing similar developments, with innovations like large language models disrupting prior paradigms. Legal systems, however, operate within their own paradigms, which prioritize stability and continuity. As such, they necessarily lag behind the rapidly evolving world of AI.
Kuhn observed that paradigm shifts are disruptive, unsettling established categories and methods. Law, by contrast, is conservative and resistant to epistemic upheaval. Thus, the scientific paradigm in flux collides with legal orthodoxy’s demand for stable definitions. Although terms like general-purpose AI and systemic risk, and many others, appear fixed within the EU AI Act, they remain unsettled, contested, and context-dependent in practice.
A revealing example comes from a recent talk at the University of Cambridge, where Professor Stuart Russell defined GPAI not as a present reality but as an aspirational concept – a model capable of quickly learning high-quality behavior in any task environment. His description aligns more closely with the notion of “Artificial General Intelligence” than with foundation models such as the GPT series. This diverges sharply from the EU AI Act’s framing, highlighting the epistemic gap between regulatory and scientific domains.
The lesson here is that the Act risks legislating yesterday’s paradigm into tomorrow’s world. Instead of anchoring regulation in fixed categories, policymakers need governance mechanisms that anticipate conceptual change and allow for iterative revision, relying on multidisciplinary monitoring bodies rather than static – and in this case problematic – definitions. Ambiguity in core concepts and definitions, the framework’s fragmented character, and an often unconvincing discourse reveal the limits of conventional regulatory logic when applied to emerging technologies. Neither the EU AI Act nor the GPAI Code of Practice was developed within an anticipatory governance framework, which would better accommodate AI’s continuously evolving, transformative nature.
The OECD’s work on anticipatory innovation governance illustrates how such frameworks can function: by combining foresight, experimentation, and adaptive regulation to prepare for multiple possible futures. Experiments in Finland, conducted in collaboration with the OECD and the European Commission, show that anticipatory innovation governance can be embedded directly into core policymaking processes such as budgeting, strategy, and regulatory design, rather than treated as a peripheral exercise. This approach stands in sharp contrast to the EU AI Act’s reliance on fixed categories and definitions: instead of legislating conceptual closure too early, it builds flexibility and iterative review into the very processes of governance. In the AI domain, the OECD’s paper Steering AI’s Future applies these anticipatory principles directly to questions of AI governance.
From this perspective, the delay in releasing the GPAI Code of Practice should not have been seen as a moment of conflict, but rather as an opportunity to consider a more appropriate framework for governing emerging technologies – one that accepts uncertainty as the norm, relies on adaptive oversight, and treats categories as provisional rather than definitive.