
On the AI Act’s Passage and Lessons for US Policymakers

David Morar / Feb 7, 2024

The EU's AI Act is the first major comprehensive AI governance text, and its recently-public final version includes a lot of complexity and caveats that merit deeper study, but some of its lessons to US policymakers are clear, says David Morar.

The AI Act recently cleared one of the final hurdles of the European Union’s complicated policy-making process, and it is now much closer to officially being passed as legislation. A long time coming (the first version was proposed by the European Commission back in 2021), the AI Act represents a major milestone in global AI governance. The United States stands to learn some lessons from the process as it ambles along its own legislative journey to regulate artificial intelligence.

The AI Act started off in 2021 as a wholly risk-based proposal, designed on top of the EU’s existing product safety legislation. However, in late 2022, as the EU Parliament was readying its own version of the Act, generative AI emerged onto the scene with the launch of OpenAI’s ChatGPT. This global phenomenon focused the EU on the need for additional responsibilities for foundation models and generative AI systems, going beyond the risk-based perspective. Even so, the main framework proposed by the Parliament remained fundamentally designed around tiers of risk, from unacceptable to minimal, despite EU civil society organizations arguing strongly, even before the Commission introduced the text, that the risk-based perspective did not do enough to center human rights.

The other main European institution, the Council of the European Union (which consists of national government ministers), joined the Parliament and Commission in a series of meetings called trilogues throughout 2023. These meetings were meant to hammer out a final version of the bill that all three bodies would agree to and thus pass the AI Act. This final draft version provisionally came together in December 2023, after a few record-breaking marathon trilogue sessions resulted in a “political” agreement. In practice, this means that the EU bodies agreed in principle on broader general terms and allowed so-called “technical meetings” to take place afterwards to work out details within that agreement. Although trilogues are largely secret proceedings, journalists like Euractiv’s Luca Bertuzzi gained impressive access to the AI Act’s final chapter and were able to cover the proceedings in real time on social media. Even so, no actual text from the AI Act was available until very recently, when Bertuzzi’s leak prompted Parliament staff to release clean versions. On January 21st, the EU Council released an official text, in advance of member states voting on the AI Act.

The Text

The scope of the AI Act’s text makes clear that it does not deal with national security, models and systems used for personal non-professional activities, research and development, or AI systems released under free or open-source licenses (before they are placed on the market, and unless they qualify for any of the risk tiers or are GPAI). However, the AI Act will apply to providers and deployers of AI systems from outside of the EU whose outputs are accessible to or used by the EU public.

The text’s broad definition of AI is similar to the Organization for Economic Co-operation and Development’s (OECD) definition. It encompasses autonomous systems as well as systems with human inputs, which adapt based on those inputs and learn how to generate outputs. While broad, the definition does not include simple rules-based systems. The text makes distinctions between the providers—those who develop the system—and those who deploy it, with the bulk of the responsibilities falling on the former.

Risk Tiers

The risk tiers in the AI Act’s leaked text have remained mostly the same, with both unacceptable risk and high risk explicitly defined, plus a set of AI systems subject to transparency requirements that are colloquially referred to as limited risk. The tiers also indirectly include AI systems that are unregulated, which by definition pose minimal risk. General purpose AI (GPAI) models are treated separately from the risk tiers.

Limited Risk

Systems in the limited risk category have to be transparent in both machine- and human-readable forms, make clear that the end user is interacting with an AI system, and disclose that outputs are AI-generated, especially for artifacts such as deep fakes.

High Risk

The high risk category contains a long list of specific systems, including those used in critical infrastructure, education, employment, essential private and public services, law enforcement, migration, and the administration of justice. Systems are left out of this category if they carry out a narrow procedural task, improve the result of a human activity, prepare for an assessment, or detect decision-making patterns without replacing a human. Beyond this list, there are other characteristics that would qualify systems for this tier, including if the system is used to profile individuals, is used as a safety component, or is already covered by specific EU product safety laws that require it to undergo a conformity assessment. If providers believe that a system included in the list mentioned above is not high risk, they can perform and document an assessment before putting it on the market. If the system is mischaracterized, however, the provider is potentially subject to fines.

If a system is high risk, the provider has several obligations to fulfill: ensure that the AI system includes a risk management system, data governance, technical documentation, record-keeping, and instructions for use for deployers; design it so that deployers can implement human oversight and so that it achieves certain levels of accuracy, robustness, and cybersecurity; and maintain a quality management system. However, for financial institutions, these obligations are limited to only quality management. Similarly, deployers have obligations that flow from provider obligations, but in specific cases, such as bodies governed by public law or entities providing a public service, banking, or insurance, deployers must undergo a fundamental rights impact assessment.

Unacceptable Risk

Several types of AI systems are prohibited in the European Union, including those that do social scoring or untargeted scraping to create facial recognition databases. Other systems in this tier are those that exploit vulnerabilities in ways that lead to harm or distort human behavior, profile people for risk of criminal offenses (except for systems that augment human assessments based on objective facts linked to criminal activity), conduct emotion recognition in a workplace or educational setting (with exceptions for medical or safety reasons), conduct biometric categorization inferring sensitive attributes (except for systems used to label or filter lawfully acquired biometric datasets, or law enforcement categorizing), and conduct real-time remote biometric identification (RBI). The inclusion of real-time RBI in this tier comes with a complex set of exceptions. Systems that conduct real-time RBI are not prohibited if they are used to search for missing persons, prevent a threat to life or a terrorist attack, or identify suspects in a serious crime, and only within certain parameters: not using real-time RBI would cause more harm, the use is registered in the EU database, and authorization is obtained from a judicial or independent administrative authority (at the latest 24 hours after deployment). These exceptions, the main catalyst for the record-breaking length of the final trilogues, are incredibly wide, even with the parameters described, and have been roundly criticized by EU civil society.

How the AI Act’s Text Approaches General Purpose AI

A GPAI model is one trained on a large amount of data using self-supervision at scale, displaying significant generality and able to perform a wide range of distinct tasks. GPAI model providers are subject to specific horizontal obligations. They must provide technical documentation for authorities and downstream providers, comply with the EU’s copyright directive, and provide a summary of the content used for training the model. GPAI models are deemed to pose systemic risk when the cumulative amount of compute used for their training, measured in floating point operations (FLOPs), is greater than 10^25. Providers of GPAI models with systemic risk have additional obligations that include: performing model evaluations; assessing and mitigating systemic risks; and tracking, documenting, and reporting serious incidents. GPAI models with systemic risk must also meet cybersecurity protection requirements.
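To give a rough sense of what the 10^25 FLOP threshold means in practice, here is a minimal illustrative sketch in Python. It relies on the commonly used approximation that training a dense model consumes roughly 6 × parameters × training tokens in compute; that heuristic, the function names, and the example figures are assumptions for illustration and are not part of the Act itself.

```python
# Illustrative sketch only. The AI Act presumes systemic risk when cumulative
# training compute exceeds 10^25 FLOPs; the 6 * parameters * tokens estimate
# below is a common community heuristic, not something the Act prescribes.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of cumulative training compute for a dense model."""
    return 6.0 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated compute exceeds the Act's 10^25 FLOP threshold."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical 70B-parameter model trained on 2 trillion tokens:
    # about 8.4e23 FLOPs, well under the threshold.
    print(presumed_systemic_risk(70e9, 2e12))   # False
    # Hypothetical 500B-parameter model trained on 4 trillion tokens:
    # about 1.2e25 FLOPs, over the threshold.
    print(presumed_systemic_risk(500e9, 4e12))  # True
```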

Unless deemed systemic, AI models released under free and open licenses must only adhere to the copyright and training data summary obligations. The Commission must be notified within two weeks of a GPAI model hitting the systemic risk threshold, though providers may argue that their models do not present systemic risk. Separately, the Commission can also decide when a model has reached the systemic level. One temporary way to show compliance, before the creation of European standards, is adhering to codes of practice, which would include, at a minimum, the obligations listed above and would be created through the AI Office by GPAI model providers and national authorities, with other stakeholders, including civil society and academia, supporting the process. It is important to note that the AI Act is focused on GPAI models rather than systems, as systems that incorporate GPAI models can be characterized as high risk separately.

Governance

Beyond the notification and market surveillance authorities at the level of the EU member states, the AI Act also includes the creation of new government entities. These include the AI Office within the Commission, for oversight and enforcement; the AI Board, which is composed of member state representatives and primarily serves a coordination role; as well as two advisory bodies: a scientific panel for technical advice, and an advisory forum for providing input from all other non-governmental stakeholders.

The Lessons for US Policymakers

At this preliminary stage, at least six lessons are apparent:

  • The EU already has legislation that tackles basic data-related issues: the General Data Protection Regulation (GDPR). Regardless of your perspective on the GDPR, the AI Act is relying on the existence of basic protections. The US has no such analogue at the federal level, and thus lacks a foundation for AI legislation. One solution to this issue would be to pass the American Data Privacy and Protection Act (ADPPA), comprehensive data privacy legislation that advanced out of the House Energy and Commerce Committee in 2022.
  • Like the AI Act’s, the US government’s definition of AI should either mirror the OECD’s definition or not significantly depart from it, as international cooperation across jurisdictions will likely be very important. Developers and deployers would then have certainty about whether they are in scope on both sides of the Atlantic.
  • The US will also have to consider whether to prohibit certain AI systems or uses, particularly those that would violate fundamental rights, in light of the risk of use and abuse of similar technology by US law enforcement and intelligence agencies. While the final AI Act text created substantial loopholes for real-time remote biometric identification, a similar bill passing in the US without at least the same set of prohibitions could further shift the balance of power in favor of those agencies. US lawmakers will also have to assess the impact of national security exceptions on EU-US data transfers, which remain on unclear footing.
  • Any US legislation that mandates similar sets of obligations would also need to appropriate funds to either expand the scope of a current administrative agency (most likely the Federal Trade Commission (FTC)) or create an entirely new body to deal with enforcement.
  • The AI Act’s application beyond the EU means that a number of US companies working on AI systems and models will likely be included in the legislation’s scope. Thus, the Brussels effect may mean the AI Act becomes the floor for any comprehensive US bill.
  • The AI Act’s separate rules for GPAI models reflect a clear understanding that they raise considerably different concerns than other AI systems, and US lawmakers should take heed.

The AI Act is the first major comprehensive AI governance text, and its recently published final version includes a lot of complexity and caveats that merit deeper study, but some of its lessons for US policymakers are clear. Fundamentally, they boil down to the need to balance international harmonization between two close allies with respect for national contexts, when one of the two has already built a mostly coherent legislative framework and the other is still attempting to build consistent data privacy protections across all states.

To better understand these intricacies and what else we can learn from this landmark bill, join New America’s Open Technology Institute on February 27 at 11 am ET for the online event The EU AI Act: Lessons for US Policymakers. A keynote by Dr. Gabriele Mazzini, architect and lead author of the proposal on the AI Act by the European Commission, will be followed by a panel discussion moderated by Axios' Maria Curi.

Authors

David Morar
David Morar, PhD is Senior Policy Analyst at New America's Open Technology Institute, focusing on data privacy and protection, and platform governance. His academic research focuses on private governance structures.
