AI Act

Summary

The AI Act proposes a risk-based approach to regulation, focused on identifying potentially harmful uses of AI systems and placing obligations on companies to minimize the risks those uses pose to the public. A presentation from the European Commission visualized the AI Act’s regulatory structure as a pyramid with a small set of banned uses at the top. These uses, such as social scoring or predictive policing, pose an unacceptable risk to the public and are therefore prohibited. One level down, high-risk uses, including medical devices and AI in essential government services, are permitted but carry requirements to establish and implement risk management processes. Further down, lower-risk uses such as consumer-facing services are allowed, subject to transparency obligations including notifying users that they are interacting with an AI system and labeling deepfakes. Finally, at the bottom, minimal- or no-risk uses are permitted with no restrictions.

Updates

May 9, 2023. The European Parliament considers draft amendments to the law.

June 14, 2023. The European Parliament passes an amended version of the AI Act. Talks now begin with EU member states in the Council of the European Union on the final form of the law.

December 6, 2023. The expected deadline to complete Trilogue negotiations between the European Parliament, the Council of the European Union, and the European Commission on the Act’s final language. Reports indicate there is disagreement among member states about how to regulate generative AI services such as ChatGPT.

December 7, 2023. Officials call for a recess in the Trilogue negotiations. According to reporting from EURACTIV, EU policymakers reached a provisional agreement on how to regulate foundation AI models but could not agree on which uses of AI (including by law enforcement) should be prohibited, or whether AI systems used for military purposes fall under the Act. Areas of agreement include the following:

  • Free and open-source models are exempt from regulation unless they involve a high-risk system, a prohibited application, or an AI solution at risk of causing manipulation.
  • Regulation of foundation models (like ChatGPT) will follow a tiered approach. All foundation models will be subject to transparency requirements, including documentation of the modeling and training process, a detailed summary of training data “without prejudice of trade secrets,” and evaluation against established benchmarks before launch. Foundation models that meet certain technical thresholds must also assess and document systemic risks and cybersecurity protections and report on the model’s energy consumption.
  • Enforcement of the Act is delegated to national authorities, except for foundation models, which will fall under the supervision of the European Commission. National and Commission authorities will gather as part of the European Artificial Intelligence Board to ensure consistent application of the law. An advisory forum and a scientific panel will advise on the Act’s enforcement, flag potential risks, and inform the classification of AI models with systemic risks.

December 8, 2023. Trilogue negotiations between the European Parliament, the Council of the European Union, and the European Commission conclude with an agreement on the Act, including which uses of AI are prohibited. Parties also agree to a set of narrow exemptions for law enforcement purposes and a blanket exemption for AI systems used for national security or military purposes.

January 22, 2024. EURACTIV releases a document comparing the different versions of the Act. The European Commission releases an updated version of the regulation.

January 26, 2024. EURACTIV releases a European Commission document that provides an analysis of the final compromise text.

February 2, 2024. EURACTIV reports that the Committee of Permanent Representatives approved the Act. The European Parliament’s Internal Market and Civil Liberties Committees are expected to adopt the AI rulebook on February 13, followed by a plenary vote provisionally scheduled for April 10-11. The Act will enter into force 20 days after publication in the Official Journal. The bans on prohibited practices will start applying after six months, and the obligations on AI models will apply after one year.

March 13, 2024. The European Parliament votes to pass the regulation on artificial intelligence. Full implementation is set for 2026, but AI systems already on the market may have a longer compliance deadline.

Further reading