Experts Urge EU to Regard General Purpose AI as Serious Risk

Justin Hendrix / Apr 13, 2023

Justin Hendrix is CEO and Editor of Tech Policy Press. Views expressed here are his own.

Fritzchens Fritz / Better Images of AI / GPU shot etched 3 / CC-BY 4.0

One of the points of contention in the drafting of the European Union’s AI Act is how to classify risk across the chain of actors involved in developing and deploying systems that incorporate artificial intelligence. Last year, the Council of the European Union introduced new language into the proposed Act taking into account “where AI systems can be used for many different purposes (general purpose AI), and where there may be circumstances where general purpose AI technology gets integrated into another system which may become high-risk.”

Now, even as EU policymakers continue to refine versions of the AI Act, a group of international AI experts has published a joint policy brief arguing that GPAI systems carry serious risks and must not be exempted from the EU legislation.

The experts, including Amba Kak and Dr. Sarah Myers West from the AI Now Institute, Dr. Alex Hanna and Dr. Timnit Gebru from the Distributed AI Research Institute, Maximilian Gahntz of the Mozilla Foundation, Irene Solaiman from Hugging Face, Dr. Mehtab Khan from the Yale Law School Information Society Project (ISP), and independent researcher Dr. Zeerak Talat, are joined by more than 50 institutional and individual signatories.

The brief makes five main points:

  1. GPAI is an expansive category. For the EU AI Act to be future-proof, it must apply across a spectrum of technologies, rather than be narrowly scoped to chatbots/large language models (LLMs). The definition used in the Council of the EU’s general approach for trilogue negotiations provides a good model.
  2. GPAI models carry inherent risks and have caused demonstrated and wide-ranging harms. While these risks can be carried over to a wide range of downstream actors and applications, they cannot be effectively mitigated at the application layer.
  3. GPAI must be regulated throughout the product cycle, not just at the application layer, in order to account for the range of stakeholders involved. The original development stage is crucial, and the companies developing these models must be accountable for the data they use and design choices they make. Without regulation at the development layer, the current structure of the AI supply chain effectively enables actors developing these models to profit from a distant downstream application while evading any corresponding responsibility.
  4. Developers of GPAI should not be able to relinquish responsibility using a standard legal disclaimer. Such an approach creates a dangerous loophole that lets original developers of GPAI (often well-resourced large companies) off the hook, instead placing sole responsibility with downstream actors that lack the resources, access, and ability to mitigate all risks.
  5. Regulation should avoid endorsing narrow methods of evaluation and scrutiny for GPAI that could result in a superficial checkbox exercise. Standardized documentation practices and other approaches to evaluating GPAI models, specifically generative AI models, across many kinds of harm remain an active and hotly contested area of research, and they should be subject to wide consultation, including with civil society, researchers, and other non-industry participants.

Yesterday, I spoke with Dr. Myers West, Managing Director of the AI Now Institute, and Dr. Khan, a Resident Fellow at Yale ISP, about the brief, and what the signatories hope to accomplish.

Myers West told me that despite the recent attention to another joint letter by AI experts, the discussion around this policy brief started even before the launch of OpenAI’s ChatGPT and the surge in interest in generative AI. She emphasized that while the definition remains contested, GPAI encompasses a wide range of technologies, not just chatbots or LLMs, but also facial recognition APIs like Amazon's Rekognition. The EU AI Act, she said, should be future-proof and consider this broader category rather than focusing on recent trends and hype cycles.

It is also important to look beyond the application layer, Khan explained, because focusing only on the application stage ignores earlier stages where subjective decisions are made that can lead to harm. By considering the development process for GPAI, regulators can better address potential issues before AI systems become widely available and their outcomes become difficult to control. Examining design decisions, rather than just applications, can support a more comprehensive regulatory approach, said Khan.

“So for example, suppose lawmakers are concerned about mass copyright infringement, or there's risk that Microsoft or whatever company is developing a model that would allow anyone to create music that mimics the style and music produced by artists without any compensation. You would go back and look at how the company's collecting data, what are the sources, to what extent they're using copyrighted information, what are the licensing terms if at all that they're adhering to,” said Khan, illustrating the type of scrutiny regulators might apply to a GPAI developer.

Focusing regulation solely on application developers puts the responsibility for mitigating risk at the final stage of a system’s development, even though an application developer may not have sufficient knowledge of the AI system's training data, model architecture, or other design choices made earlier in the development process. Myers West acknowledged this situation might allow the parties involved to claim a kind of plausible deniability about harms.

The current focus on generative AI provides an opportunity to discuss both procedural and substantive concerns that policymakers should consider, said Khan. But it’s broader than just generative AI, she said.

“It's important because this is going to stand the test of time in the sense that there are other kinds of applications that will need attention and still do need attention right now, in addition to generative AI,” said Khan. The brief points to “models already commonly offered via large cloud services such as Amazon Web Services (AWS) or Microsoft Azure,” including for applications in health and medicine, or the provision of public services, such as welfare benefits.

Currently, the European Parliament, the Council of the European Union, and the European Commission are negotiating the final version of the Act in trilogue negotiations, a process expected to continue until the end of the year. Once the Act is fully implemented, companies will be subject to non-compliance penalties of up to €30 million or 6% of global annual turnover.
