Europe Advances Its AI Act
Justin Hendrix / Dec 10, 2023

Audio of this conversation is available via your favorite podcast service.
In April 2021, the European Commission introduced the first regulatory framework for AI within the EU. This Friday, after a marathon set of negotiations, EU policymakers reached a political consensus on the details of the legislation. The AI Act represents the most comprehensive effort to date in the world’s democracies to regulate a technology that promises major social and economic impact. While the AI Act will still have to go through a few final procedural steps before its enactment, its contours are now set.
To find out more about what was decided, Justin Hendrix spoke to one journalist who reported directly on the negotiations in Brussels: Luca Bertuzzi, technology editor at EURACTIV.
What follows is a lightly edited transcript of the discussion.
Luca Bertuzzi:
I'm Luca Bertuzzi. I'm the technology editor at EURACTIV, a media outlet specializing in European affairs.
Justin Hendrix:
And you have been covering these trilogue negotiations in Brussels around the AI Act, which was an intense two or three days. I understand there was at least one 22-hour session that started Wednesday and went into Thursday, and everything just wrapped up late Friday night. How do you feel?
Luca Bertuzzi:
Tired, of course. Yeah. I mean, to be honest, I've been following the AI Act since it was first presented as a law, so it has been quite a long ride. You'll see a lot of late-coming commentary now because AI has become such a hot topic, and I can tell you, also, that the whole mood has changed around this legislation. At the beginning, everyone was like, "What are you doing? This technology is not there yet." And now the mood was, "Oh, you're not fast enough. This technology is moving too fast for you now." So yeah, these very intense negotiations, 36 hours across three days, showed the level of commitment EU policymakers had to closing what was perhaps the most important tech file under this legislative mandate.
Justin Hendrix:
There is a photo accompanying your piece on EURACTIV, of European policymakers crowded together, what almost looks like a photo from a holiday party or something, many of them on a knee, crouched down in front of a little box that holds a couple of plants. And you've got Thierry Breton flashing a thumbs up. Was there that sense of a celebration at the end? Esprit de corps?
Luca Bertuzzi:
Yeah, I mean, of course these are politicians, so they are all after a PR success. But I think here, there was a reason for celebration. Because, I mean, this was the first AI law in the Western world. Of course, there is China too, but at the international level, this is most likely going to become the international benchmark, just like the GDPR became the benchmark for data protection. And I can already tell you that there are governments across the world getting in touch with the Commission, discussing this law and how they can replicate it in their jurisdictions.
So, I mean, of course there is a human factor there, too. Because when I talked to sources, some of them were so exhausted that they couldn't remember the details I was asking about, because they had been discussed maybe 12 hours before and they had such a long agenda. But, at the end of the day, there was very high political pressure to close this file, also because of its international relevance.
Justin Hendrix:
So things were potentially on the rocks, it seemed like, coming into this. And there were even headlines, including one on Tech Policy Press, about the possibility that the AI Act might fail over disagreements, in particular, over how to regulate foundation models. How did that end up shaking out in these negotiations?
Luca Bertuzzi:
Yeah. So, it is true. I think it came very close to that. We have to keep in mind that this was presented in April 2021. So, of course, the whole hype around ChatGPT was not there yet. It was the European Parliament that introduced a strong regime for foundation models. In the end, they found a compromise with two tiers: horizontal rules for all models, plus some extra requirements for models that pose a systemic risk. There was skepticism from countries like France and Germany, because they have their own AI startups that are trying to compete with big tech, like Mistral AI and Aleph Alpha.
And there was a clear message, also, coming from ministers from these two countries earlier this week that they considered the text was not mature yet, and they asked not to rush. Usually, these messages come when you want to derail a piece of legislation. When you hear "there is no impact assessment" or "let's not rush it," it's usually buying time, which means you don't want it. And if you don't want it now, why would you want it later? So I wouldn't say we are out of the woods yet, because the agreement needs to go through the Council, which represents EU member states.
There is still a possibility of France and Germany speaking out against the agreement. I do not think they will go as far as voting against the AI Act, but I do think that, because so much was discussed during the political negotiation, a lot of details will have to be fine-tuned at the technical level, and that also means that things that are political could change. And for all the declarations we heard, and even the details we have reported, I mean, this is a provisional agreement. We should keep that in mind. So it is still a moving target.
Justin Hendrix:
So, just to be clear for my listeners, we haven't seen a document yet. There's not a new draft or something that we're able to review?
Luca Bertuzzi:
No. I mean, I've seen a couple of working texts, and that's what I've based my reporting on. On some articles, there isn't even an agreed text yet. So I think we will see a consolidated version maybe in January. As I said, there will be a lot of technical work ongoing for at least the next four weeks.
Justin Hendrix:
Okay. So let's go through what you have reported, because you've given us a great deal of detail on a range of the different points that the negotiations addressed. First off, a big one: this exemption for national security.
Luca Bertuzzi:
Yeah. So this is becoming a recurring thing in EU legislation, where countries like France basically don't want their hands tied. And this is a broad exemption, because, well, what is national security? It's left to the government to decide. Under the EU treaties, national security is already a national competence, so EU law shouldn't regulate these matters.
What is happening here is, first of all, the exemption is much broader than the definition of national security you have in the treaties. And secondly, I mean, I won't bore you with the details, but basically, to enforce the treaties, you need the EU Court of Justice, which is a much higher level of judicial review. If you have a national security exemption directly in the law, it means that national tribunals can apply it. So, for the US, it's a bit like federal judges versus state judges.
Justin Hendrix:
So this exemption would also apply to companies and external contractors, not just to militaries?
Luca Bertuzzi:
Exactly. Well, militaries and contractors in the military and defense field, yes.
Justin Hendrix:
Let's talk about the prohibited practices. This is one of the, I guess, key pieces of this. Essentially, what aspects of possible use cases of AI are simply off the table in the European Union? Let's go through them.
Luca Bertuzzi:
Sure. I mean, there were some that were not controversial, and these were social scoring, which is a practice that we have seen in China; systems that exploit vulnerabilities, for example of people with disabilities; and manipulative techniques. These were banned from the start. Then MEPs introduced a ban for systems, like Clearview AI, that basically scrape the internet for your facial images and create a database.
Then there were more controversial ones, like emotion recognition. Here, the Parliament wanted to ban this application in the workplace, in schools, in law enforcement, and in border control. But the member states, throughout the entire negotiations, pushed back against requirements for law enforcement, because they want to keep room for maneuver for police forces to use these tools. So eventually, it was only banned in the workplace and in education, with a caveat for safety applications: for example, a system that is meant to prevent a driver from falling asleep can still be used.
Another ban was on predictive policing. This is considered contrary to the presumption of innocence, which is a basic fundamental right. The prohibition covers systems that make individual assessments of personal traits to infer whether an individual will commit crimes in the future. There was no ban on crime analytics or aggregate data.
Finally, a big point of contention was biometric categorization, and here, there was quite a strong clash with the member states. But eventually the MEPs also managed to ban any system that tries to categorize people based on sensitive traits, such as race, political opinions, and religious beliefs. Another big chapter was remote biometric identification. This is a technology that is seen as potentially leading to mass surveillance. Again, there was strong pushback from European governments, which want to keep this available to prevent terrorist attacks and this sort of heinous crime.
So eventually, there were some narrow exemptions for law enforcement to use these technologies. As I said, to prevent a risk of attacks and to identify the victims of kidnapping, for example, or the suspects of very serious crimes, like murder. This is for real-time use. Now, I have to tell you, we haven't seen the text on this, so there will be a lot of details, but there should be a similar regime for real-time and ex post use of RBI, remote biometric identification.
This is important because, I mean, it could have been a strong loophole if you didn't have similar conditions for ex post use. Because what happens if you just delay the system by a minute and then rewatch the footage? Then all the safeguards that you put in place are gone. What MEPs confirmed during the press conference is that these exceptions for law enforcement will have to be based on national law, and the use will have to be strictly necessary. So this is, again, to avoid generalized surveillance.
Justin Hendrix:
I'm sure that many of these will have to be really finely tuned. It'll be about the details. I mean, even with manipulative techniques, where does salesmanship stop and where does manipulation start? These are things that are difficult to discern even before you add in the complexity of AI. So I assume we'll see a lot of going back and forth over the details here.
Luca Bertuzzi:
Yes, I think so, and I think a big part of this will have to do with technical standards and finding a common consensus on what is acceptable and what is not.
Justin Hendrix:
Another area is high-risk use cases. So what fits that taxonomy? Let's talk about that. What did they agree on with regard to what counts as high risk?
Luca Bertuzzi:
Yeah. So maybe this deserves a step back. The AI Act is risk-based, which means that applications deemed to pose an unacceptable risk are banned. Then there are applications that imply a significant risk to people's safety, health, and fundamental rights. These are considered high risk, and they will have to undergo a specific regime of risk management and other governance. So how will you know if your system is high risk or not? First of all, there is a list of high-risk use cases, and then you'll have to fulfill certain conditions. The list was controversial, again, because where do you draw the line? It's quite an important discussion in terms of what kind of society you want to be.
So the sensitive areas are, for example, in the field of employment. If you use a system to select candidates, you want to make sure that that system doesn't discriminate against people of color, for example. If you have something for the administration of justice, again, you need to be sure that the system is solid enough that there are not some weaknesses you have not considered. So this is just to make sure that if your system is used in a sensitive area, it undergoes some due diligence. There was some discussion about whether to have social media recommender systems in there or not. Eventually, they were not included, because this is already in the scope of the Digital Services Act, and they didn't want to create an overlap.
Justin Hendrix:
Yeah, that was interesting to me as well. And the extent to which that interacts with this notion of systemic risk in the Digital Services Act, which is itself still yet to be well defined.
Luca Bertuzzi:
Yeah. I mean, the DSA is also in its early stages. There, as in the AI Act, there is a lot of self-assessment for companies to conduct, and there will also be some regulatory dialogue with the European Commission. So this is not a regulation that enters into force and everything changes overnight. It will take some time for behavioral change from companies, and the sense of how enforceable the rules will be will also determine to what length companies and platforms take them seriously.
Justin Hendrix:
You've already mentioned some aspects of the way the AI Act will interact with law enforcement, but there are some exemptions here. There's also a lot to do with border control and migration, and the ways in which the act interacts with those issues. Can you speak to that for a moment? Just some of these, I suppose, domestic security concerns.
Luca Bertuzzi:
Yeah. So this is what I was referring to before. The Council of Ministers, which is one of the co-legislators, introduced some significant carve-outs for law enforcement. So, for example, one of the principles is that if a high-risk system is used to take a decision that significantly affects a person's life, there should be at least two people to review that system. That won't be the case for law enforcement when national governments consider this disproportionate. I guess by that they mean situations of emergency, or where time doesn't allow it.
Another important carve-out regards the public database. Basically, all public bodies in the EU that use a high-risk system will have to register it in a public database, but for police and migration control agencies, there will be a non-public section that will only be accessible by an independent authority.
Justin Hendrix:
Let's talk about the fines and the penalties. So if you run afoul of this thing, there appear to be a couple of different ways you can do that. You could launch a prohibited application. You could fail to meet the obligations of the AI Act. You could potentially fail to provide the EU accurate information about some system you're operating. What kind of money could companies expect to have to pay if, in fact, they're found in violation?
Luca Bertuzzi:
Yep. So here I should make a caveat, because on the figures and the percentages, there might still be some changes. But, of course, the most severe violation is using an application that is banned. The idea there is that you have a minimum fine, which is provisionally 35 million euros, but it can increase up to 6.5% of the company's global turnover, and that means global turnover, not the turnover of the European subsidiary. Then, for violations of the obligations for high-risk systems, the minimum is 15 million euros and/or 3% of the global turnover, and the lesser violations, let's say, are half a million euros and 1.5%. So overall, 6.5% is quite a significant number for a company.
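For a sense of scale, here is a minimal sketch of that arithmetic, using the provisional figures Bertuzzi cites (which may still change in the final text) and a hypothetical company for illustration:

```python
# Illustrative sketch only: the tiers and percentages are the provisional
# figures cited in the interview and may change in the final AI Act text.

def fine_ceiling_eur(floor_eur: float, turnover_share: float, global_turnover_eur: float) -> float:
    """The fine starts from a fixed floor but can rise to a share of global turnover."""
    return max(floor_eur, turnover_share * global_turnover_eur)

# A hypothetical company with 50 billion euros in global turnover:
turnover = 50e9
print(fine_ceiling_eur(35e6, 0.065, turnover))  # prohibited practices: 3.25 billion euros
print(fine_ceiling_eur(15e6, 0.03, turnover))   # high-risk obligations: 1.5 billion euros
```

For any large company, the turnover-based ceiling dwarfs the fixed floor, which is why Bertuzzi calls 6.5% such a significant number.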
Justin Hendrix:
Are there other abilities, essentially, for the European Union to come in and shut down a company entirely, or to otherwise police it if, in fact, it's found to be... Let's say it's invented some runaway AI and there's a need to pull the plug on the servers immediately?
Luca Bertuzzi:
Yep. In the most severe cases where there is a risk for the European market, the Commission can take an emergency decision and basically ban the system from the EU market.
Justin Hendrix:
So, I understand from your reporting that once this act becomes law, it would only start to apply 24 months afterwards, possibly six months for the bans. What can we expect in terms of timing? What remainder of the process do we have to go through before this would essentially come into force?
Luca Bertuzzi:
Yeah. Also on the timeline, this could still move a bit. But yeah, indeed, it should start to apply two years after it enters into force. So now, what we are going to see is formal adoption. First of all, the consolidated text, probably early next year. Then formal adoption could take two or three months, and then it's published in the Official Journal. So I would say at some point in spring, this will be published and will enter into force after 21 days. After that, it'll enter into application after two years. So we are talking 2026.
Justin Hendrix:
So still some time ahead, and it's not impossible to imagine more technological surprises coming along even in that time. I mean, one of the things that led to some of the, I suppose, shakiness on the AI Act was literally the launch of ChatGPT and the recognition that these generative models would be so important. So I suppose we'll see what happens with the tech as well.
Luca Bertuzzi:
Yeah. But, I mean, you also need to give companies time to adapt and understand what compliance means for them. Technical standards will play a big role as well. So, of course, the technology won't stop for the AI Act. We always knew that. I think that will be the biggest test. In Brussels, we use this terminology, "future-proofness." I find it terrible, but it gives you an idea that there is also a sense that these things should stand the test of time, and the governance model will really be the most important aspect of the law in the long run.
Justin Hendrix:
Do you have a sense of whether industry had a big impact on this final language? Do you have any sense of the lobbying effort from companies, including Western ones? You've mentioned a couple of EU startups, in particular, that clearly played a role, especially in this conversation over foundation models. But do you have a sense of what role industry played in these final negotiations?
Luca Bertuzzi:
Well, I can tell you that, by now, big tech has such a reputation in Brussels that if they lobby against something, they manage to unite most politicians against them. I think they surely looked with great attention at what Mistral AI and Aleph Alpha have been trying to do. I mean, you have to keep in mind that, of course, lobbyists are at the heart of Brussels and the heart of policymaking, but the biggest lobbyists in Brussels are not companies, they are national governments.
And when you have a company like Mistral AI, whose co-founder was a state secretary, has direct access to Emmanuel Macron, the French president, and manages to shape the French position, that is the most powerful lobbying you can have. And that is also why this could have derailed and might still derail. I think it's unlikely, but there is still this small possibility.
Other than that, I mean, of course, the foundation model part was the biggest concern for big tech. Since we started hearing about obligations for general-purpose AI, for their misfortune, the idea has matured inside the Parliament that you should regulate not only at the system level, meaning the concrete application, but also at the model level. And I think there was also a lot of discussion with researchers from Stanford University, who played a big role in developing a good understanding of how you should go to the source of the problem rather than just regulating the output.
Justin Hendrix:
Well, I appreciate so much you joining me on a Saturday, with the EU AI Act hangover that I'm sure you share with many of the policymakers who were stuck there in Brussels, working this out over the last few days. Thank you so much, Luca, for spending the time.
Luca Bertuzzi:
My pleasure. Thank you.