Europe’s Advanced AI Strategy Depends on a Scientific Panel. Who Will Make the Cut?
Kai Zenner / Oct 31, 2025
Kai Zenner is Head of Office and Digital Policy Adviser for MEP Axel Voss (European People's Party Group) in the European Parliament. All views expressed here are personal and do not represent the European Parliament or the EPP Group.

The AI Act passed the European Parliament on March 13, 2024. Shutterstock
A month ago, the European Commission concluded its call for expression of interest for one of Europe’s most influential digital governance bodies: the AI Scientific Panel. The panel’s 60 independent experts will serve two-year terms and, starting in 2026, will advise the European AI Office on implementing the AI Act, Europe’s landmark legislation for regulating artificial intelligence.
The panel will focus specifically on general-purpose AI (GPAI) systems, including models such as ChatGPT, Claude, and Gemini. Its experts will provide guidance to the AI Office and national competent authorities on systemic risks, model classification, evaluation methodologies, and cross-border market surveillance. They will also monitor emerging AI risks, including risks that are not yet apparent but could arise unexpectedly during the development or deployment of advanced AI systems.
To deliver on these expectations, the European Commission must find the right people for the Scientific Panel: experts with deep, hands-on knowledge of the technological advances coming from the world’s leading AI companies. The good news is that the call for expression of interest attracted hundreds of applicants, giving the Commission a strong pool of candidates to choose from (the general selection criteria are listed here).
The tricky part is that member states have drastically restricted the Commission’s freedom of choice. The reason: the Scientific Panel rests on a Commission Implementing Act from March 2025, which required approval by member states. Using this leverage, member states pressured the Commission to include national quotas. As a result, each of them will have at least one national expert on the panel, while 80% of the panel’s experts must come from the EU or the European Free Trade Association (EFTA) states (Iceland, Liechtenstein, Norway, and Switzerland). That makes 48 seats in total.
Is that a realistic requirement? Isn’t the field of advanced AI still relatively young? Given the small size of some member states, such as Malta or Cyprus, with populations under two million, how feasible is it for each to have an expert fully prepared to help Europe navigate unprecedented AI developments?
Would it not be better if the Commission made use of the exception to this rule, under which a country’s expert must be appointed only if a candidate satisfies the call’s criteria and if sufficiently comprehensive coverage of relevant areas of expertise can be achieved that way?
Personally, I would go even further. I strongly believe that instead of following national quotas, Europe should aim for world-renowned AI researchers with a deep understanding of GPAI: those with a strong track record of monitoring frontier developments, exercising foresight, and scrutinizing industry claims. An effective Scientific Panel, in my opinion, requires independent third-party evaluators: people with direct experience of uncovering the capabilities and risks of GPAI models outside of the leading AI companies. I am thinking of academic and think tank experts who sit at the intersection of science and policy and bring crucial expertise in advanced AI risk management.
The more specialized the profile in advanced AI, the better. Over the past few years, Europe has convened countless expert groups featuring only AI generalists. The results were rather mixed: the selected experts were often close to public-sector policy discussions but far removed from advanced technical developments. The new Scientific Panel must be different and gather genuine AI excellence.
While the Commission must carefully consider potential conflicts of interest, it is also true that the best minds in the AI field often have touchpoints with industry, be it via research grants, red teaming contracts, or policy work. That need not be a problem as long as the Scientific Panel features a decent number of experts without such ties and is impartial as a group.
At the same time, Europe needs younger voices who can actually do the work. When the call for experts came out four months ago, I made it clear that we need a group of members in their 20s and 30s who are actively shaping the development of frontier AI. I do not want to see a panel of retirees, detached from cutting-edge AI model development. The GPAI Code of Practice demonstrated this: a large part of its adoption was due to the Vice-Chairs, who brought hands-on expertise and put in countless hours.
Finally, the creation of new digital governance mechanisms is a significant opportunity to engage the most outstanding national experts in the European project. The Commission should not blindly trust the member states’ recommendations, but instead work actively to find the best candidates for the job.
The selection should be about one thing only: what is best for Europe. That means true subject-matter expertise from Europe and abroad, a seat at the table for younger voices, and deference to member states only where it is justified.