Perspective

How the EU and UK Can Learn From Anthropic's Mythos

Jimmy Farrell / Apr 24, 2026

The Claude logo is displayed on a computer screen photographed using a kaleidoscopic filter in Creteil, France, on April 21, 2026. (Photo by Samuel Boivin/NurPhoto via AP)

Two weeks on from Anthropic’s announcement of its highly cyber-capable AI model Claude Mythos Preview (hereafter, Mythos), the European discourse on its implications has largely converged on three points: large-scale systemic risks from AI have unmistakably arrived, they will likely worsen in the near future, and the EU needs to prepare accordingly. A combination of the EU’s and the UK’s approaches can serve as a model for the public sector response.

Mythos, Anthropic’s largest and most capable model to date, represents a significant jump in AI’s ability to automate complex cyber-attacks, in particular the two crucial steps of identifying vulnerabilities and generating exploits. It has found thousands of vulnerabilities in critical internet infrastructure and is the first model to autonomously complete all 32 steps of the UK AI Security Institute’s (UK AISI) corporate network attack simulation. Rather than release the model publicly, Anthropic has partnered with a select group of US companies operating critical technological infrastructure to patch vulnerabilities, while the White House is reportedly seeking access for government agencies despite the model’s supply-chain risk designation earlier in the year.

Within six days of Anthropic’s announcement, UK AISI, the world’s leading public sector hub of frontier AI expertise, published the results of comprehensive cyber-capability testing of Mythos. Two days later, the UK government sent an open letter to UK business leaders on the heightened cybersecurity risks and the recommended course of action. Few governments have moved this fast in response to a frontier AI development. Yet the UK response still lacks a crucial element, one the EU can supply: regulatory enforcement power. Combining the two jurisdictions’ comparative advantages would keep consequential frontier AI decisions, such as the deployment of Mythos, under public oversight.

Despite sitting within the UK’s Department for Science, Innovation and Technology (DSIT), UK AISI has been set up specifically to match the pace and highly specialized technical expertise that frontier AI demands. Key examples include administrative tweaks, such as offering technical experts double the regular civil service salary by counting the costs as research and development (R&D) under the heading of capital expenditure, and a streamlined ‘class approvals’ mechanism that significantly speeds up recruitment.

With leading AI companies paying significantly more than the public sector, the European Commission should explore similar administrative solutions (as well as general resourcing increases) to ensure it can attract the continent’s top talent. This includes moving towards contract permanence, instead of the precarious yearly renewals and maximum six-year term offered to AI Office technology specialists under the current FG IV Contract Agent system.

Another benefit of the UK’s structural approach to frontier AI expertise is the proximity of technical know-how to the highest levels of political power, as reported by Politico. In the EU, by contrast, technical expertise sits several levels removed from political decision-makers, and the position of Lead Scientific Adviser to the AI Office (open for applications since late 2024) remains unfilled, likely due to the extremely high-stakes responsibility of the role.

The current experts at the AI Office do, however, already offer unique frontier AI expertise, and direct channels could be established from them to the cabinet level of Commission Executive Vice-President Henna Virkkunen and President Ursula von der Leyen in cases of rapid capability jumps like Mythos. Ideally, President von der Leyen and Member State heads of state would have their own AI advisors, similar to UK Prime Minister Keir Starmer’s, with specific expertise in the most transformative frontier AI.

However, as noted above, the UK approach is not sufficient on its own: it still relies entirely on the voluntary goodwill of frontier AI providers. In contrast, the developments around Mythos vindicate the foresight of the EU and the prescience of the AI Act, with the systemic risks its general-purpose AI model provisions (and accompanying Code of Practice) seek to address now abundantly clear.

Even before a model like Mythos is officially placed on the EU market, the AI Office has legal avenues to gain some level of oversight, provided the model is intended for deployment in the EU. The AI Act addresses systemic risks arising from the high-impact capabilities of the most advanced models “along the entire lifecycle of the model,” including “offensive cyber capabilities, such as the ways in which vulnerability discovery, exploitation, or operational use can be enabled” (Recital 110). With Mythos potentially posing systemic risks to EU public security, and the AI Act applying as early as the pre-training phase for models planned for the EU market, the EU has a regulatory option space the UK does not.

In addition to its legal mechanisms, the EU has also announced a promising array of institutional tools such as the Scientific Panel, Advisory Forum, Frontier AI Initiative, and a €9 million tender for third-party model evaluations to boost its frontier AI expertise and crisis-response capacity. Continuing this momentum of setting up rich multi-stakeholder and technically expert external forums will be essential to catching the EU up with the resources and expertise available to UK AISI. These forums should also include rapid response systems (akin to those used in the DSA) that allow information to flow quickly and precisely from technical expertise to political decision makers.

Finally, although the AI Act in its current form is a good start, Mythos and its foreshadowing of the AI models of 2-3 years from now make clear the need for stronger rules and robust international standards. Anthropic’s choice not to publicly release Mythos and the likely near-future scenario in which competing companies don’t apply the same caution expose the degree to which public oversight over frontier AI has so far fallen short. Just last week, OpenAI announced a similarly capable model, GPT-5.4-Cyber, that will be released far less restrictively than Mythos.

From pharmaceuticals to planes, chemicals, and medical devices, critical deployment decisions are usually not left to private companies in EU law. The EU's AI Act even requires pre-market conformity assessment for AI systems (distinct from GPAI models) used in high-risk contexts such as employment and law enforcement. Crucially, however, no such requirement exists for general-purpose AI models with systemic risk, the legal category Mythos would fall under. With Anthropic's own CEO Dario Amodei calling for a slowing of the frontier AI race at Davos earlier this year, this regulatory gap could be plugged by introducing pre-market authorization for GPAI models with systemic risk in future revisions of the AI Act, bringing the rules back up to speed with the destabilizing impact of frontier AI.

The most important takeaway of Mythos, even more so than the immediate cybersecurity implications, is that systemic risks from frontier AI models have arrived, and will almost certainly get significantly worse in the near future. While the approaches across the Channel have different strengths, the EU will be most prepared for this uncertain future by adopting the commitment to resourcing and administrative innovation pursued by the UK, and pressing ahead with its world-leading regulatory toolkit. Europe’s security and sovereignty cannot be left to chance.

Authors

Jimmy Farrell
Jimmy Farrell is the EU AI Policy Co-Lead for Pour Demain, a think-tank working at the interface between technology and policy across national, regional, and international fora. Jimmy is currently working on policy recommendations for the EU to ensure the responsible development and deployment of ge...
