Policy

ESMA Tells Financial Firms to Brace for Mythos-Era AI Cyberattacks

Michael Ouroumis · 3 min read

Europe's securities watchdog has put financial firms on notice that the speed and sophistication of cyberattacks are climbing, and that frontier AI models — Anthropic's Mythos in particular — are part of the reason. Speaking to reporters in Paris this week, European Securities and Markets Authority Chair Verena Ross said ESMA has been reaching out directly to supervised entities to test how prepared they are for an AI-accelerated threat landscape.

A Regulator-Led Stress Test

"We are closely watching how bringing AI models into this could increase the potential speed with which such attacks could happen," Ross said, framing the issue as a joint problem for national authorities and the EU. She added that supervisors "collectively between the national and the EU level need to up our game to try to ensure that we have the capability to properly look at what financial entities are doing in this space."

The warning landed at a delicate moment for European markets. Ross flagged that equity valuations remain "very, very high," driven heavily by large technology names, while geopolitical shocks — most recently oil-price volatility — continue to expose the financial system to abrupt repricing events. ESMA has also opened insider-trading reviews tied to recent volatile sessions, and crypto firms operating in the bloc face a 1 July deadline to secure MiCA licensing or wind down.

Why Mythos Has Supervisors Worried

The sharpest edge of the warning was the explicit reference to Anthropic's Mythos model. Anthropic has said Mythos can autonomously discover previously unknown software vulnerabilities, generate working exploits, and chain them into complex cyber operations with minimal human guidance. Reporting from Fortune, Euronews and CBC over the past two weeks has detailed how former cyber officials and bank security teams view the system as a step-change in offensive capability — one that resets assumptions about how quickly an attacker can move from zero-day discovery to large-scale exploitation.

For financial regulators, that compresses the windows they have relied on to coordinate disclosure, patching and incident response. ESMA's outreach campaign is pushing firms to demonstrate, in concrete terms, that their detection, network-segmentation and recovery playbooks can absorb a faster adversary.

Building on the Critical Third-Party Regime

ESMA's move builds on regulatory groundwork already laid. In November, the agency, alongside the European Banking Authority and EIOPA, designated 19 technology companies as critical third-party providers to the EU finance industry — the first set under a new oversight regime aimed at tech resilience. The 2026 work programme of the European Supervisory Authorities' Joint Committee scales up coordinated supervision of those providers, with cybersecurity and AI named as cross-cutting priorities.

The pressure will only intensify on 2 August, when the central enforcement provisions of the EU AI Act come into force, introducing the bloc's strict risk hierarchy and compliance obligations for high-risk systems. Ross herself will not be at ESMA to see that next chapter through — she is set to step down on 31 October — but the supervisory posture she is setting now is likely to define how European banks, asset managers and market infrastructure providers approach AI-era cyber risk for years.

Implications

The message to boards is direct: assume an attacker with Mythos-class tooling, and prove your controls hold. For AI labs selling into financial services, it is a reminder that Brussels intends to supervise model deployments alongside the firms that adopt them — not merely the products themselves.

