
AI Offensive Cyber Capabilities Are Doubling Every 5.7 Months, Safety Researchers Find

Michael Ouroumis · 2 min read

A new study from AI safety research firm Lyptus Research has found that the offensive cybersecurity capabilities of AI systems are improving at an alarming rate, doubling roughly every 5.7 months since 2024, a sharp acceleration from the 9.8-month doubling period that held from 2019 onward.

The findings, published on April 5 and based on the METR time-horizon methodology, paint a sobering picture of how quickly AI systems are gaining the ability to autonomously discover and exploit software vulnerabilities.

From 30 Seconds to Three Hours

The study evaluated 291 offensive cybersecurity tasks, grounded in a new human expert study involving ten professional security practitioners. Researchers measured how long equivalent tasks would take skilled humans to complete, then tested how well AI models could solve them.

The results were striking. The time horizon — the difficulty level at which models achieve a 50 percent success rate — grew from roughly 30 seconds with GPT-2 in 2019 to approximately three hours with today's frontier models, Claude Opus 4.6 and GPT-5.3 Codex, when given a two-million-token compute budget.
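The time-horizon idea can be made concrete with a small sketch. In the METR-style approach, each task is labeled with how long it takes a skilled human, a logistic curve is fit to the model's successes and failures against log task length, and the 50 percent horizon is the duration where that curve crosses 0.5. The per-task durations and outcomes below are invented for illustration, not drawn from the Lyptus dataset:

```python
import math

# Hypothetical per-task results: (human-expert minutes, model succeeded?).
# These numbers are illustrative only, not from the Lyptus study.
results = [
    (0.5, True), (1, True), (5, True), (20, True), (60, True),
    (120, True), (180, False), (240, True), (480, False),
    (960, False), (1440, False),
]

def fit_logistic(data, steps=50000, lr=0.05):
    """Fit P(success) = sigmoid(a + b * log2(minutes)) by gradient descent."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for minutes, ok in data:
            x = math.log2(minutes)
            p = 1 / (1 + math.exp(-(a + b * x)))
            err = p - (1.0 if ok else 0.0)
            ga += err
            gb += err * x
        a -= lr * ga / len(data)
        b -= lr * gb / len(data)
    return a, b

a, b = fit_logistic(results)
# The 50% horizon is where a + b * log2(t) = 0, i.e. t = 2 ** (-a / b).
horizon_minutes = 2 ** (-a / b)
print(f"50% time horizon ~ {horizon_minutes:.0f} minutes")
```

Note that the horizon is a summary statistic of a fitted curve, not a hard cutoff: models still fail some shorter tasks and solve some longer ones, as the mixed results around the crossover point show.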

When the budget was increased to ten million tokens, GPT-5.3 Codex pushed that ceiling even further, achieving a 10.5-hour time horizon compared to 3.1 hours at the lower budget. This suggests that the true capability frontier may be significantly higher than standard benchmarks indicate.

Open-Source Models Trailing by Months

The study also found that open-source models consistently lag behind their closed-source counterparts by approximately 5.7 months — roughly one doubling period. While this gap provides a buffer, it also means capabilities that are exclusive to frontier labs today will likely be widely available within half a year.

Why It Matters

The acceleration from a 9.8-month to a 5.7-month doubling rate since 2024 suggests that recent advances in reasoning, agentic tool use, and code generation have disproportionately benefited offensive cyber applications. Tasks that once required hours of human expertise — reconnaissance, vulnerability discovery, exploit crafting — are increasingly within reach of automated systems.
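The doubling arithmetic behind these figures is easy to reproduce. The sketch below assumes roughly 84 months between GPT-2 (2019) and today's frontier models; that span is an approximation on my part, not a figure from the study. It recovers a long-run doubling period close to the reported 9.8 months, then extrapolates a 3-hour horizon forward at the faster post-2024 rate:

```python
import math

def doubling_months(h0_minutes, h1_minutes, elapsed_months):
    """Months per doubling implied by growth from h0 to h1 over elapsed_months."""
    return elapsed_months / math.log2(h1_minutes / h0_minutes)

def project(h_minutes, months_ahead, doubling=5.7):
    """Extrapolate a time horizon forward at a fixed doubling period."""
    return h_minutes * 2 ** (months_ahead / doubling)

# Reported endpoints: ~30 seconds (GPT-2, 2019) to ~3 hours today.
# 84 months (2019 -> 2026) is an assumed span, not from the study.
implied = doubling_months(0.5, 180, 84)
print(f"implied long-run doubling: {implied:.1f} months")

# At the post-2024 rate, a 3-hour horizon doubles roughly every 5.7 months.
print(f"projected horizon in one year: {project(180, 12):.0f} minutes")
```

Under these assumptions the implied long-run doubling lands near the reported 9.8 months, and a year of progress at the 5.7-month rate would push a 3-hour horizon past a full working day.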

Researchers cautioned that their findings likely underestimate actual progress, since performance jumps significantly when models are given more computational resources. The gap between benchmark results and real-world capability may be wider than previously assumed.

Implications for Defense

The study underscores the urgency of investing in AI-powered defensive cybersecurity tools. As Ledger CTO Charles Guillemet separately warned this week, AI-generated code and increasingly sophisticated malware demand a shift toward formal verification — using mathematical proofs to validate code — rather than relying solely on traditional security audits.

With offensive AI capabilities on this trajectory, the cybersecurity community faces a narrowing window to build defenses that can keep pace. The full dataset is available on GitHub and Hugging Face for independent verification.

The research adds to a growing body of evidence that AI safety evaluations need to account for rapid capability gains, particularly in high-stakes domains where the gap between helpful automation and dangerous exploitation is razor-thin.

