Research

Anthropic Refuses to Fix MCP Flaw Putting 200,000 Servers at Risk

Michael Ouroumis · 3 min read

A new report from OX Security reframes the Model Context Protocol (MCP), the open standard Anthropic created to connect AI agents with tools and data, as one of the most consequential AI supply-chain risks of 2026. Researchers say a design choice at the heart of MCP enables arbitrary command execution across roughly 200,000 servers and software packages representing more than 150 million downloads — and Anthropic has declined to change the architecture.

The flaw at the core of MCP

The OX Security research team, led by Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok and Roni Bar, says the issue lives in MCP's STDIO transport mechanism, which lets MCP clients spawn local subprocesses to talk to tools. In practice, that pathway allows unauthenticated command injection, prompt injection and remote code execution against a wide range of MCP-enabled software.
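To make the risk class concrete, here is a minimal sketch (not the MCP SDK's actual code, and not the specific flaw OX reported) of how a client that passes a configured server command through a shell differs from one that execs an explicit argument list. The function names are illustrative:

```python
import shlex
import subprocess


def spawn_stdio_server_unsafe(command: str) -> str:
    # Anti-pattern: the configured command string is handed to a shell,
    # so metacharacters like ';' or '$(...)' in a malicious or poisoned
    # server entry run arbitrary extra commands on the host.
    return subprocess.run(command, shell=True, capture_output=True, text=True).stdout


def spawn_stdio_server_safer(argv: list[str]) -> str:
    # Safer: an explicit argv list is exec'd directly with no shell parsing,
    # so shell syntax inside an argument stays a literal string.
    return subprocess.run(argv, capture_output=True, text=True).stdout


# A "server command" that smuggles in a second command via shell syntax.
malicious = "echo hello; echo INJECTED"
print(spawn_stdio_server_unsafe(malicious))                 # both commands run
print(spawn_stdio_server_safer(["echo", "hello; echo INJECTED"]))  # literal string only
```

The safer variant is the standard mitigation for this injection class, but it only addresses shell parsing; a subprocess spawned from an untrusted package still runs with the client's full local privileges, which is the deeper design concern the researchers raise.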

The team disclosed the findings on April 16, 2026 after an investigation that began in November 2025 and involved more than 30 coordinated disclosures. Ten high- and critical-severity CVEs have been issued for individual tools that rely on MCP, including Upsonic (CVE-2026-30625), Windsurf (CVE-2026-30615) and GPT Researcher (CVE-2025-65720), alongside issues in LangFlow and Flowise (tracked as GHSA-c9gw-hvqq-f33r).

A wide blast radius

MCP has become the de facto interconnect for AI agents across major vendors, so the affected product list is unusually broad. According to OX Security and reporting from The Register, vulnerable behavior has been reproduced against coding assistants including Claude Code, Cursor, Gemini-CLI and GitHub Copilot, as well as agent frameworks such as LangFlow and LiteLLM. Researchers also say they successfully poisoned nine of eleven MCP registries they tested with a benign trial package, illustrating how weak the ecosystem's trust boundaries remain.

Anthropic: 'expected' behavior

The most striking element of the story is Anthropic's response. Per The Register, the company declined to modify the protocol's architecture, arguing the STDIO execution model is an expected default and that sanitization is the developer's responsibility. A week after the initial report, Anthropic quietly updated its security guidance to recommend caution with STDIO adapters, but OX's researchers say this "didn't fix anything" because the underlying design still treats subprocess execution as a feature, not a weakness.

Why this one matters

MCP is no longer an Anthropic-only concern. OpenAI, Google, Microsoft and most major agent frameworks now ship MCP-compatible tooling, meaning any architectural weakness in the protocol flows downstream into enterprise deployments of AI agents, IDEs and autonomous coding tools. The OX report lands in the same week that Anthropic promoted Claude Opus 4.7 with new cybersecurity safeguards and kept its more powerful Mythos model in limited preview — heightening the contrast between the company's offensive-security narrative and its handling of defensive protocol design.

Implications

For CISOs, the immediate question is inventory: identifying which internal tools, IDE plugins and agents run MCP clients with STDIO transports, and whether those processes have access to secrets, source code or production systems. For regulators and standards bodies, the episode strengthens the case for treating MCP like other high-impact protocols — with formal threat modeling, signed registries and stricter defaults — rather than leaving safety to individual developers. And for Anthropic, refusing to change the architecture is now a position the rest of the AI industry will have to underwrite, audit or route around.
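That inventory step can be partly automated. The sketch below assumes the common `mcpServers` JSON shape used by several MCP clients, in which entries with a `command` key launch a local subprocess (STDIO transport) while entries with a `url` key use a network transport; actual config file names and locations vary by client:

```python
import json


def list_stdio_servers(config_text: str) -> list[tuple[str, str]]:
    """Return (name, command line) pairs for servers spawned as local subprocesses.

    Assumes the widely used `mcpServers` config shape; entries carrying a
    `command` key use the STDIO transport that the OX report focuses on.
    """
    config = json.loads(config_text)
    findings = []
    for name, entry in config.get("mcpServers", {}).items():
        if "command" in entry:
            cmd = " ".join([entry["command"], *entry.get("args", [])])
            findings.append((name, cmd))
    return findings


# Example config text; the server names and packages are illustrative.
sample = """
{
  "mcpServers": {
    "filesystem": {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "/etc"]},
    "remote-api": {"url": "https://example.com/mcp"}
  }
}
"""
for name, cmd in list_stdio_servers(sample):
    print(f"{name}: {cmd}")
```

Each flagged command line is a local process an AI agent can start, so the follow-up questions are which user account it runs as and what secrets or repositories that account can reach.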

