Back to stories
Research

Researchers Expose 26 Malicious LLM Routers Hijacking AI Agents and Stealing Credentials

Michael Ouroumis2 min read
Researchers Expose 26 Malicious LLM Routers Hijacking AI Agents and Stealing Credentials

A sweeping security study has revealed that dozens of third-party LLM API routers — the intermediary services developers use to cheaply access AI models — are actively compromising the AI agent supply chain by injecting malicious tool calls, stealing cloud credentials, and even draining cryptocurrency wallets.

The research, titled "Your Agent Is Mine: Measuring Malicious Intermediary Attacks on the LLM Supply Chain," was conducted by researchers at UC Santa Barbara, UC San Diego, World Liberty Financial, and blockchain security firm Fuzzland. The paper was published on April 9 and has rapidly drawn attention across the AI and cybersecurity communities.

What the Researchers Found

The team examined 428 LLM routers — 28 paid services sourced from marketplaces like Taobao, Xianyu, and Shopify storefronts, along with 400 free routers from public developer communities. Of those, 26 exhibited clearly malicious or suspicious behavior.

Nine routers actively injected malicious code into tool-call responses, with one paid router and eight free ones caught rewriting JSON outputs before they reached execution layers. Seventeen routers accessed or exfiltrated researcher-controlled AWS credentials. One router successfully drained cryptocurrency from a private key in a controlled test wallet. Two routers deployed adaptive evasion triggers designed to avoid detection during security audits.

The leaked credentials from the study yielded access to 100 million GPT-5.4 tokens and extensive unauthorized Codex sessions. In weakly configured test environments, compromised routers generated 2 billion billed tokens across 440 sessions.

Why This Matters Now

The vulnerability is particularly acute because LLM routers terminate TLS connections, giving them plaintext access to everything passing through — prompts, API keys, private keys, and tool calls. As AI agents increasingly operate autonomously, executing code and managing sensitive workflows without human review, a single compromised router can cascade into full system takeover.

The researchers noted that 91 percent of tested real-world Codex-like sessions ran with fully auto-approved tool execution, meaning malicious injections would proceed without any human checkpoint. One referenced client lost approximately $500,000 through a compromised router in a real-world incident.

A Growing Attack Surface

The researchers formalized four classes of router attacks: payload injection, secret exfiltration, dependency-targeted injection, and conditional delivery. They also developed "Mine," a research proxy for testing these vectors against four popular agent frameworks, and evaluated three client-side defenses.

Cryptocurrency developers and anyone using AI coding agents to work on smart contracts or wallets face the highest immediate risk. Private keys, seed phrases, and API tokens frequently pass through these systems in plaintext.

Recommended Defenses

The researchers recommend that developers avoid untrusted routers entirely, instead using official APIs or audited open-source alternatives. Teams should implement cryptographic verification of model responses, sandbox tool execution environments, enforce network allowlisting, and conduct regular security audits of their routing infrastructure. Most critically, sensitive credentials should never be transmitted through unverified intermediaries.

Learn AI for Free — FreeAcademy.ai

Take "AI Essentials: Understanding AI in 2026" — a free course with certificate to master the skills behind this story.

More in Research

Anthropic's Project Deal: 69 Employees, 186 AI-Brokered Trades, and a Quiet Warning About 'Agent Quality' Gaps
Research

Anthropic's Project Deal: 69 Employees, 186 AI-Brokered Trades, and a Quiet Warning About 'Agent Quality' Gaps

Anthropic let Claude agents handle real money on behalf of 69 staff in a closed marketplace. Opus 4.5 agents extracted measurably more value than Haiku 4.5 — and the people on the losing side never noticed.

3 days ago2 min read
Sony AI's Project Ace becomes first robot to beat elite table tennis players, lands Nature cover
Research

Sony AI's Project Ace becomes first robot to beat elite table tennis players, lands Nature cover

Sony AI's autonomous Project Ace robot defeated elite and professional table tennis players in real-world matches, marking the first time a machine has reached expert-level competitive play in a physical sport.

3 days ago3 min read
X Square Robot Unveils Wall-B Embodied AI Model, Promises Home Robots in 35 Days
Research

X Square Robot Unveils Wall-B Embodied AI Model, Promises Home Robots in 35 Days

Backed by Alibaba, ByteDance, Xiaomi and Meituan, X Square Robot debuted Wall-B, the first robot built on its World Unified Model architecture, with home deployments slated to begin within 35 days.

5 days ago2 min read