Policy

Anthropic Now Demands Photo ID and Selfie to Block Claude Access From China, Russia, and North Korea

Michael Ouroumis · 3 min read

Anthropic has started requiring some Claude users to submit government-issued photo IDs and live selfies in an effort to shut out access from US adversaries including China, Russia, and North Korea, according to a report by Juro Osawa in The Information published April 21, 2026. The policy escalates a months-long effort by the AI lab to enforce its geographic restrictions — and arrives as evidence mounts that Chinese firms have been routing around those rules at scale.

From quiet rollout to hard border

The identity checks first surfaced publicly in mid-April when screenshots of the verification screen spread on X, making Claude the first major consumer AI chatbot to demand passport-grade verification to access certain capabilities. Anthropic framed the step as necessary to "prevent abuse, enforce our usage policies, and comply with legal obligations," according to statements reported by the South China Morning Post.

The Information's reporting today adds a sharper national-security frame: Anthropic is specifically trying to block users connected to countries the US government treats as adversaries. It is also an acknowledgement, in effect, that Claude's official ban in China, Hong Kong, and Macau has not kept the product out of those markets.

The workaround economy

Despite the ban, Anthropic has "quietly flourished" in China through businesses that resell or relay API access, The Information reported. The South China Morning Post profiled one such service, AICodeMirror, which claims more than 10,000 registered users and over 200 institutional clients. VPN-based access and third-party wrappers have been the norm for Chinese developers who view Claude — and particularly Claude Code — as a top tool for software engineering tasks.

The new ID requirement is expected to sharply narrow that gray market. Early reporting suggests China-issued national ID cards are not accepted by Anthropic's verification partner, meaning users without a passport could be locked out entirely. Black-market vendors are already advertising workarounds, according to SCMP.

Part of a wider frontier-model crackdown

The verification push follows a coordinated move earlier this month in which OpenAI, Anthropic, and Google began sharing intelligence through the Frontier Model Forum to detect and disrupt attempts by Chinese AI firms to distill their models. Anthropic has said it documented 16 million unauthorized API exchanges tied to three named Chinese companies.

Implications

For enterprises, the change turns identity verification into a gating function for certain Claude capabilities — a shift that will ripple into procurement, compliance reviews, and data-residency conversations. For developers in restricted regions, it tightens an already narrow door. And for the broader AI industry, it sets a precedent: frontier labs are now willing to demand biometric identity checks to enforce export-style controls, even at the cost of user friction and a vocal backlash on social platforms.

The open question is enforcement. If relay platforms and forged-document markets can stay a step ahead of verification, Anthropic's wall will leak. If they cannot, Claude could become the first major US AI product to meaningfully wall itself off from one of the world's largest developer populations.

