Back to stories
Models

Anthropic Ships Claude Computer Use: Claude Can Now Control Your Mac While You Do Something Else

Michael Ouroumis3 min read
Anthropic Ships Claude Computer Use: Claude Can Now Control Your Mac While You Do Something Else

Anthropic has shipped Claude Computer Use, a research preview that hands Claude the wheel of your actual Mac. Keyboard, mouse, browser, spreadsheets — all of it. You assign tasks from your iPhone using the Dispatch app, and Claude autonomously works through them on your desktop while you step away.

The launch has been a long time coming, and the timing — in the same week that Anthropic published research showing Claude can complete a year's worth of theoretical physics work in two weeks — says something deliberate about how Anthropic is positioning itself.

How It Works

The setup is simpler than it sounds. Claude Pro or Max subscribers install the Dispatch iOS app, pair it with their Mac, and start delegating tasks. Claude gets direct access to the computer and uses it the way a person would — moving the mouse, clicking through interfaces, opening and switching between applications, filling out forms, reading content on-screen.

The system includes prompt-injection scanning, a safeguard designed to catch cases where malicious content on a webpage or in a document tries to redirect Claude's behavior mid-task. It's an important protection: as Claude navigates the web and interacts with third-party content during an autonomous session, those surfaces become potential attack vectors.

The current release is macOS only, Pro/Max tier only, and labeled a research preview — which typically means Anthropic expects to learn from real-world usage before expanding access.

Why Now

The launch of Claude Computer Use is part of a broader capability expansion Anthropic has been building toward. Boris Cherny, Claude Code's product manager, reflected publicly that a handful of people at Anthropic Labs shipped MCP, Skills, Desktop app, Claude Code, and now full Computer Use — all starting from rough prototypes of Sonnet 3.6, betting that the model would improve fast enough to make the investments worthwhile.

That bet has paid off. The same week Computer Use launched, Anthropic published research showing Claude Opus 4.6 functioning as a second-year graduate student in theoretical physics, completing a publishable paper on particle physics in roughly two weeks. Combined with desktop autonomy, Anthropic is building something that looks increasingly like a general-purpose AI colleague: one that can reason at an advanced level and now also take direct action on your computer.

The competitive framing is clear. OpenAI has Operator. Platforms like OpenClaw are building out agent ecosystems. Google has its own agentic push. Anthropic's entry into true desktop autonomy — not just text generation that requires a human to copy-paste the output — closes a gap that competitors had been exploiting.

What This Means for Users

For power users, this is a meaningful shift in what Claude can actually do for them day-to-day. Scheduling, research, filling in forms, running repetitive browser-based workflows, drafting and sending documents — tasks that previously required a human to do the clicking can now be delegated.

The practical limits will become clear quickly. Autonomous computer use is still prone to errors on complex multi-step tasks, and the safeguards around prompt injection are a recognition that the attack surface is real. But the direction is unambiguous: AI assistants are moving from advisors that produce text to agents that take direct action.

Claude Computer Use's research preview status means Anthropic is explicitly treating this as a learning phase. Expect capability expansions, bug fixes, and probably some genuinely surprising use cases to emerge from early adopters before a broader rollout.

Learn AI for Free — FreeAcademy.ai

Take "AI Essentials: Understanding AI in 2026" — a free course with certificate to master the skills behind this story.

More in Models

xAI Launches Grok Voice Think Fast 1.0, Tops τ-Voice Bench and Powers Starlink Support
Models

xAI Launches Grok Voice Think Fast 1.0, Tops τ-Voice Bench and Powers Starlink Support

xAI's new voice model scored 67.3% on the τ-voice Bench — well ahead of Gemini 3.1 Flash Live and GPT Realtime — and is now powering Starlink's phone sales and support with a 70% autonomous resolution rate.

2 days ago2 min read
Tencent Drops Hy3 Preview: 295B Open-Source MoE Model Kicks DeepSeek Out of Yuanbao
Models

Tencent Drops Hy3 Preview: 295B Open-Source MoE Model Kicks DeepSeek Out of Yuanbao

Tencent has open-sourced Hy3 Preview, a 295B/21B-activated mixture-of-experts model built in under three months. The Yuanbao chatbot is switching its primary engine from DeepSeek to the new in-house model.

4 days ago2 min read
DeepSeek V4 Preview Lands: 1.6T-Parameter Open Model With 1M Context, Flash Pricing at $0.14/M
Models

DeepSeek V4 Preview Lands: 1.6T-Parameter Open Model With 1M Context, Flash Pricing at $0.14/M

DeepSeek on April 24 released preview versions of V4-Pro and V4-Flash, an open-weight MoE family with a 1M-token context window and pricing that undercuts Western frontier labs.

4 days ago2 min read