Policy

White House Issues Executive Order on AI Safety Standards

Michael Ouroumis · 2 min read

The White House has issued a new executive order establishing mandatory safety testing requirements for AI models that exceed certain capability thresholds. The order represents the most significant federal action on AI safety to date.

Key Provisions

Mandatory Safety Testing

AI developers must conduct and report results from a standardized battery of safety evaluations before deploying models that meet or exceed defined capability thresholds.

Reporting Requirements

Companies developing frontier AI models must notify the government when beginning training runs that exceed certain compute thresholds. They must also share safety evaluation results within 30 days of completing testing.

Red-Teaming Standards

The order establishes standardized red-teaming protocols that must be followed before deployment. These include both automated testing and human evaluation by independent third parties.

Industry Response

The major AI labs have generally responded positively, noting that many of the requirements align with voluntary commitments they made previously. However, some smaller companies have expressed concern about the compliance burden.


Implementation Timeline

The executive order takes effect in phases:

  1. Immediate — Reporting requirements for training runs exceeding compute thresholds
  2. 90 days — Publication of detailed safety testing protocols by NIST
  3. 180 days — Full compliance with safety testing requirements
  4. 1 year — First annual review and potential updates to capability thresholds

International Coordination

The order includes provisions for coordinating with allies on AI safety standards, building on the Bletchley Declaration and subsequent international agreements. The UK AI Safety Institute's Alignment Project, which now includes OpenAI and Microsoft, represents one concrete example of this coordination in action. The goal is to prevent a race to the bottom where companies relocate to jurisdictions with weaker oversight.

What It Means

The executive order signals that AI regulation in the United States is moving from voluntary commitments to enforceable requirements. While the scope is currently limited to frontier models, the framework could be expanded as AI capabilities continue to advance. Globally, the EU AI Act takes a broader approach with its risk-based classification system, while China mandates government review for all models before public release.

