
White House Unveils National AI Framework, Urges Congress to Preempt State Laws

By Michael Ouroumis | 2 min read

The Trump administration on Friday released a four-page legislative blueprint laying out its vision for national AI regulation, urging Congress to create a single federal framework that would preempt the growing patchwork of state-level AI laws.

The document, formally titled the National Policy Framework for Artificial Intelligence: Legislative Recommendations, arrives as state legislatures across the country race to regulate artificial intelligence on their own terms — with over 1,500 AI-related bills already introduced in 45 states during the 2026 session alone.

Six Guiding Principles

The framework is organized around six core priorities for lawmakers: protecting children online, keeping electricity costs in check as data centers proliferate, safeguarding intellectual property rights, preventing AI-driven censorship, educating Americans on how to use AI responsibly, and removing barriers to innovation.

"Congress should preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones," the framework states. States like Washington have already enacted their own AI regulations — here is what Washington's AI laws mean for businesses operating there today.

Child Safety and Consumer Protection

On child protection, the blueprint calls for parental controls over accounts and devices to safeguard children's privacy. It also supports features designed to combat potential sexual exploitation and self-harm facilitated by AI systems.

The framework recommends that Congress address the use of AI replicas that simulate a person's likeness or voice — a growing concern as deepfake technology becomes more accessible.

Energy and Infrastructure

Recognizing the massive energy demands of AI infrastructure, the document calls on Congress to streamline permitting so data centers can generate their own power on-site. It also proposes codifying a requirement that tech companies pay for their increased energy consumption rather than passing costs on to ratepayers.

Copyright and Innovation

In one of its most closely watched positions, the administration stated it "believes that training of AI models on copyrighted material does not violate copyright laws," while acknowledging that "arguments to the contrary exist" and supporting the courts in resolving the matter.

To encourage experimentation, the framework proposes establishing "regulatory sandboxes" that would allow developers to test AI systems under relaxed rules.

Limits on Preemption

Notably, the administration carved out several exceptions. It does not seek to preempt state enforcement of general laws against AI developers for fraud, consumer protection, or child safety. Local authorities would retain control over data center siting decisions and how states procure AI tools for education and law enforcement.

What Comes Next

The administration called on Congress to convert the framework into legislation "in the coming months" — a timeline that puts pressure on lawmakers to act before the 2026 midterm elections shift political dynamics. Whether the blueprint can bridge the gap between pro-innovation and safety-focused factions in Congress remains an open question.

