Policy

OpenAI Quietly Removes 'Safety' From Its Mission Statement

Michael Ouroumis · 2 min read

OpenAI has altered its mission statement, removing the word "safely" from its commitment to developing artificial general intelligence. The company previously pledged to build AGI that is "safe and beneficial to humanity." The updated language drops the safety qualifier entirely.

What Changed

The original OpenAI charter, published in 2018, centered safety as a core principle. The company was founded explicitly as a counterweight to unchecked AI development, with the stated goal of ensuring powerful AI systems would be developed responsibly.

The revised mission statement now focuses on making AGI "beneficial to humanity" without the safety modifier. OpenAI has not issued a public statement explaining the change; it was spotted by researchers and policy advocates who monitor the company's governance documents.

The Context

The timing is significant. OpenAI is in the process of restructuring from its unusual capped-profit model into a fully for-profit corporation, and the mission-statement change arrives in the middle of that transition.

Paradoxically, OpenAI and Microsoft simultaneously joined the UK AI Safety Institute's Alignment Project, suggesting the company's relationship with safety is more complex than the mission statement change alone implies.

Critics argue the mission change reflects a company that has systematically deprioritized safety in favor of growth and market dominance. Supporters counter that safety work continues internally regardless of the mission statement's wording.

Industry Reaction

The AI safety community responded with alarm. Multiple researchers pointed out that OpenAI's original appeal — the reason many top scientists joined the company — was its explicit commitment to cautious, safety-first development.

Several former OpenAI employees posted on social media noting the contrast between the company's founding principles and its current trajectory. One former researcher described it as "the final page turn in a story that's been unfolding for two years."

What It Means

Mission statements are symbolic, but symbols matter. OpenAI's original safety commitment served as a benchmark against which its actions could be measured. Removing that language reduces external accountability at precisely the moment the company is building its most powerful systems yet.

Whether OpenAI's actual safety practices have changed is a separate question — but the willingness to drop the word from its public-facing mission suggests where the company's priorities now lie. Meanwhile, the Pentagon has fast-tracked competitor xAI's Grok for classified systems, showing that the military establishment is not waiting for the safety debate to be resolved.

More in Policy

Google Signs Classified AI Deal With Pentagon for 'Any Lawful Government Purpose'

Google has entered a classified agreement allowing the US Department of Defense to deploy its AI models for any lawful government purpose, with non-binding limits on mass surveillance and autonomous weapons.

6 hours ago · 2 min read
EU Heads to Trilogue April 28 With Plan to Delay High-Risk AI Rules to 2027

Brussels negotiators meet today aiming for a political deal on the AI Omnibus that would push the high-risk AI Act deadline back to December 2027 and lock in firm watermarking rules for synthetic content.

19 hours ago · 2 min read
State Department Orders Global Push to Warn Allies About Alleged Chinese AI Theft

A U.S. State Department diplomatic cable instructs embassies worldwide to raise concerns with foreign governments about Chinese AI firms — DeepSeek, Moonshot AI, and MiniMax — allegedly extracting and distilling models from American labs.

1 day ago · 3 min read