
Bombshell New Yorker Investigation Accuses Sam Altman of 'Consistent Pattern of Lying'

Michael Ouroumis · 3 min read

A sweeping investigation published by The New Yorker on April 6 — and dominating industry discussion today — paints a damning portrait of OpenAI CEO Sam Altman, alleging years of deception toward board members, co-founders, and safety researchers at the company building what many consider the world's most consequential technology.

The report, authored by Ronan Farrow and Andrew Marantz over an 18-month period, draws on interviews with more than 100 sources and over 200 pages of previously undisclosed internal documents.

The Memos

At the heart of the investigation are internal memos compiled by two of OpenAI's most prominent former leaders. Ilya Sutskever, OpenAI's former chief scientist, reportedly spent weeks in fall 2023 assembling roughly 70 pages of Slack messages, HR documents, and analysis. One memo states bluntly: "Sam exhibits a consistent pattern of lying."

Separate notes from Dario Amodei, who left OpenAI to co-found Anthropic — and who stood awkwardly alongside Altman at India's AI summit just weeks before — are equally direct: "The problem with OpenAI is Sam himself."

Safety Infrastructure Gutted

The investigation traces a systematic dismantling of OpenAI's safety apparatus. The superalignment team — announced in mid-2023 with a pledge of 20% of the company's compute — was dissolved in May 2024 after co-leads Sutskever and Jan Leike departed. Insiders told The New Yorker that the team's actual compute allocation was "between one and two per cent," far short of the promised resources.

The AGI-readiness team followed in October 2024 when leader Miles Brundage left. The Mission Alignment team, superalignment's successor, was disbanded in February 2026 after just 16 months. Most strikingly, the investigation found that OpenAI removed the word "safely" from its mission statement in IRS filings and dropped safety from its list of significant activities.

Broader Allegations

The report extends well beyond safety concerns. It alleges that former Y Combinator partners told the accelerator's co-founder Paul Graham that Altman "had been lying to us all the time" before Altman's 2019 departure from the firm. Board member Helen Toner reportedly discovered that GPT-4 features Altman claimed were safety-approved "had never been approved."

The investigation also examines Altman's pursuit of up to $50 billion in funding from Saudi Arabia and the UAE, as well as a Pentagon partnership that OpenAI secured after rival Anthropic was blacklisted for refusing to remove its restrictions on autonomous-weapons use.

OpenAI's Response

Altman told The New Yorker that "my vibes don't really fit with a lot of this traditional A.I.-safety stuff" and that his positions have evolved in good faith. In what critics called a conspicuously timed move, OpenAI announced a new Safety Fellowship for external researchers hours after the investigation went live. The fellowship will run from September 2026 through February 2027 and offer stipends, compute support, and mentorship — though fellows will not have access to OpenAI's internal systems.

What It Means

The investigation lands at a pivotal moment for OpenAI, which is generating $2 billion in monthly revenue and preparing for a potential IPO as early as Q4 2026. Whether these allegations materially affect investor confidence, regulatory scrutiny, or public trust remains to be seen — but they ensure that questions about who should be trusted to steward the most powerful AI systems on Earth will only grow louder.

