Policy

Nebraska Supreme Court Suspends Omaha Attorney Over AI-Fabricated Citations

Michael Ouroumis · 2 min read

The Nebraska Supreme Court has temporarily suspended Omaha attorney Greg Lake from practicing law after he submitted an appellate brief riddled with citations that did not exist — citations the court ultimately concluded were generated by artificial intelligence. The suspension order, signed by Nebraska's chief justice in a one-page document on April 15, 2026, has quickly become one of the most prominent AI accountability rulings of the year and a warning shot to the legal profession.

What the brief contained

Lake originally argued the appeal — a divorce case — before the state's highest court in February. According to the court's findings, 57 of the brief's 63 citations were defective. Twenty were fully fabricated case references, and four pointed to cases that do not exist in any jurisdiction. Justices noticed the irregularities during oral argument and pressed Lake on why his brief contained so many errors.

From a broken-computer excuse to an AI admission

Lake initially told the court he had been celebrating his 10th wedding anniversary and that his computer had broken while traveling, suggesting he had inadvertently uploaded the wrong version of the brief. He later admitted he had used AI to draft the document, calling his earlier explanation a "grave error of judgment" and acknowledging that he had not been forthright with the court.

The length of the suspension will depend on the outcome of a full disciplinary investigation. A court-appointed referee will recommend how long Lake should be barred from practice.

A pattern, not an isolated incident

The Nebraska ruling lands amid a wider crackdown on AI-induced "hallucinations" in legal filings. U.S. courts have reportedly imposed at least $145,000 in sanctions against attorneys for AI citation errors in the first quarter of 2026 alone. Bar associations have begun publishing more aggressive guidance, and several state courts now require attorneys to certify whether generative AI was used to draft filings — and to verify every citation independently.

Why this matters

For the legal industry, the case crystallizes a tension that has been building since the first wave of ChatGPT-fueled filings in 2023: AI tools can dramatically accelerate research and drafting, but they remain capable of producing confident, fluent prose anchored to references that simply do not exist. Lake's story is a textbook example of what happens when that risk is not actively managed.

For AI vendors, it is another reminder that liability does not stop at the model. Courts are signaling that responsibility lives with the human professional who signs the filing — but the reputational damage from high-profile hallucination cases will continue to push enterprise customers, especially in regulated fields, toward tools with stronger citation grounding, retrieval guardrails, and verifiable source links.

Lake's case is unlikely to be the last. It may, however, be the one that finally forces the legal profession to treat AI verification as malpractice prevention rather than a productivity bonus.

