Policy

Man Pleads Guilty to $8M AI Music Streaming Fraud — Created Hundreds of Thousands of Fake Songs

Michael Ouroumis · 2 min read

A North Carolina man has pleaded guilty to one of the most brazen AI-assisted fraud cases in the music industry's history — using artificial intelligence to generate hundreds of thousands of songs and bots to stream them billions of times, fraudulently collecting more than $8 million in royalties.

Michael Smith's scheme, confirmed by the Department of Justice's Southern District of New York, represents a new frontier in AI-enabled fraud: systematically exploiting the economics of streaming royalty systems at a scale that was previously impossible without AI.

How the Scheme Worked

The mechanics were straightforward and scalable:

  1. Generate content at scale — Smith used AI tools to produce hundreds of thousands of songs. These weren't high-quality productions — quantity was the point, not artistry.

  2. Upload everywhere — the AI-generated tracks were distributed across major streaming platforms including Spotify, Apple Music, and Amazon Music.

  3. Bot the streams — automated bots were deployed to stream the songs "billions" of times, according to the DOJ. Streaming platforms calculate royalty payments based on play counts, so artificially inflated streams translate directly into real money.

  4. Collect the royalties — the fraudulent stream counts generated over $8 million in royalty payments that Smith received.
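The economics behind these steps can be sketched with a back-of-the-envelope calculation. The per-stream rate below is an assumption for illustration only; platforms pay varying, undisclosed rates, and the DOJ filing does not specify one:

```python
# Rough sketch of the fraud economics described above.
# PER_STREAM_RATE is a hypothetical average payout; real rates vary
# by platform and are not part of the public case details.

PER_STREAM_RATE = 0.004   # assumed average payout per stream, in USD
NUM_TRACKS = 500_000      # "hundreds of thousands" of AI-generated songs

def streams_needed(target_royalties: float, rate: float = PER_STREAM_RATE) -> int:
    """Total bot streams required to collect a given royalty amount."""
    return round(target_royalties / rate)

total_streams = streams_needed(8_000_000)   # the $8M cited by the DOJ
per_track = total_streams / NUM_TRACKS      # average streams per track

print(f"{total_streams:,} streams total")
print(f"~{per_track:,.0f} streams per track on average")
```

Under these assumed numbers, $8 million works out to roughly two billion streams, but only a few thousand per track. Spreading the volume across hundreds of thousands of songs keeps any single track's play count unremarkable, which is one plausible reason the volume of content mattered as much as the volume of streams.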

Why This Case Matters

Smith's case isn't just about one person's fraud — it's a preview of a systemic vulnerability in how the music industry monetizes streaming.

Streaming royalty systems were designed for a world where creating and uploading music had meaningful friction. AI has eliminated that friction entirely. Anyone with access to a music generation AI and basic technical knowledge can now produce thousands of "songs" in hours. Combine that with bot infrastructure for artificial streaming, and the fraud economics are compelling.

The scale Smith achieved — billions of streams, $8 million in royalties — required AI. A human couldn't manually create and upload hundreds of thousands of tracks. That's what makes this case a landmark: it demonstrates that AI doesn't just assist fraud, it enables fraud at a scale that wasn't previously feasible.

The Industry Response

Streaming platforms have been fighting bot-driven streaming fraud for years, but the AI content generation layer adds a new dimension. Previously, fraudsters needed some amount of real music. Now they need none.

Spotify, Apple Music, and Amazon have not commented specifically on this case. The broader industry body representing performance rights organizations has acknowledged that AI-generated content is complicating existing royalty frameworks — though the focus has been on compensation questions for human artists, not fraud prevention.

What Comes Next

Smith's guilty plea is significant but unlikely to deter others. The technical barrier to replicating his scheme is low, and the potential upside — millions in fraudulent royalties — is high. Until streaming platforms develop more robust detection systems capable of identifying AI-generated content and distinguishing genuine listeners from bots at scale, the vulnerability remains.

The music industry spent years dealing with stream manipulation fraud. AI just made the problem orders of magnitude worse.

