Policy

Linux Kernel Formally Allows AI-Generated Code — With Humans On The Hook

Michael Ouroumis · 2 min read

The Linux kernel project has ended a months-long argument over artificial intelligence by doing something characteristically Torvalds: refusing to ban the tools, refusing to romanticize them, and making humans eat every mistake the machines make. On April 12, kernel maintainers agreed on a formal, project-wide policy that explicitly allows AI-assisted code contributions, provided submitters follow strict new disclosure rules and accept full accountability for what they ship.

Linus Torvalds ultimately cut the debate short, reportedly dismissing calls for an outright ban as "pointless posturing" and framing AI as just another tool in the developer's belt. The decision closes a fight that had been running since at least January, as maintainers wrestled with a flood of low-quality, machine-generated patches — what detractors called "AI slop" — showing up on kernel mailing lists.

What the policy actually says

The new rules permit developers to use systems such as GitHub Copilot while insisting that human contributors remain fully accountable for every line they submit. That includes code quality, license compliance, and any bugs or security problems that emerge downstream. A developer can prompt Copilot for a suggestion, but the moment they add their Signed-off-by line, they are personally attesting to its correctness.

To make the provenance visible, the kernel is introducing a new "Assisted-by" tag for patches that involved AI. The tag is meant to identify which model and which tools were used, giving maintainers and reviewers a clearer view of how a submission was produced. Crucially, AI agents themselves are forbidden from adding Signed-off-by tags — only humans can take the legal step of certifying a patch.

Why this matters beyond the kernel

The Linux kernel is not just another open-source project. Its contribution norms — the Developer Certificate of Origin, the Signed-off-by workflow, the maintainer hierarchy — have been copied across thousands of downstream projects for two decades. When the kernel adopts a stance on AI, it becomes the de facto template for Git-based open-source governance.

The "humans pay for every mistake" framing also sends a clear signal to enterprises now deploying coding agents at scale. As AI-generated patches proliferate across GitHub, GitLab, and internal repos, kernel-style accountability rules give legal and security teams something concrete to point to. Expect the Assisted-by tag, or close cousins of it, to spread quickly.

The middle ground

The most notable aspect of the decision may be how unremarkable it looks in hindsight. Rather than adopting either of the extremes — ban AI contributions outright, or treat them like any other patch — Torvalds and his maintainers picked a middle path: transparency plus human liability. It is a bet that the kernel's decades-old discipline of individual responsibility can absorb a new class of tool without losing its character.

For now, that bet holds. Whether it survives the first major AI-introduced CVE is a different question.

More in Policy

Google Signs Classified AI Deal With Pentagon for 'Any Lawful Government Purpose'

Google has entered a classified agreement allowing the US Department of Defense to deploy its AI models for any lawful government purpose, with non-binding limits on mass surveillance and autonomous weapons.

6 hours ago · 2 min read
EU Heads to Trilogue April 28 With Plan to Delay High-Risk AI Rules to 2027

Brussels negotiators meet today aiming for a political deal on the AI Omnibus that would push the high-risk AI Act deadline back to December 2027 and lock in firm watermarking rules for synthetic content.

19 hours ago · 2 min read
State Department Orders Global Push to Warn Allies About Alleged Chinese AI Theft

A U.S. State Department diplomatic cable instructs embassies worldwide to raise concerns with foreign governments about Chinese AI firms — DeepSeek, Moonshot AI, and MiniMax — allegedly extracting and distilling models from American labs.

1 day ago · 3 min read