Linux Kernel Embraces AI Code—But Developers Held Fully Accountable

5 articles · Updated · GIGAZINE(ギガジン) · Apr 13
  • The Linux kernel project has formally approved the use of AI-generated code, placing full legal responsibility on the human submitter for any issues.
  • AI tools may assist with code, but only humans can certify contributions with a 'Signed-off-by' tag and must review and ensure license compliance.
  • This policy aims to maintain code quality and transparency, responding to concerns over 'AI slop' and legal risks from unclear code provenance.
  • Is the Linux kernel's new AI policy a pragmatic solution or a dangerous gamble on critical global infrastructure?
  • How will projects enforce AI disclosure when AI-written code is becoming undetectable?
  • Can a simple disclosure tag truly fix the 'AI slop' quality crisis overwhelming open-source maintainers?
  • Developers are now liable for AI bugs. What will the first major lawsuit over an AI-generated flaw look like?
  • If AI writes the code, is the most valuable human skill left in software engineering simply the ability to say no?
  • AI promises hyper-efficient 'one-pizza teams.' Is this the future of development or a recipe for faster disasters?

How the Linux Kernel’s 2026 AI Policy Mandates Human Liability to Combat Low-Quality AI Code

Overview

In early 2026, the Linux kernel project introduced a formal AI policy: developers must disclose AI assistance with an 'Assisted-by' tag, while legal responsibility remains exclusively with human contributors. The policy responded to growing AI use, concerns about low-quality 'AI slop,' and early experiments with AI-assisted review tools. AI-generated code demands thorough human auditing because of licensing and security risks, yet AI has proven especially effective at enhancing code review and security analysis. The policy has also influenced other major open-source projects and prompted investments in AI tooling to help maintainers manage the rising volume of AI-generated contributions, underscoring transparency and human accountability as essential safeguards.
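In practice, these tags are Git commit trailers appended to a patch's commit message. The 'Signed-off-by' line is the kernel's long-standing certification under the Developer's Certificate of Origin, which only a human can give; the 'Assisted-by' line discloses the AI tool involved. A minimal sketch of what such a commit message might look like (the subject line, tool name, and contributor are hypothetical):

```
mm: fix off-by-one in page range check

An AI assistant drafted the initial change; the human submitter
reviewed it, verified license compliance, and accepts full
responsibility for the result.

Assisted-by: <AI coding tool and version>
Signed-off-by: Jane Developer <jane@example.org>
```

The division of labor is visible in the trailers themselves: the tool is named for transparency, but only the human's sign-off certifies the contribution and carries the legal weight.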

...