Pentagon says humans, not AI, make lethal decisions in warfare
9 articles · Updated · CNN · May 7
The stance follows scrutiny of a February US strike on an Iranian elementary school, which Iranian state media said killed at least 168 children.
Officials said commanders remain legally responsible for targeting, but current law and Pentagon policy set no explicit limits on where AI can be used in the kill chain.
The US military is using AI tools from firms including Palantir and Anthropic to speed target selection in Iran, while the Pentagon investigates the school strike and faces growing ethical and legal questions.
When an AI's targeting error kills civilians, who is truly held responsible: the human operator, the commander, or the software developer?
With a global AI arms race now underway, is a future of fully autonomous warfare becoming inevitable?
The 2026 Pentagon-Anthropic Standoff: AI Ethics, Autonomy, and National Security at War
Overview
In 2026, the Pentagon demanded that Anthropic remove ethical restrictions on its AI model Claude that prohibited its use in autonomous weapons and mass surveillance. Anthropic refused, leading President Trump to order a phase-out of Claude and the Defense Secretary to blacklist Anthropic as a supply chain risk. Despite this, Claude was used in military operations in Iran, accelerating targeting but exposing risks such as automation bias and security vulnerabilities. The Pentagon responded by diversifying its AI suppliers, including OpenAI, while the dispute sparked legal challenges and raised global concerns about unchecked military use of AI. The conflict highlights the urgent need for clear regulations, reliable AI systems, and ethical safeguards in warfare.