US military strikes over 1,000 targets in Iran using AI Maven system
10 articles · Updated · The Verge · Apr 24
In the first 24 hours, US forces hit more than 1,000 targets in Iran, enabled by the Maven Smart System and large language models like Claude.
The rapid targeting pace led to incidents such as a strike on a girls’ school that killed over 150 people, mostly children, after the site was misidentified because of outdated data.
Project Maven, developed with tech firms including Palantir, Microsoft, and Amazon, has accelerated the pace of warfare, raising concerns about data reliability, civilian harm, and the diminishing role of human oversight in military decision-making.
Project Maven's founder aimed to save lives, so how did his creation become a tool for mass-scale industrial warfare?
When an AI's error leads to a tragedy like the Minab school strike, who is held accountable: the operator, the coder, or the machine?
If AI targeting accuracy is as low as 30% in snow, are we accepting machine error as the new cost of modern war?
As the Pentagon blacklists AI firms over ethics, can any tech company truly prevent its technology from being used for “any lawful purpose”?
With AI now in submarines and autonomous drones, is a fully automated war, free of human control, now inevitable?