Fortune Authors Propose 3-Stage Test for 1,200-Plus U.S. AI Bills
Fortune · May 15
A three-stage framework would screen AI bills by asking first whether existing law already covers the harm, then weighing four cost-benefit dimensions, and finally testing a bill's design, durability, adaptability, and enforceability.
More than 1,200 AI-related bills were introduced in state legislatures in 2025 and just under 150 were enacted, the authors say, leaving companies to navigate a fast-growing patchwork with no shared policy standard.
The proposal argues that many state measures duplicate existing consumer-protection, civil-rights, and privacy law, while broad federal preemption and mandatory frontier-model approval regimes are likewise misdirected or too difficult to enforce.
It points instead to narrow state laws targeting concrete harms such as deepfakes, election fraud, and child sexual abuse material, while reserving federal action for frontier-model cyber or CBRN risks of the kind highlighted by Anthropic's Mythos disclosure.
The authors frame the next 12 months as pivotal as California and New York laws take effect, Texas runs a 36-month sandbox, Connecticut enacts SB 5, and Washington weighs tougher national AI oversight.
While the EU and China have clear AI strategies, is America's fragmented approach a competitive weakness or a source of innovation?
As autonomous AI agents begin to act independently, who is legally responsible when their decisions cause unintended harm?