Blake Moore introduces federal bill banning AI chatbot children’s toys
WIRED · May 8
The Utah congressman filed the AI Children's Toy Safety Act on April 20, the first US federal proposal to ban the manufacture and sale of such toys.
The push follows tests and research showing that some AI toys discussed sex, drugs, and violence, used guilt-based prompts to keep children engaged, and raised concerns about privacy and social development.
States including Maryland and California are pursuing restrictions, while consumer groups say current toys often rely on adult AI models with weak vetting and inadequate safeguards for young children.
With AI toys failing safety tests, can AI models ever be truly redesigned to be safe for a child's mind?
Your child’s AI toy is always listening. Where does that data go, and how could it be used tomorrow?
April 2026 AI Legislation Targets Child Safety Amid Privacy, Mental Health, and Manipulation Concerns
Overview
In response to growing safety concerns around AI-powered toys and chatbots, the U.S. Congress introduced two key bills in April 2026: the AI Children's Toy Safety Act and the GUARD Act. These bills would ban untested AI companion toys for minors and prohibit AI chatbots from giving medical or mental health advice, citing research that points to risks to children's privacy and mental health and their vulnerability to manipulation. Child safety groups strongly support the measures, while the AI industry opposes them on innovation and competitiveness grounds. Implementation challenges such as age verification and regulatory scope remain, but bipartisan momentum and parallel moves abroad signal a shift toward protecting children in the AI era.