Common Sense Media launches Youth AI Safety Institute to test child AI safety

7 articles · Updated · CNN · May 5
  • The lab starts with a $20 million annual budget backed by OpenAI, Anthropic and Pinterest, and plans to begin publishing research this month.
  • It will red-team leading AI models used by young people, issue consumer-friendly guides for families and create benchmarks companies can use to improve safeguards.
  • The launch follows lawsuits and investigations over chatbot harms to minors, as Common Sense argues independent standards are needed because rapid AI updates outpace company self-policing.
Beyond obvious dangers, how will we measure AI’s hidden impact on a child's developing brain?
With AI giants funding their own watchdog, can the new safety institute truly be independent?
AI models change daily, so how can a safety rating ever be truly reliable for parents?

Urgent Youth AI Safety Measures: California Launches Institute and Enacts Parents & Kids Safe AI Act

Overview

In response to a growing youth mental health crisis, worsened by unregulated social media and by AI chatbots now deeply embedded in teens' lives, California launched the Youth AI Safety Institute and enacted the Parents & Kids Safe AI Act in 2026. The law requires advanced age verification, bans harmful AI-generated content, and mandates parental control tools to protect minors. Enforced by the California Attorney General, the Act emerged from a unified effort by advocacy groups and tech leaders such as OpenAI and is intended to serve as a national model. With bipartisan political support and ongoing education initiatives, California aims to avoid repeating past digital harms and to ensure safer AI experiences for children.

...