OpenAI launches European youth safety blueprint and EMEA wellbeing grants
Updated · OpenAI · May 5
The company named 12 grant recipients that will share €500,000 across Europe, the Middle East and Africa, including groups in Ukraine, Kenya, Jordan, Britain, France, Germany, Ireland and Italy.
Its blueprint proposes five policy pillars: responsible AI use in education, privacy-preserving age assurance, under-18 safety policies, protections against manipulative outputs, and common parental-control standards.
OpenAI said the measures complement work with governments, schools and institutions including Education for Countries, the University of Tartu and the Beneficial AI for Children coalition.
OpenAI Launches European Youth Safety Blueprint with Five Core Principles and €500,000 EMEA Grants
Overview
OpenAI launched the European Youth Safety Blueprint, introducing five core principles to protect minors interacting with AI. Developed with input from child safety organizations, the blueprint emphasizes privacy-preserving age assurance, strict content moderation, parental controls, wellbeing tools, and transparency. Alongside it, OpenAI announced a €500,000 EMEA Youth and Wellbeing Grants program funding 12 organizations across 10 countries to advance AI literacy, mental health, and safety research. The initiatives align with major European regulations, have drawn support from child safety partners, and have inspired similar efforts such as the proposed California Parents & Kids Safe AI Act.