OpenAI launches ChatGPT Trusted Contact alerts for self-harm and suicide risks
Updated · The Verge · May 7
The opt-in feature lets adult users worldwide name another adult (19 or older in South Korea) to receive limited email, text, or in-app alerts after human review.
If automated systems detect a possible self-harm discussion, ChatGPT urges the user to seek help and warns that their designated contact may be notified; alerts never include chat transcripts.
The rollout expands a teen emergency-contact tool introduced with parental controls in September, after a 16-year-old's suicide, and complements ChatGPT's localised helpline referrals.
With AI reading chats for self-harm, where is the line between a safety net and a surveillance tool?
When an AI alerts a friend to a crisis, does it create an untrained first responder with unforeseen legal risks?
How OpenAI’s Trusted Contact and GPT-5 Upgrade Aim to Prevent AI-Related Suicides After High-Profile Lawsuits
Overview
In early 2026, OpenAI launched the Trusted Contact feature as a safety measure for users at risk of self-harm. A user can designate a trusted adult who, after accepting an invitation, may be notified when both automated systems and human reviewers confirm a serious safety concern; to protect privacy, the alert does not include conversation details. Developed with mental health experts, the feature complements GPT-5's improved handling of sensitive conversations. Trusted Contact emerged in response to lawsuits alleging ChatGPT's role in user harm, and aims to bridge AI support and real-world help. Its effectiveness is limited by reliance on user opt-in, cultural differences, and the difficulty of accurately detecting distress, which OpenAI says it will address through ongoing refinement and research.
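The reported flow amounts to a staged escalation: automated detection, an in-chat warning, human review, and only then a limited, transcript-free notification. The sketch below is purely illustrative of that publicly described sequence; it is not OpenAI's implementation, and every name in it (classify_self_harm_risk, human_review_confirms, send_alert) is hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: hypothetical names, placeholder logic.
# It mirrors the reported flow (detect -> warn in-chat -> human
# review -> limited alert), not OpenAI's actual system.

@dataclass
class User:
    id: str
    trusted_contact: Optional[str]  # email of an opted-in adult, or None

def classify_self_harm_risk(text: str) -> bool:
    """Stand-in for an automated classifier over the conversation."""
    return "risk signal" in text  # placeholder heuristic, not a real model

def human_review_confirms(user: User, text: str) -> bool:
    """Stand-in for the human-review gate required before any alert."""
    return False  # a real system would route this to trained reviewers

def send_alert(contact: str) -> None:
    # The alert is limited and fixed-form: no chat transcript is shared.
    print(f"Notify {contact}: someone you know may need support.")

def handle_message(user: User, text: str) -> None:
    if not classify_self_harm_risk(text):
        return
    # 1. Urge the user toward help (e.g. localised helplines) in-chat.
    print("If you're struggling, help is available: [local helpline]")
    # 2. Be transparent that the designated contact may be notified.
    if user.trusted_contact:
        print("Note: your trusted contact may be notified.")
        # 3. An alert goes out only after human review confirms the concern.
        if human_review_confirms(user, text):
            send_alert(user.trusted_contact)
```

The key design point the coverage highlights is that no alert is triggered by automation alone: the human-review gate sits between detection and notification, and the notification itself carries no conversation content.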