OpenAI strengthens ChatGPT safeguards to prevent misuse for violence and ensure safety
13 articles · Updated · OpenAI · Apr 28
OpenAI has expanded ChatGPT's detection and enforcement systems, including new expert-guided measures, enhanced parental controls, and upcoming trusted contact features for adult users.
Automated and human review processes now better identify subtle warning signs in conversations; violators face immediate bans, and law enforcement is notified in cases of imminent, credible risk.
These updates build on years of model training and expert input, aiming to balance user privacy and civil liberties while prioritizing community safety and preventing the platform's use for threats or harm.
Can teens secretly disable ChatGPT's new parental controls, leaving parents in the dark?
Is your private chat with an AI a safe space or a target for warrantless surveillance?
Experts warn of "AI-induced psychosis." Is your AI companion a mental health risk?
After multiple attacks, what is OpenAI's real threshold for alerting police to a threat?
If an AI knows it's being tested, are its safety guardrails just a performance?