Sam Altman apologizes for not reporting ChatGPT user linked to school shooting
13 articles · Updated · Ars Technica · Apr 29
Altman’s apology follows lawsuits alleging OpenAI ignored internal safety warnings about a ChatGPT account later tied to a deadly Canadian school shooting in Tumbler Ridge.
OpenAI’s safety team had recommended notifying police, but leadership merely deactivated the account without alerting authorities; the user later regained access and continued planning.
Altman pledged to improve OpenAI’s safety protocols and collaborate with governments to prevent similar tragedies, acknowledging the irreversible loss suffered by the affected rural community.
After disbanding its safety teams, can OpenAI be trusted to prevent future tragedies?
Did OpenAI's marketing of its chatbot's power create its current legal nightmare?
How can AI balance user privacy against the urgent need for public safety?
Could stronger government regulation have prevented the Tumbler Ridge mass shooting?
Are AI chatbots becoming an unregulated gateway to violent extremism?