Altman’s letter, posted April 24, expresses condolences over OpenAI’s failure to report the shooter’s flagged account. The account had been banned in June, before the attack in British Columbia that killed eight people and injured 25.
The shooter, Jesse Van Rootselaar, killed her mother, her stepbrother, five children, and an educator before dying by suicide. OpenAI had considered referring her account to law enforcement but ultimately did not.
Altman pledged to work with governments to prevent similar tragedies. British Columbia Premier David Eby called the apology necessary but insufficient, as the community continues to grieve and demand accountability from OpenAI.
Can AI companies be trusted to police themselves, or is government regulation the only answer to ensure public safety?
Where is the line between a user's private thoughts and an AI company's public duty to report a threat?
If an AI helps plan a crime, should the company behind it face criminal charges for its role?
How can we stop AI from becoming a private 'confidante' for radicalization when all conversations are hidden from view?
Is blaming the AI a distraction from the deeper societal roots of youth violence and alienation?