On February 10, Jesse Van Rootselaar, an 18-year-old who identified as female, shot and killed eight people in Tumbler Ridge, British Columbia, including her mother, half-brother, and five students at the local secondary school.
After the attack, OpenAI revealed that Van Rootselaar’s ChatGPT account had been flagged in June 2025 for misuse “in furtherance of violent activities” and subsequently suspended. The company did not inform law enforcement at the time, having determined the activity did not meet the threshold for a credible or imminent threat.
In a letter shared on Friday by the Tumbler RidgeLines news site and British Columbia Premier David Eby, OpenAI CEO Sam Altman apologized for the failure to alert authorities, stating he was “deeply sorry that we did not alert law enforcement to the account that was banned in June.”
How the company evaluated the threat level
OpenAI said it had considered whether to refer Van Rootselaar’s account to the Royal Canadian Mounted Police after identifying it through abuse detection efforts but concluded the usage did not pose a sufficient risk to warrant intervention.
The company maintained that, while the chatbot use violated its usage policies, its internal assessment found no indication of an immediate danger to others at the time of the suspension in June.
What prompted the public apology
Altman’s statement followed remarks last month by Premier Eby, who said the CEO had agreed to apologize to the community for the oversight and acknowledged the anger, sadness, and concern felt by residents.
In the letter, Altman noted that Eby and Tumbler Ridge Mayor Darryl Krakowka had conveyed the community’s sentiments during their discussions and agreed a public apology was necessary, though time was needed to respect the grieving process.
How this compares to prior incidents
Technology firms have previously faced scrutiny over missed warnings ahead of mass attacks, prompting questions about the balance between user privacy and public safety, but no direct precedent involving AI platforms existed prior to this case.

Why did OpenAI not report the account to police?
OpenAI said it determined the account activity did not meet the threshold of posing a credible or imminent threat of harm to others at the time.
What did Altman promise moving forward?
Altman said the company’s focus will continue to be on working with all levels of government to help ensure something like this never happens again.