According to reports, an 18-year-old who is suspected of killing eight people in a mass shooting in Tumbler Ridge, Canada, used OpenAI’s ChatGPT in ways that worried the company’s employees.

The Troubling Communications

“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,”

an OpenAI spokesperson said in a statement.

“We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”

Automated systems flagged the inappropriate material, human specialists reviewed it, and her account was suspended in June 2025.

However, the content did not meet OpenAI’s internal threshold for an imminent threat and therefore did not trigger a police referral under company policy.

Internal discussions continued, but privacy-law considerations prevailed until the incident on February 10.

The Tragic Escalation

Van Rootselaar, 18, first shot her mother, Jennifer, 39, and her 11-year-old half-brother at home. She then went to Tumbler Ridge Secondary School, where she killed six people, including educator Shannda Aviugana-Durand, 39, and five students aged 12 to 13, before taking her own life.

Twenty-seven other people were injured. Digital forensics uncovered a shooting simulator in Roblox and Reddit chats about firearms; local law enforcement had been aware of changes in her drug-related behavior, and there were earlier warning signs of explosive violence.

Increasing Concerns of Artificial Intelligence

Separately, ChatGPT has been named in eight lawsuits, half of them in the United States, accusing the system of assisting with suicides, including a 2025 case in Colorado.

ECRI named misuse of AI chatbots the top health-technology hazard for 2026, mainly because of the spread of incorrect information.

“Medicine is a fundamentally human endeavor. While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals,”

said Marcus Schabacker, adding:

“Realizing AI’s promise while protecting people requires disciplined oversight, detailed guidelines, and a clear-eyed understanding of AI’s limitations.”

Bottom Line

After the incident, OpenAI provided the Royal Canadian Mounted Police with information and confirmed it would continue to cooperate with the investigation.

Critics are calling for streamlined reporting procedures and expect improved threat-detection algorithms by 2027, as claims mount that the deployment of large models is contributing to mental-health breakdowns.