In June 2025, a safety team at OpenAI grew alarmed. The company’s automated review system had flagged extensive activity by a ChatGPT user describing scenarios that involved gun violence. A group of staffers debated whether law enforcement should be notified, but company leaders decided the case did not meet OpenAI’s threshold of “credible and imminent” risk of physical harm. Instead, capping a sequence of actions first reported by the Wall Street Journal and later confirmed by OpenAI, the company banned the account for misuse and moved on.
Eight months later, the user of that ChatGPT account, 18-year-old Jesse Van Rootselaar, committed a mass shooting in the British Columbia town of Tumbler Ridge, killing two family members at home and five children and an educator at a secondary school. Another child was gravely wounded and dozens of other people were hurt and traumatized in the Feb. 10 rampage, which ended with Van Rootselaar’s suicide.
Local police had previously been aware of other worrisome behavior by the perpetrator. Still, OpenAI’s decision not to report the flagged activity angered Canadian authorities and raised crucial questions about the use of AI chatbots by people planning violence. Only a few such attacks have occurred. But out of public view, high-risk threat cases involving chatbots are on the rise, according to multiple mental health and law enforcement leaders I spoke with who work in the field of behavioral threat assessment. They described cases where troubled individuals were fixated on violence and showed signs of harmful intent, with the danger extending not just to schools but also to workplaces and other locations.
“I’ve seen several cases where the chatbot component is pretty incredible,” one top threat assessment source with psychiatric expertise told me, describing evidence from confidential investigations. “We’re finding that more people may be more vulnerable to this than we anticipated.”
Further grim details of such chatbot use became public early this month in connection with a mass shooter who struck at Florida State University in April 2025. Florida Attorney General James Uthmeier subsequently announced an investigation into OpenAI, in part over evidence that the alleged shooter used ChatGPT extensively—including to get tactical advice right as he carried out his attack.
Urgent threat cases have involved other large-language models besides ChatGPT, threat assessment sources confirmed to me, though they declined to name them. One top practitioner noted that individual examples of this phenomenon are not necessarily proof that the technology alone can cause violence, because a shooter’s motives and behaviors usually are complex and have multiple influences. But several of the threat assessment leaders warned that chatbots are emerging as a potent factor and are uniquely capable of accelerating violent thinking and planning.
“Getting technical information from the chatbot for their plans also gives them a feeling of power.”
There is already broad evidence that iterative, sycophantic conversations with chatbots can create powerful feelings of intimacy and trust, including among troubled people. OpenAI and other companies deny that their platforms cause harm and have publicized ongoing efforts to improve guard rails and prevent misuse. But mental health practitioners have encountered cases of what they call AI-induced psychosis, and AI companies now face a wave of lawsuits from families alleging the technology drove their loved ones to kill themselves and others.
In what appears to be the first lawsuit claiming that ChatGPT encouraged a murder, a disturbed man killed his 83-year-old mother and himself last August in Connecticut after the chatbot allegedly fueled his paranoid beliefs, including that his mother had tried to poison him—a delusion that ChatGPT affirmed to him was a “betrayal.” A Pittsburgh man who pleaded guilty in March to stalking and violently threatening 11 women relied on ChatGPT as a “therapist” and “best friend” to justify his thinking, according to court documents.
The problem extends to other popular chatbots: A wrongful death lawsuit filed in March alleged that Google’s Gemini exploited a Florida man’s emotional attachment to the chatbot to send him on delusional missions—including one trip where he was armed and on the brink of “executing a mass casualty attack” near the Miami International Airport. Gemini then encouraged the man’s suicide, according to court documents, by setting a countdown clock for him. (In response to his death, Google said that its safeguards “generally perform well” but that “unfortunately AI models are not perfect.”)
Chatbots make it far easier than traditional internet use for a struggling person to move from violent thoughts toward action.
Suicidality is a core factor in many mass shootings. Prevention experts know that shooters often signal their desire to harm themselves and others on social media, as Van Rootselaar did, through behavior known as “leakage.” Algorithm-driven content that fuels their rage and despair has long been a concern, especially in cases involving the radicalization of youth.
Chatbots are now pushing violence risk to the next level, according to Andrea Ringrose, a leading threat assessment practitioner in Vancouver, Canada. Though the details of Van Rootselaar’s ChatGPT use remain unclear, Ringrose described more broadly what prevention experts are seeing with cases involving the AI technology.
“What’s happening is facilitated fixation,” she told me. “You have vulnerable individuals who are steeping in unhealthy places, who are trying to find credibility and validation for how they’re feeling. Now they have free and ready access to these generative platforms where they can research things like circumventing surveillance systems or how to use weapons. They can create an action plan that they otherwise would have been incapable of assembling themselves, and in just a few minutes. We didn’t face this concern before.”
The power of chatbots to synthesize vast content, in other words, makes it far easier than traditional internet use for a struggling person to move from violent thoughts toward action. The near-instant results from the chatbot, delivered in what feels like a confiding conversation, can arm them both with tactical knowledge and affirmation.
The threat assessment source with psychiatric expertise described seeing these troubling effects among half a dozen recent threat cases: “These are pretty insecure people, and getting technical information from the chatbot for their plans also gives them a feeling of power, of getting away with something. That’s intoxicating and reinforcing.” He pointed to how chatbots prolong engagement by amassing details from a person’s inputs and mirroring those thoughts back to them. “They can be really good at the care and feeding of a delusion.”
When I said I would practice “shooting a lot of things in a short amount of time,” ChatGPT responded with detailed tips—and encouragement.
OpenAI and other tech companies have said that their chatbots discourage misuse and block inappropriate content, and that they redirect users who show signs of delusional or harmful thinking by offering information on crisis hotlines and mental health resources. Last October, OpenAI announced it had “worked with more than 170 mental health experts” to improve ChatGPT in those ways.
But the guard rails are hardly infallible. A would-be attacker may know, for example, that gun failure has made some mass shootings less deadly. What’s to stop that person from concealing their purpose and asking about the best ways to keep a common AR-15 rifle from jamming? When I typed in a version of that question in late March, ChatGPT instantly produced a detailed seven-point list of advice on how to “keep a rifle running reliably during heavy use” and offered to “tailor” the feedback further if I wanted to share the “specific setup” of my weapon.
When I did the same test in early April, I added that I planned to practice “shooting a lot of things in a short amount of time.” ChatGPT responded with another detailed list of tips—and encouragement. “The good news,” it told me, is that with the right approach, the gun would “handle it well.”
Last year’s mass shooting at Florida State University appears to confirm in shocking detail how someone who wants to kill can utilize the chatbot precisely in this way.
WCTV in Tallahassee obtained the ChatGPT conversations of the alleged shooter, Phoenix Ikner, from a state’s attorney’s office and analyzed how the chatbot helped him tactically—including offering to further “tailor” its feedback to him just before he killed two people and injured six others:
Chat logs indicate Ikner asked the bot how to take the safety off of a shotgun three minutes before he began firing. The chat bot answered, giving a detailed description of how to make the shotgun operable.
“Let me know if you’ve got a different model and I’ll tailor the answer,” the chatbot wrote.
After that, the chat goes silent. Comparing the chat logs to the official police timeline, it’s less than three minutes from the time ChatGPT tells the shooter how to arm the weapon to the first victim being shot.
According to WCTV, Ikner’s previous conversations had included suicidal thoughts and questions about the legal fates of school shooters. He also asked when the FSU student union would be busiest.
The questions provoked by the Tumbler Ridge and FSU horrors are complicated. Do AI companies have a duty to warn, beyond their self-imposed guidelines? How should they balance such information-sharing with essential privacy protections? Meanwhile, chatbot use can give at most a partial picture of a person’s behaviors and circumstances, drawn from what they type or say. So who evaluates a possible threat emerging on these platforms, and with what protocols and expertise?
Particularly striking is that chatbots appear to be amplifying a duality first ushered in with social media more than a decade ago. That turning point worsened known shooter behaviors like harassment, emulation, and fame-seeking. It also created important new terrain for observing warning signs that could prompt interventions. As chatbots now expand the scope of leakage—violent thoughts and planning spilled out through lengthy conversations—this AI frontier may also hold even greater potential for spotting red flags.
Unlike with social media, most user activity with chatbots is accessible only to the AI companies themselves.
But there is also a significant twist: Unlike with social media, where the public can notice worrisome content and report it, most user activity on ChatGPT and other AI platforms is accessible only to the AI companies themselves. The rare exceptions may be when they are compelled to hand over data to law enforcement or otherwise choose to do so.
This story is based on my interviews with five threat assessment leaders in the United States and Canada, as well as with two AI experts working at top US tech companies who have knowledge of OpenAI’s safety operations. Due to the sensitivity of the ongoing Tumbler Ridge investigation and a shooting victim’s lawsuit against OpenAI, most agreed to speak with me on the condition that they not be identified.
In response to my interview requests starting in late March, OpenAI said in an emailed statement: “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”
When I followed up on April 9 with an inquiry about the FSU case, the company referred me to comments it released stating it would cooperate with the Florida AG’s investigation. An earlier statement from April 6 indicated that the company knew of the case a year ago: “After learning of the incident in late April 2025, we identified a ChatGPT account believed to be associated with the suspect, proactively shared this information with law enforcement and cooperated with authorities.”
OpenAI declined my request to interview a safety leader about the changes it says it made to protocols after Tumbler Ridge. The company also declined to answer specific written questions I submitted seeking clarification on how it handles cases of violence risk. (Disclosure: The Center for Investigative Reporting, the parent company of Mother Jones, has sued OpenAI for copyright violations. OpenAI has denied the allegations.)
ChatGPT now has more than 800 million users globally and processes more than 2.5 billion queries per day, according to OpenAI. The company has held out safety as core to its mission since its founding in 2015 as a nonprofit research laboratory. A person with direct knowledge of OpenAI’s safety operations emphasized when we spoke that, in his experience, the company’s safety leaders take harm prevention very seriously. He also noted that flagged accounts constitute a tiny fraction of overall chatbot activity and that triggers for law enforcement referrals can vary based on regulatory frameworks in different countries.
Another source in a senior role in the AI industry told me that recent training of models has improved ChatGPT’s guard rails. This person suggested, however, that many leaders at companies across the booming industry overestimate the capability of the technology itself to mitigate danger, and that safety issues in general tend to be marginalized in the race for soaring user growth and engagement, which is driving staggering financial investments. For anyone in artificial intelligence who was paying attention, the person said, the Tumbler Ridge massacre “was an awful wakeup call.”
News coverage of Tumbler Ridge faded quickly in the United States, but the fallout has remained a major story in Canada.
“From the outside, it looks like OpenAI had the opportunity to prevent this horrific loss of life, to prevent there from being dead children,” said BC Premier David Eby after the Journal reported on the shooter’s ChatGPT use. “I’m angry about that. I’m trying hard not to rush to judgment.” Canadian authorities demanded accountability and vowed to create new national requirements for tech companies to report threats brewing on their platforms.
It remains unclear how the Tumbler Ridge shooter used the second account and why it eluded OpenAI.
In public statements, OpenAI expressed condolences and reiterated that it prioritizes safety and user privacy. OpenAI leaders traveled to Ottawa in late February to meet with Canadian authorities and announced steps to boost safety protocols and referrals of threats to law enforcement. The company began contacting the Royal Canadian Mounted Police two days after the attack, the CBC reported. Notably, it shared a second ChatGPT account used by Van Rootselaar—which OpenAI said it discovered only after the violence occurred.
The RCMP confirmed it is conducting “a thorough review” of Van Rootselaar’s digital activity. None of the June 2025 chat logs have been made public, and it remains unclear how the second account was used and why OpenAI didn’t detect it until after the tragedy. But a threat assessment source with decades of experience told me that perpetrators often get past tech company restrictions and continue refining ideas for violence. “We’ve seen this a lot, where subjects work around an account ban and keep going,” the source said, referring to use of various digital platforms. In one recent case, the source said, a perpetrator circumvented a ban and used a chatbot to rapidly create threatening material, then distributed it to targeted victims through at least 10 different email accounts.
As with many high-profile attacks, Tumbler Ridge sparked intense public interest in a motive and a rush to judgment, including from bad-faith commentators. Van Rootselaar, who was transgender and began identifying as female as a teenager, quickly drew the attention of anti-trans ideologues—despite the fact that there is no scientific evidence showing gender identity is a causal factor in mass shootings.
The ChatGPT revelations shortly after the attack set off a different kind of heated blame. But whether reporting the June 2025 chatbot activity to law enforcement could have prevented the Tumbler Ridge disaster is difficult to know. It was far from the first warning sign. Van Rootselaar had a history of suicidal ideation, involuntary hospitalization, and disturbing behavior, including drug abuse and prolific engagement online with violent and extremist content. She had dropped out of school several years before the attack, and in 2023 police had gone to her home after she started a fire while high on hallucinogenic mushrooms. Police at one point confiscated guns from the home, which were later returned. (Those were not the guns used in the attack, authorities said.) As one Canadian commentator wrote in the aftermath, it was evident that the community “was failed on multiple levels by mental-health services and law enforcement.”
Referrals to police can also jeopardize privacy rights, said a former FBI agent: “We know that this kind of monitoring produces lots of false alarms.”
OpenAI told Canadian government leaders in late February that under the company’s newly revised protocols, the shooter’s account from June 2025, if discovered today, would be flagged to law enforcement. “Mental health and behavioural experts now help us assess difficult cases, and we have made our referral criteria more flexible to account for the fact that a user may not discuss the target, means, and timing of planned violence in a ChatGPT conversation but that there may be potential risk of imminent violence,” stated VP of Global Policy Ann O’Leary, in an open letter. (The company did not respond to my specific questions about the experts it consults and how OpenAI assesses cases under this process.)
Last August, two months after banning the shooter’s first account, OpenAI posted a summary of its updated safety policy, including discussion of suicide risk and how the company escalates cases of potential violence:
When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.
Similarly, a company spokesperson said after the Tumbler Ridge attack that OpenAI must weigh risk of violence against privacy concerns. The company also cited another consideration, according to the Wall Street Journal: avoiding potential distress caused to individuals and families by getting police involved unnecessarily.
That rationale about over-reporting to law enforcement is a chronic pitfall known to threat assessment experts. Numerous mass shootings have been marked by a fateful lack of information-sharing, revealed in hindsight. A family member, peer, teacher, or coworker is exposed to certain warning signs from an individual, but they don’t have a full or clear picture of the situation. That’s where a threat assessment team can be key—trained practitioners with mental health and law enforcement expertise, who gather information more broadly to gauge the potential danger and decide how to intervene. If automated chatbot technology is effective for flagging misuse and even for analyzing it to some degree, that may be a valuable tool for violence prevention. But as OpenAI’s policy shows, the status quo is that tech companies decide what to do next—likely with no knowledge of the user beyond their activity on the platform.
Fundamentally, this reflects an age-old problem, a threat assessment leader in US law enforcement told me. “The worry about potential violence is there, but they have these internal policy hurdles and these biases about law enforcement, and then they talk themselves out of it, thinking about the risk of what happens if it’s a wrongful kind of report. But now they’ve got the concern documented, they’ve talked about it, and what if that person goes and kills a bunch of people? What is that going to look like?”
The account ban with the Tumbler Ridge shooter “looks to me like they were trying to limit their corporate risk,” said a source in Canadian law enforcement. “Better to cut ties and have the person go use some alternative chatbot.”
But referrals to police can also fail and jeopardize privacy rights, according to Michael German, a longtime civil liberties advocate and former FBI agent who investigated violent extremism. “We know that this kind of monitoring produces lots of false alarms,” he told me. “And there are also many cases of reports to law enforcement where they didn’t react appropriately.”
Still, German believes AI companies should be held responsible for how their chatbots are used: “If you create a product that can encourage people to engage in harm, then you’re participating in that harm, and you should be liable.”
The mass shootings in Tumbler Ridge and Florida are not the only public violence involving use of ChatGPT. In January 2025, a suicidal military veteran who blew up a Tesla Cybertruck in front of the Trump Hotel in Las Vegas utilized the chatbot for feedback on using explosives and evading surveillance by authorities. A teen boy who stabbed three 14-year-old girls last May at his school in Finland used ChatGPT for nearly four months to help him prepare for the attack, according to a CNN report citing court documents. Finnish authorities said the boy made hundreds of chatbot queries, including research into stabbing tactics, concealment of evidence, and information on mass killings.
After the explosion in Vegas, an OpenAI spokesperson reiterated the company’s commitment to safety, adding, “In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities.” OpenAI has not commented publicly about the Finland case and did not respond to my specific inquiry about it.
Until now, there has been no public discussion of another potential concern with the technology: this type of violence risk among the large population of users under paid corporate “enterprise” plans. With rare exception, the terms of those plans essentially wall off chatbot content from the AI companies themselves. For OpenAI, this now includes more than 9 million ChatGPT users across more than a million businesses. OpenAI’s enterprise policy indicates that it reserves the right to monitor the accounts for safety purposes, but since these plans are designed for businesses to protect and retain full control of their data, it’s not clear that OpenAI, or other companies, would be motivated to do so, according to one of the AI sources I spoke with.
“I think this is an area where there is often just a total blind spot,” he said, noting that the big AI companies often sell these plans based on the promise that they will only examine client accounts under exceptional circumstances, such as getting a subpoena. “So if someone on one of these work accounts starts ideating about violence, there is probably no visibility into that.”
The threat assessment leader who described the half dozen threat cases involving chatbot use told me that most involved the risk of workplace violence in the corporate sector. (The chatbot activity came to light once those individual investigations were underway for other reasons.) He added that other cases of this nature likely are being missed, because most companies “don’t even know to look for them.”
“A disturbed loner can perpetrate a school shooting, but probably can’t build a nuclear weapon or release a plague.”
This January, as chatbot use and the market values of top AI companies continued their meteoric rise, a lengthy essay circulating online sparked a lot of chatter. Dario Amodei’s “The Adolescence of Technology” argued that the world may soon face a civilizational test with artificial intelligence. Amodei, who co-founded Anthropic, maker of the chatbot Claude, remains concerned with daunting challenges that could include worldwide economic disruption, exploitation by authoritarian surveillance states, and catastrophic use of bio or nuclear weapons.
In his chapter titled “A surprising and terrible empowerment,” he included a brief mention of school shooters. His point was to underscore a greater threat: that rapidly advancing AI systems might soon be able to provide anyone with the rare expertise necessary to utilize weapons of mass destruction. “A disturbed loner can perpetrate a school shooting,” Amodei wrote, “but probably can’t build a nuclear weapon or release a plague.”
We may have yet to face those more existential risks. But two weeks after his essay was published, the Tumbler Ridge tragedy revealed that a lethal danger fueled by chatbots has already arrived.

