
OpenAI’s safety pledges in the wake of Tumbler Ridge aren’t AI regulation — they’re surveillance

18.03.2026

In the span of two days following news that the Tumbler Ridge perpetrator’s ChatGPT account had been flagged prior to the shooting, OpenAI CEO Sam Altman met with Federal AI Minister Evan Solomon and British Columbia Premier David Eby.

He secured commitments on both sides: reporting threats directly to the RCMP, retroactive review of previously flagged accounts, distress-redirect protocols, access to the company’s safety office for Canadian experts, and an agreement to work with B.C. on regulatory recommendations to Ottawa.

He also agreed to apologize to the community of Tumbler Ridge, where 18-year-old Jesse Van Rootselaar killed eight people and wounded many others before dying of a self-inflicted wound. Months prior to the shooting, Van Rootselaar’s ChatGPT account had been flagged for scenarios involving gun violence. The account was banned, but not reported to law enforcement.

OpenAI’s new commitments are significant gestures. But they resolve a narrower question than the one Tumbler Ridge actually raised. As I argued earlier, the core problem was not a reporting failure. It was a governance vacuum.

What has changed since? OpenAI has agreed to make the same type of unilateral determination it made before, but to act on it more aggressively, routing the result directly to the RCMP. That is not a fix. It is the same unaccountable architecture with a faster trigger.

The human-in-the-loop fallacy

Consider what we now know about the internal process. The shooter’s account was flagged. Human moderators reviewed the interactions. Some advocated escalating to law enforcement. Other humans, guided by the company’s own opaque thresholds, decided against it. The breakdown was not mechanical. It was institutional.

“Human in the loop” …

© The Conversation