Opinion: OpenAI’s safety pledges after Tumbler Ridge aren’t AI regulation — they’re surveillance
Jean-Christophe Bélisle-Pipon: The response that Tumbler Ridge demands is binding legislation with legally defined thresholds for when AI companies must refer flagged interactions to authorities: thresholds defined by Parliament
In a span of two days following news that the Tumbler Ridge shooter’s ChatGPT account had been flagged before the mass killing, OpenAI CEO Sam Altman met with federal AI Minister Evan Solomon and B.C. Premier David Eby.
He made commitments to both sides: reporting threats directly to the RCMP, retroactive review of previously flagged accounts, distress-redirect protocols, access to the company’s safety office for Canadian experts and an agreement to work with B.C. on regulatory recommendations to Ottawa.
He also agreed to apologize to the community of Tumbler Ridge, where 18-year-old Jesse Van Rootselaar killed eight people and wounded many others before dying of a self-inflicted wound. Months before the shooting, Van Rootselaar’s ChatGPT account had been flagged for scenarios involving gun violence. The account was banned, but not reported to police.
OpenAI’s new commitments are significant gestures. But they resolve a narrower question than the one Tumbler Ridge actually raised. As I argued earlier, the core problem was not a reporting failure. It was a governance vacuum.
What’s changed since? OpenAI has agreed to make the same type of unilateral determination it made before, but to act on it more aggressively, routing the result directly to the RCMP. That is not a fix. It is the same unaccountable architecture with a faster trigger.
Consider what we now know about the internal process. The shooter’s account was flagged. Human moderators reviewed the interactions. Some advocated escalating to police. Other humans, guided by the company’s own opaque thresholds, decided against it. The breakdown was not mechanical. It was institutional.
“Human in the loop” is one of the most repeated reassurances in AI safety discourse.
The Tumbler Ridge case exposes its limits. Humans in the loop are only as accountable as the institutional structure around them. When that structure is a private corporation with no legally binding reporting obligations, no transparency requirements and no external oversight, the human in the loop is simply a more sympathetic face on an unaccountable system.
OpenAI has since announced that its thresholds have been updated. But updated by whom, according to what criteria, subject to what review? These remain internal decisions, invisible to the public and unreachable by Parliament.
There is a deeper problem that receives almost no attention. The proposed settlement does not regulate AI. It regulates users.
The entire apparatus being constructed (internal threat identification, flagging, direct RCMP referral) is oriented toward monitoring what people say to AI, not toward how AI systems are designed, trained or constrained in their responses.
True AI regulation asks whether a model might facilitate or amplify harmful ideation through its interaction patterns. It asks how the system is built, what it’s tested for and what obligations attach to its deployment.
The current arrangement asks none of these questions. Instead, it builds a pipeline from private AI interactions to law enforcement, administered by a corporation, governed by proprietary policy.
I call this the surveillance substitution: a governance vacuum gets filled not with democratic regulation, but with corporate surveillance of users. It is not regulation of AI. It is regulation of the people who use AI, conducted by the AI company itself, with the police as the end point.
The civil liberties implications are substantial. Research on compassion-sensitive AI, including my own work on how AI systems should respond to users in vulnerable states, consistently shows that people disclose distress to chatbots precisely because the interaction feels private and non-judgmental.
If that space becomes a monitored channel where concerning disclosures trigger law enforcement referrals based on opaque corporate criteria, the most vulnerable users may stop disclosing. The chilling effect on help-seeking behaviour has not been studied, and it has not been discussed in any of the public negotiations following Tumbler Ridge.
It’s important to be precise about what OpenAI is doing. The company is not acting in bad faith. It is behaving as a rational private entity in the absence of a regulatory framework, offering the minimum viable response to political pressure while preserving as much operational autonomy as possible.
Look south and the logic becomes clearer. In the U.S., the relationship between AI companies and government power is being forcibly renegotiated. The Pentagon has sought AI models with safety guardrails removed for military applications. When Anthropic resisted, OpenAI moved to fill the gap. In that context, the U.S. government commands and AI companies comply.
In Canada, the dynamic is inverted: OpenAI is not being commanded. It is volunteering concessions designed to pre-empt the kind of binding legislation that would actually constrain its operations. Support broad norms with no immediate legal force; resist specific domestic obligations that carry real consequences. This is how regulatory capture begins: not with corruption, but with convenience.
Canada has genuine leverage here: an unusual cross-party consensus that something must change, public attention that has given AI governance a human face, and a provincial government that understands the stakes.
But leverage evaporates. If the federal government accepts OpenAI’s pledges as a sufficient response, it normalizes corporate self-regulation as the baseline. Future companies will cite this arrangement as precedent. The window for legislation narrows.
The response that Tumbler Ridge demands is not more efficient surveillance of users. It is a regulatory architecture that addresses the systems themselves.
That means binding legislation with legally defined thresholds for when AI companies must refer flagged interactions to authorities: thresholds defined by Parliament, developed with mental health professionals, privacy experts and law enforcement, not inherited from a company’s terms of service.
It means an independent triage body so that flagged interactions are assessed by professionals equipped to distinguish ideation from intent, accountable to public law rather than corporate liability. And it means model-level accountability: regulatory attention that moves upstream from users to systems. How are these models designed to respond to escalating disclosures of violent ideation? What testing obligations apply? What auditing requirements exist?
These questions are absent from the current political negotiations, and their absence defines the limits of what the current pledges can achieve.
OpenAI’s commitments following Tumbler Ridge are the beginning of a conversation, not the end of one. Canada holds good cards. The question is whether it plays them, or lets the other side set the rules while the table is still being built.
Jean-Christophe Bélisle-Pipon is an assistant professor in health ethics at Simon Fraser University. This article is republished from The Conversation under a Creative Commons licence.
