
Chatbots and Canada’s AI Governance Gap

12.03.2026

Allowing AI systems to prey on vulnerable populations at scale is not a technical inevitability. It is a policy choice. And when young people, without specialized legal or technical expertise, can identify governance needs that align with international regulatory consensus, the only remaining barrier to action is political will.

The Canadian federal government has signalled that it intends to regulate AI chatbots, citing growing concerns about youth mental health, wellbeing and safety. ChatGPT’s high-profile role in the recent tragedy in Tumbler Ridge, BC, has made that signal impossible to ignore. But as Ottawa once again enters a period of deliberation on its online harms file, there is a real risk that Canada will repeat a familiar pattern: acknowledging harm while postponing action, even though the policy roadmap already exists and young Canadians helped draw it.

What Other Countries Are Already Doing

Around the world, governments have moved decisively to address the risks posed by AI chatbots, particularly to children and young people. The European Union’s AI Act explicitly prohibits systems that deploy manipulative techniques or exploit age-related vulnerabilities. Australia’s eSafety Commissioner designates AI chatbots as high-risk technologies, subject to safety-by-design requirements. Brazil’s Digital Child Protection Bill mandates the removal of harmful content and restricts features that encourage behavioural dependency.

Canada, by contrast, has no mandatory pre-deployment risk assessments, enforceable safety-by-design standards, or ongoing monitoring requirements, despite documented cases of chatbots offering harmful guidance related to self-harm, eating disorders, and emotional distress. As the government now signals interest in regulation, it should be clear-eyed about how far behind we are.

What Young Canadians Are Asking For

Canadian youth, disproportionately affected by these technologies, are calling for more proactive governance. Gen(Z)AI, a national youth-led deliberative process convened by the Centre for Media, Technology, and Democracy and the Dialogue on Technology Project, began in November 2025 and brought together young Canadians aged 17 to 23 to examine the risks and governance gaps surrounding AI chatbots. Through structured deliberation, participants identified three interconnected domains of harm: relational dependence, cognitive impacts, and content risks. From this work emerged a set of policy recommendations that map remarkably closely onto international best practices that Canada has yet to adopt.

Participants called for mandatory user controls over chatbot responsiveness and conversational intensity, a recognition that design choices shape emotional reliance and behavioural outcomes. They proposed the creation of an independent regulatory body with enforcement authority, rather than leaving oversight to voluntary industry commitments. And they emphasized the need for content moderation regimes and data deletion processes.

Part of what makes these recommendations striking is their convergence with proven regulatory models already in operation elsewhere. Calls for design-level protections mirror Age-Appropriate Design Codes in the UK and those adopted in California and other U.S. states. Demands for independent oversight reflect the regulatory architectures embedded in the EU’s Digital Services Act and Australia’s eSafety framework. The conclusion is unavoidable: the solutions to AI chatbot harms are neither mysterious nor technically infeasible. They exist and can work well, but Canada’s legislative record makes clear that the government has repeatedly stalled on this file.

Why Canada Keeps Stalling

Bill C-27’s Consumer Privacy Protection Act would have classified children’s data as sensitive by default and extended protections to inferred information – precisely the kind of safeguard conversational AI requires. But Parliament failed to pass it. Bill C-63 (the Online Harms Act) explicitly excluded AI chatbots, even as peer countries like Australia moved to regulate them. Now, Canada’s previous attempts to establish enforceable standards have not been picked up by the new administration, in part because of the government’s narrow focus on AI innovation and adoption, which it has treated as being at odds with a regulatory agenda for AI and online harms.

This status quo narrative suggests that AI regulation burdens companies, suppresses investment, and cedes competitive ground to other jurisdictions. But this framing misreads both the evidence and the moment. Regulatory certainty can reduce legal and reputational risk for companies building at scale, and safety-by-design requirements can also drive better engineering. The real competitive liability lies in building AI systems and products that harm young users without any measures to hold those systems, and the companies that build and deploy them, accountable. That is not a real innovation ecosystem; it is a liability waiting to materialize. And the populations absorbing that risk in the meantime are not shareholders; they are children.

Three Things Policymakers Need to Get Right

Policymakers should be thinking about three things as they work toward the next iteration of Canada’s online harms and AI governance portfolio. The first is scope. Online harms legislation that does not explicitly bring AI chatbots within its regulatory perimeter is not fit for the moment. In the United Kingdom, Ofcom is already exploring secondary legislation to close chatbot coverage gaps in the Online Safety Act, and Australia’s eSafety Commissioner has registered industry codes of practice that explicitly impose safety duties on chatbot developers. Canada cannot table a framework that is already behind its international peers at the moment of enactment.

The second is design accountability. The shift from content moderation to upstream design regulation is the difference between a regime that is future-proof and one that is not. Safety-by-design and age-appropriate design obligations, imposed as preconditions for deployment rather than post-hoc responses to harm, are what the evidence demands. Children’s personal data must be classified as sensitive by default, with protections extended to inferred and derived data. These obligations are technically actionable through mandatory limits on conversational persistence, user-adjustable controls over emotional mirroring features, and purpose-binding mechanisms that prevent children’s interaction data from being recycled into training pipelines.

The third is independent institutional capacity. Recent survey data from the Dialogue on Technology Project at Simon Fraser University makes clear that Canadians distrust both tech companies and governments to manage AI responsibly. A credible regulatory architecture must include an independent body empowered to mandate data access, conduct algorithmic audits, and enforce compliance. Without that capacity, obligations will exist only on paper, with no real teeth.

We have heard youth themselves call for many of these interventions. More than ever, the democratic mandate for AI governance in Canada is substantial, cross-generational, and increasingly specific. Canadians are asking government to hold companies accountable for design choices that are already causing harm, and to build the institutional capacity necessary to govern AI technologies. Doing so is a choice, and Canadians have made their preference unmistakably clear.


© OpenCanada