
American courts just gave Carney a roadmap for reining in YouTube and Meta

01.04.2026

If the Carney government were looking for a foolproof roadmap on how to rein in large online platforms that knowingly cause harm to their users — including children — the recent judgments from California and New Mexico provide excellent guidance.

First, a jury in New Mexico found Meta, the owner of Facebook and Instagram, liable for misleading its users about child safety and for product design that enabled the endangerment of children. The jury ordered Meta to pay the maximum penalty of $375 million for violating New Mexico’s consumer protection laws.

A day later, a jury in California found that Meta and YouTube were negligent in the design of their platforms and that, despite knowing their designs were dangerous, both companies failed to adequately warn users of those risks. The jury found this caused substantial harm to the plaintiff, who claimed the online giants intentionally addicted her to their platforms as a child, injuring her mental health. (The plaintiff raised the same claims against Snap and TikTok, but both platforms settled before trial.)

Both Meta and YouTube have said they will appeal the decisions, but things look especially dicey for Meta because of a Delaware judgment handed down in February.

A Superior Court judge found that Meta’s insurance companies are not on the hook for the thousands of lawsuits alleging that Meta’s platforms, like Instagram and Facebook, harm children, because the allegations against Meta concern deliberate and intentional acts rather than the accidental or unforeseen occurrences that would otherwise trigger insurance coverage. The evidence in both New Mexico and California shows intentional action on Meta’s part, which means Meta loses its insurance coverage.

This marks a monumental shift in the way online platforms are judged. Going forward, because of the precedent set in these cases, it is likely that online platforms will finally be judged like any other consumer-facing product whose product design harms children. This is welcome news for anyone who has paid attention to the issue of online harms and it should serve as good news for the Carney government.

The crux of the online harms issue has always been the design of the products offered by platforms like Instagram and YouTube — and how these companies knowingly push their users, even children, toward harmful content.

For decades, online platforms have relied on an American law called the Communications Decency Act, specifically section 230 of that law, to fend off litigation related to their platforms because section 230 protects online platforms from liability for user-generated content. They’re just “platforms,” not publishers, under that law. But what if the platform itself is harmful? 

What makes both the New Mexico and California judgments so interesting is that the main thrust of what both juries were considering wasn’t the harmful content — however abhorrent the content may be — but the algorithmic design of the platform itself and what the platforms knew about the harms incurred because of the design of their products. Both cases were able to advance the largely untested theories of applying consumer protection laws and product liability laws to online platforms, focusing on the products’ design rather than the content.    

Canada has no equivalent of section 230 of the Communications Decency Act, yet the Canadian debate on how best to improve online safety and minimize online harms has mirrored the American one: focused on the regulation of content and its impact on freedom of expression, with algorithmic design considerations taking a back seat.

This was always silly, and it is even sillier now that even the US is recognizing the product liability aspect. The root of online harms was never individual bad actors posting individual pieces of harmful content; it was always the systems and incentives built into the platforms’ structure, which allow and encourage harmful content to be created and disseminated at a much wider scale.

Canada must move past the false dichotomy presented by Big Tech and its backers: the claim that any accountability imposed on large online platforms is somehow the policy version of Sophie’s Choice, forcing Canadians to choose between freedom of expression and online safety. That framing is blatantly incorrect.

Regulating private, for-profit companies in a manner that subjects them to the same standards as any other company that provides consumer-facing products is not stifling freedom of expression. It is ensuring that these incredibly profitable foreign companies have to consider the impact of the products that they build and provide to Canadians, particularly as it pertains to children.

With these judgments out of the US, the Carney government now has even more reason to re-introduce online harms legislation that keeps alive the central tenet of the Trudeau-era bill: a duty to act responsibly, imposed on the online platforms subject to the act. That duty would ensure that, when platforms design their products, they must take active steps to mitigate their products’ known risks.

Conservatives and a few terminally online progressives will likely once again use Big Tech’s talking points to claim that any oversight of these companies is an attack on free speech, but that argument falls apart once one considers that all other consumer-facing products are subject to a duty to act responsibly. Continuing to exempt tech and social media platforms from that standard makes no sense, morally or legally. Let’s hope the Carney government realizes that.


© National Observer