
OpenAI's deal with Trump is putting Canadians at risk too

04.03.2026

In a stunning showdown last week between the Trump administration and frontier AI company Anthropic, US Secretary of Defense Pete Hegseth demanded that Anthropic allow its AI models to be used for “any lawful purpose,” without any restrictions.

Anthropic CEO Dario Amodei refused and laid out the company’s two red lines for US government use of its models: no fully autonomous lethal weapons deployed without human intervention or oversight, and no use of its AI for domestic surveillance at scale.

Hegseth then said he would designate Anthropic a “supply-chain risk to national security,” which effectively turns Anthropic into an economic pariah, banning any federal contractor or subcontractor from doing business with the company. Trump, for his part, posted to Truth Social to order a phase-out of all Anthropic technology over a six-month period, threatening “major civil and criminal consequences” if Anthropic did not cooperate with the transition. 

It is worth noting that designating an American company as a supply chain risk to national security is unprecedented. It’s the kind of thing normally reserved for US adversaries and not domestic frontier AI companies that have deployed their models in classified American networks. Anthropic, for its part, has stated it will challenge the supply-chain risk designation in court. 

The interesting aspect of all this is that the underpinning of the Trump administration’s argument — that it should ultimately be up to the US government, not the private companies supplying the technology, to decide how that technology is used — is one that makes sense and is consistent with the US Constitution. The problem, however, is that when applied to AI weapons and surveillance systems, this logic starts to break down for a few reasons.

First and foremost, the engineers of these AI models fully acknowledge that the models can sometimes act in ways that are difficult to predict and can often confound the engineers themselves. This isn’t exactly like a tank-maker telling the US government how it can use those tanks. It’s more like a tank-maker telling the US the tank shouldn’t be given the freedom to shoot whatever it decides is a target because it might end up destroying everything in its path.

Second, while it’s true that it is fully legal to record conversations that occur in public, and it is legal for the US government to purchase datasets from third-party commercial data brokers without a warrant, including information like web browsing history, purchase habits and location data, the sheer scale of what AI can do when applied to surveillance is truly chilling. 

Unlike traditional surveillance, which requires humans to comb through mounds of data spanning different datasets, AI surveillance can analyze vast, disparate data points — web browsing history, social media activity, physical movements — automatically and at scale, without any human intervention or oversight.

As Amodei put it in an essay written in January, “It might be frighteningly plausible to simply generate a complete list of anyone who disagrees with the government on any number of issues, even if such disagreement isn’t explicit in anything they say or do. A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow. This could lead to the imposition of a true panopticon on a scale that we don’t see today, even with the CCP.”

While Anthropic refused to back down on its red lines, OpenAI was quick to accept a deal with the Pentagon on the Pentagon’s terms, despite having insisted that it had established the same red lines as Anthropic. According to detailed reporting in The Verge, “OpenAI’s agreement says it will allow anything the US government determines is legal.”

Which leaves us with the current predicament: a US government that is openly hostile to large swaths of its own people, and a frontier AI company more than willing to supply it with the tools to carry out all sorts of questionable surveillance on those people. Not to mention the potential damage that could be caused by fully autonomous lethal weapons — or, given the administration’s contempt for people beyond its borders, how it might direct both of these technologies at its ever-growing list of perceived enemies.

This is where earlier criticism of the Carney government’s proposed Strong Borders Act (Bill C-2) looks even more prescient, given all the data-sharing provisions with the US contained in the proposed legislation.

According to the University of Toronto’s Citizen Lab, “[D]ata and surveillance powers in Bill C-2 read like they could have been drafted by US officials.” The researchers noted the bill “contains several areas where proposed powers appear designed to roll out a welcome mat for expanded data-sharing treaties or agreements with the United States,” and that the breadth of warrantless information-sharing covered under the bill raises many potential issues — including threats to Canadians’ reproductive rights. It could “open the door to information-sharing with law enforcement authorities in states like Mississippi, Idaho, or Tennessee, by compelling warrantless access to information about whether a person has obtained services from an abortion clinic in Canada.”

Up until this point, we have seen molluscan levels of backbone — which is to say none — from corporate America when it comes to standing up to the Trump administration. It was nice to be reminded through Anthropic’s Amodei that it is indeed still possible for CEOs to demonstrate they are not spineless. But Amodei is seemingly outnumbered by CEOs like OpenAI’s Sam Altman or xAI’s Elon Musk, who are willing to partner with the US government without any qualms about potential misuses or abuses of AI-powered weapons and surveillance.  

Ideally, people wouldn’t have to be reliant on the individual moral whims of CEOs to ensure civil liberties are protected. But we’re no longer in an ideal world. We’re in a world where the full power of the state is used to go after a private company for refusing to build a real-life version of SkyNet and use AI for domestic surveillance.

When the power of AI and Big Tech is being harnessed by the state to undermine basic civil liberties and human rights, it’s usually called technofascism. Now, however, we just call it Trump’s America.


© National Observer