
The next AI fight: Do the chatbots have First Amendment rights?

24.03.2026

The standoff between Anthropic and the Pentagon is even bigger than it looks. And it’s even stranger than it seems

The fight between Anthropic and the Pentagon looks at first like a fight about AI safety — a principled tech company drawing ethical lines in the sand. It is at least partly that. But it’s also a First Amendment case.

It’s a test of whether the executive branch can summarily execute its vendors for “noncompliance.” It’s an investor risk story for everyone who put hundreds of billions into AI companies on the assumption that the U.S. government would be a customer, not a corporate murderer. And it’s a dress rehearsal for every painful question that humanity hasn't figured out how to answer about the most powerful information technology it has ever built. 

What’s the legal status of AI? Who’s in charge of it? When — not if — something goes wrong, who’s responsible? 

In other words, this fight is even bigger than it looks. And it’s even stranger than it seems.

The buildup, the breakup, the lawsuit

The conflict began when Anthropic refused to strip two safety guardrails from the specialized version of its Claude AI system that it provides to the Pentagon under a deal worth some $200 million: protections against warrantless mass domestic surveillance of Americans, and against deployment in fully autonomous weapons systems. Late last month, CEO Dario Amodei detailed the Pentagon’s response: a threat to designate Anthropic a “supply chain risk,” a label that has previously been reserved for foreign adversaries like Chinese telecom firms — never an American company.

The Pentagon followed through in early March, effectively blacklisting Anthropic from government contracts. Anthropic sued, warning the designation could cost it billions. A hearing on whether to grant Anthropic temporary relief is scheduled for Tuesday.

A more specific triggering incident has since been widely reported: After the January raid that captured the Venezuelan leader Nicolás Maduro, an Anthropic executive contacted Palantir — the firm through which Claude was integrated into Pentagon systems — asking how its AI had been used. Palantir flagged the inquiry to Pentagon officials, who read it as disapproval of a classified operation, kicking off the failed negotiations that preceded the rift. Pentagon CTO Emil Michael confirmed many of the details to The Wall Street Journal. “There is no chance,” he said. “There’s no partnership that can be had.”

What Michael didn’t say publicly was revealed in a court filing last Friday: Michael emailed Amodei on March 4 — the day after the Pentagon finalized the supply-chain designation — to say the two sides were “very close” on the exact two issues the government now cites as evidence that Anthropic poses a national security threat. The email has now become evidence, suggesting if not proving that the supply-chain designation was a bargaining chip rather than a straightforward flagging of risk. If the two sides were “very close” even as the designation was being finalized, how much of a security risk could the Pentagon really have believed Anthropic posed?

The case…

© Quartz