
Anthropic Refuses Military Use Of Claude AI, Causing US Government Clash

01.03.2026

In early 2026, a fundamental question about national defence has shifted from a tech-bro debate to a full-blown crisis: who decides how artificial intelligence is used on the battlefield?

For years, Silicon Valley and the military maintained a tense partnership. That balance has now collapsed, with Anthropic—the creator of the AI model Claude—at the centre. Anthropic attempted what many considered impossible: providing world-class AI to the military while legally forbidding it from being used for applications the company deemed unethical. Today, that experiment ended in a historic confrontation.

The breaking point came on 3 January 2026, during Operation Absolute Resolve, a high-stakes U.S. mission in Caracas that resulted in the capture of Nicolás Maduro. While the raid was a tactical success, it triggered alarms within Anthropic. Reports surfaced that Claude had been integrated into the mission’s planning through the defence firm Palantir.

This prompted a review by Anthropic’s Long-Term Benefit Trust, an independent body tasked with enforcing the company’s ethical charter. The board warned that the military was pushing Claude toward "bright red lines" that the company’s rules strictly forbid.

These red lines are encoded in what Anthropic calls Constitutional AI, a framework designed to ensure the model follows a “constitution” of its own, regardless of user requests. Two principles lie at the heart of the dispute.

First, no “robot assassins”: Claude is programmed to refuse any role in fully autonomous lethal systems, meaning it cannot independently authorise a strike or pull a trigger without a human making the final decision. Second, no domestic spying: the AI is prohibited from mass surveillance of Americans or from assembling private data into profiles without legal warrants.

For Defence Secretary Pete Hegseth, who now leads the renamed Department of War, these rules are an affront to government authority. His position is straightforward: if an action is legal under the U.S. Constitution, a private company has no right to block the government from using its tools.

If an action is legal under the U.S. Constitution, a private company has no right to block the government from using its tools

Over the past year, the Department has systematically required tech companies to accept an “All Lawful Use” standard. Most of the industry has complied: OpenAI and xAI removed bans on military use in late 2025 to secure major government contracts, while Google aligned its AI systems with the new rules to protect multi-billion-dollar cloud deals. Anthropic, however, stood alone. Because the military’s Smart Systems relied heavily on Claude’s advanced reasoning, its refusal created a significant gap in U.S. technological capabilities.

The standoff reached its peak on 27 February 2026, when Anthropic was given a 5:01 p.m. deadline: remove its ethical restrictions or have its federal contracts terminated. CEO Dario Amodei refused, declaring the company would not “in good conscience accede.” The response was swift and severe. President Trump intervened via Truth Social, ordering all federal agencies to immediately cease using Anthropic’s technology.

Shortly thereafter, Secretary Hegseth designated Anthropic a “Supply-Chain Risk to National Security,” a label normally reserved for foreign adversaries such as Huawei. In practical terms, this is a “corporate death penalty”: any U.S. defence contractor, from Boeing to Lockheed Martin, is now barred from even using Claude in its own internal workflows.

The financial damage is staggering. Just two weeks ago, Anthropic was valued at $380 billion and preparing for an IPO. Now, with billions in federal revenue at risk, the company’s future is uncertain. The government is even considering invoking the Defense Production Act to compel Anthropic to hand over its source code, escalating the confrontation from a contractual dispute into a constitutional showdown.

As of tonight, the Claude era within the U.S. government is over. The battle now likely moves to the courts, where judges will face a historic question: can a private company enforce an ethical code when it collides with an executive order issued in the name of national defence? The answer will shape not only one company’s future but the broader struggle over who controls AI in war.


© The Friday Times