
Anthropic Set A Red Line. It Won’t Be The Only AI Company To Do So

03.03.2026

Artificial intelligence developer Anthropic set a red line governing how the United States Department of Defense could use its technology, insisting its Claude AI model be excluded from mass domestic surveillance and use in fully autonomous weapons.

If other AI developers follow suit, it could disrupt the Department of Defense’s ability to pursue and deploy cutting-edge technology, and force a new framework for government use of AI and safeguards, legal and defense analysts say.

Last week, the Pentagon demanded that the company remove such restrictions and allow for "all lawful use" of its AI in defense systems. It then threatened to blacklist Anthropic as a "supply chain risk" by invoking the Defense Production Act.

After the company refused to comply, the Pentagon began phasing out Anthropic’s technology across federal agencies, including the intelligence community.

"It’s about the principle of standing up for what’s right," said Dario Amodei, CEO of Anthropic, even as the company’s decision resulted in its tech being banned from use by the federal government.

A National Security Risk

President Donald Trump last week ordered every government agency to "immediately cease" using Claude and any technology from Anthropic. In a post on social media, the president claimed the terms of service imposed by Anthropic would somehow put American lives at risk and be a national security threat.


Yet the U.S. military still relied on Anthropic's Claude to support the Operation Epic Fury attacks on Iran over the weekend. According to a report from The Wall Street Journal, Claude was used to assess intelligence, identify targets and simulate battle scenarios.

It is difficult to reconcile the claim that Claude poses a national security threat with its use in those recent strikes, which the administration said were carried out flawlessly against the Islamic Republic.


"Here is an administration that shoots down its own drones because its agencies can't work and play well with one another. It isn't a great look," suggested Dr. Jim Purtilo, associate professor of computer science at the University of Maryland.

"An administration that blithely distorts statute to conform to what it wants to do argues that they should be allowed to use Anthropic products for anything lawful, while the company – among the most open about working with the Pentagon – expresses concern about tech being used for pervasive domestic surveillance or autonomous, agentic weapons systems," said Purtilo, adding that the company knows the limits of its technology and of its safety measures. "It all looks like Anthropic knows more about the intended application of these things and justly drew the line."

As Paulo Carvão previously wrote on Forbes.com, Anthropic signed a contract last summer with the Pentagon worth up to $200 million. This standoff exposed tensions between the tech sector and the U.S. government, and their "competing visions of national security and safety." Anthropic has resisted Pentagon demands to drop certain restrictions.

"This is an example of extreme abuse of power. The DoD wants Anthropic to remove all of its ethical rules with regards to the use of their tool and, since no one in their right mind would do that, the DoD is threatening them with destruction, claiming their ethical rules somehow make them unsecure," said technology industry analyst Rob Enderle of the Enderle Group.

In an email, Enderle said no part of the Pentagon's ban makes sense.

"If you don't want an ethical tool, then build or buy one that isn't ethical, but claiming that adhering to ethics somehow makes a product unsafe is like telling a car company to get rid of brakes because they make their cars unsafe and, if they don't do that, their cars won't be allowed on the roads," Enderle added.

The way the Pentagon has gone about its decision also raises some serious questions, including whether it was handled legally.

"Absolute bans need to be done correctly, through the department process, which eventually goes to generally a U.S. District Court," Dan Meyer, managing partner of the Washington law office of Tully Rinckey, wrote in an email.

"The Administrative Procedure Act controls, and the standard is whether the decision was 'arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law,’" Meyer added.

The Dangers May Be Overstated

It should be noted that Anthropic isn't alone in setting such conditions for how the government, specifically the military, uses emerging AI technology.

Just this weekend, OpenAI, the developer of the popular ChatGPT platform, announced its three red lines, which also include a ban on using its technology for mass domestic surveillance, in fully autonomous weapons, and in "high-stakes automated decisions."

The Pentagon accepted those terms, even as it refused similar conditions from Anthropic as a price of continuing to use Claude. Other companies are likely to follow suit in setting terms, raising questions about how the DoD will respond.

"If all major AI developers took Anthropic's position here and maintained the same 'red line' approach, the implications for the DoD and U.S. defense would be significant," said Northern California-based venture finance attorney Lindsey Mignano.

She said in an email that first, DoD could find itself unable to deploy cutting-edge AI models for certain high-impact tasks, ones that go far beyond surveillance and autonomous weapons. Moreover, the DoD could invoke emergency powers, such as the Defense Production Act, to compel access. That would raise legal and constitutional challenges.

"The U.S. military's competitive edge — especially against countries that don't adopt such ethical limits — could diminish unless alternate defense strategies or regulations are developed."

This may result in political pressure for new federal AI safeguards, transparency laws, or firmer ethical standards to govern defense contracts, and companies could push for statutory protections for red lines so the DoD can't override them via contract language, Mignano further suggested.

It remains unclear how this will play out.

"On one hand, if the White House wins this standoff, they could have unlimited access to dangerous equipment that could be used to take the lives of countless human beings, which they may argue is necessary to protect the lives of the American soldiers whose lives would be jeopardized if they were to be deployed to those locations to fight the same enemy that unmanned robots could have destroyed," said Anthony Kuhn, managing partner at Tully Rinckey PLLC.

Should Anthropic win this standoff, it raises the question of where the line is drawn between the morals imposed by corporate leadership and a government tasked with protecting the American people and its warfighters.

"The two sides will likely come together and strike a deal that benefits both sides," added Kuhn. "But it will be interesting to see who will control the power to decide how far AI can go and who gets to make that decision with this type of initiative in the future."

Military Regulations And AI

Currently, the use of AI by the United States military is governed by internal DoD policies, including Directive 3000.09 on autonomous weapons, and by executive orders, but not by any comprehensive federal statute. This may be an example of technological development outpacing the rules and regulations to govern its use.

Mignano said that the current policies emphasize human oversight and other safeguards, but don't impose strict limits on what AI technologies can be used for in military contexts.

"That gap has become clear in recent tensions such as between the Pentagon and Anthropic over unrestricted access, which highlights a lack of clear legal boundaries around things like fully autonomous weapons or surveillance uses," Mignano continued. "New laws on military AI use are very plausible, and likely to emerge from Congress — especially if public and political scrutiny continues to increase."

Anthropic's standoff may start the conversation.


© Forbes