The Pentagon at War with Anthropic
Earlier this week, Dario Amodei, CEO of Anthropic, maker of the extremely popular Claude AI products, put out a letter saying that
Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner. However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.
He went on to specify that those cases included mass domestic surveillance and fully autonomous weapons. He concluded:
The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above.
The statement said Anthropic couldn’t in good conscience comply with these requests, though it hoped to continue working with the Department of War wherever it could.
Today, Pete Hegseth took the nuclear option and designated Anthropic a supply chain risk, meaning that every company that works with the government would have to stop using Anthropic's products. This is a kill shot aimed at Anthropic. It is hard to overstate the timing: this is happening during a two-week period in which almost every major company in the United States is evaluating how Anthropic's tools can help them be more productive.
It puts the U.S. government in the curious position of allowing NVIDIA to sell advanced chips to China, but disallowing department caterers from using Claude Code to manage their inventory spreadsheets.
To be fair, Anthropic's statement implied, perhaps wrongly, that the U.S. government was already planning to deploy AI for illegal ends, and that implication should be clarified. Working for the defense of our country is presumptively legal. Soldiers who volunteer don't need to constantly remind their superiors of their duty to disobey any potential future illegal order. Similarly, contractors should demonstrate trust that the legal and democratic control of the military is in good order.
I really hope that both sides find a way to back down and come to an agreement. Our defense needs the best AI models, and our government (which has been such a champion of AI development) should not be taking actions that jeopardize investment in all American AI companies.
