Looks Like the Pentagon Will Be Using Anthropic’s A.I. For a While Longer
An ironic detail in the Pentagon’s ongoing fight with Anthropic over how it can use the company’s artificial intelligence, from the Wall Street Journal:
Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools.
Commands around the world, including U.S. Central Command in the Middle East, use Anthropic’s Claude AI tool, people familiar with the matter confirmed. Centcom declined to comment about specific systems being used in its ongoing operation against Iran.
The command uses the tool for intelligence assessments, target identification and simulating battle scenarios even as tension between the company and Pentagon ratcheted up, the people said, highlighting how embedded the AI tools are in military operations.
Much like an aircraft carrier, U.S. government policies cannot turn on a dime.
Back on February 27, President Trump posted on Truth Social:
I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.
That directive, apparently, is impossible to enact, and the Pentagon does in fact need it and want it, at least in the near-term future.
In fact, it sounds like removing Anthropic’s AI from Pentagon systems is going to be a complicated and time-consuming process.
Operators would have to reconfigure data inputs that they are feeding into models, re-examine how to share data in real time with the intelligence community, which also uses Claude widely, and re-validate that replacement models were functioning as the military expected them to, they said.
In July, Anthropic received a $200 million contract to provide its frontier-model tools to the Pentagon, as did the other three U.S. makers of such products: OpenAI, Google, and xAI.
Department leaders have urged their people to use the new tools, though they have declined to say how publicly. And even the Pentagon doesn’t really know; it is reportedly asking various commands to describe how much they use Anthropic. (Michael, however, has described U.S. INDOPACOM as “probably one of the premier users.”)
So why is Claude the only one deployed on classified networks? One key reason, according to a defense official: Anthropic’s tools were the easiest to deploy on cloud networks powered by AWS, which contributes the largest chunk of the Pentagon’s Joint Warfighting Cloud Capability.
…The individuals said it could be twelve months or longer to replace the capability. However, a Defense Department official said that he expected additional frontier AI models to be widely available on the Pentagon’s GenAi.mil interface before summer.
As our old friend Kevin Williamson observed, “everything looks simple when you don’t know the first thing about it.” It would probably be in the best interests of the Pentagon, Anthropic, and the American public if the two sides could work out an agreement for how the U.S. military could continue using Anthropic’s A.I., without running afoul of the company’s concerns about fully autonomous A.I. weapons systems or mass domestic surveillance.
