Elizabeth Shackelford: The Pentagon’s fight with Anthropic is about unchecked power
Last week, contract negotiations broke down between the artificial intelligence company Anthropic and the Department of Defense over Anthropic’s attempt to maintain guardrails on DOD’s use of its technology. The administration’s message was clear: It will tolerate no such limits or constraints.
The dispute is happening against the backdrop of an administration whose actions in the world are increasingly aggressive, interventionist and independent, with existing checks and balances failing to restrain it. It also comes at a time when AI is still in development and the extent of its power, and danger, is not yet fully understood. In other words, it is a tool you want in the hands of responsible actors, not impulsive ones.
The administration of President Donald Trump is the latter. It has begun a war of choice with Iran that is violently affecting at least 11 countries in the region already. It just launched military operations in Ecuador. The U.S. military has been blowing up boats in the Caribbean on flimsy pretenses for six months, and it abducted Venezuelan President Nicolás Maduro at the start of the year. None of these acts has been authorized by Congress or followed a period of public debate designed to make the case to the American people.
This administration’s officials speak about war like it’s a video game without real-life consequences. Secretary of Defense Pete Hegseth is making football references and talking about “punching them while they’re down” and “death and destruction from the sky all day long” as he scoffs at the limitations imposed by laws and protocols, like “no stupid rules of engagement.”
These aren’t people to whom I would give unfettered lethal power. But even if you trust this administration’s intentions and judgment, do you really want any future administration to hold this same unchecked power?
It stands to reason that an ethical company might want to impose some guardrails of its own on how the administration uses a powerful product that has already escalated the military’s capacity for rapid violence.
It would be far better, of course, if ethical AI use were mandated by laws passed by Congress through informed public debate, or by regulations established in a transparent executive process. But this new technology remains unregulated. The administration wants to keep it that way, and Congress has ceded its oversight role. Since none of the existing constraints seems to be holding, we are left with only the protection that the conscience of a private business interest bestows, and the government is trying to nix that too.
DOD has contracted with Anthropic since President Joe Biden’s administration and awarded it a new $200 million contract just last year. That deal included two explicit restrictions on ethical grounds: The Pentagon was prohibited from using Anthropic’s AI model for mass domestic surveillance of Americans or for fully autonomous weapons systems where machines select and strike targets without any human intervention or oversight.
The Pentagon had agreed to these terms originally but sought to renegotiate earlier this year to get rid of them, arguing that no private company had the right to dictate the U.S. government’s use of the technology. The Pentagon claimed that a provision restricting it to “lawful use” would provide sufficient limits, but that offered Anthropic little comfort since no law has been developed yet to address the new technology at hand.
On Feb. 27, in a major escalation of the dispute, Trump ordered all U.S. federal agencies to “immediately cease” using Anthropic’s technology, and Hegseth designated Anthropic a “Supply-Chain Risk to National Security,” both via social media posts. The impact of these developments will be massive, imposing huge costs not only on Anthropic’s business but also on all the federal agencies and contractors now scrambling to comply. The government’s reaction seemed less like a national security imperative than a tantrum over the fact that anyone would deign to limit its power.
The legalities of the Trump administration’s response will be hashed out in courts, but this administration’s insistence on obstructing all possible guardrails and constraints on its control raises bigger questions. Why does the U.S. military want to conduct massive surveillance on the American people? What does the world look like when its most capable military can execute war with fully autonomous weapons systems and can do so on the whim of a single man?
Machines use AI to simulate the reasoning, learning and problem-solving that we associate with human intelligence by processing massive amounts of data and inputs, but they do so far faster and more extensively than any human brain could. This means the impact of surveillance, weapons or any other tool controlled by AI is exponentially greater than in human hands. That suggests AI should be more closely regulated and limited than other tools, not less so.
If the Trump administration succeeds in giving the military unconstrained use of AI, the interventionism we’ve seen to date might look quaint compared with what is coming.
Elizabeth Shackelford is a senior adviser with the Institute for Global Affairs at Eurasia Group and a foreign affairs columnist for the Chicago Tribune. She is also a lecturer with the Dickey Center at Dartmouth College. She was previously a U.S. diplomat and is the author of “The Dissent Channel: American Diplomacy in a Dishonest Age.”
