Anthropic’s autonomous weapons stance could prove out of step with modern war
Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.
Anthropic’s stance on autonomous weapons may not survive the future
Much of the AI world is watching closely as Anthropic tangles with the Pentagon over how the government can use the Claude models. Anthropic has a $200 million contract with the Pentagon, but the contract says the military can’t use the AI company’s models as the brains for autonomous weapons or for mass surveillance of Americans. Defense Secretary Pete Hegseth insists, after the fact, that the military should be able to use the Anthropic models for “all lawful purposes.”
Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon for a Tuesday morning meeting, in which he reportedly gave Anthropic until 5:01 p.m. Friday to comply with the Pentagon’s demand. If Anthropic fails to do so, Hegseth threatened to invoke the Defense Production Act to compel the AI company to supply its models with no guardrails. Hegseth also said the government would declare Anthropic models to be a “supply chain risk,” meaning that all government suppliers would be directed to avoid or discontinue use of Anthropic models.
Amodei said in an interview after the Hegseth meeting that his company has no intention of complying with Hegseth’s demands. (He’s got a strong case: After all, government officials agreed to the terms.) Amodei explained that the military relies on human judgment to avoid violating people’s constitutional rights. If AI is making the decisions, there will be no human being in a position to object.
Amodei is right, and his company’s willingness to stand up for its values is laudable. The trouble is, we’re rapidly heading for a future where autonomous systems become the norm in warfare.
For years, the defense establishment talked about keeping a “human in the loop” in AI weapons systems. Often that human is a government lawyer who can make calls on rules-of-engagement issues on the battlefield. Today the Pentagon is talking more about fully autonomous weapons that can manage more of the “kill chain,” the series of communications and decisions leading to the destruction of a target. Military leaders often say that whoever can use technology to shorten the kill chain will win wars.
Electronic warfare, cyberattacks, hypersonic missiles, and drone swarms are making war faster and compressing response times. That speed may eventually leave no window for human review and decision-making. Increasingly, the U.S. military may be forced to take humans out of the loop in order to stay competitive with its adversaries.