What Both Anthropic and the Pentagon Get Wrong
Mr. Kendall was the secretary of the Air Force in the Biden administration.
At 5:01 p.m. Friday, the Pentagon may be at war. I'm not referring to Iran or to any other shooting war, but to a potentially existential conflict between two parties nonetheless: The artificial intelligence company Anthropic and the Department of Defense are fighting over the contractual terms for the Pentagon's continued use of Anthropic's A.I. model.
Anthropic is insisting that the government agree to specific restrictions that would prevent the use of its model to conduct widespread surveillance of Americans or to control autonomous weapons like drones without a human in what is called the "kill chain." The company reiterated on Thursday that it has no intention of changing its position. The government says that the only requirement its contractors can insist on is that their products be used lawfully.
There is a lot at stake, and neither side is offering the correct solution. A.I. is poised to be the most transformative technology of our generation, perhaps of any generation, and we need to ensure that the government and the private enterprises that develop these technologies have a constructive, mutually beneficial relationship consistent with American values. That can happen only through the mechanism our country's founders put in place to define the rules of the game, level the playing field and balance interests across the government and among individuals and businesses: regulatory legislation passed by Congress.
The tool Anthropic is providing to the government is enormously powerful; like any tool, it can be used for good or ill. Anthropic is rightly concerned that its tool could be used in ways that are unsafe or malicious. The company doesn't want to see its A.I. model used without human control, which could result in automated weapons killing noncombatants or friendly troops. Nor does it want its model deployed to spy broadly on Americans in ways that could violate dearly held values like privacy and freedom from illegal search and seizure, or that could suppress political dissent. Most Americans would probably agree.
For its part, the Department of Defense will not accept constraints on the use of products it has purchased. The government has a point. America's national security team needs the freedom to use the products it buys within the law, without being beholden to the sellers' preferences.
The government is trying to force Anthropic to capitulate with two threats: invoking the Defense Production Act to force Anthropic to provide its product with no additional restrictions, and designating Anthropic as a “supply chain risk” contractor. The first of these is unusual but consistent with the law. Claude, Anthropic’s large language model, is the only A.I. product approved for use on classified Pentagon networks. It is not unreasonable for the government to assert that it must have access to Claude for national security reasons until a comparable product from a competitor becomes available (something that appears to be fairly imminent).
