The Pentagon’s Total War Against Anthropic
People in AI love talking timelines. Usually, they’re referring to the future: the number of months or years until models are able to do x or y; the predicted point at which they can improve themselves; the beginning of an irreversible “takeoff.” As a result, conversations about AI skip erratically between what’s happening now and what will be happening soon, or what might happen eventually, often at the end of a rapidly steepening curve.
The fact that ChatGPT is both a popular consumer-facing chatbot taking market share from Google Search and the product of a firm that believes in a near future “where intelligence is a utility, like electricity or water, and people buy it from us on a meter” can certainly confuse discussions about, for example, chatbot advertising, AI safety, and how AI will affect jobs. (DeepMind head Demis Hassabis recently poked a knife into the gap between pre-singularity OpenAI and Sam Altman’s description of what’s coming next: If “AGI’s around the corner,” Hassabis said, “why would you bother with ads?”) On social media, this can make for a weird discourse, in which incremental software updates are used to flavor visions of apocalypse. For executives trying to decide how to invest and hire over the next year, the elastic timeline between now and soon can complicate planning and disrupt lives; for regulators and lawmakers, the specter of rapid change can be paralyzing, even amid an abundance of present harms.
Last week, after long negotiations and against the backdrop of a sudden, AI-assisted American campaign against Iran, in which Anthropic software has reportedly been “central,” the relationship between the leading AI firm and the Trump administration fully broke down. The central points of contention are Anthropic’s prohibitions on using its software for lethal autonomous warfare — strikes without a human in the loop — or the surveillance of Americans en masse. In recent months, the Pentagon has demanded Anthropic abandon these rules and others, citing, more or less, its authority to do whatever it wants and accusing Anthropic of claiming authority it doesn’t have. So far, the standoff has resulted in the termination of federal contracts and the designation of Anthropic as a supply-chain risk, an extreme measure most recently used against state-connected Chinese tech companies. (A punitive executive order has been rumored as well.) Defense secretary Pete Hegseth reportedly dismissed Anthropic CEO Dario Amodei as arrogant and entitled, saying “No CEO is going to tell our war fighters what they can and cannot do” and suggesting that Amodei has a “god complex.” Amodei later told his employees the company was targeted for not giving “dictator-style praise to Trump” like competitors.
Communication between characters as constitutionally different as Amodei and Hegseth was always going to be strained, but the breakdown was about more than “vibes and personalities.” It was about an AI company and the Pentagon trying to account for disparate and extreme visions of tomorrow in clauses and provisions written for today. It was a collision between two movements that believe, in different and inherently incompatible ways, that they might be on the cusp of achieving absolute …
