
The Future We Feared Is Already Here

08.03.2026

For years now, questions about A.I. have taken the form of “what happens if?” What happens if A.I. begins replacing workers? What happens if it becomes capable of writing its own code? What happens if it begins to deceive those testing its capabilities? What happens if governments use it for surveillance and war? What happens if governments decide it is so powerful that they need control of the labs that develop it?

This year, the A.I. questions have taken a new form: "What happens now?" What happens now that A.I. is replacing workers, or at least being used as the excuse for replacing them? What happens now that it is writing its own code? What happens now that it seems to recognize when it is being evaluated and reacts by changing its behavior? What happens now that governments are threading it through the national security state and using it in operations and wars? What happens now that the U.S. government has decided the technology is so powerful it needs a measure of control over the labs that develop it?

The showdown between the Pentagon and Anthropic is a window into how unprepared we are for the questions we are already facing. In July, Anthropic signed a deal with the Pentagon to integrate Claude, its A.I. system, into the military’s operations. The contract included two red lines: Claude could not be used for mass surveillance or for lethal autonomous weapons.

Over the ensuing months, the Pentagon decided these prohibitions were intolerable, that they amounted to an A.I. company demanding operational control over the military. Negotiations collapsed over a clause in the contract barring the Pentagon from using Claude to analyze bulk commercial data — technically, that might not be “surveillance” because the data would be legally acquired, but in practice it could be a powerful way to surveil Americans.

Few would have been surprised if the Pentagon had canceled its contract with Anthropic and sought a different vendor for its A.I. needs — as it eventually did, choosing to work with OpenAI. But Pete Hegseth, the secretary of defense, went further, declaring Anthropic a “supply chain risk” and saying no company that does work with the Pentagon could engage in “commercial activity” with Anthropic. This would destroy Anthropic, as everyone from Amazon to Nvidia would be prohibited from working with it.

Whether Hegseth has the legal authority to demolish Anthropic in this way is doubtful. Anthropic says the letter it received from the Pentagon is narrower, prohibiting the Pentagon's contractors from using Anthropic's products in fulfilling defense contracts. Many legal experts think the courts will look skeptically on designating Anthropic a supply-chain risk, given that the Pentagon used Claude in the Maduro raid and is still using it in the Iran war — how big a risk can it be if the military is using it even now?


Ezra Klein joined Opinion in 2021. He is the host of the podcast “The Ezra Klein Show” and the author of “Why We’re Polarized” and, with Derek Thompson, “Abundance.” Previously, he was the founder, editor in chief and then editor at large of Vox. Before that, he was a columnist and editor at The Washington Post, where he founded and led the Wonkblog vertical. He is on Threads. 


© The New York Times