
If A.I. Is a Weapon, Who Should Control It?

28.02.2026

Suppose that you had to die in a terrible artificial-intelligence-related cataclysm. Would you feel worse knowing that the path to destruction was smoothed by the hubris of Silicon Valley tech lords pursuing dreams of utopia and immortality — or by the folly of Pentagon officials who give the A.I. a fateful dose of autonomy and power in the hopes of outcompeting the Russians or Chinese?

We spent the Cold War worrying mostly about military folly, and A.I. entered into our anxieties even then: the Soviet Doomsday Machine in “Dr. Strangelove,” the game-playing computer in “WarGames” and of course the fateful “Terminator” decision to make Skynet operational.

But for the last few years, as A.I. advances have concentrated potentially extraordinary power in the hands of a few companies and C.E.O.s — themselves embedded in a Bay Area culture of science-fiction dreams and apocalyptic fears — it’s become more natural to worry about private power and ambition, about would-be A.I. god-kings rather than presidents and generals.

Until, that is, the current collision between the Department of Defense and Anthropic, the artificial intelligence pioneer, over whether Anthropic’s A.I. models should be bound by the company’s ethical constraints or made available for all uses the Pentagon might have in mind.

Since the two uses that Anthropic’s current contract explicitly rules out are the employment of A.I. for mass surveillance and its use for fully autonomous weapons (meaning no humans in the to-kill-or-not-to-kill decision loop), it’s easy to get Skynet vibes from the Pentagon’s demands. As Matt Yglesias noted, all the weird and complicated scenarios spun out by A.I. doomers get a lot simpler if our government decides to start building autonomous killer robots.

That’s not what the Pentagon says it intends to do. Its professed concern is that it can’t embed a crucial technology into the national security architecture and then give a private company a general ethical veto over its use, even if those ethics seem reasonable on paper. Doing so outsources decisions that are supposed to be made by an elected president and his appointees, and it risks a debacle when events don’t cooperate with corporate ideals. (The example the agency has offered is a hypersonic missile attack on the United States where an A.I. company refuses to assist in some crucial response because it falls afoul of the no-machine-autonomy rule.)

Ross Douthat has been an Opinion columnist for The Times since 2009. He is also the host of the Opinion podcast “Interesting Times.” He is the author, most recently, of “Believe: Why Everyone Should Be Religious.” @DouthatNYT • Facebook


© The New York Times