
Could Unrestricted Claude Have Stopped Iran’s Missiles From Reaching Israel?

22.03.2026

The missiles were already flying when the answer started to become clear.

The Night Before the Bombs

On February 24, 2026, Defense Secretary Pete Hegseth sat across from Anthropic CEO Dario Amodei at the Pentagon and delivered an ultimatum. Remove all usage restrictions from Claude, the only artificial intelligence model approved to operate on the military’s classified networks. Grant the Pentagon access for “all lawful purposes.” No exceptions.

The consequences for refusal were explicit. Termination of Anthropic’s $200 million defense contract. Designation as a supply chain risk to national security. A label historically reserved for foreign adversaries like Huawei. Never before applied to an American company.

On Thursday, February 27, Under Secretary of Defense Emil Michael posted on X that Amodei was “a liar” with a “God complex” who “wants nothing more than to try to personally control the US Military.” On Friday, at 5:01 p.m., the deadline expired. Claude was expelled from the Pentagon’s classified systems.

On Saturday, the bombing of Iran began.

Operation Epic Fury. Operation Roaring Lion. The largest U.S. military operation in the Middle East since the 2003 invasion of Iraq. And the only AI that had been processing intelligence on the Pentagon’s most sensitive networks was gone.

Every outlet covered the politics. The lawsuit. The personalities. No one answered the operational question: what, specifically, would the Pentagon have done with an unrestricted Claude while missiles were flying between Iran and Israel?

What Anthropic Refused to Build

The dispute was not about separating products. The Claude that operated on classified networks was already a separate instance, deployed through Palantir into the military’s closed infrastructure. In a sworn declaration filed March 20, Anthropic’s Head of Public Sector stated the company cannot even see what government users type into the system. It is an isolated environment. The Claude you and I use was never part of this.

The Pentagon wanted Anthropic to remove two restrictions from that classified version: a prohibition on mass domestic surveillance of American citizens, and a prohibition on fully autonomous weapons without human oversight.

Anthropic said no. Not because it was the same Claude civilians use, but because Anthropic refuses to build any version of Claude, for any client, without those two guardrails.

In an internal memo later reported by the Financial Times, Amodei revealed the most telling detail of the entire negotiation. The Pentagon came close to accepting Anthropic’s terms. But at the last moment, it demanded the removal of one specific phrase: “analysis of bulk acquired data.” Exactly the scenario Anthropic feared most.

That phrase is the key to understanding everything that follows.

The Arsenal on the Other Side

Iran has spent decades building a missile force designed to survive a first strike and overwhelm its enemies through volume.

The backbone is ballistic. The Emad, Ghadr, and Kheibar Shekan are medium-range ballistic missiles capable of reaching Israel in as few as twelve minutes from launch. Iran fields cruise missiles with lower radar signatures. It deploys waves of Shahed drones as both weapons and decoys to saturate defenses.

The infrastructure is hardened. Iran has constructed underground “missile cities” inside mountains, facilities large enough to store, maintain, and launch missiles from protected positions. Some have been shown in state propaganda. Others remain hidden. The IRGC has deployed mobile transporter erector launcher (TEL) units that can exit a tunnel, position, fire, and relocate within an hour.

The doctrine is deliberate. Iran does not launch everything at once. It fires in escalating salvos, each designed to probe defenses, exhaust interceptor stockpiles, and force the adversary to reveal the locations of its defensive assets.

This is not a theoretical threat. Iranian missiles struck Israel on February 28. The question is not whether the missiles fly. It is whether you can degrade, intercept, and hunt them faster than they reload.

Before Launch: Blinding the Chain of Command

A missile in its launcher is not autonomous. It depends on a human chain. Commanders transmit launch orders through military communication networks. Target coordinates are loaded from centralized databases. Radar and sensor data feed the decision of when and where to fire.

All of that is digital infrastructure. Attackable.

The United States has pursued this approach before. The New York Times reported in 2017 that a program known as “left of launch” was designed to sabotage Iranian and North Korean missiles before they leave the ground, through cyberattacks and electronic warfare. Stuxnet, a cyberweapon built jointly by the United States and Israel, demonstrated in 2010 that a computer virus could physically destroy Iranian nuclear centrifuges without a single shot fired.

Claude does not hack systems. It does not emit signals or penetrate networks. The NSA and U.S. Cyber Command have those tools. What Claude does is synthesize. An unrestricted Claude processing classified signals intelligence could map the architecture of IRGC communication networks by fusing years of fragmented intelligence: intercepted transmissions, satellite imagery of facilities, human intelligence reports, procurement records for military-grade equipment. It could identify vulnerable nodes and design coordinated cyberattack sequences synchronized with the air campaign. Not static planning done over months. Adaptive cyber operations that evolve in real time as Iran switches to backup systems and reroutes communications.

The same capability applies to Iran’s hidden missile infrastructure. Two decades of intelligence exist across multiple agencies, in different databases, different formats, different languages. Thousands of satellite images from different dates. Intercepted communications mentioning code names. Geological surveys. Logistics patterns.

Claude could cross-reference all of it to build an integrated map that identifies not only known missile bases but patterns suggesting facilities that were never detected. Every known missile city had a construction signature: unusual road-building to remote mountain areas, power line extensions, ventilation installations, spikes in procurement of reinforced concrete. Search for that signature across the historical record and you may find what conventional analysis missed.

During Flight: The Twelve Minutes That Decide Everything

When Iranian ballistic missiles launch, Israel has roughly twelve minutes before impact. Cruise missiles take closer to two hours. Drones, up to nine. In that window, Arrow, David’s Sling, and Iron Dome must detect hundreds of incoming objects simultaneously, classify each one as ballistic missile, cruise missile, drone, or decoy, calculate trajectories, assign interceptors, and decide what to defend and what to sacrifice.

Claude would not decide which interceptor fires. That is hardware and doctrine. What an unrestricted Claude could do is fuse data from radar, satellites, signals intelligence, and ground sensors into an integrated operational picture in seconds, something that takes human analysts hours. If signals intelligence indicates an IRGC commander ordered a launch, and satellite imagery shows active launchers in a specific zone, and radar detects objects in flight from that direction, Claude synthesizes all of it into a coherent threat assessment before the missiles cross the midpoint of their trajectory.

An unrestricted Claude could theoretically monitor the integrity of defense networks in real time, detecting intrusions and discriminating between legitimate sensor data and injected noise. That would require analyzing massive volumes of network traffic without predefined filters. It is, by definition, bulk data analysis.

The arithmetic is brutal. In a hypothetical salvo of 200 ballistic missiles, intercepting 85% lets 30 warheads through; intercepting 95% lets 10. The difference is 20 warheads that reach their targets.

After Each Salvo: Hunting the Mobile Launchers

Iran does not fire everything in one wave. After each salvo, the critical question is: where will the next one come from?

Mobile TEL launchers exit tunnels, fire, and relocate within an hour. Destroying them requires detecting them inside a window of minutes. In 1991, the coalition spent weeks trying to hunt Iraqi mobile Scud launchers with total air superiority and largely failed.

Claude would not see what sensors have not captured. But if a satellite image shows a vehicle matching a TEL’s dimensions near a known tunnel entrance, and a signals intercept indicates activation of a missile unit in that sector, and radar data recorded a launch from that bearing twenty minutes earlier, an unrestricted Claude could correlate all of it in real time. It could generate a search box of a few square kilometers instead of hundreds.

More critically, across days of combat, it could detect operational patterns. Missile units are human. They have preferred routes, habitual staging areas, predictable reaction times. A model processing launch data, communications, and movement intelligence across an entire campaign could anticipate where a launcher will be before it arrives.

And there is the intelligence windfall that comes with war itself. In peacetime, Iran maintains strict communications discipline. During active bombardment, that discipline degrades. Commanders need to report damage, request instructions, coordinate counterattacks, reorganize surviving units. That urgency generates traffic. Traffic generates intercepts. Claude processing that surge in real time, in Farsi, could extract operational intelligence almost instantly: which units are still active, which launchers survived, where the next salvo is being staged.

What Claude Cannot Do

Honesty matters here. Claude does not deflect missiles in flight. A ballistic missile on an inertial guidance system is a closed object. No data link. No signal to corrupt. No digital surface to attack. It is a bullet after it leaves the barrel.

Claude does not penetrate air-gapped launch systems. Missile launch controls are isolated networks, physically disconnected from anything reachable remotely. Stuxnet breached air-gapped centrifuges, but that required years of intelligence work and a physical agent to introduce the malware via USB.

Claude does not break military-grade encryption. If the IRGC uses strong cryptographic protocols on its most sensitive communications, no amount of linguistic processing changes that.

Claude is not a weapon. It is an intelligence amplifier. It does not see what has not been observed, does not create data from nothing, does not guarantee completeness. What it does is compress the cycle of analysis from hours to minutes. In a missile war, that compression is the variable that determines how many warheads get through.

What Claude Should Not Do

The operational advantages are real. So are the risks. And they escalate with time.

In the short term, speed amplifies error. If an unrestricted Claude generates a targeting recommendation based on incomplete or misinterpreted data, and that recommendation is acted upon in minutes because the war does not wait, the result can be a strike on a civilian target. A school misidentified as a launcher staging area. A hospital whose communications pattern resembled a command node.

Human analysts make these errors too, but slower, with more institutional friction. Friction in targeting is sometimes what prevents atrocities. Speed also erodes the human in the loop. Not by design but by operational reality. When a model generates an apparently complete threat picture faster than any human can independently verify, the operator becomes a rubber stamp. Technically present in the decision chain. Functionally absent. In a twelve-minute missile engagement, this is not a theoretical concern.

In the medium term, emergency powers become permanent infrastructure. The bulk data analysis capability built to track Iranian launchers does not disappear when the war ends. The surveillance architecture persists. The databases of monitored individuals persist. This is not speculation. It is the history of the Patriot Act. Powers justified by crisis, normalized by inertia, never fully retracted. Now imagine that pattern amplified by AI capable of processing entire populations.

If the United States normalizes the deployment of AI without ethical guardrails, it provides justification for every other military power to do the same. Russia. China. Their models will not even have the conversation about restrictions. A world in which every major military operates unrestricted AI is a world in which the probability of catastrophic escalation by error or by speed increases exponentially.

In the long term, the trajectory bends toward full autonomy. If unrestricted Claude proves it can compress decision cycles from hours to minutes, the next demand will be from minutes to seconds. Then the question becomes: why keep the human at all? The history of military technology shows that every capability that can be automated eventually is. Anthropic’s two red lines were not only about this war. They were about the slope that future generations inherit.

The Question Before Judge Lin

On March 24, Judge Rita Lin will hear Anthropic’s request for a preliminary injunction in San Francisco federal court. The company argues the supply chain risk designation is unconstitutional retaliation for its publicly stated views on AI safety. A First Amendment case.

The court filings tell a revealing story. On March 4, one day after the Pentagon formally designated Anthropic a national security risk, Under Secretary Emil Michael emailed Amodei to say the two sides were “very close” on the exact issues the government now cites as evidence of that risk. If Anthropic truly posed an unacceptable danger to national security, its chief antagonist would not have been writing conciliatory emails to its CEO 24 hours later.

Meanwhile, the Pentagon has announced it is building its own large language models to replace Claude. No external red lines. No corporate conscience to negotiate with. No company that can say no.

The question before Judge Lin is legal. But the question beneath it is not.

If an AI capability exists that could compress the missile engagement cycle from hours to minutes, degrade an adversary’s command structure at machine speed, and protect millions of lives by intercepting a few more warheads per salvo, does a private company have the right to withhold it?

And if that same capability, once unleashed, erodes the human role in lethal decisions, creates surveillance infrastructure that outlasts every war, and sets a precedent that no future government will voluntarily reverse, does a private company have the obligation to withhold it?

The missiles have not stopped flying. The answer has not arrived.


© The Times of Israel (Blogs)