The First AI War
When the retrospective books are written about the current war in Iran, it will be remembered as a historic – perhaps revolutionary – watershed in military history. Not because of an air war leading (hopefully) to the collapse of a dictatorial regime, nor the successful initial decapitation of Iran’s leadership, nor even the extraordinary coordination between the Israeli and American militaries. True, these are all impressive accomplishments, but not quite “historic.”
What then? The widespread use of artificial intelligence by the attacking powers, as The Wall Street Journal reported: “America is thought to have employed Claude, an artificial-intelligence model, to process intelligence, select targets and carry out military simulations” (https://www.wsj.com/livecoverage/iran-strikes-2026/card/u-s-strikes-in-middle-east-use-anthropic-hours-after-trump-ban-ozNO0iClZpfpL7K7ElJ2).
The irony is that Claude’s owner and developer – Anthropic – is in the midst of a fierce legal battle with the U.S. government over limits on the model’s military use. In short, the American military currently uses Claude as the best AI program available for military purposes, but the U.S. Department of War wants complete independence in deciding how it is employed. Anthropic, though, demands that its contract with the U.S. stipulate that Claude not be used for any illegal purpose (e.g., surveillance of U.S. citizens). For its part, the government argues that such a contractual clause is unnecessary, since the government is in any case bound to obey American law in military matters. Other companies have fewer moral compunctions and make no such demands to ensure the proper use of their military AI (https://www.wired.com/story/ai-model-military-use-smack-technologies).
While this dispute might seem overly legalistic and technical, underlying it is a far more worrisome “slippery slope”: the autonomous use of AI in war. To put it bluntly: won’t this eventually lead to giving AI the capability to make life-and-death decisions in warfare without human input or supervision?
This overriding question has moral applicability to every country. But it also constitutes a major future geo-strategic problem, especially for advanced militaries such as the IDF and the U.S. armed forces.
Permit me to explain. The moral issue is clear to all: does humanity really want to enable non-humans to make decisions about killing other humans? True, even today most armies field semi-autonomous weaponry, but at the start of the process – the decision to fire or not – stands a human being. One can easily see the problem in removing the “semi” and freeing the “autonomous” of all human input (other than the initial programming). For one, war is incredibly “messy” – it would be quite easy for an AI to make serious errors in (mis)identifying who is an enemy combatant, or in judging the appropriate amount of firepower to limit civilian casualties (as seems to have happened when an American missile mistakenly hit the Iranian girls’ school, killing dozens). Even without an overheated imagination, each of us can think of other “mistakes” an AI could make in the heat (and confusion) of battle.
For advanced armies such as those of Israel, the U.S., NATO and China, the problem extends much further. From time immemorial, turning civilians into soldiers has demanded years of training and heavy investment in resources. Israel is a classic example. True, it has some of the world’s most sophisticated armaments, but its real military advantage lies in the IDF’s highly trained (wo)manpower. The country has always managed to stay at least two steps ahead of its enemies in large part due to its human military prowess – not only its fighters but its intelligence personnel as well.
AI threatens to undermine this critical advantage: it is far easier for a militarily minor country to develop a useful AI program (or buy one from an AI power that lacks Anthropic’s moral compunctions) than to invest hugely in upgrading its warmaking manpower. In other words, whereas AI today mostly widens the gap between economically and educationally advanced countries and their militarily backward enemies, that will most probably change as AI programs come down in price and become military commodities, as widespread as rifles and hand grenades.
We have witnessed a similar process of “equalization” in the Russia-Ukraine War. What has significantly reduced Russia’s advantage in numbers and technology is the newest tech on the block: Ukraine’s drones. They have effectively leveled the killing field, negating Russia’s overwhelming numerical and resource advantages. AI would probably be an even greater “equalizer” once such programs can be purchased off the shelf.
Many countries are aware of the moral issue and have begun, in coordinated fashion, to consider how to control military AI. In 2023, for instance, the U.S. issued a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, endorsed by dozens of nations, which aims to create international norms for the responsible development and deployment of military AI, with an emphasis on meaningful human control. At around the same time, several global powers initiated summit meetings to discuss best AI practices; unfortunately, China was unwilling to accept any formal constraints on developing military AI, virtually guaranteeing an AI arms race.
The bottom line (for now): the world is watching in fascination (some in stupefaction) as Israel and the U.S. pummel Iran – in part through the use of AI. Nevertheless, these two militarily powerful allies would do well to heed Shakespeare’s warning, lest they end up “hoist with his own petard” (Hamlet, Act 3, Scene 4).
