
Opinion – When the Algorithm Becomes the Alibi

13.04.2026

In September 2024, The New York Times published an op-ed by Raj M. Shah and Christopher M. Kirchhoff. They argued that the United States urgently needed to adopt AI-powered weapons systems to keep pace in a so-called civilizational race with China, and that the military needed to overhaul its technological capabilities for AI-driven warfare. Since then, that vision has moved off the opinion pages and become a lived reality. The United States launched Operation Epic Fury on February 28, 2026, striking over 1,000 targets within the first 24 hours, a tempo CENTCOM attributes in part to AI-assisted systems.

The argument put forward by Shah, Kirchhoff, and the chorus of Western commentators who follow them rests on three interconnected assumptions: that AI makes war more precise, that American technological leadership is a moral imperative in a world endangered by authoritarian competitors, and that a human in the loop offers sufficient ethical protection against atrocity. These assertions are not merely contestable; they are refuted by documented evidence from battlefields where this technology has already been used against population centres.

Start with the most alluring argument: that AI will make killing more humane. Tech corporations and their representatives trade in images of sterile, clean, bloodless violence. Their demonstrations feature no civilians and no demolished civilian infrastructure. In short, algorithmic warfare is presented as a clean and efficient activity. The actual practice of AI targeting systems, however, tells a radically different story, written in the blood of Palestinians, Iranians, and others who were never granted a column in The New York Times.

Under the Lavender system, the Israeli military in Gaza compiled a list of up to 37,000 Palestinians algorithmically linked to Hamas. In the first weeks of the war, officers were permitted to kill up to 20 civilians alongside each junior target. This is not precision. It is industrial-scale probabilistic killing, laundered through machine learning. The army knew that human supervision was minimal and that personnel would not catch errors; given the scale of the task, errors were treated statistically. Even when officers did not know whether the machine had made the right decision, they assumed it was statistically correct. On average, they rubber-stamped a Lavender-marked target in about twenty seconds.

Records from a secret Israeli military database showed that of the 53,000 Palestinians…

© E-International