The Rise of AI Warfare: How Autonomous Weapons and Cognitive Warfare Are Reshaping Global Military Strategy
Photo by Logan Voss
In the 1983 film WarGames, a supercomputer known as WOPR (for War Operation Plan Response) nearly provokes a nuclear war between the United States and the Soviet Union; catastrophe is averted only through the ingenuity of a teenager (played by Matthew Broderick). In the first Terminator film, released a year later, a supercomputer called “Skynet,” designed to protect American nuclear weapons, instead decides to exterminate humanity because it perceives humans as a threat to its existence.
Although these films offered audiences grim scenarios of intelligent machines running amok, they were also prophetic. Artificial intelligence (AI) is now so commonplace that it’s routinely applied during a simple Google search. That it is also being integrated into military strategies is hardly a surprise. What we lack is an understanding of the capacity of these high-tech weapons, both those now ready for use and those in development. Nor are we prepared for systems that have the capacity to transform warfare forever.
Throughout history, it is the human intelligence wielding the technology, not the technology itself, that has won or lost wars. That may change in the future, when human intelligence is focused instead on creating systems that are more capable on the battlefield than those of the adversary.
An “Exponential, Insurmountable Surprise”
Artificial intelligence isn’t a technology that can be easily detected, monitored, or banned, as Amir Husain, the founder and CEO of an AI company, SparkCognition, pointed out in an essay for Media News. Integrating AI elements—visual recognition, language analysis, simulation-based prediction, and advanced forms of search—with existing technologies and platforms “can rapidly yield entirely new and unforeseen capabilities.” The result “can create exponential, insurmountable surprise,” Husain writes.
Advanced technology in warfare is already widespread. The use of uncrewed aerial vehicles (UAVs)—commonly known as drones—in military settings has set off warnings about “killer robots.” What happens when drones are no longer controlled by humans and can execute military missions on their own? These drones aren’t limited to the air; they can operate on the ground or underwater as well. The introduction of AI, effectively giving these weapons the capacity for autonomy, isn’t far off.
Moreover, they’re cheap to produce and cheap to purchase. The Russians are buying drones from Iran for use in their war in Ukraine, and the Ukrainians have built a cottage industry constructing drones of their own for use against the Russians. The relative ease with which a commercial drone can be converted into one with a military application also blurs the line between commercial and military enterprises. At this point, though, humans are still in charge.
A similar problem can be seen in information-gathering systems that have dual uses, including satellites, manned and unmanned aircraft, ground and undersea radars, and sensors, all of which have both commercial and military applications. AI can process vast amounts of data from all these systems and then discern meaningful patterns, identifying changes that humans might never notice. American forces were stymied to some degree in wars in Iraq and Afghanistan because they could not process large amounts of data. Even now, remotely piloted UAVs are using AI for autonomous takeoff, landing, and routine flight. All that’s left for human operators to do is concentrate on tactical decisions, such as selecting attack targets and executing attacks.
AI also allows these systems to operate rapidly, determining actions at speeds that are seldom possible if humans are part of the decision-making process. Decision-making speed has long been among the most important aspects of warfare. If AI systems go head-to-head against humans, AI will invariably come out ahead on speed. However, the possibility that AI systems will eliminate the human factor altogether terrifies people who don’t want to see an apocalyptic scenario move from celluloid to reality.
Automated Versus Autonomous
A distinction needs to be made between the term “autonomous” and the term “automated.” If we are controlling the drone, then the drone is automated. But if the drone is programmed to act on its own initiative, we would say it is autonomous. But does “autonomous weapon” refer to the actual weapon—i.e., a missile on a drone—or to the drone itself? Take, for example, the Global Hawk military UAV (drone). It is automated insofar as it is controlled by an operator on the ground, and yet if it loses communication with the ground, the Global Hawk can land on its own. Does that make it automated or autonomous? Or is it both?
The most important question is whether the system is safety-critical. In plain terms, that means whether it can decide to use a weapon against a target without intervention from its human operator. It is possible, for example, for a drone to strike a static military target on its own (such as an enemy military base) but not a human target, because of the fear that innocent civilians could be injured or killed as collateral damage. Many countries have already developed drones with real-time imagery capable of acting autonomously in the former instance, but not when it comes to human targets.
Drones aren’t the only weapons that can act autonomously. Military systems are being developed by the U.S., China, and several countries in Europe that can act autonomously in the air, on the ground, in water, and underwater with varying degrees of success.
In the U.S., Europe, and China, several types of autonomous helicopters are in development, designed so that a soldier in the field can direct them with a smartphone. Autonomous ground vehicles, such as tanks and transport vehicles, and autonomous underwater vehicles are also in development. In almost all cases, however, the agencies developing these technologies are struggling to make the leap from development to operational implementation.
There are many reasons for the lack of success in bringing these technologies to maturity, including cost and unforeseen technical issues, but equally problematic are organizational and cultural barriers. The U.S. has, for instance, struggled to bring autonomous UAVs to operational status, primarily due to organizational infighting and prioritization in favor of manned aircraft.
The Future Warrior
On the battlefield of the future, elite soldiers may rely on a head-up display that feeds them a wealth of information, collected and routed through supercomputers carried in their backpacks and processed by an AI engine. With AI, the data is instantly analyzed, streamlined, and fed back into the head-up display. This is one of many potential scenarios presented by U.S. Defense Department officials. The Pentagon has embraced a relatively simple concept: the “hyper-enabled operator.”
The objective of this concept is to give Special Forces “cognitive overmatch” on the battlefield, or “the ability to dominate the situation by making informed decisions faster than the opponent.” In other words, they will be able to make decisions based on the information they are receiving more rapidly than their enemy. The decision-making model for the military is called the “OODA loop” for “observe, orient, decide, act.” That will come about using computers that register all relevant data and distill them into actionable information through a simple interface like a head-up display.
This display will also offer a “visual environment translation” system designed to convert foreign language inputs into clear English in real time. Known as VITA, the system encompasses both a visual environment translation effort and voice-to-voice translation capabilities. The translation engine will allow the operator to “engage in effective conversations where it was previously impossible.”
VITA, which stands for Versatile Intelligent Translation Assistant, offers users language capabilities in Russian, Ukrainian, and Mandarin Chinese. Operators could use their smartphones to scan a street in a foreign country, for example, and immediately obtain a translation of street signs in real time.
Adversary AI Systems
Military experts divide
