
Photo by VCG / Contributor via Getty Images
The new logic of frictionless fatalities leaves us to an automated fate.
There is a certain shimmer to the language of the new war, the gloss of the corporate campus laid over the battlefield. We read of a “target factory” employed by the Israel Defense Forces in the Gaza Strip, an AI system named Habsora, or “the Gospel.” This system does not command armies; it “produces targets at a fast pace,” accelerating the selection of what to bomb from 50 targets a year to 100 a day. The locution is precise and anodyne, suggesting a process scrubbed of human friction, of the messy, fallible business of judgment. It suggests an assembly line. This is the new aesthetic of conflict, in which the fog of war is not so much pierced as managed and optimized by an algorithm’s unblinking gaze.
The idea of the automaton in battle is not new; it is a ghost that has haunted the martial imagination for centuries, its thread running from the trip-wire crossbows of ancient China to the V-1 flying bomb. In the 1970s, the Soviets developed a computer model, VRYAN, to predict an American nuclear strike, a primitive attempt to render the apocalypse foreseeable, calculable. We have always sought to mechanize decision, to delegate the hardest choices to something other than ourselves. What seems different now is the speed and scale of that delegation. The machine is no longer just a calculator in a distant bunker; it is a partner and a hunter.
Consider the Turkish Kargu-2 quadcopter, a drone that a UN report suggests, in 2020, “hunted down” retreating fighters in Libya without a human in the loop. The phrase itself is unnervingly animate, verging on the biological. Or consider the swarm, that most organic of metaphors, now applied to networked drones that communicate and decide collectively, overwhelming defenses through distributed intelligence. The Israel Defense Forces deployed the first true combat swarm in 2021, a flock of small drones that dynamically allocated tasks among themselves to find and fix rocket launch sites. The human role was to set the mission, wind the clock, and let it run. The actual fighting, the countless small decisions of who looks where, who guides what, was left to the logic of the swarm.
The theorists, in their cool, measured way, call this “centaur” warfighting: human intuition riding atop machine precision. The distinction echoes the ancient Greeks, the difference between techne, or craft skill, and phronesis, the ability to judge well. We are told that the machine possesses the first in abundance but is innocent of the second. An AI can win a simulated dogfight 5-0 against a seasoned human pilot, as a DARPA program demonstrated, but it cannot understand why the fight is happening. It can identify a camouflaged tank from 30,000 feet, but it cannot weigh the political cost of its destruction.
The tacit promise is that the human, the centaur’s head, will supply this missing wisdom. Yet the very structure of the system, which looks less like a centaur than a Minotaur, with the machine for a head and the human for a body, seems to diminish that possibility. When the kill chain is compressed from hours to seconds, when a targeting solution is served up by an algorithm that has fused petabytes of data, the space for dissent, for the second thought, for the intrusion of human doubt, narrows to a sliver. The momentum is with the mechanism. To question its recommendation is to introduce friction into a system designed to eliminate it. We hear of so-called automation bias, the tendency to over-trust the machine, but this is not merely a psychological tic. It is the logical consequence of a system that presents its conclusions with an authority no human analyst could muster.
Photo by NurPhoto / Contributor via Getty Images
A peculiar cultural dissonance attends this shift. In the West, particularly in the United States, the turn to the algorithm is driven by a deep aversion to casualties — our own, that is. Drones and robots are presented as a humane choice, a way to project force without risk, to engage in conflict while remaining mostly untouched by it. The warrior ethos, with its valorization of courage and sacrifice, is quietly reconfigured. The new hero is not the fighter ace but the operator in a desert container, or perhaps the coder in camouflage, the new warrior-geek whose decisive contribution is an efficient piece of software.
This drive for sterile, risk-free conflict creates a demand for the very systems that distance us from the visceral gravity of lethal choice, the friction and danger Clausewitz placed at the heart of war. The pilot of an F-16 feels the aircraft shudder when a missile is released. The drone operator feels a different kind of strain, a more remote and abstracted stress. The commander who approves a target generated by “the Gospel” is another step removed. The algorithm provides the solution; the human provides the authorization. The division of labor is a diffusion of responsibility. If a mistake is made, a wedding party misidentified as an enemy convoy, where does the culpability lie? In the code? In the data it was trained on? In the officer who trusted it? The chain of agency dissolves into a network diagram.
We are assured that these new systems are merely tools, that they are not changing the fundamental nature of war, which remains a contest of human wills. One must wonder. When conflict is waged at the speed of light, when battles are algorithmic duels between competing swarms, when victory is determined by the side with the superior data processing, does it not become something else? Something less a human drama than a systems check, a violent audit? The risk is not that the machines will take over in some cinematic rebellion, but something quieter and more insidious: that in our quest to perfect the instruments of war, we will render it meaningless, a spectacle of automated destruction from which we are numbly estranged. “The Gospel” will generate its targets, the swarm will execute its logic, and we will be left to watch.
Stephen Pimentel