The Chief of Staff of the United States Air Force, Gen. David L. Goldfein, created a sensation at the Dubai Airshow in November 2019 by revealing plans to automate the kill chain during lethal engagements.
In this design, humans would enter the picture only at the final stage of target engagement, while the rest of the kill chain – detection of objects, their identification, the decision to initiate lethal engagement and the assignment of targets to weapon platforms – would be fully automated. Such automation answers the need for the quicker response times that future threats will demand.
To quote Goldfein, “In most kill chains today there is a human in every step of the loop, but the future would require humans on the loop – not in the loop, making final decisions for lethal or non-lethal fires.”
It is also argued that, in the future, even this last stage would be automated, with a human remaining ‘on the loop’ only to oversee the operation and exercise veto power.
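To make the ‘in the loop’ versus ‘on the loop’ distinction concrete, the short Python sketch below models a toy kill chain. Every stage name, classification and decision rule in it is a hypothetical illustration for this article only, not a description of any real targeting system: the first pipeline pauses for operator approval at every step, while the second runs end to end autonomously and offers the supervising human only a final veto.

```python
# Conceptual sketch only: a toy kill chain contrasting "human in the loop"
# (a person approves every stage) with "human on the loop" (the chain runs
# autonomously and the person can only veto the final engagement).
# All names, classifications and rules below are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Track:
    object_id: str
    identified_as: str = ""
    engage: bool = False
    weapon: str = ""


# Automated stages of the hypothetical kill chain.
def detect() -> List[Track]:
    return [Track(object_id="contact-01")]


def identify(t: Track) -> Track:
    t.identified_as = "hostile-uav"  # placeholder classification
    return t


def decide(t: Track) -> Track:
    t.engage = t.identified_as.startswith("hostile")
    return t


def assign_weapon(t: Track) -> Track:
    t.weapon = "interceptor-A" if t.engage else ""
    return t


STAGES: List[Callable[[Track], Track]] = [identify, decide, assign_weapon]


def human_approves(stage_name: str, t: Track) -> bool:
    # Stand-in for an operator decision; here it simply auto-approves.
    print(f"[operator] approve {stage_name} for {t.object_id}? -> yes")
    return True


def run_human_in_the_loop() -> None:
    # A person confirms every stage before the chain advances.
    for t in detect():
        for stage in STAGES:
            if not human_approves(stage.__name__, t):
                return
            t = stage(t)
        if t.engage and human_approves("engagement", t):
            print(f"engaging {t.object_id} with {t.weapon}")


def run_human_on_the_loop(veto: bool = False) -> None:
    # The chain runs end to end on its own; the person only supervises
    # and may veto the final engagement.
    for t in detect():
        for stage in STAGES:
            t = stage(t)
        if t.engage and not veto:
            print(f"engaging {t.object_id} with {t.weapon}")
        elif veto:
            print(f"[operator] veto: engagement of {t.object_id} withheld")


if __name__ == "__main__":
    run_human_in_the_loop()
    run_human_on_the_loop(veto=True)
```

The point of the contrast is where human judgment sits: in the first pipeline it gates every transition, whereas in the second it is reduced to supervision and a single override, which is precisely what shortens the response time.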
Rapidly advancing technology has lately begun to transform long-standing concepts and beliefs, and warfare is no exception: some of its seemingly enduring concepts are being reshaped under the impact of technology. Keeping humans in the chain of killing and destruction is one such issue; the term for it, used increasingly since computers and networking proliferated in warfare, is ‘keeping the human in the loop’.
Leaving the decision to kill or destroy entirely to machines is invariably seen as cold-blooded and a gross violation of human ethics, and even military leaders are generally strongly opposed to the idea. In certain cases, however, such automation already exists, with humans merely on the loop.
In air defence and missile defence, for example, the time available to make a decision is extremely limited, especially against modern high-speed aircraft and missiles, and most of the time it is an “either they or us” situation.