This could soon change as scientists push the boundaries of artificial intelligence, which could one day open the door to drones that make independent “decisions” with life-and-death implications. Of course, unmanned vehicles, or robots in general, are not intelligent in the human sense of the word, nor can they be said to be sentient. But advances in computing power are giving machines greater situational awareness and adaptability. As those capabilities continue to improve, drones could one day become “fire-and-forget” weapons, with far greater attention spans and endurance than human beings, capable of lingering over a target for several hours and making split-second decisions to strike when an opportunity arises. Moreover, the incentives for giving combat roles to machines and entrusting them with life-and-death decisions will only grow as the costs of training and retaining soldiers continue to rise (another disadvantage of using soldiers: they have grieving families and loved ones).
Giving robots license to kill is only the logical next step in the increasingly videogame-like nature of warfare. Their deployment adds yet another layer of distance between the perpetrator of violence and the victim, which lowers the psychological threshold for using force. Once the decision is made to give drones combat duty, the incentive will be to make them as “free” as possible, as the side that acts quickest, with the fewest decision chokepoints and the least human input, will likely prevail in a confrontation.
J. Michael Cole is a Taipei-based journalist, a Senior Fellow at the China Policy Institute, University of Nottingham, a graduate in War Studies from the Royal Military College of Canada, and a former analyst at the Canadian Security Intelligence Service.
Editor's Note: This piece first appeared in February of this year. It is being recirculated due to reader interest.