Let the Robots Do It

September 20, 2011 | Topic: Military Strategy, Terrorism | Region: United States, Pakistan | Blog Brand: Paul Pillar

The benefits of removing human judgment from killing on the battlefield.

A front-page story in Tuesday's Washington Post discusses how advances in military technology make possible complete automation of killing on the battlefield. This would take delegation to machines a couple of steps beyond what we see today with the use of drones to hunt down militants in places like northwest Pakistan. In current practice, humans remotely pilot the drones; after video of a potential target is transmitted back to human monitors, a person with sufficient authority decides whether to fire a missile and kill the target. In the fully automated scenario, the drones and perhaps ground-based robotic systems would be programmed to identify targets according to specified criteria and, if the criteria are met, to make the kill without further human intervention.

The fully automated mode unsurprisingly raises concerns, some of them evocative of Michael Crichton's 1970s thriller Westworld, in which lethal robots go out of control and start killing unsuspecting humans. Doesn't the use of lethal force in situations where targeting information is ambiguous and the repercussions of collateral damage are significant (conditions that apply to the Pakistan theater, where most of the drone strikes have occurred) require human judgment? The better questions to ask are: How is such human judgment to be exercised? What are the standards or criteria by which the human involved is making decisions? Drone strikes against terrorists are a tool that should not be given up, but one of the principal shortcomings in their use to date is opacity about the criteria behind each individual decision to shoot or not to shoot. Some of the uncertainty no doubt concerns whether a potential target is really who he is believed to be. But even if uncertainties about identity are resolved, what determines whether a particular individual should be subject to this form of long-distance execution? The only thing preventing the whole process from being arbitrary and capricious is a clear set of criteria that answer that question.

Programming a robot weapon to make those determinations instead of a human forces the criteria to be clear. A vague sense of what makes someone enough of a bad guy to be bumped off from high altitude is not the sort of basis for a decision that can be translated into computer code. In practice, full automation is never going to replace all operations like the drone strikes in Pakistan, because unexpected opportunities and attributes of potential targets cannot all be anticipated. But forcing the construction of explicit standards for pulling the trigger would put the entire effort, even the part involving humans, on firmer moral and legal ground.