Killer Robots: Moral Concerns Vs. Military Advantages

Ethical concerns over computers making life and death decisions are real, and they’re important

The myriad opinions on the prospect of autonomous weapons in the not-too-distant future seem to portend a hopeless situation: either the U.S. is doomed to defeat by an enemy who fully embraces the war-fighting advantages of full autonomy, or Americans are destined to see the complete erosion of the humanity and human rights principles that have long guided U.S. forces in combat. To some extent, both sides are right. Senior U.S. leadership can neither dismiss ethical concerns nor cede technological advantage to potential adversaries. It should recognize, however, that America's moral judgment on autonomous weapons is likely to change with context. So rather than completely disavow full autonomy in favor of concepts like human-machine teaming, DoD leadership should look toward systems capable of both, with adjustable degrees of autonomy, just as it has done for decades. Such systems would enhance warfighting with a man in the loop today and provide even greater capability if needed in a future major war.

Regardless of advances in computer processing, fully autonomous weapon systems operating without human oversight and control will make mistakes. Perhaps technology will progress to the point where mistakes are few and far between, but they will happen. On this point, the critics are right: when those mistakes involve a system's failure to discriminate between enemy combatant and innocent civilian, atrocities will result.

Of course, the sad reality is that unintended civilian casualties occur in every conflict, even when a human controls the weapon and makes the decisions. Two questions arise then. First, will lethal autonomous systems be more or less likely to make mistakes than those operated by humans? And second, are civilian casualties at the hands of a human inherently less reprehensible than at the hands of a computer?

On the first question, while artificial intelligence and machine learning may not yet match human ability to discriminate targets from non-targets, they are trending ever closer. A computer can also process inputs and weigh them against decision-making criteria exponentially faster than a human, without emotion or fatigue degrading its judgment. Technology may therefore be approaching a point where computers can make decisions faster and better than a human, which would make autonomous systems less prone to cause inadvertent civilian casualties than human-controlled weapons.

On the second question, the rational argument is that there is no difference between accidental civilian deaths caused by a human and those caused by a computer. In fact, the likelihood that autonomous systems would result in fewer unintended deaths should outweigh any qualms about man-versus-machine responsibility for the few that still occur. But humans are not purely rational. In American society, for reasons related to emotion, moral belief, and perhaps the need for human accountability, there is an innate difference. Some believe unintended civilian deaths at the hands of a cold, unfeeling machine are somehow worse than the same deaths at the hands of a living, breathing human, whose conscience will bear the burden. Senior defense leadership should acknowledge this. Trying to hand-wave it away with a numbers-based argument, however rational, risks losing public trust in America's defense institutions.

On the other hand, society's collective appraisal is not set in stone; it would likely evolve with context. In a major war with a near-peer adversary fielding autonomous weapons against U.S. forces, most would feel less squeamish about full autonomy than they would in conflicts like the recent wars in Iraq and Afghanistan. China and Russia, either of which the United States could conceivably face in a future kinetic war, are both pursuing greater autonomy in weapons systems, and their records suggest neither will fret much over the human rights implications. Faced with that scenario, not only would society's ethical qualms likely diminish, but most Americans would then consider it senior defense leadership's duty to ensure U.S. troops are equipped with at least comparable technology. This is not meant as a cynical characterization of Americans as fickle. Even those with the firmest moral convictions would likely give two different answers in those two very different military contexts.

So, given the evolutionary nature of societal values, why not design systems with adjustable degrees of autonomy, ranging from a human fully in control at one end of the spectrum to a computer fully in control at the other? Operating modes in between might look like a computer processing criteria, selecting a target, and recommending the shot, or a computer initiating an engagement but requiring a human to authorize it. This leaves room for concepts within the Third Offset Strategy, like human-machine collaboration and combat teaming, which envision significant autonomy but still rely on a human decision to pull the trigger. These concepts hold significant potential for advances in war-fighting capability and live up to American society's moral expectations of today. Designing platforms within these concepts that are also capable of operating in a fully autonomous mode, once the technology is mature enough to do so, could ensure the U.S. military does not find itself at a decided disadvantage when societal values shift in the face of a future major war against a technologically advanced adversary.
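To make the idea concrete, the minimal sketch below shows how adjustable autonomy might be expressed in software as selectable operating modes. It is purely illustrative: the mode names and the may_engage gate are hypothetical assumptions, not drawn from any fielded system or program of record. The point is simply that a human decision remains the gate on the trigger in every mode short of full autonomy.

```python
from enum import Enum, auto

class AutonomyMode(Enum):
    """Notional operating modes along the human-machine control spectrum."""
    MANUAL = auto()           # human detects, selects, and engages the target
    RECOMMEND = auto()        # computer recommends a target; human decides and fires
    HUMAN_AUTHORIZE = auto()  # computer selects and cues an engagement; human must authorize
    FULL_AUTO = auto()        # computer selects and engages within operator-set constraints

def may_engage(mode: AutonomyMode, human_authorized: bool) -> bool:
    """Return True only if the current mode and human input together permit engagement."""
    if mode is AutonomyMode.FULL_AUTO:
        # Full autonomy: the operator's earlier choice of mode is the human decision.
        return True
    # Every other mode keeps an explicit human decision as the gate on the trigger.
    return human_authorized

# Example: in HUMAN_AUTHORIZE mode, no engagement occurs without explicit consent.
assert may_engage(AutonomyMode.HUMAN_AUTHORIZE, human_authorized=False) is False
assert may_engage(AutonomyMode.FULL_AUTO, human_authorized=False) is True
```

In such a design, shifting along the spectrum is an operator or commander setting, not a new weapon, which is what allows the same platform to satisfy today's expectations of human control and tomorrow's demand for speed.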

Scalable autonomy already exists in systems fielded on U.S. platforms for decades, particularly in air and missile defense. Throughout their service, these systems have had the potential to mistake friendly forces, or even civilians, for enemy forces, but operators mitigate that risk by switching the fully autonomous features on and off as conditions warrant. Meanwhile, the more advanced capabilities remain ready at hand for any future major fight in which a human operator becomes inundated or cannot react fast enough to incoming threats.

Ethical concerns over computers making life and death decisions are real, and they are important. The American public will not countenance their dismissal. At the same time, the Defense Department has an obligation to ensure the U.S. military maintains its edge in the face of rapid advances in artificial intelligence and machine learning around the globe. U.S. leadership can reconcile these two imperatives by recognizing that societal values evolve by nature and by developing new platforms and systems built around scalable autonomy. Doing so would leverage technological advances within the bounds of contemporary American values while retaining the capacity for even greater military advantage in a future fight.

Colin Roberts is a Visiting Military Fellow at the nonprofit, nonpartisan RAND Corporation and a surface warfare officer in the U.S. Navy. The opinions expressed here are the author’s and do not represent the official position of the U.S. Navy, Department of Defense, or the U.S. Government.

Image: U.S. Air Force Photo