Should 'Killer Robots' Be Banned?

Autonomous weapons could be a military game changer, and many want them banned. Before considering such a move, we need to refine the debate, and America must demonstrate leadership.

Autonomous weapons that select and engage targets on their own might sound far-fetched, but 90 countries and over 50 NGOs are taking the possibility seriously. For two years now, they have come together for sober discussions on autonomous weapons at the United Nations Convention on Certain Conventional Weapons (CCW), most recently in April 2015. Talks are progressing, but the glacial pace of international diplomacy is out of step with rapid advances in autonomy and artificial intelligence.

Autonomous and intelligent systems are making rapid and startling progress, outpacing humans in fields as diverse as driving, chess, cancer diagnosis, and facial recognition. Similarly, autonomy is becoming increasingly important in the military domain. Drones are not autonomous weapons – they are controlled by human operators – but the rapid incorporation of drones into military operations and their global proliferation point to the speed at which digital technology is changing warfare. Already 90 states and non-state groups possess drones, and 30 have armed drones or programs to develop them.

While no state has said that it is building autonomous weapons, few have ruled them out. Moreover, the same types of sensors and algorithms that will allow self-driving cars to avoid hitting pedestrians could enable weapons that select and engage targets on their own. In response, over 50 NGOs have joined together in the Campaign to Stop Killer Robots, calling for a legally binding international treaty banning autonomous weapons, similar to the bans on cluster munitions, landmines, and blinding lasers.

The April CCW meetings, held in Geneva, suggest that one of the obstacles to reaching international consensus on the challenges of autonomous weapons is the lack of clarity about what, exactly, they are. Despite a relative convergence of definitions among major organizations engaged on the issue, including Human Rights Watch, the ICRC, and the U.S. Department of Defense, significant confusion remains.

To some, autonomous weapons are akin to missiles with more sophisticated targeting and greater freedom of action. To others, they are learning, sentient machines with moral agency. Since there is no established definition, neither side is right or wrong, but the divergence in views leads to confusing conversations, with parties talking past one another. Autonomous weapons do not yet exist, so any understanding of their characteristics and potential benefits and risks is inherently speculative.

The most recent CCW meetings did show near-universal agreement that some degree of human involvement is required in the use of force. Delegates expressed this in different ways, calling for “meaningful human control” or “appropriate human judgment,” but all conveyed the basic sentiment that the use of lethal force requires human responsibility and accountability.

This common ground represents an opportunity for forging consensus, but both proponents and opponents of a ban have avoided precisely defining terms like “meaningful human control,” fearful of ceding negotiating terrain that others might not give back.

Currently, the Campaign to Stop Killer Robots is calling for the establishment of a formal Group of Governmental Experts within the CCW to begin negotiating a treaty banning autonomous weapons, but without offering a working definition of what an autonomous weapon is. The odds that states will endorse such an approach are slim. The CCW operates by consensus, meaning a single objecting state can block a ban, a hurdle that forced previous efforts to ban landmines and cluster munitions outside of the CCW.

Proponents of a ban, many of whom are veterans of these previous efforts, understand this. They see the CCW as an “incubator” around which to build a group of like-minded states supporting a ban, which would eventually move outside the CCW to craft their own treaty. But with only four states explicitly calling for a ban (Cuba, Pakistan, Ecuador, and the Holy See), none of which are leading developers of robotic technology and one of which has no army at all, momentum does not appear to be building.

The Campaign’s strategy hinges on a gamble that states will be willing to agree first to the principle of a ban and work out the details of what they would be banning later. This strategy succeeded for cluster munitions, where the weapon was already well understood. Autonomous weapons are fundamentally different. Because they do not yet exist and what they “are” is still very much in dispute, states can only guess at what capabilities they might be giving up by agreeing to a ban.

Going around governments and appealing directly to the public is unlikely to work, either. Unlike landmines and cluster munitions, autonomous weapons do not yet exist, so there are no demonstrable effects of their harm to point to. “Killer robots” may sound bad, but as self-driving cars enter the roadways, the public’s most tangible experience with autonomous systems may be with ones that save lives.