Should 'Killer Robots' Be Banned?
Autonomous weapons could be a military game changer, and many want them banned. Before considering such a move, we need to refine the debate, and America must demonstrate leadership.
Autonomous weapons that select and engage targets on their own might sound far-fetched, but 90 countries and over 50 NGOs are taking their possible development seriously. For two years now, they have come together for sober discussions on autonomous weapons at the United Nations Convention on Certain Conventional Weapons (CCW), most recently in April 2015. Talks are progressing, but the glacial pace of international diplomacy is out of step with rapid advancements in autonomy and artificial intelligence.
Autonomous and intelligent systems are making rapid and startling progress, outpacing humans in fields as diverse as driving, chess, cancer diagnosis, and facial recognition. Similarly, autonomy is becoming increasingly important in the military domain. Drones are not autonomous weapons (they are controlled by human operators), but their rapid incorporation into military operations and their global proliferation point to the speed at which digital technology is changing warfare. Already, 90 states and non-state groups possess drones, and 30 have armed drones or programs to develop them.
While no state has said that it is building autonomous weapons, few have ruled them out. Moreover, the same types of sensors and algorithms that will allow self-driving cars to avoid hitting pedestrians could enable weapons that select and engage targets on their own. In response, over 50 NGOs have joined together in a Campaign to Stop Killer Robots, calling for a legally binding international treaty banning autonomous weapons, similar to the bans on cluster munitions, landmines, and blinding lasers.
The April CCW meetings, held in Geneva, suggested that one of the obstacles to reaching international consensus on autonomous weapons is the lack of clarity on what, exactly, they are. Despite a relative convergence of definitions among major organizations engaged on the issue, including Human Rights Watch, the ICRC, and the U.S. Department of Defense, significant confusion remains.
To some, autonomous weapons are akin to missiles with more sophisticated targeting and greater freedom of action. To others, they are learning, sentient machines with moral agency. Since there is no established definition, neither side is right or wrong, but the divergence in views leads to confusing conversations, with parties talking past one another. Autonomous weapons do not yet exist, so any understanding of their characteristics and potential benefits and risks is inherently speculative.
The most recent CCW meetings did show near-universal agreement that some degree of human involvement is required in the use of force. Delegates expressed this in different ways, calling for “meaningful human control” or “appropriate human judgment,” but all conveyed the same basic sentiment: the use of lethal force requires human responsibility and accountability.
This common ground represents an opportunity for forging consensus, but both supporters and opponents of a ban have avoided precisely defining terms like “meaningful human control,” for fear of ceding negotiating terrain that others might not give back.
Currently, the Campaign to Stop Killer Robots is calling for the establishment of a formal Group of Governmental Experts within the CCW to begin negotiating a treaty banning autonomous weapons, without offering a working definition of what an autonomous weapon is. The odds that states will endorse such an approach are slim. The CCW operates by consensus, so a single objecting state can block a ban, a hurdle that forced previous efforts to ban landmines and cluster munitions outside the CCW.
Proponents of a ban, many of whom are veterans of those previous efforts, understand this. They see the CCW as an “incubator” around which to build a group of like-minded states supporting a ban, who would eventually move outside the CCW to craft their own treaty. But with only four states explicitly calling for a ban (Cuba, Pakistan, Ecuador, and the Holy See), none of which are leading developers of robotic technology and one of which has no army at all, momentum does not appear to be building.
The Campaign’s strategy hinges on a gamble that states will agree first to the principle of a ban and work out later the details of what they are banning. This strategy succeeded for cluster munitions, where the weapon was already well understood. Autonomous weapons are fundamentally different. Because they do not yet exist and what they “are” is still very much in dispute, states can only guess at what capabilities they might be giving up by agreeing to a ban.
Going around governments and appealing directly to the public is unlikely to work, either. Unlike landmines and cluster munitions, autonomous weapons do not yet exist, so there is no demonstrable harm to point to. “Killer robots” may sound bad, but as self-driving cars enter the roadways, the public’s most tangible experience with autonomous systems may be with machines that save lives.
Autonomous weapons raise important questions about accountability, moral responsibility, and safety. At the same time, states have an obligation to defend their territory and citizens, and they are unlikely to forgo systems that might help fulfill that responsibility based on hypothetical fears. Additionally, from a purely humanitarian standpoint, a blanket ban risks outlawing beneficial uses of autonomy that could save civilian lives, much as precision-guided weapons have done.
Addressing the challenge of autonomous weapons should therefore begin by seizing on the area where there is common ground, the need for continued human involvement in the use of force, and building from there. States and NGOs need not give up their positions on what the ultimate outcome should be, a legally binding treaty or something short of one, but setting aside that disagreement for now might help close the gap between positions. With a better understanding of terms like “meaningful human control” or “appropriate human judgment,” a consensus might form around the role humans should play in the use of force. In that case, parties might disagree only over whether the resulting paradigm for human control should be codified in a legally binding instrument.

At the very least, improved dialogue could help clarify differences in positions. Today, much of the discussion of autonomous weapons consists of people talking past one another. Undefined terms like “autonomous system” only add to the confusion, with one person envisioning a Roomba and another a Terminator.
To start, all sides should look at how human control is exercised in the use of force today. Automation is already used for many tasks in warfare. What would be different about autonomous weapons, and what are the implications of those differences for human accountability, moral responsibility, and safety? Autonomous weapons should be evaluated in comparison with existing weapons, not in the abstract.
If proponents of a ban want to make progress, they should start by explaining what capabilities they envision a ban covering. While ban advocates have stated that they object not to military robotics in general but only to autonomous weapons, many of their arguments would seem to suggest otherwise. The argument that autonomous weapons would remove combatants from harm’s way, potentially lowering the threshold for conflict, applies equally to drones and other remotely controlled vehicles. Ban proponents should also clarify whether they see automated air and missile defense systems and fire-and-forget homing munitions, both of which have been used for decades by dozens of militaries, as problematic and as targets of a ban.
States, likewise, should clarify their positions, including their vision for the future role of humans in the use of force. Many states came to the most recent CCW meetings prepared with substantive statements, a promising sign for only the second international meeting on this topic.
Finally, the United States should lead on this issue, leveraging its own internal policy guidance to craft a set of international best practices for addressing autonomy in weapons. After a multi-year study, the U.S. Department of Defense issued a policy directive outlining a robust set of required checks and approvals before any autonomous weapon can be built or fielded. This policy was motivated not by legal requirements, which are addressed in the legal weapons review process, but primarily by safety considerations. No analogous set of international rules or best practices exists, however, for how states ought to approach autonomy in weapons.
U.S. policymakers with experience on landmines and cluster munitions are no doubt wary that any engagement might be seen as “concessions” that NGOs might pocket while still pushing for a ban, but this fear misunderstands U.S. strategic interests. With cluster munitions and landmines, military utility was weighed against humanitarian harm. The United States could adopt its own policies to reduce the weapons’ harm, and whether other states similarly did so was of humanitarian interest, but not strategically relevant. With autonomous weapons, however, the United States has an interest in preserving its legal and political freedom of action to adapt to an emerging technology, but also a strong strategic interest in ensuring other states use autonomy responsibly. U.S. leadership could make a difference.
The international community has come together to address the question of how autonomy should be incorporated into weapons, but differences in views over the end state, driven in large part by philosophical differences about the law of war itself, are hampering an honest exchange of ideas. To make progress, all sides should focus on where there is common ground, the continued importance of humans in the use of force, and build from there. There is little time to waste. Diplomacy will have to move faster if it is to keep pace with technological change.
Paul Scharre, a senior fellow at the Center for a New American Security (CNAS), and Michael C. Horowitz, an associate professor of political science at the University of Pennsylvania, direct CNAS’ Ethical Autonomy Project. From 2009 to 2012, Scharre led the Defense Department working group that produced the DOD policy directive on autonomy in weapons.
Image: Flickr/Swamibu