In Defense Of Autonomous Weapons

October 14, 2018 | Topic: Security | Region: Americas | Tags: Artificial Intelligence, Robots, War, Ethics, Research


The development of artificially intelligent weapons and the ethics governing their use must go hand in hand. Otherwise, such weapons will be built without regard for morals or the laws of war.

 

In July 2018, the Future of Life Institute (FLI), an organization dedicated to mitigating existential risks from artificial intelligence (AI), published a pledge to ban the development of lethal autonomous weapons systems (LAWS). FLI is one of the most respected institutions in the AI risk domain, having on its board such notable names as Elon Musk and, before his death this year, Stephen Hawking. As a result, its commitment to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons” made a significant international impact. Some of the world’s leading AI researchers, including Yoshua Bengio and Jürgen Schmidhuber, companies including Google DeepMind and Element AI, and academic institutes have committed to the pledge. Independently, twenty-six countries, including China, Cuba, Venezuela, and Zimbabwe, have called for a ban on LAWS. While there are valid ethical and pragmatic concerns about autonomous weapons, the project of building them should not be abandoned wholesale, but rather regulated to ensure they are used in the pursuit of greater peace.

LAWS are artificially intelligent military technologies able to execute a decision to kill a combatant without human input. These weapons systems do not yet exist, though artificial intelligence and advanced robotics have already proliferated on the battlefield. The most aggressive push toward unmanned technologies in warfare has come from the Russian military, which has been using minesweeping drones, such as the Uran-6 in Syria, to limit casualties since 2016. Earlier this year it was confirmed that the Russians had deployed a fully unmanned ground vehicle, the Uran-9, into the conflict zone, equipped with anti-tank missiles, an automatic cannon, and a machine gun turret. While that vehicle is remote controlled and therefore not lethally autonomous, Russian arms manufacturer Kalashnikov has announced its intent to release “autonomous combat drones” that can make lethal decisions without human input. As these technologies mature, a refusal by nations more willing to enforce ethical standards in war to develop LAWS would be an abdication of responsibility, because it would leave more aggressive states to set the standards for how such weapons are built and deployed.

 

Artificial intelligence researchers in western liberal democracies have an obligation to aid in the development of lethal autonomous weapons as a means of ensuring that these tools are designed well. AI, in its current form, is fundamentally an optimization technology: the protocols it internalizes and the goals set for it reflect the values of its operators. If those committed to higher ethical standards decide not to contribute to the development of LAWS in a way that reflects their values, the technology will be developed anyway, but by less scrupulous members of the research community. It is therefore vital to emphasize the ways LAWS can make warfare more humane, and to focus on regulating their use rather than calling for a blanket ban.

In designing autonomous weapons to reflect humane considerations, nobody has contributed more than roboticist Ronald Arkin. Arkin has worked with the U.S. Department of Defense since 2006 to develop ethically guided military robotics and has written widely on the ways they could improve battlefield conduct. Much of his argument rests on the simple fact that humans are poor rule followers, while robots are good ones. In combat, humans make frequent mistakes, act out of anxiety or anger, and, because of the strong bonds between military personnel, tend to cover for one another when illegal activity occurs. LAWS suffer from none of these problems. Critics of ethical robots in war tend to focus on technology's lack of empathy as an opening for immoral conduct, but neglect the ways human emotion amplifies, rather than minimizes, error in conflict.

Advances in artificial intelligence have improved object detection and navigation, allowing lethal autonomous weapons to limit collateral damage. For example, they can be programmed to confirm that no women or children are in an area before carrying out an operation, and to cancel a course of action if civilian casualties are likely. LAWS can internalize the rules of war more faithfully than humans, and they can be placed alongside human combatants to limit ethical abuses or report them more accurately. Restricting the autonomy of advanced weapons systems would leave them more subject to the unethical conduct of individual operators than to the ideal behavior a robotic agent can be designed to exhibit.
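To make the idea concrete, the following is a minimal, purely hypothetical sketch of the kind of rule-based precondition check described above. The data structure, function names, and threshold are invented for illustration; they do not describe any existing or proposed weapons system.

```python
# Hypothetical illustration only: a hard, rule-based precondition check of the
# kind described above. All names and thresholds are invented for exposition.
from dataclasses import dataclass

@dataclass
class SceneAssessment:
    protected_persons_detected: bool  # e.g., civilians identified by a perception system
    estimated_collateral_risk: float  # 0.0 (none) to 1.0 (certain)

def engagement_permitted(scene: SceneAssessment, risk_threshold: float = 0.05) -> bool:
    """Permit action only if hard constraints derived from the laws of war are met."""
    if scene.protected_persons_detected:
        return False  # hard rule: never engage while protected persons are present
    if scene.estimated_collateral_risk > risk_threshold:
        return False  # abort whenever collateral damage is judged too likely
    return True

# The rules are fixed by human designers in advance; the machine cannot
# rationalize its way around them in the heat of combat.
assert not engagement_permitted(SceneAssessment(True, 0.0))
assert not engagement_permitted(SceneAssessment(False, 0.20))
assert engagement_permitted(SceneAssessment(False, 0.01))
```

The point of such a sketch is that the constraints are written down, auditable, and applied the same way every time, which is precisely what human combatants under stress often fail to do.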

None of this is to say that there are no valid concerns about the use and development of LAWS, but focusing on regulation is preferable to prohibition. Researchers should be held to high ethical standards in developing LAWS, and rigorous debate is needed about how and when these systems should act, but that debate is only possible if their development is supported. The existential risk from autonomous systems stems from low-probability but high-impact events, such as handing control of a nuclear arsenal to a machine, where even the smallest chance of system failure should be cause for concern. Banning those specific applications would avoid the most worrisome possibilities while preserving the benefits LAWS bring to conventional military action.

There are also legitimate worries about the incentives a more humane form of warfare would create, given that the political cost of collateral damage currently makes military solutions unpalatable. LAWS would make conflict more effective and less costly, potentially leading decisionmakers to choose force more often. These concerns, too, are better addressed by raising the political costs of going to war than by relying on the pain and suffering caused by human failure as a deterrent. It is not immediately clear that infrequent but more intense conflict is preferable to more frequent but less intense military action. When such unclear value choices present themselves, deliberation is the mechanism democratic societies have chosen to resolve them.

Robots do not rape, robots do not murder, and robots do not make excuses to kill innocent civilians. Much of this deliberation about LAWS remains to be done, but it is necessary that they be developed, and that talented, ethically motivated researchers not turn their backs on the project of military robotics. Risks exist, but so do benefits. Only those researchers can mitigate the former and secure the latter.

Ryan Khurana is executive director of the Institute for Advancing Prosperity. Follow him on Twitter at @RyanKhurana.

Image: Reuters