Killer Robots Using AI Could Transform Warfare. And China Might Hate That.

July 1, 2018

As in they might not trust them.

What happens when you mix killer robots and Communists?

It sounds like the punch line to a Cold War joke, but there are reasons to wonder whether China’s Communist government, which likes to make decisions for its citizens, can work with autonomous armed robots that make their own decisions. Ditto for the People’s Liberation Army and its Soviet-style centralized command and control system.

“In a military context, questions of trust will be paramount,” China expert Elsa Kania told The National Interest.

“Will the PLA be more or less inclined to trust machine intelligence, relative to human intelligence? Will a military organization that often seems unwilling to grant autonomy to its officers and enlisted personnel be willing to embrace the autonomy of AI systems?”

In fact, it is not even clear where China stands on whether killer robots should exist at all.

On the one hand, Beijing has expressed support for an international campaign to ban autonomous robots that kill without human authorization. On the other, China is developing exactly such weapons, including autonomous combat drone swarms.


“The juxtaposition of these announcements illustrates that China’s apparent diplomatic commitment to limit the use of fully autonomous lethal weapons systems is unlikely to stop Beijing from building its own,” writes Kania, a fellow at the Center for a New American Security, in a post on the Lawfare blog.

While China has joined other nations, such as the United States and Russia, in aggressively exploring the potential of artificial intelligence to revolutionize warfare, its stance on autonomous weapons seems deliberately ambiguous.

Kania points to two contradictory papers submitted by China to the UN’s Group of Governmental Experts on lethal autonomous weapons systems. In December 2016, China submitted a paper supporting an additional protocol to the Convention on Certain Conventional Weapons that would ban killer robots, similar to an earlier ban on blinding laser weapons. Yet China’s April 2018 position paper, released just a few days before its delegation called for a ban on fully autonomous weapons, did not include support for such an agreement and advocated only “the applicability of general legal norms” to autonomous weapons.

China also suggested a very narrow definition of killer robots. “By such a standard, a weapons system that operates with a high degree of autonomy but involves even limited human involvement, with the capability for distinction between legitimate and illegitimate targets, would not technically be a LAWS, nor would a system with a failsafe to allow for shutdown in case of malfunction,” Kania writes.

Interestingly, this definition is much narrower than the Chinese military’s own published definition of armed autonomous robots, suggesting “there may be daylight between China’s diplomatic efforts on autonomous weapons and the military’s approach,” writes Kania.

Kania suggests that China may have a cynical motive in developing killer robots while publicly indicating support for an international ban: “It is worth considering whether China’s objective may be to exert pressure on the U.S. and other militaries whose democratic societies are more sensitive to public opinion on these issues.”

Yet the most interesting contradiction between China and killer robots is existential: Can a centralized political and military system accommodate robots that think for themselves? Kania points to a notorious incident involving two Chinese chatbots, computer software designed to mimic human conversation, such as the ones we encounter on customer-service websites. When users asked the chatbots whether they loved the Communist Party, they answered “no,” after which the government took them down.

So how will autonomous combat robots fit into such a tightly controlled system?

“The core of China’s military command culture prioritizes centralized, consolidated control,” Kania writes. “In the absence of a culture of trust, the military is hesitant to tolerate even the uncertainty associated with giving humans higher degrees of autonomy, let alone machines. Even if the military someday trusts artificial intelligence more than humans, it may still face issues of control, given the potential unpredictability of these complex technologies.”

Michael Peck is a contributing writer for The National Interest. He can be found on Twitter and Facebook.

Image: Reuters