China's Olive Branch to Save the World from AI Weapons

February 1, 2019 | Topic: Security | Region: Asia | Tags: China, War, Artificial Intelligence, Weapons, National Security

Is China open to arms control over AI weapons development? The United States should find out.

The fundamental problem with great-power competition as a strategy is that it offers little in the way of concrete, realizable objectives. If China is successfully marketing big, yellow bananas in Bhutan, then Uncle Sam must endeavor to get some bigger, more yellow bananas up into the Himalayan kingdom pronto. The seeming prize of this contest would be a declaration by the King of Bhutan, splashed across the pages of Thimphu's most influential newspapers, that the "Indo-Pacific" strategy is a great success and that American bananas are indeed far superior to Chinese bananas by any metric.

Such an approach is myopically zero-sum, not to mention that it defies the law of comparative advantage while attempting to magically overstep cultural, historical and geographic factors too—always a dubious proposition. Significantly more disturbing than the consistent and enormous waste of resources, however, is the fact that the apparently endless struggle for "influence" does not pay any particular attention to the dark side of competition, namely the possibility of escalation (both purposeful and inadvertent), as well as preemption and misperception.

All these concerns are present in the developing strategic rivalry with China over artificial intelligence (AI). Last fall, Kai-Fu Lee, CEO of Sinovation Ventures and notably a former executive at both Apple and Google, boldly asserted that "AI's center of gravity [is shifting] . . . away from the United States and toward China." Yet, even as it is essential to understand Beijing's strides in the AI domain, it is simultaneously vital to keep our minds open to alternative and even more hopeful futures. For example, consider what appears to be an "olive branch" from the Middle Kingdom in an article from a mid-2018 edition of the prestigious journal Contemporary International Relations [现代国际关系] titled "Pondering Arms Control and Lethal Autonomous Weapons Systems [致命性自主武器系统及军控思考]." It was written by two authors, Xu Nengwu [徐能武] and Ge Hongchang [葛鸿昌], of China's National University of Defense Technology in Changsha.

In the first sentence, the authors note that Xi Jinping has called upon China's scientists to "accelerate the building of a strong country [加快建设制造强国]," with emphasis on "big data, artificial intelligence" and other similar cutting-edge technological tools. Without contradicting this top-level guidance directly, however, Xu and Ge quite boldly articulate the view that such development should not be without limit: "the international community maintains that the control of lethal autonomous weapons systems is difficult, but necessary." They recommend a framework negotiated through the UN that emphasizes both "transparency [透明]" and "legal principles [法治原则]."

Reviewing the history of AI in modern warfare, these Chinese authors highlight the fact that the United States began to experiment with these technologies intensively during the Vietnam War. But it is recognized that "UAVs . . . have become the customary weapon for the US on the counter-terrorism battlefields [无人机 . . . 成为美国反恐战场上的常备武器]." Not surprisingly, the authors recount some of the U.S. Navy's latest impressive feats with unmanned vehicles, including both the X-47B carrier-launched drone aircraft and the long-range submarine chaser "Sea Hunter [海上猎手]." Yet, Xu and Ge also observe that Washington does not have a monopoly on such systems, noting that Russia has deployed an unmanned armored vehicle into combat operations in Syria with considerable success. The authors explain that Moscow is "not keen to show weakness [不甘示弱]" in the accelerating contest for AI weaponry. Their conclusion is that lethal autonomous systems are already at work in all domains, including "sea, land, air, and space . . . [海, 陆, 空, 太空]."

Unlike most defense analysts, who are content to passively observe this process, these authors contend that the intensifying arms race in lethal AI systems constitutes a "Pandora's box [魔盒]" that could lead the world to "the apocalypse [终结时代]." Great Western thinkers and innovators from Stephen Hawking [霍金] to Elon Musk [马斯克] are cited. It is indeed noted that Musk warned in fall 2017 that an AI arms race could even be the spark for World War III. Western experts may smirk at a Chinese argument that invokes human rights, but Xu and Ge articulate concern that targeted killings by drones violate a "person's right to a fair trial." Moreover, they view such robotic systems as lacking in emotion or ethical reasoning and as having a "psychological distance [心理的距离]" that will lead invariably to "massacres and such humanitarian disasters [滥杀等人道主义灾难]." Illustrating the important point that Chinese defense scientists are also increasingly perturbed by the possibility of escalation in an AI strategic environment, Xu and Ge evince the concern that AI weaponry "significantly lowers the threshold of war [这极大降低了战争的门槛]."

The article is not ultra-specific regarding its recommendations, but the authors do state very clearly that China, as both a permanent member of the UN Security Council and a rising power, should play an active role in arms control. Thus, China will be acting "as a responsible great power [作为负责任大国]." The authors duly note that the UN has several commissions and offices that should take an interest in this matter, including obviously the UN Advisory Committee on Disarmament. True, Xu and Ge see these proposals as partly aimed at countering "hegemonic country unilateral [反对霸权国家单方]" dominance of rule-making structures. Still, the proposal to create an "international consultative framework" for arms control with respect to lethal autonomous weapons systems is an unusual and almost wholly positive step. A relatively concrete initial step, according to the authors, could be the formation of a small "expert group [专家小组]," comprising specialists in robotics, computer science, law, ethics, and defense, to begin developing both an agenda and a framework for substantive progress.

Lest one get too excited, one should realize that the above proposal is at this point just a scholarly article, not any kind of official Chinese proposal—perhaps only a trial balloon. American conservatives, moreover, are certain to howl that this is a "Trojan Horse," designed to lull the U.S. defense establishment into complacency while China surges ahead in developing AI weapons capabilities. Yet that view is excessively paranoid and ignores the many benefits that could accrue to global security from some kind of basic, foundational agreement limiting AI weaponry. As with the extremely complex climate change discussions, such negotiations with Beijing on AI weaponry would hardly be easy, but the imperative to explore such options is starkly obvious.

Lyle J. Goldstein is research professor in the China Maritime Studies Institute (CMSI) at the United States Naval War College in Newport, RI. In addition to Chinese, he also speaks Russian, and he is an affiliate of the new Russia Maritime Studies Institute (RMSI) at the Naval War College. You can reach him at [email protected]. The opinions in his columns are entirely his own and do not reflect the official assessments of the U.S. Navy or any other agency of the U.S. government.

Image: Reuters.