Can Chinese and Russian Drones Attack Without Human Approval?

The Pentagon is worried that this might indeed be the case. Soon, if not already.

Russia and China are not observing ethical limits or considerations in their widespread application of Artificial Intelligence (AI) across the spectrum of civilian and military operations, according to the Pentagon leaders in charge of America’s military applications of AI.

“We know that China and Russia are developing and exporting AI-enabled surveillance technologies and autonomous combat systems without providing any evidence of adequate technical or ethical safeguards and policies,” Nand Mulchandani, Acting Director of the Joint Artificial Intelligence Center, told reporters, according to a Pentagon transcript.

While Mulchandani addressed issues related to China’s domestic use of AI for things like censorship and facial recognition, many military applications are raising alarm from a national security perspective. Certain ethical constraints may, quite simply, not exist for Russian or Chinese weapons developers, who could quickly field various kinds of warfighting robots. For instance, armed drones could advance upon enemy forces without human teleoperation, using built-in sensors to avoid obstacles, find targets and fire weapons autonomously.

The Pentagon has sustained its existing man-in-the-loop doctrine, despite rapid advances in autonomy and AI that make it possible for unmanned systems to detect, track and destroy targets without human intervention. All decisions regarding the use of lethal force must, according to Pentagon doctrine, be made by a human being. However, the technical ability to perform all of these functions without human decision-making is already here. Russian or Chinese drones could perhaps operate with complete autonomy, potentially compromising the ability to properly distinguish civilians from combatants. What if even ballistic missile target selection were completely autonomous? An error would bring catastrophic consequences.
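
To make the doctrinal distinction concrete, here is a minimal, purely illustrative sketch in Python of what a man-in-the-loop gate amounts to in software; the confidence threshold, field names and functions are invented for illustration, not drawn from any fielded system. The machine may recommend, but only a human decision releases a weapon.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TargetRecommendation:
    track_id: str
    classification: str   # e.g. "armored vehicle", "unknown" (illustrative labels)
    confidence: float     # 0.0 to 1.0 from the onboard classifier

def request_engagement(rec: TargetRecommendation,
                       operator_approves: Callable[[TargetRecommendation], bool]) -> bool:
    """Hypothetical sketch: lethal force is released only if a human operator
    explicitly approves the machine's recommendation (the man-in-the-loop gate)."""
    if rec.confidence < 0.9:
        return False                     # low-confidence tracks are never even offered
    return bool(operator_approves(rec))  # the human decision is the final gate

# A fully autonomous design would simply skip the operator call above,
# which is exactly the step at issue in this debate.
```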

Alongside ethical considerations, there are technical reasons why AI and computer automation are not perfectly suited to all kinds of operations. While progressing quickly, an AI system is only as effective as its database. Certain algorithms may therefore not yet have the reliability required to accurately identify everything that is detected, and complete “trust” in AI-generated algorithms, as it is called, has not been established in all circumstances. A great deal of work and research is now going into engineering “real-time” analytics, enabling faster machine learning so that databases can receive, process and integrate new information during the analytical process itself. Much of this is still in the early phases of engineering, however, so there are likely scenarios, conditions or nuances that even the most advanced AI-empowered systems cannot yet recognize.
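
As a rough sketch of what such “real-time” learning can look like, the hypothetical example below uses scikit-learn’s partial_fit interface to fold each newly labeled observation into the model as it arrives, rather than waiting for a full retraining cycle; the feature values and labels are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical sketch of "real-time analytics": an online classifier that keeps
# learning as new labeled sensor observations arrive, instead of relying on a
# fixed, pre-built database. Feature meanings and labels are invented here.
classes = np.array([0, 1])            # 0 = benign object, 1 = hostile drone
model = SGDClassifier(loss="log_loss")

def fold_in_observation(features: np.ndarray, label: int) -> None:
    """partial_fit incorporates one new example without retraining from scratch."""
    model.partial_fit(features.reshape(1, -1), [label], classes=classes)

def hostile_probability(features: np.ndarray) -> float:
    """Current model estimate that the detected object is hostile."""
    return float(model.predict_proba(features.reshape(1, -1))[0, 1])

# Example stream: each new sighting updates the model before the next assessment.
fold_in_observation(np.array([0.2, 0.9, 0.1]), 1)
fold_in_observation(np.array([0.8, 0.1, 0.7]), 0)
print(hostile_probability(np.array([0.3, 0.8, 0.2])))
```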

Accordingly, even small questions of accuracy or reliability take on great significance when it comes to using lethal force. In addition, AI developers are learning how enemies can work to “spoof” or “confuse” certain sensing algorithms. For this reason, technology developers at places like Booz Allen Hamilton are working diligently on engineering a more holistic, AI-empowered form of detection that weighs a host of variables simultaneously, helping to foster an integrated and therefore less “spoofable” picture.
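
A loose illustration of that holistic idea, with invented sensor names and weights: several independent sensing modalities are each scored and then fused, so that spoofing any single one is not enough to flip the overall assessment.

```python
# Hypothetical fusion sketch: each modality reports an independent confidence
# that the detected object is a hostile drone. Weights are illustrative only.
MODALITY_WEIGHTS = {
    "radar": 0.35,
    "electro_optical": 0.30,
    "radio_frequency": 0.20,
    "acoustic": 0.15,
}

def fused_threat_score(scores: dict[str, float]) -> float:
    """Weighted fusion of per-sensor confidences (each in [0, 1]).
    A spoofed camera feed alone cannot drive the fused score to zero
    if radar and RF emissions still look hostile."""
    return sum(MODALITY_WEIGHTS[name] * scores.get(name, 0.0)
               for name in MODALITY_WEIGHTS)

example = {"radar": 0.9, "electro_optical": 0.1, "radio_frequency": 0.8, "acoustic": 0.7}
print(fused_threat_score(example))   # about 0.61 despite the "spoofed" optical channel
```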

U.S. military scientists and AI developers often explain that there are likely some defensive uses of AI, such as firing interceptors or detecting and destroying incoming rockets, missiles, artillery or drones. These applications, already under development in areas such as drone swarm defense and ballistic missile interception, are quite different from offensive uses of lethal force against enemies.

Drone defense is one area where this work has been underway with some of the Pentagon’s industry partners, such as Raytheon. For example, AI-empowered sensor networks are being developed to detect approaching drone threats, categorize or identify them and then choose the optimal defensive solution. Some threat scenarios might call for a “kinetic” strike, whereas others might be best defeated by electronic warfare jamming. AI-enabled algorithms can discern the threat scenario, compare data against a vast database of prior scenarios, perform analytics and make nearly instant decisions regarding which method of defense is best suited to defeating an attack.
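
The sketch below is a deliberately simplified, hypothetical version of that decision flow: given a handful of track attributes, it recommends either electronic warfare jamming or a kinetic intercept. The attributes and thresholds are invented for illustration, and a real system would weigh far more data against far larger threat libraries.

```python
from dataclasses import dataclass

@dataclass
class DroneTrack:
    speed_mps: float          # measured speed in meters per second
    rf_link_detected: bool    # is the drone using a jammable control link?
    near_civilians: bool      # would debris from a kinetic kill endanger bystanders?

def recommend_defense(track: DroneTrack) -> str:
    """Hypothetical decision logic choosing between electronic warfare jamming
    and a kinetic interceptor, mirroring the trade-off described above."""
    if track.rf_link_detected and track.speed_mps < 40:
        return "ew_jamming"        # sever the control link; no debris risk
    if track.near_civilians:
        return "ew_jamming"        # prefer a non-kinetic effect near bystanders
    return "kinetic_intercept"     # fast or fully autonomous threats may need a hard kill

print(recommend_defense(DroneTrack(speed_mps=25, rf_link_detected=True, near_civilians=False)))
# -> ew_jamming
```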

Kris Osborn is the new Defense Editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a master’s degree in Comparative Literature from Columbia University. This article first appeared earlier this year.

Image: Reuters