Artificial Intelligence Drones May Prove Useful Against China

This possibility seems, at the very least, to be under consideration at the Pentagon due to both the pace and seriousness of the threat.

Here's What You Need to Remember: Simply put, when confronted by this kind of high-speed lethal attack, there may be no chance for a human to respond with any kind of decision.

What if waves of hundreds of autonomous, integrated, artificial intelligence (AI)-capable mini-drones were closing in on a forward Army unit, Air Force base or Navy ship at staggering speeds, presenting unprecedented complexity for defenders? Perhaps they are programmed with advanced algorithms to operate in close coordination with one another. Perhaps hundreds of them are themselves engineered as explosives, built to close in on a target and detonate.

Simply put, what happens when a computerized swarm of enemy drones attacks faster than any human can respond?

“When you have little drones operating in different patterns and formations, all talking to each other and staying in sync with one another...imagine that with the ability to create lethal effects on the battlefield. There is no human who will be able to keep up with that,” Gen. John Murray, Commanding General, Army Futures Command, told The National Interest in an interview.

Defenses against drones and hypersonic weapons are taking on new urgency among Pentagon technology experts, who increasingly recognize how little time defenders will have to counter high-speed, computer-enabled attacks.

Given this circumstance, the emerging question is clear: when it comes to increasingly autonomous and coordinated drone swarm attacks, is there a basis to employ advanced, high-speed, AI-enabled defensive systems to find, track, attack and destroy the swarm without human intervention?

“I think it’s a conversation we are going to have eventually. I think the conversation needs to be about when you’re talking about a decision that involves another human life. I think that is clearly in the human domain. But when you are talking about small swarming UAVs where there is not a human life involved… can we allow a machine to make the initial decision about which effectors are used or in what order defenses are employed to destroy the swarm with no loss of human life allowed?” Murray said.

This possibility seems, at the very least, to be under consideration at the Pentagon, given both the pace and seriousness of the threat and the promise of AI-enabled autonomy in weapons systems.

The technological ability of a robotic platform to surveil, find, track, target and actually destroy a target without human intervention is basically here. What does this mean for Pentagon futurists and war planners seeking to prepare for and anticipate enemy attacks in the coming years, particularly when it is widely recognized that potential rivals are unlikely to weigh the ethical questions raised by using this kind of technology in warfare?

Of course, the Pentagon maintains strict adherence to its existing doctrine, which specifies that when it comes to the potential use of “lethal force,” a human must always be “in-the-loop.” The prevailing consensus appears to be that this doctrine will stand, despite the disturbing reality that enemies may not share an approach grounded in ethical considerations.

But what about non-lethal force? That is the question. Certainly, the pace, complexity and anticipated lethality of future attacks confront defenders with fast-escalating new warfare dynamics. Think, for instance, about the pace of an approaching hypersonic missile attack at closer-in distances. Simply put, when confronted by this kind of high-speed lethal attack, there may be no chance for a human to respond with any kind of decision.

AI-capable drone defenses can already gather, pool, organize and analyze an otherwise disconnected array of threat variables, compare them against one another to determine which defensive responses might be optimal, and make those analytical determinations in a matter of milliseconds. As part of this, AI-empowered algorithms can analyze a host of details such as weapons range, atmospheric conditions, geographical factors and point-of-impact calculations as part of an integrated picture, then compare them against what has worked in specific previous circumstances and scenarios to determine the best defensive response.
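To make that concrete, here is a minimal sketch of such a response-selection loop. Every name, threshold and effectiveness figure in it is an invented assumption for illustration; it does not represent any fielded counter-drone system.

```python
from dataclasses import dataclass

# Hypothetical threat picture for a single inbound drone. All fields
# and numbers are illustrative assumptions, not real counter-UAS data.
@dataclass
class Threat:
    range_km: float       # distance from the defended asset
    closing_mps: float    # closing speed in meters per second
    headwind_mps: float   # one stand-in "atmospheric condition"
    impact_eta_s: float   # projected seconds to point of impact

# Candidate defensive responses with rough effectiveness priors,
# standing in for "what has worked in previous scenarios."
RESPONSES = {
    "jammer":          {"max_range_km": 5.0,  "prior": 0.70},
    "interceptor":     {"max_range_km": 12.0, "prior": 0.85},
    "directed_energy": {"max_range_km": 3.0,  "prior": 0.60},
}

def score(threat: Threat, name: str) -> float:
    """Score one response against one threat; higher is better."""
    r = RESPONSES[name]
    if threat.range_km > r["max_range_km"]:
        return 0.0  # threat is outside this effector's envelope
    # Degrade the prior for adverse atmospherics, and weight it by
    # urgency so fast-closing threats favor any in-envelope option.
    atmos = max(0.5, 1.0 - threat.headwind_mps / 100.0)
    urgency = min(1.0, 30.0 / max(threat.impact_eta_s, 0.1))
    return r["prior"] * atmos * urgency

def best_response(threat: Threat) -> str:
    """Weigh all responses against one another and pick the top scorer."""
    return max(RESPONSES, key=lambda name: score(threat, name))

if __name__ == "__main__":
    inbound = Threat(range_km=4.2, closing_mps=70.0,
                     headwind_mps=5.0, impact_eta_s=60.0)
    print(best_response(inbound))  # -> "interceptor"
```

The point of the sketch is the structure, not the numbers: every variable is fused into one comparable score, so the comparison across effectors takes microseconds rather than the seconds a human operator would need.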

Murray’s thinking parallels Department of Defense initiatives now taking up the question of defensive, non-lethal employment of autonomous weapons.

Given the near-term promise of this kind of technical capability, there may indeed be some tactical circumstances wherein it is both ethical and extremely advantageous to deploy autonomous systems able to track and intercept approaching threats in seconds, if not milliseconds. A recent Pentagon report says there is an emerging area of discussion about the extent to which AI might enable “in-the-loop” or “out-of-the-loop” human decision-making, particularly in light of threats such as drone swarms.

“When you’re starting to see swarming activities of hundreds or potentially thousands [of UAS] in the future, obviously you want your system to operate as fast [as possible] to provide those weaponeering solutions to the operator, or operate within a set of parameters,” Col. Marc E. Pelini, the division chief for capabilities and requirements within the Joint Counter-Unmanned Aircraft Systems Office, told reporters during a teleconference, according to a Pentagon report.
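As a thought experiment only, the boundary Murray and Pelini describe, with lethal decisions kept in the human domain while non-lethal counter-swarm responses run “within a set of parameters,” might reduce to a simple gate like the sketch below. The rule names and the two-second threshold are invented for illustration, not Pentagon policy.

```python
# Hypothetical "in-the-loop" vs. "out-of-the-loop" gate. The threshold
# is an invented figure for illustration, not doctrine.
HUMAN_DECISION_FLOOR_S = 2.0  # assumed minimum time a human needs to decide

def may_engage_autonomously(target_is_manned: bool,
                            engagement_risks_human_life: bool,
                            time_to_impact_s: float) -> bool:
    """Return True only for non-lethal defense that outpaces human reaction."""
    # Any decision involving another human life stays in the human domain.
    if target_is_manned or engagement_risks_human_life:
        return False
    # Otherwise, autonomy applies only when the attack arrives faster
    # than a human could possibly decide.
    return time_to_impact_s < HUMAN_DECISION_FLOOR_S

# Example: an unmanned swarm element three-tenths of a second from impact.
print(may_engage_autonomously(False, False, 0.3))  # True
```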

Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also holds a master’s degree in Comparative Literature from Columbia University.

This article first appeared in March 2021 and is being reprinted for reader interest.

Image: Reuters.