Here's What You Need to Remember: These kinds of AI-related technical nuances are by no means restricted to identifying enemy vehicles, but can rather be applied across a wide or even seemingly limitless sphere of combat factors. Advanced algorithms could enable a drone to compare terrain features, assess weather conditions, distinguish enemy from friendly aircraft, navigate independently and even detect enemy electronic signals.
What happens if Russia or China builds a new secret tank or heavy armored vehicle that even the most advanced U.S. databases cannot recognize? What if U.S. forces are attacked by a weapon that is simply not in any known threat library? Does the U.S. military have any recourse for making a fast, informed, combat-sensitive decision? What kind of munition should be used to counterattack? What kind of ammunition does the new threat fire? What are its range and scope? Are AI-enabled computer programs now equipped to confront some of these challenges, which are likely to present problems for U.S. commanders operating long-range sensors?
The answer is: maybe. If not now, then not too far away, according to Army drone and robotics requirements writers now tracking threats and technical trends in autonomy and Artificial Intelligence (AI).
“Using AI, a small unit UAS (drone) identifies an enemy tank, asks other sensors to confirm and then reports back to a platoon leader, giving him various courses of action with which he can make a decision,” Col. Sam Edwards, Director of Robotics Requirements, Capability Development Integration Directorate, Ft. Benning, Ga., told The National Interest in an interview.
What if, as Edwards also posited, the enemy tank is not recognized by an AI-capable database? This is where analytics comes in: a complex series of AI-informed algorithms would assess a range of additional variables to make a determination, including analysis of the various configurations and components of known tanks, surrounding context, heat signature or even previous circumstances presenting similar dynamics.
“What if the UAS sees what it thinks is a tank? Maybe it is a new tank which it does not know. The information then goes through a larger AI cloud to determine if it is a tank? Or maybe a new tank that is not in the database?”
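The escalation Edwards describes, in which a drone's onboard recognition hands an uncertain sighting off to a larger cloud-hosted threat library, can be sketched in simplified form. This is an illustrative assumption of how such a flow might work, not a description of any fielded system; the vehicle names, match scores and confidence threshold are all invented.

```python
# Hypothetical sketch: a drone's onboard classifier tries to identify a
# target; when confidence is low, the query escalates to a larger,
# cloud-hosted threat library. All names and numbers are illustrative.

ONBOARD_LIBRARY = {"T-90": 0.95, "BMP-3": 0.90}           # known signatures -> match score
CLOUD_LIBRARY = {**ONBOARD_LIBRARY, "T-14 Armata": 0.88}  # larger, more current database

def classify(signature: str, library: dict) -> tuple[str, float]:
    """Return the best label and a match score; unknowns score 0.0."""
    if signature in library:
        return signature, library[signature]
    return "unknown", 0.0

def identify(signature: str, threshold: float = 0.85) -> str:
    label, score = classify(signature, ONBOARD_LIBRARY)
    if score >= threshold:
        return f"onboard: {label}"
    # Low confidence -> ask the larger AI cloud for a second look.
    label, score = classify(signature, CLOUD_LIBRARY)
    if score >= threshold:
        return f"cloud: {label}"
    return "unidentified -- flag for human analysis"
```

The design point is the fallback chain: the drone only burdens the network, or a human analyst, when its own library comes up short.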
Comparing unknown data against a seemingly limitless database of known information and variables to arrive at new conclusions represents the heart of Machine Learning, as it can enable an AI system to accommodate, organize and integrate new, previously unknown information. This is in part done by advanced algorithms capable of making certain kinds of more subjective determinations pertaining to context.
For example, an AI system could likely distinguish the meaning of a “dance ball” from a “baseball” in a sentence by analyzing the surrounding words and discerning the broader meaning of the sentence. In a comparable fashion, perhaps tanks quite similar to the unknown one typically move at certain speeds, emit a certain electromagnetic or heat signature, fire certain kinds of ammunition, generate specific battle damage effects and navigate through particular types of terrain? This could offer additional variables with which an AI system can make determinations. Even further, perhaps an AI system would analyze the broadest elements of a combat circumstance, analyze other weapons in the region, and draw upon a database of prior fights to discern the most effective solutions to a new threat? Through an ability to access a vast database complete with history, comparisons and a seemingly limitless sphere of interwoven variables, AI-empowered computer algorithms can increasingly perform these kinds of calculations.
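The multi-variable comparison sketched above amounts to a classic nearest-neighbor idea: score an unknown vehicle against known profiles by observable features and group it with whatever it most resembles. The feature set and every number below are invented for illustration only.

```python
# Illustrative nearest-neighbor sketch of the comparison described above:
# an unknown vehicle is scored against known profiles by observed speed,
# heat signature and gun caliber. All figures here are invented.

import math

KNOWN_VEHICLES = {
    "tank_A": (45.0, 7.2, 125),  # (top speed km/h, heat index, gun caliber mm)
    "tank_B": (60.0, 6.1, 120),
    "apc_C":  (80.0, 3.5, 30),
}

def closest_match(observed: tuple[float, float, float]) -> tuple[str, float]:
    """Return the known profile with the smallest Euclidean distance."""
    best, best_dist = None, math.inf
    for name, profile in KNOWN_VEHICLES.items():
        dist = math.dist(observed, profile)
        if dist < best_dist:
            best, best_dist = name, dist
    return best, best_dist

# An unseen vehicle is tentatively grouped with whichever known
# platform its observed variables most resemble.
```

A real system would weight and normalize these features, and fold in the contextual variables the article mentions, but the core move is the same: unknown data is placed relative to known data.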
However, this technical process is not without challenges, because there is still often a measure of uncertainty regarding just how accurately an AI system can recognize something it has not encountered before. Developers refer to this as reliability, or “trusting” the algorithms. Yet AI is progressing so quickly that reliability is fast improving to a point wherein AI-enabled autonomous systems will integrate more broadly into crucial military technologies and weapons systems.
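The “trust” question above is often handled in practice with a confidence gate: an automated identification is only acted on when its confidence clears a threshold, and anything weaker goes to a human. This is a minimal sketch under assumed values; the 0.9 threshold and message formats are hypothetical.

```python
# Minimal sketch of confidence gating: only high-confidence AI detections
# are reported automatically; weaker ones are routed to a human operator.
# The 0.9 "trust" threshold is an illustrative assumption.

def route_detection(label: str, confidence: float,
                    trust_threshold: float = 0.9) -> str:
    if confidence >= trust_threshold:
        return f"auto-report: {label}"
    return f"human review: {label} (confidence {confidence:.2f})"
```

Raising or lowering the threshold is precisely the trade-off developers mean by “trusting” the algorithms: more automation versus more human checks.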
“From a strategy standpoint we see autonomy and AI as the key to enable cross domain maneuver,” Edwards said.
These kinds of AI-related technical nuances are by no means restricted to identifying enemy vehicles, but can rather be applied across a wide or even seemingly limitless sphere of combat factors. Advanced algorithms could enable a drone to compare terrain features, assess weather conditions, distinguish enemy from friendly aircraft, navigate independently and even detect enemy electronic signals.
“Perhaps a UAS has obstacle avoidance and built in autonomy? It knows when it needs to recharge and does not require anyone to control its movements and transmissions,” Edwards said.
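The onboard autonomy Edwards describes, a drone that avoids obstacles and knows when to recharge without an operator, reduces to simple self-monitoring decision rules. The sketch below is purely hypothetical; the 20 percent battery reserve and the action names are assumptions for illustration.

```python
# Hypothetical sketch of onboard autonomy: the drone monitors its own
# battery and surroundings and picks its next action without operator
# input. The 20% reserve threshold and action labels are assumptions.

def next_action(battery_pct: float, obstacle_ahead: bool,
                reserve: float = 20.0) -> str:
    if battery_pct <= reserve:
        return "return to base and recharge"
    if obstacle_ahead:
        return "reroute around obstacle"
    return "continue mission"
```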
Kris Osborn is the new Defense Editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also holds a master's degree in Comparative Literature from Columbia University.