The U.S. Military's Ships, Tanks, Aircraft and Drones Are Getting AI

U.S. Air Force F-16 Fighting Falcon aircraft assigned to the 100th Fighter Squadron prepare to refuel during Red Flag-Alaska 13-3 over the Joint Pacific Alaska Range Complex Aug. 19, 2013.
July 17, 2018 | Topic: Security | Blog Brand: The Buzz | Tags: Navy, Army, Military, Technology, World, Marines, AI


And that is a really big deal. 

Watson, for example, can take unstructured information from maintenance manuals, reports, safety materials, vehicle history records and other onboard data sources, then use AI to analyze that data and draw conclusions of great significance to military operators, Aven explained.

When Watson was introduced, IBM stated that "more than 100 different techniques are used to analyze natural language, identify sources, find and generate hypotheses, find and score evidence, and merge and rank hypotheses," according to IBM Systems and Technology.

Working with a firm called C3IoT, the Air Force is doing something similar with F-16s. Onboard avionics and other systems are monitored and analyzed by AI-enabled computers to discern when repairs or replacement parts are needed.
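At its core, this kind of predictive maintenance is anomaly detection: comparing a component's latest sensor reading against its historical baseline and flagging sharp deviations. A minimal sketch in Python, using invented engine-vibration numbers and a simple z-score test (real fleet-health systems use far richer models):

```python
import statistics

def flag_for_maintenance(history, latest, threshold=3.0):
    """Flag a component when its latest sensor reading deviates
    sharply from its historical baseline (simple z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    z = abs(latest - mean) / stdev
    return z > threshold

# Hypothetical engine-vibration readings (arbitrary units)
baseline = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97, 1.00]
print(flag_for_maintenance(baseline, 1.01))  # prints False (nominal)
print(flag_for_maintenance(baseline, 1.60))  # prints True (anomalous)
```

The payoff is ordering the replacement part before the failure, rather than after it grounds the aircraft.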

Applications of AI are also credited with enabling the F-35's "sensor fusion" technology, which uses computer algorithms to autonomously gather and organize a wide range of sensor data for the pilot.
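The F-35's actual fusion engine is not public, but the textbook building block behind any sensor fusion is combining independent noisy estimates of the same quantity by inverse-variance weighting, so the fused estimate is more precise than any single sensor. A sketch in Python, with sensor names and numbers invented for the example:

```python
def fuse_estimates(estimates):
    """Fuse independent (value, variance) estimates of one quantity
    by inverse-variance weighting -- the textbook fusion rule."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    fused_variance = 1.0 / total
    return fused_value, fused_variance

# Hypothetical range-to-target estimates (km) from three sources
radar    = (10.2, 0.25)  # noisy but unbiased
irst     = (9.9,  0.09)  # infrared search-and-track: more precise
datalink = (10.0, 0.04)  # offboard track shared over a datalink
value, variance = fuse_estimates([radar, irst, datalink])
print(round(value, 3), round(variance, 4))  # prints 9.992 0.0249
```

Note that the fused variance (0.0249) is smaller than the best single sensor's (0.04): combining sources yields a sharper track than any one of them alone.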

It goes without saying that targeting data is of critical importance when it comes to mechanized ground warfare. With this in mind, Army combat vehicle developers are prototyping AI-enabled sensors intended to combine sensor information essential to identifying targets. If long-range EO/IR or thermal imaging sensors are able to both collect and organize combat data, vehicle crews can attack enemy targets much more quickly.

Some near-term applications, senior officials with the Army Research Laboratory say, include increased air and ground drone autonomy. It is an example of an area where AI is already having a large impact and is anticipated to figure prominently over the long-term as well.

“We know there is going to be unmanned systems for the future, and we want to look at unmanned systems and working with teams of manned systems. This involves AI-enabled machine learning in high priority areas we know are going to be long term as well as near term applications,” Karl Kappra, Chief of the Office of Strategy Management for the Army Research Lab, told Warrior Maven in an interview. “We also know we are going to be operating in complex environments, including electromagnetic and cyber areas.”

For instance, Kappra explained that sensor-equipped micro-autonomous drones could be programmed with advanced algorithms to send back combat-relevant images or provide attacking forces with key interior dimensions of a target location.

“We are looking at micro-electrical mechanical systems and image-based systems to fly through a building autonomously and show you where walls and threats are inside the buildings,” Kappra said.
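The mapping side of that capability is commonly built on an occupancy grid, in which each cell of a floor plan is marked occupied or free as sensor returns arrive. A toy Python illustration (the cell coordinates and "threat" marker are invented; real systems update per-cell probabilities from range data rather than writing fixed symbols):

```python
def build_occupancy_grid(width, height, wall_cells, threat_cells=()):
    """Render a simple occupancy grid: '#' walls, 'T' threats, '.' free."""
    grid = [['.' for _ in range(width)] for _ in range(height)]
    for x, y in wall_cells:
        grid[y][x] = '#'
    for x, y in threat_cells:
        grid[y][x] = 'T'
    return '\n'.join(''.join(row) for row in grid)

# Hypothetical detections from one pass down a short corridor
walls = [(x, 0) for x in range(6)] + [(x, 2) for x in range(6)]
print(build_occupancy_grid(6, 3, walls, threat_cells=[(4, 1)]))
# prints:
# ######
# ....T.
# ######
```

A drone that builds and transmits such a map gives entering troops the interior layout before anyone crosses the threshold.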

Also, Army combat vehicle developers consistently emphasize manned-unmanned teaming with “wing man” drone robots operating in tandem with manned vehicles to carry ammunition, test enemy defenses, identify targets and potentially fire weapons. Some senior Army weapons and technology developers have said that most future combat vehicles will be engineered with some level of autonomous ability or manned-unmanned teaming technology.

Warfare, Ethics & AI

Interestingly, debates about the future of AI, especially when it comes to autonomy, continue to spark significant controversy. Current Pentagon doctrine specifies that there must always be a "human in the loop" when it comes to decisions about the use of lethal force. However, the technology that would enable an autonomous system to track, acquire and destroy a target by itself, without human intervention, already exists.

In a previous interview with Warrior Maven, an Air Force scientist made the point that the current doctrine applies to offensive strikes of any kind; however, there may be some instances where weapons are used autonomously in a purely defensive fashion. For instance, AI-enabled interceptors could be programmed to knock out incoming enemy missile attacks without destroying anything other than the approaching enemy weapon. In this case, AI could serve an enormously valuable defensive function by performing intercepts far faster than a human decision-maker could.

Naturally, this kind of technology raises ethical questions. Some have made the point that even if the U.S. military intends to maintain a certain ethical stance, there is substantial concern that potential adversaries will not do the same.

Also, while often heralded as the "future" of warfare and technology, AI has limitations. Novel problems presented in combat, the subtler nuances informing certain decisions, determining causation and the analysis of many interwoven variables are arguably tasks still best performed by the human mind.

Many things in warfare are the complex byproduct of more subjectively determined factors, shaped by concepts, personalities, individual psychology, historical nuance and larger sociological phenomena. This naturally raises the question of how much even the most advanced computer programs could account for these and other less "tangible" factors.

This article first appeared in Warrior Maven.