If an enemy tank were hiding in a forest to avoid detection by overhead surveillance drones, it might choose heavily wooded areas where the canopy obscures any clear view of the ground. Better yet, the tank could shut off its engine to avoid emitting a heat signature detectable by infrared sensors.
Potential adversaries, however, increasingly understand that advanced artificial intelligence (AI)-enabled computer algorithms can account for all of these variables, compare them against one another, and determine what the sensors are detecting based on a previously compiled database of information.
But what if an automated or AI-capable sensor system encountered something that was not part of its compiled database, however vast that database may be? AI programs can certainly draw on enormous databases, comparing new input against millions of variables and previously compiled information. Perhaps the sensor has encountered something with a similar set of variables in the past and can therefore identify the tank through comparative analysis?
This kind of complexity is exactly what potential adversaries seek to exploit: they are devising methods of spoofing advanced algorithms, essentially confusing them to prevent the AI from making determinations. Computer scientists at Booz Allen Hamilton explain that enemies are developing specific countermeasures intended to confuse or “throw off” the algorithms engineered to detect them. One Booz Allen Hamilton scientist, for example, said an adversary might do something as simple as place a poster or large piece of cardboard on top of a tank to change its appearance. An AI-capable sensor trained on data to identify the shapes, structures and signatures of a tank might simply be unable to make an accurate determination, having encountered something it has never seen before. That is precisely the intent: to present the sensor with a signal or rendering that prevents accurate surveillance. An AI-capable system is only as effective as its database, so algorithms can struggle to accurately process variables, images, objects or signals the system has never seen before.
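To make the cardboard example concrete, here is a toy sketch of why occlusion defeats a signature-matching classifier. The feature names, weights and threshold are all invented for illustration, not drawn from any real sensor model: if the model's confidence rests mostly on a handful of visual shape cues, hiding those cues drops the score below any sensible detection threshold even though the tank is still there.

```python
# Toy illustration (hypothetical features and weights, not a real sensor
# model): a linear classifier scores "tank-ness" from a few signature cues.
import numpy as np

# Feature order: turret_shape, hull_outline, barrel_line, heat_plume.
# The model leans heavily on the visual shape cues.
weights = np.array([2.0, 1.5, 1.8, 0.7])
bias = -3.0

def detect_score(x: np.ndarray) -> float:
    """Sigmoid confidence that the input is a tank."""
    return float(1.0 / (1.0 + np.exp(-(weights @ x + bias))))

clear_view = np.array([1.0, 1.0, 1.0, 1.0])   # tank in the open
occluded   = np.array([0.0, 0.0, 0.0, 1.0])   # shape cues hidden by cardboard,
                                              # engine still warm

print(f"clear view: {detect_score(clear_view):.2f}")
print(f"occluded:   {detect_score(occluded):.2f}")
```

With the shape cues zeroed out, the lone heat cue cannot carry the decision, which is the effect the cardboard trick aims for.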
Given this set of complexities, Booz Allen computer scientists are seeking to develop a new generation of holistic AI-enabled sensing that can simultaneously account for a wide scope of variables, searching for patterns, indications or similarities related to a larger overall picture. If a tank is hidden beneath cardboard, for instance, such a system could pick up otherwise disconnected cues, clues or indications, weigh them in relation to one another, and still arrive at an accurate identification. In effect, it is an attempt by computer scientists working in AI to develop countermeasures in response to countermeasures and stay ahead of the competitive curve.
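One way to picture this kind of holistic sensing, sketched with hypothetical cue names and made-up likelihood ratios (an illustration of the general idea, not Booz Allen's method): several weak, individually inconclusive cues can be fused in log-odds space, and together they push the overall probability well past what any single cue supports.

```python
# Naive-Bayes-style evidence fusion: combine a prior with several
# independent, individually weak cues. All cue names and likelihood
# ratios below are assumptions made for illustration.
import math

def fuse(prior: float, cue_likelihood_ratios: list[float]) -> float:
    """Fold independent cue evidence into a prior via log-odds."""
    log_odds = math.log(prior / (1.0 - prior))
    for lr in cue_likelihood_ratios:
        log_odds += math.log(lr)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Each ratio = P(cue | tank) / P(cue | no tank); none is decisive alone.
cues = {
    "fresh track marks": 3.0,
    "disturbed foliage": 2.0,
    "faint residual heat": 2.5,
    "odd rectangular edge": 2.0,
}
p = fuse(prior=0.05, cue_likelihood_ratios=list(cues.values()))
print(f"fused probability of a tank: {p:.2f}")
```

Starting from a 5 percent prior, the four weak cues together raise the estimate past 60 percent, even though no single cue would justify a detection on its own.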
One senior scientist at Booz Allen Hamilton told The National Interest that, while AI cannot do this fully quite yet, “it's something research is working on.”
“There are multiple possible future horizons. Our most effective approaches to date involve attacking our models ourselves and adding those attacks to the training data,” Ed Raff, Senior Computer Scientist, Booz Allen Hamilton, said in a written statement.
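The approach Raff describes corresponds to what the machine-learning literature calls adversarial training. Here is a minimal numpy sketch of one round of that cycle, using a toy logistic-regression “detector,” synthetic data, and a fast-gradient-sign-style attack; none of this reflects Booz Allen's actual models or data.

```python
# Minimal sketch of "attack our own model, then add the attacks to the
# training data." Toy model, synthetic data -- an illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # synthetic "sensor readings"
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # synthetic tank / no-tank labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=300, lr=0.5):
    """Plain gradient-descent logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def fgsm(X, y, w, eps=0.5):
    """Nudge each input in the direction that most increases the loss."""
    grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]   # dLoss/dx
    return X + eps * np.sign(grad_x)

w = train(X, y)
X_adv = fgsm(X, y, w)                         # step 1: attack our own model
print(f"clean accuracy: {np.mean((sigmoid(X @ w) > 0.5) == y):.2f}")
print(f"under attack:   {np.mean((sigmoid(X_adv @ w) > 0.5) == y):.2f}")

# Step 2: add those attacks to the training data and retrain.
w2 = train(np.vstack([X, X_adv]), np.concatenate([y, y]))
```

In practice the attack-and-retrain loop is repeated and the hardened model is re-evaluated against fresh attacks; the sketch above shows only a single round of the cycle.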
Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a master's degree in Comparative Literature from Columbia University.