The U.S. Military Is Already Building the AI of Tomorrow

The next generation of artificial intelligence will not only engage in database comparisons but learn in real-time and assist in taking on missions that were once thought to be strictly meant for human beings.


The commanding general of the Air Force Research Lab is already thinking about the next generation of artificial intelligence (AI) as a tool that could not only perform database comparisons but also learn in real time and help take on missions once thought to be strictly the province of human beings.

“AI today is very database intensive. But what can and should it be in the future? How can we graduate it from just being a database to something that can leverage concepts and relationships, or emotions and predictive analyses? And so there’s so much more that we as humans can do that AI cannot. How do we get to that?” Maj. Gen. Heather Pringle, commanding general of the Air Force Research Lab, told The National Interest in an interview.


Pringle’s thinking appears closely aligned with cutting-edge AI research into ways to catalog, discern, interpret, organize, and ultimately analyze things that are typically difficult for mathematically driven machines to understand. For example, Pringle questions whether AI will be able to interpret emotion and other nuanced phenomena.

Is there a way these kinds of cognitive phenomena could be accurately tracked by computers? The hurdles appear high, as an interwoven blend of emotional, philosophical, and even psychological variables informs human behavior and perception. Nonetheless, Pringle appears to be referring to areas of great promise, such as AI’s emerging ability to determine context and, for instance, tell whether “ball” refers to a sport or a formal dance by analyzing the surrounding words.
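The context-based disambiguation described above can be illustrated with a deliberately simple sketch. This is a toy bag-of-words approach invented for this article, not any military system; the `SENSES` keyword sets and the `disambiguate` function are hypothetical.

```python
# Toy word-sense disambiguation: pick the sense of "ball" whose
# keyword set overlaps most with the words surrounding it.
SENSES = {
    "sport": {"kick", "throw", "goal", "foot", "pitch", "team"},
    "dance": {"gown", "waltz", "orchestra", "ballroom", "formal"},
}

def disambiguate(sentence: str) -> str:
    context = set(sentence.lower().split())
    # Score each sense by how many of its keywords appear in the sentence.
    scores = {sense: len(keywords & context) for sense, keywords in SENSES.items()}
    return max(scores, key=scores.get)

print(disambiguate("the team will kick the ball toward the goal"))      # sport
print(disambiguate("the orchestra played a waltz at the formal ball"))  # dance
```

Real systems use learned word representations rather than hand-picked keyword lists, but the principle is the same: meaning is inferred from surrounding context rather than looked up in isolation.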

This kind of machine learning represents the cutting edge of AI, which Pringle explained is not without challenges. The U.S. military and its industry partners have already made some progress in this direction: previous behavior patterns, philosophical concepts, or speech patterns, for instance, can be cataloged and potentially analyzed to predict solutions. However, as Pringle explains, there are still many yet-to-be-understood complexities and variables, and the intricacy of human consciousness and decisionmaking ensures that many tasks will remain well beyond the reach of AI-enabled systems.

“The AI that we see today, like the navigation systems that automatically give you a pathway to get from point A to point B? Well, we placed a lot of trust in those systems, but the consequences are pretty low. And so it was pretty easy to develop a human-machine trusting relationship. But when we’re talking about warfare and warfighters, we want to build in that trust along the way,” Pringle explained.

This challenge is often referred to as “zero trust,” meaning that advanced AI-empowered algorithms need to improve reliability by assimilating and analyzing new data that is not already part of their database. It is often said that AI is, to a large degree, only as effective as its database, since it must bounce new or incoming information off of a seemingly limitless store of prior data. So what happens when an AI-capable computer comes across something it has not seen? That is a fundamental predicament in certain respects, as there are numerous abilities and faculties entirely unique to human cognition that cannot be replicated by machines… at least not yet.
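The database-bound limitation described above can be sketched with a toy example: a nearest-neighbor classifier that defers to a human when an input falls too far from anything it has stored. Every name, label, and number here is invented purely for illustration.

```python
import math

# Toy illustration of "database-bound" AI: classify a new point by its
# nearest stored example, but flag it when nothing in the database is close.
DATABASE = [((0.0, 0.0), "friendly"), ((5.0, 5.0), "hostile")]
THRESHOLD = 2.0  # beyond this distance, the system has "not seen" the input

def classify(point):
    dist, label = min(
        (math.dist(point, example), label) for example, label in DATABASE
    )
    return label if dist <= THRESHOLD else "unknown -- defer to human"

print(classify((0.5, 0.5)))   # friendly
print(classify((9.0, 0.0)))   # unknown -- defer to human
```

The explicit deferral branch is the point: rather than forcing a confident answer on unfamiliar input, the system surfaces its uncertainty, which is one simple way to build the human-machine trust Pringle describes.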

Part of the solution, Pringle explained, lies in improving human-machine interfacing so that each can inform the other to optimize data analysis and decisionmaking. Pringle described this as a “symbiotic relationship.”

“Right now, a lot of times when we see AI, we don’t fully understand why it’s taking the actions that it is. It’s leveraging so much data and coming up with novel solutions that we can’t understand. So it’s going to cause the trust relationship to be a little bit lower… Then, at a point in the future, when we’re able to make that more transparent, or have the AI or autonomous vehicle communicate better with the human or to even respond to the human, we even have a line of research where we’re looking at how can we adapt a machine to respond to what the human is learning, knowing, understanding, communicating,” Pringle explained.

At the same time, Pringle was clear that there are many extremely promising near-term applications of AI already producing breakthroughs, with great consequences for current platforms, weapons, and networks.

For instance, AI and autonomy are already helping fixed-wing aircraft such as the F-35 Joint Strike Fighter share data in real time with nearby drones, a step toward ultimately enabling a fifth-generation stealth fighter to operate numerous drones from the cockpit. This reduces latency and significantly multiplies tactical options for pilots, who could use drones to test enemy defenses, blanket an area with surveillance, or even fire weapons under human direction. Early iterations have already been demonstrated by the Air Force’s Valkyrie program, in which an unmanned system flew alongside F-35 and F-22 fighter jets while sharing information in real time.

By extension, the Valkyrie drone has even launched its own mini-drones. The Kratos-built Valkyrie launched an ALTIUS-600 mini-drone in what the Air Force describes as the first-ever opening of its internal weapons bay. This points to a number of significant tactical possibilities, as a drone-launched drone could operate as a mini scout or surveillance node over extremely hostile, high-threat areas amid heavy enemy fire. By virtue of its small size, it would stand a better chance of evading enemy fire, and a small drone of this kind could even function as a weapon itself. The Valkyrie is configured to drop bombs and fire weapons as part of a manned-unmanned teaming operational scope.

Of course, the concept is to ensure human command and control in a supervisory capacity, especially when it comes to the use of lethal force. Yet breakthroughs in AI and autonomy can enable machines and unmanned systems to perform a growing number of time-sensitive warfare functions without human intervention. Pringle explained that researchers and weapons developers are still exploring the complexities these questions and technologies raise.

“What is the role of the human? How are they managing these systems? How can we ease their cognitive load? How can we be most efficient with the number of vehicles? What is the right ratio of the vehicles? There are a lot of really great S&T [science and technology] questions to answer,” Pringle said.

“There’s a lot of challenges to address when you’re looking at increasing the number of systems and the number of platforms, due to the integration and the data links between them,” Pringle said.

In yet another instance, emerging programs such as Golden Horde are already demonstrating an ability for weapons to autonomously share data while en route to a target, greatly expanding the tactical attack envelope and introducing the ability for weapons to change course in flight.

Arguably, the most impactful near-term application of AI is its continued contribution to “data processing”: identifying moments of relevance at the point of collection so that organized, relevant information can be networked across the battlefield in near real time. This can enable multi-domain connectivity, connecting fighter jets with command and control centers, bombers, drones, and even ground forces and Navy ships. The speed at which new information can be gathered, compared against a database, analyzed, and transmitted continues to shorten “sensor-to-shooter” time, accelerate the speed of attack, and enable attacking forces to operate inside of or ahead of an enemy’s decisionmaking process.

Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a Master’s Degree in Comparative Literature from Columbia University.

Image: Reuters.