Key point: China no doubt wants AI to assist its human pilots. However, there are some things only a real human can do.
Is there a chance that a two-seat 5th-generation stealth fighter might bring additional advantages to multi-role air combat? At least for now, the most honest answer is as ambiguous as it is true: maybe.
This first appeared earlier and is being reposted due to reader interest.
The question is taking on new relevance in light of rumors, described in a recent report in The Diplomat, that China's PLA Air Force is engineering a two-seat variant of the J-20 stealth fighter.
The possibility raises what might be called an interesting twofold question.
First, it is well established that, despite rapid advances in AI and autonomy, human cognition and decision-making amid fast-evolving combat circumstances still offer a unique, indispensable set of attributes that mathematically oriented computer algorithms simply cannot replicate.
A second set of eyes and a second human decision-maker could easily add value: the human brain can quickly adapt to previously unknown, emerging variables and analyze strategic, conceptual and tactical dynamics in ways beyond the current reach of computers, easing the burden placed upon an individual pilot.
Computers can aggregate and analyze vast pools of data, quickly sift through them to find items of relevance and perform rapid, integrated analysis. These continuing technical advances bring unprecedented advantages, yet not without limitations. A second human brain might contribute more subjective forms of analysis and free the pilot to spend greater cognitive energy on other high-priority tasks.
The prevailing consensus is not that computers necessarily exceed humans but that they offer amazing, unprecedented, yet different attributes. The optimal approach, therefore, is to "team" humans and computers through a man-machine interface, an arrangement that brings previously unimagined advantages to combat. Manned-unmanned teaming yields capabilities that far exceed what either can do alone.
All this being said, humans also introduce the possibility of "human error," as computers are far less likely to miss critical procedural or analytical details. Yet even the most advanced algorithms are not perfect: they can be confused or at times deliberately "spoofed" by unknown variables or by information not part of a compiled database. Perhaps different kinds of sensor data could be compiled and analyzed for a second crew member, who could then support the main pilot with additional decision-making capacity. This might make considerable sense given the expectation that algorithm-enabled sensors will increasingly gather and analyze vast amounts of information and perform a growing share of procedural functions.
Adversaries are already known to take specific measures to confuse or disrupt AI-empowered analysis by inserting unknown variables or obscurants into what sensors can discern. For example, an enemy might place a large, differently shaped piece of wood or some alternative structure on top of an armored vehicle so that AI-driven sensors have trouble identifying the platform. Possibilities such as this are one reason computer algorithms may not yet match the reliability of human perception.
Kris Osborn is Defense Editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a master's degree in Comparative Literature from Columbia University.