Pilots can now dogfight against artificial intelligence (AI)-enabled machines while in flight, testing their ability to maneuver into position, respond to enemy movements and adjust to incoming fire.
The Air Force carried out a first-of-its-kind experiment, pitting a manned fighter jet against a simulated Chinese J-20. The J-20 was simulated through an augmented reality "virtual" AI-enabled pilot visible through a helmet screen. This kind of AI-driven simulation was previously possible only through ground-based systems, but it has now been brought into the cockpit of an in-flight airplane, "presenting the pilot in the real aircraft with a simulated adversary flying a J-20 fighter," according to a story in The Drive.
“The live-flight augmented reality dogfight involved a Freeflight Composites Berkut 560 experimental plane and a simulated, reactive adversary aircraft in the form of a computer-generated projection inside the Berkut pilot’s augmented reality helmet-mounted display,” The Drive story reads.
The experiment used an augmented reality system engineered by Red 6, including a wide field of view, full-color virtual reality display and control systems projecting images directly into the cockpit.
Red 6’s technology “aims to provide pilots flying real-world aircraft to perceive synthetic threats in real-time and in high-speed environments. This, of course, requires augmented reality to be blended seamlessly with the environments both inside and outside the cockpit,” the story continues.
What The Drive story does not report is how well the AI-enabled virtual J-20 performed against a manned fighter. Certainly, machines can react and perform various analytic and procedural functions much more quickly than humans, yet they may not yet be evolved to the point where they can simultaneously account for a wide range of interwoven, overlapping or even contradictory variables.
For example, it is not yet clear that AI-empowered systems can instantly process fast-emerging new variables or consider the totality of an overall combat circumstance including a range of disparate factors.
Can a machine learn in real time should it encounter a weapon or circumstance not stored in its database? It may not have a reference point against which to analyze the new information. Advanced AI systems are only as effective as the databases they operate with. Much work is underway to expedite real-time machine learning and fast-moving or immediate analytics, yet it is not yet fully established that machines can reliably process all new information as quickly and as accurately as humans can.
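The "reference point" problem can be illustrated with a minimal sketch: a classifier that labels incoming threats by comparing them to a stored database of known signatures, and simply has no answer when an input falls too far from anything it has seen. All names, features and thresholds below are hypothetical, chosen only to make the limitation concrete; real threat-recognition systems are far more sophisticated.

```python
# Hypothetical sketch: a threat classifier that can only label what is
# already "in its database." Features and numbers are purely illustrative.
import math

# Known threat database: (signature features) -> label. The two features
# might stand for something like normalized speed and radar cross-section.
KNOWN_THREATS = {
    (0.9, 0.2): "fighter",
    (0.3, 0.1): "drone",
    (0.7, 0.8): "bomber",
}

MAX_DISTANCE = 0.25  # beyond this, the system has no reference point


def classify(signature):
    """Return the nearest known label, or None if nothing is close enough."""
    label, best = None, float("inf")
    for known, name in KNOWN_THREATS.items():
        d = math.dist(signature, known)  # Euclidean distance to a known entry
        if d < best:
            label, best = name, d
    # A novel signature far from every stored entry cannot be classified.
    return label if best <= MAX_DISTANCE else None


print(classify((0.85, 0.25)))  # near a stored fighter signature -> "fighter"
print(classify((0.1, 0.9)))    # unlike anything stored -> None
```

The sketch abstains rather than guesses when an input is unfamiliar; whether a fielded system fails that gracefully, or instead mislabels the novel object, is exactly the open question the paragraph above raises.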
What about more subjective determinations? A computer might be inclined to mistake a manned system for a drone, or to interpret an approaching conventional weapon as nuclear. Humans make philosophical, emotional and psychologically nuanced decisions, often informed by less “calculable” criteria that are therefore less likely to be accurately mirrored or replicated by machines.
Perhaps, therefore, a combination of human and machine decision-making might make for the best approach, one that optimizes the speed of computer-enabled analysis while still preserving and calling upon those characteristics unique to human cognition. That is the prevailing view among Pentagon scientists and weapons developers.
Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel and The History Channel. He also holds a master’s degree in comparative literature from Columbia University.