Why the F-35 vs. A-10 Face-Off Isn't a Fair Fight

Rather than telling us whether or not the F-35 can actually provide the kind of close support our ground forces need to survive and prevail, this grossly inadequate test has been designed to mislead.

The F-35 Joint Strike Fighter is finally going up against the battle-proven A-10 close-air-support attack plane for the long-promised fly-off. The unpublicized tests began on July 5, 2018, and will conclude on July 12, according to a copy of the testing schedule reviewed by the Center for Defense Information at the Project On Government Oversight.

But the tests, as designed, are unlikely to reveal anything of real value about the F-35’s ability to support ground troops in realistic combat situations—which the F-35, as the presumptive replacement for the A-10, must be able to demonstrate.

A close air support test should involve large numbers of ground troops in a highly fluid combat simulation in varied terrain, across many days. It should test the pilot’s ability to spot targets from the air in a chaotic and ever-changing situation. It should also test each plane’s ability to fly several sorties a day, because combat doesn’t pause to wait for airplanes to become available.

But the Air Force scheduled just four days’ worth of tests at desert ranges in California and Arizona. And according to sources closely associated with the fly-off, not a single event includes ground troops or any kind of fluid combat situation, which means these tests are hardly representative of the missions a close air support aircraft has to perform.

These tests put U.S. Air Force leadership in a difficult position.

They want their largest and highest-priority weapons buy, the troubled, $400-billion F-35 multi-mission fighter, to quickly replace the A-10 they’ve been trying to get rid of for over two decades. The now-former Pentagon weapons testing director, J. Michael Gilmore, said in 2016 that a fly-off would be the only way to determine how well the F-35 could perform the close-air-support role compared to the A-10—or whether the F-35 could perform that role at all.

The testing office and the various service testing agencies had already meticulously planned comparative tests to pit the F-35 against the A-10, F-16, and the F-18, because the F-35 program is contractually required to show better mission effectiveness than each of the legacy aircraft it is to replace.

Many Air Force leaders strenuously objected to the fly-off, claiming that the F-35 would perform the mission differently so it wouldn’t be fair to compare its performance to the A-10. These tests are only happening now—albeit in an inadequate form—because Congress mandated them nearly three years ago.

The Senate established strict criteria and specific scenarios for the tests. These include demonstrating the F-35’s ability to visually identify friendly forces and the enemy target in both day and night scenarios, to loiter over the target for an extended time, and to destroy targets without a joint terminal attack controller directing the strike.

The congressionally approved plan includes a schedule for tests and funding for elaborate tactical test ranges with combat-realistic, hard-to-find targets defended by carefully simulated missile and gun defenses, and appropriate ground-control teams for the close-support portion of the test scenarios. Testing to date has revealed the F-35 is incapable of performing most of the functions required for an acceptable close-support aircraft, and it seems unlikely the criteria outlined by Congress and testing officials would have produced the results Air Force leaders wanted.

Designed to mislead

Air Force leaders came up with a simple solution to this dilemma. They are staging an unpublicized, quickie test on existing training ranges, creating unrealistic scenarios that presuppose an ignorant and inert enemy force, writing ground rules for the tests that make the F-35 look good—and they got the new testing director, the retired Air Force general Robert Behler, to approve all of it.

According to sources closely involved with the A-10 versus F-35 fly-off, who wished to remain anonymous out of concerns about retaliation, this testing program was designed without ever consulting the Air Force’s resident experts on close air support: A-10 pilots and joint terminal attack controllers.

The Air Force’s 422 Test and Evaluation Squadron at Nevada’s Nellis Air Force Base maintains an A-10 test division. But no one from the operational test unit contributed to the design of these tests. Even more egregiously, no Army or Marine representatives participated. Since the services fighting on the ground have a primary interest in effective close air support, excluding them from this process borders on negligence.

This testing event should have been designed by the Joint Strike Fighter Operational Test Team, which is charged with designing all tests for the F-35. Instead of going through that channel, the Air Force outsourced the design of these tests to a consultant from Tactical Air Support, Inc., a company contracted to provide adversary aircraft as air-combat training opponents for the Air Force, especially its F-35 squadrons, a service it also provides to foreign air forces.

In other words, the test was designed by someone with a vested financial interest in the F-35 program, rather than by people whose primary interest is its performance in combat.

The testing schedule shows four days of actual testing: one at Marine Corps Air Station Yuma’s open-desert bombing training range in southern Arizona, and three at Naval Air Weapons Station China Lake’s electronic combat range, an open-desert facility in California used primarily for electronic-countermeasure research.

The first day’s test—July 5, at Yuma—scheduled one two-ship F-35 flight and two A-10 pairs. Each flight was to spend one hour making attack passes at highly visible targets: bombed-out vehicle hulks, shipping containers simulating buildings, and one remote-controlled moving vehicle, all in flat, open terrain near a large simulated airfield target.

Each A-10 carried two laser-guided 500-pound bombs, two captive-carry Maverick guided missiles, a pod of marking rockets, and only 400 rounds of 30-millimeter cannon ammunition. The F-35s carried a single 500-pound laser-guided bomb and 181 25-millimeter cannon rounds, the most each plane could carry. For the last 20 minutes of each one-hour target-range session, altitude was restricted to 10,000 feet, an alleged evaluation of each plane’s ability to operate beneath low cloud cover.

The first day’s attack scenarios called for “permissive” anti-aircraft defenses consisting of simulated shoulder-fired missiles and light anti-aircraft guns. A permissive environment is one in which there are few or no threats capable of shooting down an aircraft. Despite the “permissive” description, these are the anti-aircraft weapons that close air support planes will typically encounter while supporting our troops in battle against near-peer maneuvering enemy forces.

However, the simulated defenses at Yuma had no precision instrumentation to track aircraft flight paths, gun aiming, or missile launch and homing. As a result, no quantitative data on the actual performance of the A-10 and F-35 will be gathered. Without such data, the evaluators will simply be able to report whatever results they want, with no way to verify the reports.