Are We Preparing to Fight the Wrong War?

May 30, 2018 | Topic: Security | Blog Brand: The Buzz | Tags: Military, Technology, World, China, AI, Navy

The consequences of artificial intelligence—and its even spookier subset of machine learning—will have a profound influence on military operations. 


Are we preparing to fight the wrong war? That’s the question being asked with increasing frequency by Australian defence planners, especially in the RAAF. What makes some people nervous is a cluster of emerging disruptive technologies that will have a profound effect on military operations in the very near future.

These include, but aren’t limited to: artificial intelligence (AI) and machine learning; micro uninhabited aerial systems (UAS); quantum computing; hypersonics; micro ‘cube’ satellites and matching launch technologies; uninhabited underwater systems; the vastly increasing power of conventional explosives utilising nanotechnology; and information operations and cyber warfare.


In fact, it’s not the maturing of any single one of these technologies that’s causing such concern, but rather that all of them—and more—are being developed in parallel at an extraordinary rate. That gives rise to myriad possible combinations that risk turning the tens of billions of dollars’ worth of platforms the ADF is acquiring into so much obsolete junk.


The consequences of artificial intelligence—and its even spookier subset, machine learning—will have a profound influence on military operations, but we don’t yet fully understand what they’ll be. In 2015 a computer defeated a professional player of the board game Go—an achievement many experts had previously considered impossible because of the inherent complexity of the game, in which one player fields 181 stones and the other 180.

Similarly, the chess-playing program AlphaZero not only beats all human opponents, but developed strategies unlike anything seen before after just a few hours of learning, because it taught itself to play from first principles. Henry Kissinger, writing in The Atlantic, concludes that developments of this sort in AI could mark the end of the Age of Enlightenment.

At the RAAF’s signature air power conference held in Canberra eight weeks ago, a number of alarming scenarios were discussed. One that illustrates the problem facing planners is the use of swarming micro UAS that could see hundreds—or thousands, or even tens of thousands—of these being used in saturation attacks even against the most well-defended targets. They already have the capability to fly autonomously in a GPS-denied environment to find and destroy objects with small explosive payloads. And they can be purchased in massive numbers even with a small budget.

The development of micro-UAS is surging ahead for recreation, entertainment and parcel delivery services. According to former Marine officer and military theorist Dr Thomas X. Hammes, the parcel delivery company UPS in the US is planning to open a factory producing up to 100,000 of these devices per day, each able to carry a five-kilogram payload. That’s not a misprint: myriad quadcopter devices are already around, but those numbers will be absolutely dwarfed in the coming years, and the devices’ range and power will increase.

They’re in widespread use by the world’s militaries for surveillance tasks, and now they’re being weaponised. Even the remnants of Islamic State in the desolate desert regions of Iraq and Syria have their own tiny air force of remote-controlled quadcopters carrying hand grenades and explosives.

The US has recognised the importance of AI and has set up a crash program called Project Maven—formally, the Algorithmic Warfare Cross-Functional Team—designed to interpret vast amounts of surveillance data that’s already beyond the ability of humans to deal with. The volume of information pouring in from satellites, aircraft and uninhabited systems is growing exponentially, and the project is using software originally developed by Google to boost its analysis capabilities massively.

Another example of an autonomous system that already has the potential to make manned aircraft such as the F‑35 and Super Hornet obsolete is Northrop Grumman’s X‑47B, a fighter-size platform designed for carrier operations that has already demonstrated fully autonomous carrier take-offs, landings and aerial refuelling, without any human intervention whatsoever.

Experienced fighter pilots say that their worst nightmare would be to come up against an X‑47B—which can easily outmanoeuvre a conventional aircraft—equipped with internally carried advanced missiles and programmed to destroy anything that came into its ‘kill box’. According to Dr Hammes, the X‑47B development has stalled only because the US Navy’s ‘pilots’ union’ doesn’t want it to go ahead.

Just as the world’s massive investment in battleships was rendered obsolete overnight by the carrier-launched air attack on Pearl Harbor in 1941, some Australian planners foresee the possibility that emerging disruptive technologies could leave the ADF extraordinarily vulnerable. There’s no room for complacency. Those who think the next big war will be like the last one are in for a shock.

Kym Bergmann is the editor of Asia-Pacific Defence Reporter and Defence Review Asia, and worked for Saab from 1998–2005.
This article first appeared in ASPI’s The Strategist.