Why 'Robot Wars' Might Not Be Our Future
Despite the popular consensus that robot war is inevitable, future conflicts might look more like those of the past than some care to admit.
In 1926, Maj. Gen. J. F. C. Fuller, one of the most prolific military writers of the twentieth century, implored his colleagues to adopt a more open mind toward merging scientific progress with military theory. Long before anyone dreamt of autonomous machines, Fuller had a vision: “To restrict the development of war by divorcing it from civil science is to maintain warfare in its present barbarous and alchemical form.”
Considering the volume of articles on militarized robotics in this publication alone, it is safe to say that the world heeded his advice. And yet, despite the marvels of scientific innovation over the last several decades, technology has failed to make ground war much less frequent or unforgiving.
Coalition forces clearing houses during the 2006 Battle of Ramadi—and I was among them—encountered very few aspects of war that combat veterans of centuries past would find wildly unfamiliar. Radio communications were essential, but also a point of frustration. Air assets were immeasurably helpful, but at times limited in their ability to engage targets or provide advanced situational awareness to ground forces due to the nature of dense urban terrain. If the United States is asked to fight a war in a megacity with subterranean networks, problems associated with gaining and maintaining a common operating picture will worsen.
Night vision devices offered coalition forces an advantage while moving in the dark (albeit one they may not have in the next war), but traversing uneven farmlands and irrigation canals with no moonlight was as tedious as it was decades ago. Buildings were breached with tools or explosives. Moving room-to-room in small teams was a type of organized chaos still monopolized by humans.
What does all of this mean?
For one, despite the popular consensus that robot war is inevitable, future conflicts might look more like those of the past than some care to admit. Many of the aforementioned challenges are not only present today, but with NATO’s increased focus on joint, multinational exercises and operations following Russia’s 2014 annexation of Crimea, they have assumed a greater degree of complexity.
In response, most Western nations have taken up the crusade of militarized robotics and Artificial Intelligence (AI) as a means of remaining competitive. There is, however, potential for advances in these fields to add to this complexity rather than reduce it. Furthermore, while America’s competitors promote visions of a robot war, their massive human armies are not a reflection of this alleged epiphany. In light of these concerns, a critical look at such initiatives is essential in developing a realist future war policy.
Bridging Theory and Practice
At the behest of organizations such as the Defense Advanced Research Projects Agency (DARPA), tech giants Boston Dynamics and Lockheed Martin have spearheaded various military robotics programs. While many are currently under development, two of the most prominent prototypes stand out. Let’s start by addressing some of the concerns related to the theory and practice of each.
Theory: Robot dogs could carry equipment for dismounted soldiers, thus mitigating the physical strain on their bodies and increasing their allowable load.
Practice: The BigDog is Boston Dynamics’ gasoline-powered, four-legged, load-bearing robot. Able to carry up to 330 pounds while negotiating rough terrain, it weighs approximately 240 pounds unloaded. On its face, this concept appears promising—but upon further inspection, flaws emerge.
Let’s assume the BigDog passes field testing and is integrated into forward-deployed infantry squads. A nine-man squad places its equipment (ammunition, rations, optics, and heavier weapons) onto this robot. If the robot is destroyed, that entire squad’s equipment goes with it.
Following this train of thought, if the robot becomes disabled in a vulnerable area, the squad must rotate out as each member retrieves his or her equipment. In past wars, enemy snipers would shoot to wound, so as to draw friends of the wounded into the open and then kill them. In future wars, they may just need to disable the mule bot.
There is also the issue of maneuverability. The bulkiness of these bots leaves them ill suited to rapidly traversing dense urban terrain, with its walls and narrow alleyways, or broken ground such as cliffs and thick woods. According to U.S. Army Chief of Staff General Mark Milley, future wars will require a degree of ground-force mobility heretofore unheard of. This system hardly seems to contribute to that much-needed flexibility.
Theory: Unmanned ground vehicles mean troops won’t have to die in convoys from improvised explosive devices (IEDs).
Practice: Lockheed Martin’s Autonomous Mobility Appliqué System, or AMAS, is no doubt impressive. Capable of maneuvering through urban areas under limited visibility conditions, the AMAS can be dropped into most existing vehicle platforms and lead unmanned convoys. Despite the removal of human beings from these vehicles, their use implies that they are still transporting supplies to and from bases occupied by soldiers, which tells us there will still be humans in this future war.
When IEDs or rockets disable these vehicles, someone must recover them and tow them to a maintenance bay, perhaps after repelling subsequent attacks at the blast site to prevent the cargo and onboard communication systems from falling into the wrong hands.
None of these observations begin to touch on the inconvenient truth that most remotely operated systems are controlled through a satellite link that is subject to compromise. The above critiques should take nothing away from the brilliance of the engineers behind these projects. But given the price of failure, the United States cannot afford to get this wrong.
Many of these systems were designed to keep the United States abreast of its competitors in the global AI arms race. How nations such as Russia and China are navigating this environment is telling, and certainly worth exploring.
Deeds Speak Louder
One reason so much time and energy must be devoted to these machines is that leaders of competing states have graciously informed the world that it should be. Lt. Gen. Andrey Grigoriev, head of Russia’s Advanced Research Foundation (ARF), said as much in 2016: “[F]uture warfare will involve operators and machines, not soldiers shooting at each other on the battlefield.”
The following year, an article in the National Interest highlighted various Russian officials lauding the benefits of drone swarms controlled remotely by a single operator’s computer, and the potential for Russia to relinquish control of its aviation and air defense systems to artificial intelligence. China is making similar strides in military robotics, and making similar statements on the future of war.
Russia and China’s competitors no doubt appreciate this window into their strategic defense horizons—only it may not be so transparent. To expand upon this concept, a “red team” approach could be valuable. If Sun Tzu’s maxim describing all war as a matter of deception is any guide to interpreting China and Russia’s intent here, several points of interest emerge.
First, while senior Russian officials parade their killer machines at high visibility events and tell the world that robots are the key to the future, their actions may not be a reflection of their proclamations.
According to a 2016 report from the Washington Post, Russia plans to form three new military divisions in response to NATO’s presence on its western border—an increase of nearly one hundred thousand troops. Why waste billions investing in antiquated human divisions when those resources could be diverted to a more productive synthetic enterprise? This question is particularly instructive considering Russia’s economic woes and the recent tightening of U.S. sanctions.
China, on the other hand, despite making drastic cuts to its military personnel in 2015, still boasts an active force more than twice the size of the United States’ (roughly 2.3 million members). Furthermore, according to the most recent Pentagon report on military and security developments in the People’s Republic of China, most of these cuts were administrative and had little to do with reductions to combat power. In fact, China is extending its modernization efforts to its military’s organizational structure—not just its equipment—by adopting a combined arms approach that focuses on joint, multi-domain operational capabilities.
What can we learn from this?
While the West grows increasingly starry-eyed over robot dogs doing the running man, it must still address the reality of its competitors’ million-man armies. If a state such as Russia or China were to feel outmatched technologically on the battlefield, there is no reason to assume it would refrain from using the blunt force of its army. Factor into the equation the challenges associated with megacity warfare and operating in a degraded technological environment, and the picture on the ground would not be so alien.
This does not mean that such a scenario is imminent. Mankind has, however, proven itself rather allergic to accurate war forecasting, and there seems to be far too much momentum moving the world toward a singular vision of future warfare. If the past is any prologue, it is highly unlikely that the next great military challenge will come in the form of that which popular consensus deems most apparent.
In sum, despite the fact that robotics and AI have soaked up the most sunlight as of late, America’s competitors have not only maintained vast human armies, but are also expanding them or making significant improvements to their composition and strength.
Looking Ahead
As research and development in the field of militarized robotics pushes forward, it is important to remember that America’s future enemies will—as they have in the past—take the path of least resistance in war, expose weaknesses, and exploit them. If the United States becomes dependent upon AI and robots to fight its wars, it will create a new center of gravity and expose a critical vulnerability: a ground war with high human casualties. In turn, the strategic objective of any opponent at war with the United States would be to draw it into precisely such a conflict by mitigating its technological advantages.
Pentagon official Jeff Becker noted recently in the National Interest that it is premature to dismiss the role of AI in future wars based merely on its potential shortfalls. Likewise, it is also premature to assume that robots will deliver as promised in future wars based merely on their potential capabilities. As with the arguments of the Unrestricted Warfare thesis (China’s response to the projection of American military power during the First Gulf War), domination in these emerging areas is not a sure path to victory—in part because it would prompt adversaries to seek leverage elsewhere and tempt the United States to let its more seasoned domains of war (air, land, and sea) atrophy.
That said, the policies of Secretary of Defense Jim Mattis and Secretary of the Army Mark Esper prove that they are acutely aware of challenges related to Joint Force lethality. Over the last eighteen months, both officials have taken swift action to underscore the importance of unit readiness, talent management, and leader development.
Does this mean the United States needs to pack up its future war programs and fix bayonets? Certainly not.
But it does mean that Western leaders should be wary of the temptation to acquire tunnel vision in their endeavor to remain militarily competitive. As Williamson Murray and Allan Millett concluded in their analysis of military innovation in the interwar period, having a vision of future war is important, but that view “must also be balanced and well connected to operational realities.”
One of those realities is the sheer vastness of human capital that America’s adversaries would likely be willing to expend in a war with the United States—especially if those opponents view troop strength and combined arms maneuver as their strongest hands against an enemy who has subordinated both to machines. Without a highly motivated, expertly trained, and masterfully led Joint Force willing to contend with such a challenge, all the back-flipping robots in the world won’t bail the United States out of the next war.
Michael P. Ferguson, M.S., is an officer of the United States Army with experience throughout Europe and the Middle East. A former instructor at the U.S. Army Ranger School, he often writes on issues concerning strategic theory. The author’s views are his own, and do not reflect the official positions or policies of the U.S. Army, the Department of Defense, or the U.S. government.