Illiteracy, Not Morality, Is Holding Back Military Integration of Artificial Intelligence

A data-illiterate culture in the military is widening the gap between the United States and its competitors. Success will require deeper and more direct congressional action.

In his recent essay in The National Interest, John Austerman argues that a complex of moral issues surrounding increased automation in weapons systems is the sticking point for progress on military integration of artificial intelligence (AI), and that this hesitancy increases the risk to American national security. Without a doubt, continued sluggishness places the United States dangerously behind the curve in great power competition. Austerman’s emphasis on the timeless consideration of morality in military operations is also an important point that all should embrace. But in my experience, the U.S. military’s lag in applying this critical technology stems not from a moral blind spot but from a crippling kind of illiteracy: data illiteracy. With notable exceptions, this illiteracy exists across the Department of Defense and among powerful constituencies within the military services. It begins with a poor grasp of the value of data and of the data science that drives the artificial intelligence and machine learning (ML) revolution. Industrial-age combat development organizations add further inertia. These elements must change to prevent critical damage to our core national security interests. The data-native innovators within the Department have promoted and initiated change to the maximum extent their authorities allow, but only Congress can ensure that the actions and investments required to compete and prevail are made. The 2021 National Defense Authorization Act (NDAA) contains seven helpful provisions addressing ethical concerns about AI, among other issues. But further and more direct congressional involvement is required before AI is accepted and applied across the U.S. military at the level needed to close the capability gap.

In the early fall of 2019, I led the Marine Corps’ Manned-Unmanned Teaming (MUM-T) planning effort, one of several teams established to provide feasible, forward-looking concepts to drive the Commandant’s Force Design 2030 effort. Our team was tasked with bringing automation to Marine combat formations to increase their lethality, mobility, and survivability for effective distributed maritime operations in contested littoral regions of the globe. Composed of senior, experienced Marines and civilians with a broad array of operational, technical, and tactical expertise, the team developed sound concepts and investment areas to achieve the battlefield advantages Commandant David H. Berger is calling for. With many personal experiences of the moral dimensions of past battles and operations in mind, we were never pessimistic about Marines’ ability to employ AI-enabled combat systems ethically. The Marine Corps, like the other services, has faced this issue for a long time.

The United States has employed autonomous killing machines for over half a century. The first American autonomous weapons were the air-to-air missiles developed in the mid-1950s. The United States completed development of the AIM-9 Sidewinder missile in 1956, and the weapon saw combat shortly thereafter: in 1958, during the Second Taiwan Strait Crisis, Republic of China fighters, assisted by a Marine Corps aviation detachment, used Sidewinders to shoot down several People’s Liberation Army Air Force (PLAAF) MiG-17s. From the Vietnam War through today’s ongoing combat operations against violent extremists, automation technology has allowed American weapon systems to strike targets precisely and discriminately, in accordance with American laws and values.

The United States has also employed AI/ML in critical combat support systems for over a decade, enabling policymakers and military commanders to decide not only how to employ these smart munitions, but also whether the American military is the right tool of national power to apply in a given context. The Intelligence Community relies on the increasing power of AI/ML to manage and interpret large, disparate data sets and gain key insights into adversary intentions, capabilities, and actions across many lines of effort. The insights garnered through automation are invaluable in preventing strategic surprise while freeing limited, exquisite human capital for critical collection, analysis, and policy support functions.

These examples show that the Department of Defense has a long record of dealing with the moral and ethical challenges of advanced weapons and continues to improve in this area. As these enabling technologies mature, the U.S. military also continues to sharpen its ethical edge. Over the last two decades, as technology has increased the accuracy and speed of our weapon systems and targeting processes, leaders have developed and iteratively refined thorough investigative methods to discern key causes and solutions when mistakes are made, non-combatants are harmed, or unintended targets are struck. I am confident that the Department’s current review of these operations will show the essential soundness of the U.S. military’s actions, both in its overwhelming record of success and in the professional way it addresses failures.

Because warfare is an inherently human activity, ethical challenges will always exist. The long-standing American military tradition of character formation in its leaders continues to meet this challenge. The mission of the U.S. Naval Academy, my alma mater, states that the Academy’s purpose is “to develop midshipmen morally...and to imbue them with the highest ideals of duty, honor, and loyalty...to provide graduates...[that] have potential for future development in mind and character to assume the highest responsibilities of command, citizenship and government...” It is noteworthy that a mission statement speaking with such ethical and moral force is not posted only on an “About Us” page or a leadership department website. It is found in places like the Academy’s Mechanical Engineering Department, where future naval leaders receive the technical education required to develop the combat systems that will rely on automation technologies. The U.S. military is leading the morality discussion where it counts.

Even as it addresses the ethical issues of automated combat, the U.S. military is continually improving the accuracy of its surveillance, target acquisition, and weapon systems. Alongside this technological integration, military units constantly refine procedures and operational processes and apply updates to hardware, software, and networks. All of these efforts are designed to leverage new technology for the best combat advantage. The result is that today’s U.S. military is more technically and tactically capable of killing legitimate enemy combatants while avoiding the deaths of innocents than at any other time in human history. The moral challenges of automation are not holding the U.S. military back.

At the same time, the U.S. military is failing to adapt to the automated battlefield in both scope and speed. The Department struggles to apply AI/ML to military functions beyond the narrow tasks of reconnaissance, surveillance, target acquisition, and prosecution. The reason for this narrow scope and glacial pace of exploitation is data illiteracy. Many in positions of influence over combat development and acquisition decisions are uninformed about the nature, the value, and the possibilities of contemporary data science and AI/ML. For some, this ignorance is not willful. The technology is moving so rapidly that staying abreast of innovation in such a dynamic and technical environment is impossible while leading a military service. And many leaders lack the technical educational foundation that would otherwise expedite their embrace of AI/ML.

For others, the ignorance is culpable. Nostalgic commitments to legacy capabilities and occupational fields are deep, widespread, and often rooted in a sincere but outdated belief in the exclusive and decisive role of the human in the close tactical fight, even to the exclusion of other critical capabilities that would make the human better. Some harbor irrational fears that humans will be replaced entirely. This is especially true among occupational specialties in the ground combat arms and aviation communities, whose best and brightest ascend to senior leadership through human battlefield exploits. These concerns, along with worries that technology will fail at the decisive moment in battle, combine with a misunderstanding of this complex technology to generate resistance to AI-related ideas for change.

This assessment is based on direct observation and feedback over the last several years as I have watched senior leaders wrestle with waves of technological change while deciding how to optimally invest defense resources. A survey of the state of AI/ML progress corroborates these observations. The services have more available and precise data about sources of manpower than at any other time in history, yet few, if any, AI/ML applications are used in recruiting, training, education, administration, physical fitness, or talent management of service personnel. Where such efforts do exist, they are research and development initiatives that must still cross the Department’s “valley of death” along the ponderous defense acquisition path. Instead, the services retain industrial-era methods and practices for raising and fielding forces. AI/ML use in logistics and maintenance is nascent and emerging only slowly from niche communities. Algorithms can help do these and many other things better; thanks to dynamic high-tech entrepreneurs, such tools are steadily improving lives outside of America’s military bases and ships. Despite frequent discussions within the Department of concepts calling for wider development in this area, the tools needed are simply not present for duty alongside those in uniform, and their integration is nowhere in sight. Manned-unmanned teaming seems a long way off.