Artificial Intelligence and the Human Context of War

It is folly to expect the same conditions that have enabled AI success in commerce to be replicated in war.

Excitement and fear about artificial intelligence (AI) have been building for years. Many believe that AI is poised to transform war as profoundly as it has business. There is a burgeoning literature on the AI revolution in war, and even Henry Kissinger has weighed in on “The Age of AI and Our Human Future.”

Governments around the world seem to agree. China’s AI development plan states that “AI has become a new focus of international competition” and “is a strategic technology that will lead in the future.” The U.S. National Security Commission on AI warns that “AI is deepening the threat posed by cyber attacks and disinformation campaigns that Russia, China, and others are using to infiltrate our society, steal our data, and interfere in our democracy.” China and the United States are in a race for AI supremacy, and both nations are investing huge sums in lethal autonomous weapons to gain an edge in great power competition.

Scholars expect that authoritarians and democracies alike will embrace AI to improve military effectiveness and limit the domestic political costs of war. Military AI systems will be able to sense, respond, and swarm faster than humans. Speed and lethality would encourage preemption, raising the risk of strategic deterrence failure. Unaccountable killing would be an ethical catastrophe. Taken to an extreme, a superintelligence could eliminate humanity altogether.

The Economics of Prediction

These worrisome scenarios assume that AI can and will replace human warriors. Yet the literature on the economics of technology suggests that this assumption is mistaken. Technologies that replace some human tasks typically create demand for other tasks. In general, the economic impact of a technology is determined by its complements. If so, the complements of AI may have a bigger impact on international politics than AI technology alone.

Technological substitution typically increases the value of complements. When automobiles replaced horse-drawn carriages, they created demand for people who could build roads, repair cars, and keep them fueled. A drop in the price of mobility increased the value of transportation infrastructure. Something similar is happening with AI.

The AI technology that has received all the media attention is machine learning. Machine learning is a form of prediction, which is the process of filling in missing information. Notable AI achievements in automated translation, image recognition, video game playing, and route navigation are all examples of automated prediction. Technological trends in computing, memory, and bandwidth are making large-scale prediction commercially feasible.

Yet prediction is only part of decisionmaking. The other parts are data, judgment, and action. Data makes prediction possible. Judgment is about values; it determines what to predict and what actions to take after a prediction is made. An AI may be able to predict whether rain is likely by drawing on data about previous weather, but a human must decide whether the risk of getting wet merits the hassle of carrying an umbrella.
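
To make this division of labor concrete, here is a minimal sketch of the umbrella decision (our hypothetical illustration; the function names and cost figures are invented for this example). The model supplies a prediction from data; a human supplies the judgment, in the form of costs, that turns the prediction into an action.

```python
# Minimal sketch of the data / prediction / judgment / action decomposition.
# All names and numbers are hypothetical illustrations, not a real system.

def predict_rain(past_weather):
    """Prediction: fill in missing information from data.
    A real system would use a trained model; this stand-in just
    uses the historical frequency of rain."""
    rainy_days = sum(1 for day in past_weather if day == "rain")
    return rainy_days / len(past_weather)

def decide_umbrella(p_rain, cost_of_getting_wet, cost_of_carrying):
    """Judgment: the costs are human values, not learned from data.
    Carry the umbrella only if the expected cost of getting wet
    exceeds the certain cost of carrying it."""
    return p_rain * cost_of_getting_wet > cost_of_carrying

past_weather = ["rain", "sun", "rain", "sun", "sun"]   # data
p_rain = predict_rain(past_weather)                    # prediction (machine)
carry = decide_umbrella(p_rain,                        # judgment (human values)
                        cost_of_getting_wet=10,
                        cost_of_carrying=3)
print(f"P(rain) = {p_rain:.2f}, carry umbrella: {carry}")  # action
```

Note that cheaper prediction only sharpens the probability estimate; the costs in the decision rule still come from a human judgment about what getting wet is worth.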

Studies of AI in the commercial world demonstrate that AI performance depends on having a lot of good data and clear judgment. Firms like Amazon, Uber, Facebook, and FedEx have benefitted from AI because they have invested in data collection and have made deliberate choices about what to predict and what to do with AI predictions. Once again, the economic impact of new technology is determined by its complements. As innovation in AI makes prediction cheaper, data and judgment become more valuable.

The Complexity of Automated War

In a new study we explore the implications of the economic perspective for military power. Organizational and strategic context shapes the performance of all military information systems. AI should be no different in this regard. The question is how the unique context of war shapes the critical AI complements of data and judgment.

While decisionmaking is similar in military and business organizations, they operate in radically different circumstances. Commercial organizations benefit from institutionalized environments and common standards. Military systems, by contrast, operate in a more anarchic and unpredictable environment. It is easier to meet the conditions of quality data and clear judgment in peacetime commerce than in violent combat.

An important implication is that military organizations that rely on AI will tend to become more complex. Militaries that invest in AI will become preoccupied with the quality of their data and judgment, as well as the ways in which teams of humans and machines make decisions. Junior personnel will have more responsibility for managing the alignment of AI systems and military objectives. Assessments of the relative power of AI-enabled militaries will thus turn on the quality of their human capital and managerial choices.

Anything that is a source of strength in war also becomes an attractive target. Adversaries of AI-enabled militaries will have more incentives to target the quality of data and the coherence of judgment. As AI enables organizations to act more efficiently, they will have to invest more in coordinating and protecting everything that they do. Rather than making military operations faster and more decisive, we expect the resulting organizational and strategic complexity to create more delays and confusion.

Emerging Lessons from Ukraine

The ongoing war in Ukraine features conventional forces in pitched combat over territorial control. This is exactly the kind of scenario that appears in a lot of AI futurism. Yet this same conflict may hold important lessons about how AI might be used very differently in war, or not used at all.

Many AI applications already play a supporting role. Ukraine has been dominating the information war as social media platforms, news feeds, media outlets, and even Russian restaurant reviews convey news of Ukrainian suffering and heroism. These platforms all rely on AI, while sympathetic hacktivists attempt to influence the content that AI serves up. Financial analysts use AI as they assess the effects of crushing economic sanctions on Russia, whether to better target them or protect capital from them. AI systems also support the commercial logistics networks that are funneling humanitarian supplies to Ukraine from donors around the world.

Western intelligence agencies also use data analytics to wade through vast quantities of data—satellite imagery, airborne collection, signals intelligence, open-source chatter—as they track the battlefield situation. These agencies are sharing intelligence with Kyiv, which uses it to support Ukrainian forces in the field. This means AI is already an indirect input to battlefield events. Another, more operational, application of AI is commercial cybersecurity. For instance, Microsoft’s proactive defense against Russian wiper malware has likely relied on AI to detect the malicious code.

Importantly, these AI applications work because they are grounded in peaceful institutions beyond the battlefield. The war in Ukraine is embedded in a globalized economy that both shapes and is shaped by the war. Because AI is already an important part of that economy, it is already a part of this war. Because AI helps to enable global interdependence, it also helps to weaponize interdependence. While futurist visions of AI focus on direct battlefield applications, AI may end up playing a more important role in the indirect economic and informational context of war.

Futurist visions generally emphasize the offensive potency of AI. Yet the AI applications in use today are marginally empowering Ukraine in its defense against the Russian offensive. Instead of making war faster, AI is helping to prolong it by increasing the ability of Ukraine to resist. In this case, time works against the exposed and harried Russian military.

We expect that the most promising military applications of AI are those with analogues in commercial organizations, such as administration, personnel, and logistics. Yet even these activities are full of friction. Just-in-time resupply could not have compensated for Russia’s abject failure to plan for determined resistance. Efficient personnel management systems would not have informed Russian soldiers about the true nature of their mission.

Almost everyone overestimated Russia and underestimated Ukraine, even with the best data and assessments available. Russia’s intelligence failures, moreover, had less to do with the quality of data and analysis than with the insularity of Russian leadership. AI cannot fix, and may even worsen, the information pathologies of authoritarian regimes. AI-enabled cyber warfare capabilities would likewise be of little use if leaders fail to include cyber operations in their war plans.

The Human Future of Automated War

It is folly to expect the same conditions that have enabled AI success in commerce to be replicated in war. The wartime conditions of violent uncertainty, unforeseen turbulence, and political controversy will tend to undermine the key AI conditions of good data and clear judgment. Indeed, strategy and leadership cannot be automated.

The questions that matter most about the causes, conduct, and conclusion of the war in Ukraine (or any war) are not really about prediction at all. Questions about the strategic aims, political resolve, and risk tolerances of leaders like Vladimir Putin, Volodymyr Zelenskyy, and Joseph Biden turn on judgments of values, goals, and priorities. Only humans can provide the answers.

AI will provide many tactical improvements in the years to come. Yet fancy tactics are no remedy for bad strategy. Wars are caused by miscalculation and confusion, and artificial intelligence cannot offset natural stupidity.