AI Competition With China Should Be Done the American Way

The United States is uniquely positioned to take advantage of a decentralized artificial intelligence model.

With much of the national security community's attention now turned to long-term competition with China, the race to sustain the United States' global leadership position in science and technology has taken center stage. One of the most prominent aspects of this competition is artificial intelligence (AI), widely viewed as a potentially transformative technology.

Last year, the National Security Commission on Artificial Intelligence found that China is determined to surpass the United States in AI leadership, particularly in military applications. Top academics and industry thinkers agree that China is making significant headway toward this goal and is rapidly closing the gap with U.S. research, leading one disgruntled former Pentagon official to suggest that the United States is already losing the race for AI superiority. Close analysis supports claims that China has indeed established areas of advantage, especially in data collection technology for surveillance and facial recognition.

This anxiety extends to the U.S. military enterprise as well. The military's focus on AI has played out not only through organizational changes like the creation of the Joint Artificial Intelligence Center (JAIC) and a focus on AI in recent strategic documents, but also through a shift in its approach to warfighting. The central Department of Defense effort to modernize its methods of fighting—slickly named Joint All-Domain Command and Control (JADC2)—identifies “AI, machine learning, and predictive analytics” as the core enablers that will allow the future military to “defeat any adversary at any time and in any place around the globe.” As part of its JADC2 efforts, the Department of Defense is seeking to enforce compliance with new universal standards, including by building a “joint operating system” that will force a top-down centralization of military data storage and accessibility.

On paper, there is a clear appeal to this approach. It feels efficient, and it allows us to dismiss the messy, intensely human reality of war by envisioning a seamless, rapid flow of data, knowledge, and decisions through the United States’ warfighting machine as it dominates the enemy. But like outfitting an 1890s office building with pneumatic tubes, connecting a military enterprise to today’s rather immature conception of AI might seem modern while setting up a liability down the road. Indeed, it might play to U.S. interests to let China take the lead: to watch as it connects every aspect of its military to centralized systems that sweep up data for current AI approaches, built around big data machine learning and today’s imperfect classification algorithms, and issues commands intended to destroy opposing forces without the need to trust its military operators. This science-fiction version of military AI could prove a liability because, at the end of the day, military success has more to do with surprise than efficiency.

When considering the dynamics of competition between the United States and China, it is important to recognize that nations have asymmetric motivations and capabilities to develop and deploy AI. China’s single-party political system still depends to some extent on the support of its people to maintain legitimacy. In the absence of democratic elections, surveilling the populace and using the data to detect sudden anomalies is a powerful tool for monitoring—and responding to—popular sentiment. Moreover, China has strong incentives to leverage technology to overcome the failings of past one-party systems. While it also has a capable and sometimes freewheeling private sector, strong state control helps China ensure that its people hew to national mandates.

Chinese President Xi Jinping has voiced persistent doubts about the ability of Chinese military officers to make combat leadership decisions, asserting that some officers cannot understand higher authorities’ intentions or make operational decisions under pressure. Even though China’s military enterprise has made remarkable progress in building systems from stealth drones to hypersonic missiles, this foundational aspect of combat remains a key weakness. That helps explain the Chinese military’s fascination with AI, which would seem to offer a way to paper over leadership concerns, with machines dominating the battlefield in the same way that Chinese AI can beat the best video game players in the world.

Meanwhile, the United States draws from a highly decentralized system that supports and drives a vibrant commercial sector. American industry has harnessed the seemingly magical power of AI to drive online advertising, and in any given year dozens of startups are figuring out how to leverage machine learning to build and monetize better digital mousetraps. At the same time, the U.S. military struggles to operationalize AI. In the four years since the JAIC was founded, it hasn’t made visible progress toward delivering access to troves of military data for users of its joint common foundation toolkit. That’s no easy task, given the context-specific nature of the data, a complex security environment, and distinct service cultures that make sharing frustratingly difficult. But perhaps this isn’t the catastrophe that corporate executives would have us believe.

It appears that leaders are distracted by powerful examples of AI applications that are not representative of actual warfare. For China, these examples include domestic surveillance efforts designed to ensure internal harmony. American leaders, for their part, are drawn to the prevalence of AI in commercial applications, especially its use in improving operational efficiency, supporting corporate decision-making, and crossing information technology stovepipes. But despite superficial similarities, neither the rule-bound world of video games nor routine surveillance and commerce has much to do with the horrors of actual military conflict.

The shortfalls of today’s AI technologies are clear. Machine learning technology’s dependence on black-box processing of historical data creates deep and systemic vulnerabilities. For instance, there are several well-publicized examples of self-driving cars misidentifying road signs due to subtle perturbations that humans can’t see. Voice commands are no more secure, as audio processing systems can also be fooled with faint noises that don’t sound like speech to humans. This is to say nothing of the threat of adversaries intruding into centralized AI command and control systems or the risks from communications interruption between algorithms and actions. Even when a computing environment is secure and algorithms aren’t fooled by hidden features, there is no way to know if adversarial data was accidentally collected a decade ago and used to improperly train a system. While research is underway to help mitigate these issues, a better approach is to leverage architectures that don’t present attractive centralized targets, don’t vacuum up data from all sources, and don’t force military users into common standards or platforms.
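To make the perturbation problem concrete, here is a minimal sketch, assuming PyTorch is available, of the well-known fast gradient sign method: each pixel is nudged by an imperceptibly small amount in the direction that most increases a classifier’s loss. The model and image below are invented placeholders, not any specific deployed system; with a real trained classifier, perturbations this small are often enough to flip the predicted class.

```python
# Illustrative sketch of an adversarial perturbation (fast gradient sign method).
# The "classifier" and "image" are toy placeholders; the point is the mechanics.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for an image classifier (e.g., a road-sign recognizer).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32)   # placeholder 32x32 RGB input
label = torch.tensor([3])          # its assumed true class

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return the input shifted in the direction that most increases the loss."""
    x = image.clone().requires_grad_(True)
    loss = loss_fn(model(x), label)
    loss.backward()
    # Each pixel changes by at most epsilon -- typically invisible to humans.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

adversarial = fgsm_perturb(model, image, label)
print("max pixel change:    ", (adversarial - image).abs().max().item())
print("original prediction: ", model(image).argmax(dim=1).item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
# With this untrained toy model the prediction may or may not change; against a
# trained classifier, comparably tiny perturbations frequently do flip it.
```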

Conflict is an intrinsically human phenomenon, and people will always be a part of military decision-making. With the ubiquity of computing and communication, the real question for the U.S. military is not whether AI will be used in military applications, but how to architect it. The United States could choose to trust hierarchy, using centralized mandates and common standards and platforms to accelerate decisions, ultimately imitating—even chasing—our strategic competitor. Alternatively, it could embrace heterogeneity and pursue a federated model with an array of competing projects scattered across the Department of Defense. This latter model delegates trust to decisionmakers throughout the Pentagon, accepting the intrinsic risk it invites. Crucially, it also plays to the United States’ strengths.

In this alternative model, leaders would seek to promote the sharing of concepts and technology by centrally funding infrastructure like code repositories, but they wouldn’t seek to force projects or standards to converge. They would instead look to amplify success where it emerges.

The result might resemble a kind of “zero trust” architecture for decision-making. In cybersecurity, a zero trust architecture recognizes that it is nearly impossible to make any system invulnerable to compromise and instead focuses on minimizing the risk and collateral damage of a successful attack: if any one system is compromised, the effects can’t cascade and collapse the entire enterprise. While this sounds appealing, it comes with tradeoffs, namely forgoing the efficiency gains that centralized AI systems bring. Businesses that adopt modern tools like Palantir or C3.AI seek to place all of their enterprise data under a single pane of glass to maximize the operational payoff of decision support algorithms. By contrast, a federated system offers the advantage of surprise—one can never be quite sure what individuals will do—for better or worse. But this unpredictability is resilient, and it imposes costs on the adversary. Moreover, the United States is uniquely positioned to take advantage of a federated model.
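As a rough illustration of the cybersecurity principle being borrowed here, the sketch below shows the core zero-trust move in miniature: no request is trusted by default, each one is checked against an explicit least-privilege policy, and a compromised credential exposes only the narrow slice it was scoped to. The identities, resources, and policy entries are invented for illustration, not drawn from any real system.

```python
# Minimal, illustrative sketch of the zero-trust principle: every request is
# independently verified against an explicit least-privilege policy, so a
# compromise of one credential or segment does not cascade across the enterprise.
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    identity: str   # who is asking (assumed verified out of band, e.g., certificates)
    resource: str   # what they want to touch
    action: str     # "read" or "write"

# Hypothetical policy table: nothing is reachable unless explicitly listed.
POLICY = {
    ("analyst_alice", "targeting_db", "read"),
    ("logistics_bot", "supply_db", "write"),
}

def authorize(req: Request) -> bool:
    """Check each request on its own merits; there is no trusted 'inside the network'."""
    return (req.identity, req.resource, req.action) in POLICY

# Even a legitimate identity gets nothing beyond its explicit grants.
print(authorize(Request("analyst_alice", "targeting_db", "read")))   # True
print(authorize(Request("analyst_alice", "supply_db", "write")))     # False
```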

The most valuable long-term potential of AI is not displacing humans from their tasks, automating decision-making, or modernizing weapon systems. Instead, over the decades to come, AI will permit us to richly recombine intelligence—in both human and machine forms—around collective problem-solving. We are only in the early stages of this journey. However, strategic advantages will come from harnessing the United States’ unique strengths, including the messy, cantankerous character of its system of governance and national history.

Melissa Flagg is a visiting fellow at the Perry World House, a fellow at the Acquisition Innovation Research Center, and an advisor to the Andrew W. Marshall Foundation. She is also a former Deputy Assistant Secretary of Defense for Research, with over fifteen years of experience in defense research and engineering. She can be found on Twitter @flaggster73.