Shared Responsibility: Enacting Military AI Ethics in U.S. Coalitions

America needs to enlist its oldest allies and new partners to build a safer and freer world for the AI era.

In March 2021, Google’s Eric Schmidt and former Department of Defense (DoD) deputy secretary Bob Work wrote in their preface to the 756-page report of the bipartisan National Security Commission on Artificial Intelligence (NSCAI), “America is not prepared to defend or compete in the AI era.” As chair and vice-chair of the NSCAI, respectively, they summarized the commission’s solution: “America needs to enlist its oldest allies and new partners to build a safer and freer world for the AI era.”

Though the U.S. military is taking pains to ensure AI does not erode its ideal of fighting wars ethically, it cannot afford to leave its allies and partners behind in this endeavor. DoD is working to ensure the U.S. military can deter and fight AI-infused armed conflicts, as a member and likely leader of future coalitions, using ethical, or “responsible,” AI. Efforts have focused on establishing broad principles for AI development and use and have targeted the technical enablers of multinational uses of AI, such as standardizing data-labeling processes and pursuing data-sharing agreements with partners.
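To make the data-labeling point concrete, a minimal sketch of what a shared, machine-readable annotation record might look like is below. The field names, taxonomy, and releasability markings are illustrative assumptions for the sake of the example, not an actual DoD, NATO, or partner standard.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative only: a hypothetical shared annotation record that coalition
# partners could agree on so labeled sensor data remains interoperable.
@dataclass
class CoalitionLabel:
    record_id: str       # unique identifier for the labeled item
    source_nation: str   # contributing partner, e.g. "USA", "GBR"
    sensor_type: str     # e.g. "EO", "IR", "SAR"
    object_class: str    # entry from an agreed taxonomy, e.g. "vehicle/armored"
    confidence: float    # annotator or model confidence, 0.0 to 1.0
    releasability: str   # sharing caveat, e.g. "REL NATO"

    def to_json(self) -> str:
        """Serialize to JSON so any partner's pipeline can ingest the record."""
        return json.dumps(asdict(self))

# Example: a label produced by one partner and shared under a data agreement.
label = CoalitionLabel(
    record_id="img-000142",
    source_nation="AUS",
    sensor_type="EO",
    object_class="vehicle/armored",
    confidence=0.87,
    releasability="REL NATO",
)
print(label.to_json())
```

The value of such a schema lies less in the code than in the agreement it encodes: partners who label data against the same fields and taxonomy can pool training data and exchange model outputs without losing provenance or sharing caveats.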

This is not enough. On the coalition battlefield, the ethics of military AI come down to the choices leaders and commanders make about how to use AI-enabled weapons. But it is not clear that the coordination and joint decisionmaking practices U.S.-led coalitions have used to date at the political and operational levels are well suited to operations in an AI era. How will coalitions manage a more complex decision space, where different nations’ AI systems pass algorithm outputs to operators and analysts across a coalition? Will decisionmaking outcomes be consistent with our ethical ideals?

AI is making human judgment in war more, not less, important. This means the United States and its allies and partners will need to innovate together, focusing on more than broad ethical principles and technical solutions. The U.S. defense enterprise can take three concrete steps, described below, to ensure that its own and its partners’ technology and ideals align with the organizational structures—that is, coalitions—in which AI-enabled weapons will be put to use.

Foundations of AI Responsibility in U.S. Alliances and Partnerships

Because the United States fights in coalitions in most armed conflicts, focusing on developing partnerships to integrate military AI is a prudent approach. The NSCAI charged the DoD with achieving broad military AI readiness by 2025, including by “promoting AI interoperability with allies and partners,” and the Pentagon is heeding this call.

In September 2020, DoD convened representatives from thirteen countries, drawn from NATO, non-NATO alliances, and other defense partnerships, to socialize its ethical principles for AI and coordinate on military AI ethics policy. This AI Partnership for Defense (AIPfD) aims to “promote the responsible use of AI, advance shared interests and best practices … establish frameworks to facilitate cooperation, and coordinate strategic messaging.”

Since then, engagement with international defense partners has broadened and deepened. By June 2021, AIPfD had added three member states to the group; in March 2022, it convened its fifth international dialogue. AIPfD cooperation has deepened from high-level conversations to discussions of AI-use scenarios, marking progress toward a key NSCAI recommendation that the DoD focus on specific AI use cases in exercises and wargames.

In addition, in October 2021, NATO adopted an alliance-wide AI strategy focused mostly on responsible use. Biden administration initiatives in the Indo-Pacific in 2021—reinvigorating the Quadrilateral Security Dialogue (Quad) with Australia, India, and Japan and concluding the Australia-U.K.-U.S. (AUKUS) technology-sharing agreement—also targeted AI cooperation. Early work in the Quad has included collaboration on AI technical standards more generally, while AUKUS members are cooperating on capabilities for use in contested military environments.

Finally, U.S. military services have also begun incorporating new AI systems into multinational operational exercises, experimentation that can help foresee and overcome the technical and operational challenges of using novel technology in coalitions.

Important early steps like these help enact standards, such as keeping humans in AI systems’ decision loops and maintaining strong technology-policy review processes, that are meant to avoid worst-case scenarios in which uncontrolled, unvalidated systems are fielded in armed conflict.

But the Department has more to do to avoid the misuse or failure of AI-enabled weapons in future coalition operations. Whether the employment of any weapons system in armed conflict is “ethical” or “responsible” ultimately depends on the assessments commanders and political leaders make. In multinational operations with AI tools at the “tip of the spear,” non-U.S. leaders and commanders will also face choices that determine whether they use such tools to enact values, like proportionality and discrimination, in fighting alongside U.S. forces. It is in the U.S. interest that they do this. Guaranteeing that they do, however, is difficult.

Strength in Diversity?

Many of the national AI strategy documents of NATO allies and of U.S. allies and partners in the Indo-Pacific prioritize responsible governance of AI and, in some cases, of AI-enabled warfighting. This apparent high-level harmony notwithstanding, public polling data from 2019 and 2021 suggest that specific concerns about the use of lethal autonomous weapons systems vary widely among U.S. security partners. Data from this year show that the public’s trust in AI more generally ranges from quite high among some partners, such as India and Turkey, to quite low among traditional NATO allies and Japan.

U.S. leaders should thus not assume their allies will be reading from an identical political or ethical playbook in future coalition operations with AI in the mix. Once shooting begins, coalition members frequently find they actually disagree about the policies and strategies that should guide operations. Domestic politics are often what shape the scope and limits of coalition members’ contributions to operations, and they can impact leaders’ strategic decisions in complex ways. This makes it worth thinking carefully about the benefits and risks of working in a coalition where views about military AI use and governance vary.

Intuitively, a diversity of perspectives is useful for creative problem-solving. In plotting a course for research and development on military AI, DoD set the goal of building “a robust national and global [responsible AI] ecosystem” among government, private-sector, and academic partners to maximize creative potential and interoperability.

In operational contexts, however, the stakes of navigating differences in the ethical frameworks and policies that inform leaders’ and commanders’ decisions are much higher. Without appropriate ways of managing coalition contributions, unforeseen mismatches in the skill levels and specialized capabilities of partner forces can have major negative effects on military effectiveness. National political differences over whether AI collaboration should be civilian- or military-focused, varied timelines over which militaries are adopting AI, and incompatibilities in legal and regulatory structures could all present challenges to U.S.-led coalitions cohesively enacting shared notions of military AI ethics. Is there a way to find operational strength in this diversity?

A Responsible AI Coalition

It is in the U.S. interest to leverage the creative potential of a diverse AI “ecosystem.” However, it is also necessary to establish habits that mitigate the risk that political, cultural, and organizational differences among future coalition partners might undermine collective, responsible AI use.

To do this, the Defense Department can take steps now to increase the reliability with which future coalitions will operationalize the foundations of international cooperation on military AI. The DoD should pursue three objectives, described below, and consider specific actions to achieve them. Building these goals into the charter of the DoD’s new office of the Chief Digital and Artificial Intelligence Officer (CDAO), which Deputy Secretary Kathleen Hicks directed be prepared by June 1, 2022, would help align institutional incentives to accomplish them.

Establish a common language. First, policymakers, commanders, and technical and legal experts in future coalitions must be able to speak a common language to communicate about how AI systems ought to be used on the battlefield—let alone about how they are developed and validated. A December 2021 report by the Center for Naval Analyses identifies 565 unique policy and ethics “risk elements” that the use of autonomous systems poses in military applications. Given the “bias that occurs when operating in coalition and allied environments … that stems from different sets of treaties, ROE, or cultural norms,” the opportunities for miscommunication will only multiply in the absence of agreement on the terms of debate.

Despite China’s efforts to lead in setting international AI technical standards, it is clearly in the U.S. interest to advance its own standards for collaboration with military partners. The DoD should task the CDAO with overseeing a process to identify the resources needed to engage partners in developing a technical glossary for AI and to baseline U.S. programs around it. Doing so would set the terms of debate among the international partners DoD seeks to recruit to the responsible AI ecosystem it aims to establish. Without shared language, communicating about partners’ capabilities and intent to use AI responsibly will be difficult, posing risks to the strategic effectiveness and political cohesion of future coalitions.
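One way to picture what such a glossary could look like in practice is as structured, version-controlled data that partners can cite in policy documents and test plans. The sketch below encodes a single entry; the term, definition, fields, and partner list are hypothetical examples offered only to show the idea, not an agreed standard.

```python
import json

# Illustrative only: a hypothetical machine-readable glossary entry that
# coalition partners could version-control and reference consistently.
glossary_entry = {
    "term": "human-in-the-loop",
    "definition": (
        "A human operator must affirmatively approve each engagement "
        "decision recommended by the AI system."
    ),
    "related_terms": ["human-on-the-loop", "meaningful human control"],
    "adopted_by": ["USA", "GBR", "AUS"],  # partners that have agreed to this usage
    "version": "0.1-draft",
}

# Print the entry as it might appear in a shared, reviewable glossary file.
print(json.dumps(glossary_entry, indent=2))
```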

Regularize international collaboration. Second, AI systems need to be built for purpose and stress-tested in situations that approximate coalition use contexts. In the pipeline from research and development (R&D) and testing, evaluation, validation, and verification (TEVV) to operational use and sustainment, it is not clear how the DoD is accounting for the unique coalition command-and-control challenges of AI-enabled systems. The ongoing DoD activities with partners described above tend to be country-, service-, or acquisition-program-specific efforts. To ensure that lessons learned are not lost in the massive Pentagon bureaucracy, the DoD first needs to identify when program offices, service branches, and country desk officers interact with partners in ways that should inform AI system development, testing, and fielding.