Shared Responsibility: Enacting Military AI Ethics in U.S. Coalitions
America needs to enlist its oldest allies and new partners to build a safer and freer world for the AI era.
Until now, the department has not needed to understand how its vast network of partner governments and militaries is absorbing a general-purpose technology like AI. A February 1 DoD memorandum identifies the roles the CDAO and the undersecretaries for Policy, Acquisition and Sustainment (A&S), and Research and Engineering (R&E) will play in international cooperation on AI. But DoD lacks a cross-cutting process for collecting the technical and policy knowledge gained from these international interactions and integrating it into coalition policy, planning, or technical cooperation efforts on a country-by-country or weapons-system-by-weapons-system basis.
The DoD should task the CDAO, Policy, A&S, and R&E offices with creating one. These offices should establish metrics in R&D, TEVV, and acquisitions processes that incentivize the bureaucracy to prioritize technical and organizational interoperability and to account for the unique requirements that ethical or policy questions are likely to raise in multinational use scenarios. This would channel international partner input to the relevant points of contact across the department, maximizing the value of the international “responsible AI ecosystem” to U.S. coalition efforts.
Engage allied publics. Last, differences in public opinion about the inherent legitimacy of AI-based weapons, and about the desired forms of accountability for them, reflect real divides within and between the polities that make up America’s alliance network. Bridging these gaps by monitoring public discourse and strengthening public diplomacy about military AI would both educate publics at home and abroad and help establish transparency as the expected norm. In an era in which states select and enact military strategies before a global audience, it matters that Americans, allies, and others can see that when the U.S. military’s technological edge must be used in conflict, its leaders and partners choose to use it responsibly.
As China and Russia continue to use AI tools to tighten authoritarian control at home, it has become commonplace to argue that the values America and its allies share on responsible AI can represent a soft-power advantage. This may well be the case. But only if America and its allies can enact these values together on the AI-infused battlefield will that advantage help legitimize U.S.-led operations in the world’s eyes. A coalition’s ability to uphold the laws of armed conflict is ultimately bounded by the capability and willingness of its least able members to do so.
Casey Mahoney (@caseymahoney) is a U.S. Institute of Peace–DoD Minerva Peace & Security Scholar and a Ph.D. Candidate in political science at the University of Pennsylvania. He served as a Nunn-Lugar Fellow in the Office of the Secretary of Defense for Policy and in AT&L from 2013 to 2017. The views expressed are his own.