How to Slow the Spread of Lethal AI


Today, it is far too easy for reckless and malicious actors to get their hands on the most advanced and potentially lethal machine-learning algorithms.


Following the signing of the U.S.-spearheaded 2023 Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy by over sixty countries, negotiators at the UN and civil society organizations are making painstaking progress toward a treaty on lethal AI systems. Nevertheless, these efforts outline few mechanisms to enforce their rules or to prevent the “spoiler problem,” whereby unprincipled new entrants, rogue states, and non-state actors exploit the self-restraint of more scrupulous companies and polities. Without enforcement mechanisms in place soon—likely before any treaty is even finalized, and certainly before the most advanced lethal AI algorithms widely proliferate—the measures under consideration may be dead on arrival.

Currently, it is far too easy for reckless and malicious actors to get their hands on the most advanced machine-learning algorithms, including foundation models whose applications are growing more general by the day. The status quo, in which advanced algorithms are shared open-source and excluded from export-control regimes, is untenable from the perspective of U.S. national security. The United States must update its export-control policy on advanced software, implement a sanctions regime against irresponsible actors, including those within allied countries, and establish policies on AI Dual-Use Research of Concern (DURC), akin to longstanding and emerging biosecurity practices, that reflect the destabilizing risks of weaponized AI.


The United States should establish an interagency task force to define AI DURC precisely and to set policies for its responsible handling and operationalization. Encryption, meaningful human oversight by design, robustness to misalignment, and supply-chain auditing are among the key practices on which the task force should weigh in. Given the slippery nature of algorithmic code and research, the United States will need to apply more coercive measures to enforce AI DURC guidelines than it currently applies to biosecurity DURC guidelines.

Among the goals of these policies would be to make it much harder for developers to open-source advanced algorithms with potential combat applications. The president can make immediate progress on this front by issuing an executive order barring federally funded researchers, contractors, and grantees from publishing or sharing code for such algorithms without a waiver from the Bureau of Industry and Security (BIS). The most important targets would be swarm-intelligence algorithms trained on multimedia inputs and capable of exercising analytical judgment: the key characteristics of AI algorithms that can automate the entire kill chain.

Given the current state of technological competition with foreign adversaries, the United States must remain a world leader in AI innovation, and this framework should not hamper that goal. The National Science Foundation (NSF) is currently running a pilot project to connect U.S. AI researchers to educational resources with the goal of facilitating discovery and innovation. That infrastructure should be expanded into a platform through which registered developers can share ideas on DURC topics, a forum closed to foreign adversaries but open to collaboration among American researchers. The United States should exempt other countries from some of the more stringent regulations proposed here and grant them access to the NSF exchange if they can credibly demonstrate enforcement of mutually acceptable AI DURC safety standards. Taken together, these new regulations and the NSF exchange would balance the urgent national security imperative outlined above with the need to continue leading the world in AI innovation.

A vital benefit of this approach is that it would enable the president to immediately leverage American economic and technological prowess to build momentum toward international agreements on AI safety. It would signal sincerity to key players in those negotiations and ignite an iterative process of developing enforcement mechanisms for international AI safety standards, making negotiated treaties more plausible. Even without a grand international treaty, full implementation of these proposed policies would bring some order and due diligence to the American AI research space, help protect it from foreign exploitation, and preserve a competitive edge for American innovation and power. At the very least, this approach would slow down the adoption of autonomous weapons by adversaries of the American-led global order, giving the United States and allied militaries time to adapt.

America and its allies have an overwhelming interest in securing dominance in military AI and protecting international stability. The status quo puts both objectives in dire peril. The policies outlined above would not, on their own, safeguard that dominance and stability, but they would represent a massive step in the right direction.

Anthony De Luca-Baratta is a Google Public Policy Fellow at the Aspen Institute, where his work focuses on AI governance. He interned at the Center for the National Interest during the summer of 2024, where his research centered on technology and defense policy. He is an MA student in International Relations and a Public Service Fellow at the Johns Hopkins School of Advanced International Studies (SAIS), where he focuses on AI governance, national security, and American grand strategy.

Josh Curtis is a member of Foreign Policy for America's NextGen Initiative and a Public Service Fellow and master's candidate at the Johns Hopkins University School of Advanced International Studies (SAIS), where his research focuses on science and tech diplomacy, AI safety, and international security. Before coming to SAIS, he served as Special Assistant for Policy & Strategy at the National Endowment for Democracy, where he helped shape the organization's emerging tech and digital democracy grantmaking strategies.

Image: Parilov / Shutterstock.com.