Is It Too Late to Stop the Spread of Autonomous Weapons?

If autonomous weapons are the future of warfare, then the United States has no choice but to grapple with their complexities.

If the AI combat core cannot be meaningfully separated from the platform, arms trafficking of the F-AI is probably not a major concern. Trade in major arms systems often draws significant public attention, such as the recent American decision to sell MQ-1C drones to Ukraine; so too would an attempted straw purchase of an F-AI. As a physically large platform, the F-AI would no doubt require a wide variety of maintainers, and someone would probably notice if one went missing. Even if an adversary managed to pull off an Ocean's Eleven-style heist to nab an F-AI, ordinary export control measures would still matter: law enforcement investigations, verification of end-users and uses when the F-AI is transported, and inspection of shipments for contraband.

Of course, arms trafficking could still be an issue for smaller autonomous systems. In February 2021, Israel announced it was investigating twenty suspects for building and trafficking loitering munitions, which are not necessarily autonomous weapons but are “on the forefront of autonomous lethality.” Normal export control measures apply here too (the Israeli smuggling case was seemingly an ordinary police investigation), but complexity arises if AI-based autonomy changes the rules themselves. If the presence of AI determines who can buy the weapon, whether buyers can transfer it, or the types of approvals needed, then technical verification becomes a major challenge: authorities must be able to tell whether a given weapon is actually autonomous.

The presence of specialized AI chips might provide some indication, but probably no smoking gun: a weapon might use AI to help guide targeting while a human still decides what to destroy. Other aspects of weapon design and specifications may also provide clues. An armed, unmanned platform with no communication receivers or software for remote operation is likely, though not necessarily, capable of autonomous strikes. The details are likely to be idiosyncratic, and the burden falls on law enforcement, export control, and intelligence personnel to make judgment calls about the likelihood of autonomy.
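
To make the problem concrete, below is a minimal, purely illustrative sketch of how such a judgment might be structured: a naive Bayesian aggregation of hypothetical indicators. The indicator names and likelihood ratios are invented for illustration and do not reflect any real inspection regime.

```python
import math

# Hypothetical likelihood ratios: P(indicator | autonomous) / P(indicator | not).
# All values are invented for illustration only.
LIKELIHOOD_RATIOS = {
    "specialized_ai_accelerator_present": 3.0,   # suggestive, but no smoking gun
    "no_remote_control_receiver": 4.0,           # armed platform with no operator link
    "onboard_target_recognition_software": 6.0,
    "datalink_required_for_weapon_release": 0.2, # points away from autonomous strikes
}

def autonomy_odds(observed: dict, prior_odds: float = 1.0) -> float:
    """Combine observed indicators into posterior odds that the system can
    select and engage targets without human input. Absent indicators are
    (simplistically) treated as uninformative."""
    log_odds = math.log(prior_odds)
    for indicator, present in observed.items():
        if present:
            log_odds += math.log(LIKELIHOOD_RATIOS[indicator])
    return math.exp(log_odds)

observed = {
    "specialized_ai_accelerator_present": True,
    "no_remote_control_receiver": True,
    "onboard_target_recognition_software": False,
    "datalink_required_for_weapon_release": False,
}
print(f"Posterior odds of autonomous-strike capability: {autonomy_odds(observed):.1f}:1")
```

Real judgments would of course rest on physical inspection and intelligence rather than a table of likelihood ratios; the point is only that the evidence is probabilistic and cumulative rather than binary.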

Now What?

The United States is unlikely to field an F-AI any time soon (nor are adversaries or allies), but numerous states are developing and increasingly deploying AI-based autonomous systems. Overall, the United States needs a nuanced, risk-informed approach to non- and counter-proliferation, one that recognizes autonomous weapons comprise a wide range of systems. This analysis points to several considerations for the United States and its allies.

First, system design is a critical factor in proliferation risk. If a weapon's AI and autonomy cannot be meaningfully separated from the system as a whole, then proliferation risks are reduced, likely drastically. As part of broader test and evaluation activities, weapon designers need to consider how readily new autonomous features can be manipulated, and how readily AI-based features can be protected, extracted, used, or destroyed in the event of system capture. Best practices will need to be codified into clear standards that apply across the autonomous weapons ecosystem.

Second, nonproliferation policy should depend on a weapon's relative military value, particularly the reliability of the system. In some cases, the United States may want to encourage the proliferation of reliable systems: providing technical assistance and training to improve reliability, sharing already-developed AI systems, or supplying a complete weapon. Conversely, the United States should be very careful with systems that could affect delicate military balances. That requires modeling, simulation, war games, and other analyses to determine where autonomous systems are actually most impactful. In other cases, particularly autonomous nuclear weapons and massive anti-personnel drone swarms, the United States should even consider binding limitations on its own development. Autonomous nuclear weapons create potentially catastrophic risks for the United States, while massive drone swarms represent a major asymmetric threat, with risks akin to traditional weapons of mass destruction.
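
To give a flavor of such analysis, below is a minimal Monte Carlo sketch (all parameters invented for illustration) of how per-sortie reliability shapes the expected number of successful engagements for a notional autonomous system.

```python
import random

def simulate_campaign(n_sorties: int, reliability: float,
                      p_hit_given_functioning: float, trials: int = 10_000) -> float:
    """Average number of successful engagements across simulated campaigns.
    A sortie succeeds only if the system functions AND the engagement hits."""
    total = 0
    for _ in range(trials):
        for _ in range(n_sorties):
            if (random.random() < reliability
                    and random.random() < p_hit_given_functioning):
                total += 1
    return total / trials

for reliability in (0.6, 0.8, 0.95):
    avg = simulate_campaign(n_sorties=20, reliability=reliability,
                            p_hit_given_functioning=0.7)
    print(f"reliability={reliability:.2f} -> avg successful engagements: {avg:.1f}")
```

The interesting outputs are the deltas: where improved reliability barely moves outcomes, sharing the technology is lower-stakes; where it swings results dramatically, the military balance is more sensitive.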

Third, cyberwarfare capabilities should play a key role in any non- and counter-proliferation efforts. On defense, AI combat core systems must be well guarded against cyber theft, and the networks that host them protected from attack. This may create trade-offs: a separate AI combat core may be easier to guard, while a highly integrated system may need to be shared broadly with system designers. On offense, cyber-attacks may be used to exfiltrate data on adversary progress, poison AI training data, or sabotage adversary systems. If the real military value of AI stems from decision-making speed, then subtly slowing an AI algorithm might have meaningful military effects.
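
To see why even subtle slowdowns could matter, consider a toy simulation (with invented numbers, not a model of any real system) of a detect-to-engage chain that must complete within a fixed time window.

```python
import random

def p_engage_in_time(window_s: float, mean_latency_s: float,
                     steps: int, trials: int = 10_000) -> float:
    """Probability that all decision steps finish within the window, with
    per-step latency drawn from an exponential distribution."""
    hits = 0
    for _ in range(trials):
        total = sum(random.expovariate(1.0 / mean_latency_s) for _ in range(steps))
        if total <= window_s:
            hits += 1
    return hits / trials

# Sabotage that nudges average per-step latency upward compounds across the chain.
for mean_latency in (0.15, 0.20, 0.25):
    p = p_engage_in_time(window_s=2.0, mean_latency_s=mean_latency, steps=8)
    print(f"mean per-step latency {mean_latency:.2f}s -> P(engage in time) = {p:.2f}")
```

In this toy setup, modestly increasing average per-step latency turns an engagement chain that nearly always completes in time into one that completes only about half the time.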

Finally, the United States needs to engage more constructively with the global conversation on autonomous weapons. The United States has largely opposed growing global calls to ban autonomous weapons, emphasizing the applicability of existing laws of war. The United States is right to oppose broad bans on autonomous weapons, especially restrictions on defensive systems and low-risk autonomous weapons like radar-seeking loitering munitions. But the United States may also need to work with, or at least minimize opposition from, ban proponents to build global non-proliferation efforts. To do so, the United States should consider supporting formal discussions on a treaty, on the condition that a complete ban on all autonomous weapons is taken off the table. The International Committee of the Red Cross's statement on autonomous weapons, with its emphasis on predictability and targeting, provides a risk-based approach to such negotiations. A treaty process would also serve as a vehicle to advance non-proliferation in areas where autonomous weapons pose significant threats to American security, such as autonomous nuclear weapons and drone swarms.

Ultimately, preventing autonomous weapons proliferation is likely to be a vexing challenge in which the technical and geopolitical context will play a major role. But if AI and autonomous weapons are the future of warfare, then the United States has no choice but to grapple with these complexities.

Thank you to Nick Colosimo for providing valuable feedback on the article.

Zachary Kallenborn is an author and analyst who specializes in WMD terrorism, unmanned systems, drone swarms, and homeland security. Zachary is a Policy Fellow at the Schar School of Policy and Government, a Research Affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), an officially proclaimed U.S. Army "Mad Scientist," and a Senior Consultant at ABS Group. 

Image: DVIDS