Is It Too Late to Stop the Spread of Autonomous Weapons?
If autonomous weapons are the future of warfare, then the United States has no choice but to grapple with their complexities.
The congressionally appointed National Security Commission on Artificial Intelligence recently concluded that “we can expect the large-scale proliferation of AI-enabled capabilities.” The recent Libya conflict seems to validate that conclusion about artificial intelligence (AI), with a Turkish-made Kargu-2 drone that “hunted down and remotely engaged” retreating soldiers loyal to Libyan Gen. Khalifa Haftar, according to a United Nations report. It’s not clear whether the Kargu-2 employed its autonomous capabilities, but the Turkish government has acknowledged the drone can be fielded with that capability. Likewise, Russia has employed the KUB-BLA loitering munition and reportedly the Lancet 3 during its invasion of Ukraine. Again, the details are murky—some analysts are skeptical that the KUB-BLA possesses AI-enabled autonomous capabilities, and the claims of Lancet 3 usage come from Rostec, the Russian state-owned defense conglomerate that includes the Lancet 3 manufacturer, rather than from independently verified battlefield images. Adding to the confusion: in each case, autonomous operation is an option that may or may not have been exercised, which makes verification quite hard.
What’s far less murky is the need for the United States to think through policy regarding non- and counter-proliferation of autonomous weapons. Such a policy needs to be nuanced and risk-informed, based on a specific weapon’s military value, its effects on regional and global competition, and its ease of acquisition. In some cases, the United States should treat autonomous weapons as just another tool in the foreign policy toolbox: share the weapons broadly to bolster allies and weaken adversaries. In other cases, the United States should aggressively pursue non- and counter-proliferation, up to and including binding restrictions on American use of certain autonomous weapons.
Understanding Proliferation Risk
Strictly speaking, an autonomous weapon is simply a weapon that selects and engages targets on its own. Really, any conceivable weapons system could be autonomous. Loitering munitions, landmines, active protection systems, and counter-rocket, artillery, and mortar systems are all in use and may fall under the autonomous weapons category depending on the details. Likewise, China recently unveiled unmanned motherships capable of launching drone swarms, Russia is working on undersea swarms for anti-submarine warfare, and India has demonstrated a seventy-five-drone swarm capable of autonomously striking soldiers and vehicles. The growth of AI is a key enabler, allowing better object detection and improved decision-making, and AI-based autonomous weapons are a critical emerging issue. AI is not strictly necessary for a system to be autonomous, but this piece focuses on AI-based systems as the topic du jour.
The broad range of systems means that proliferation concerns vary widely. The United States probably doesn’t need to worry too much about the proliferation of defensive systems; it might even encourage proliferation to protect critical infrastructure against multi-drone attacks.
The will-they-won’t-they dynamic of the United States’ F-35 sale to the United Arab Emirates illustrates that the details matter. Experiments run by the Defense Advanced Research Projects Agency (DARPA) found that an AI pilot dominated a human pilot in simulated F-16 dogfights. The technology is still nascent, and a simulation isn’t reality, but a future operational system of similar capability could greatly affect regional balances of power. So, exports and broader proliferation concerns should be based on careful assessments of existing balances of power and how autonomous weapons may shape them (or not).
In general, the military value of AI-enabled autonomous systems comes from mass and speed. In theory, removing humans from the loop reduces system cost, allowing actors to field many more systems and potentially overwhelm large, expensive platforms. Or a state may create broad battle networks, distributing a large number of cheap autonomous weapons over an area. AI also may allow for much faster tactical decision-making, more quickly closing the sense-to-decision loop. The affordability of autonomous weapons suggests they likely favor defense, but the applications are so broad that definitive conclusions are difficult.
System reliability will be a critical factor too. Humanitarians like to highlight the risks of autonomous error—an autonomous weapon based on machine vision can be easily tricked, perhaps killing civilians. But reliability also has significant military consequences. A weapon that fails to destroy its target is a waste of resources and puts soldiers at risk. It can have strategic effects too, such as accidentally escalating a crisis or botching an attempt to demonstrate strength. Although weapons designers will surely aim to build reliable autonomous weapons, the unpredictability of AI, dependence on training data, and computing constraints will limit how reliable these systems can be. States will also differ in their risk tolerance: the United States gave up chemical weapons decades ago in part because they were too unreliable, while the government of Bashar al-Assad has used them extensively throughout the ongoing Syrian Civil War. For an authoritarian regime seeking to suppress a domestic opposition group, some unreliability may be more feature than bug.
Building simple AI-based autonomous weapons is almost trivial—instructions can be found online for building autonomous paintball gun turrets. But building a robust system that can accommodate the numerous complexities of the battlefield is much more difficult. Recognizing people is one thing, but what about tanks or armored personnel carriers? What if it’s raining, snowing, foggy, or just overcast? What if the target is partially obstructed? What if a tank has a big “Z” painted on it? If states adopt untested and unreliable autonomous weapons that do not adequately account for these factors, the United States may actually want to encourage the proliferation of more reliable systems. That could include providing technical assistance to assess and improve system reliability, providing well-trained AI algorithms, or providing a complete weapon.
The likelihood of AI-based autonomous weapons proliferating will vary too. To explore the proliferation pathways, consider a hypothetical autonomous fighter jet, the F-AI.
Proliferation Pathways
The year is 2050 and the United States is fielding a new fighter jet, the F-AI. The F-AI is almost entirely autonomous: it can identify and engage adversary fighters, avoid adversary air defenses based on external data feeds, carry out human-specified missions, and perform basic activities like planning routes, aerial refueling, and landing. Although the F-AI includes some system upgrades over the last-generation air dominance fighter, the main change is the AI combat core. The AI combat core allows highly precise autonomous movement during combat, anticipates enemy movement, and selects and fires a range of munitions. DARPA’s AI dogfight experiments proved prescient, and the F-AI utterly dominates the sky. Naturally, the United States does not want the fighter jet to fall into the wrong hands. The United States would need to worry about three things: system capture, illicit trafficking, and indigenous development.
In 2011, Iran claimed to have captured a Lockheed Martin RQ-170 Sentinel drone by spoofing its GPS signal and jamming its communications. The United States denies Iran did this, but the possibility needs to be considered, especially as systems incorporate greater autonomy. The brittleness of AI, in which a single-pixel change can cause a system to see a dog instead of a stealth bomber, suggests system capture is a risk for the F-AI too. Even without such trickery, a system may simply be seized in the course of ordinary combat.
System design will be a major factor in proliferation risk. Even assuming the F-AI is reasonably intact, the adversary can break whatever cybersecurity measures are in place, and the software has not wiped itself, the code may still be useless in supporting an adversary’s indigenous development. The combat core may be optimized to the F-AI’s particular parameters and components, such as thrust-to-weight ratios, turn rates, weapon system ranges and types, and flight controllers. Even if the AI combat core can be completely extracted, it may be wholly useless in a new system. System design will also be a major factor in illicit diversion risk. If the AI combat core can be meaningfully separated from the platform, arms trafficking becomes a concern. An adversary or otherwise interested state could attempt to steal just the combat core. Like a cyber weapon, it might simply be smuggled out on a hard drive.
What if a state tries to build its own F-AI? The dual-use dilemma is likely to be quite strong, as civilian and military AI work may look quite similar from the outside, drawing on similar equipment, hardware, and talent. Now, the state may shout its development from the rooftops to signal its technological prowess, but let’s say the state decides to hide the program, or at least how far along it is.
Any autonomous weapon system must still go through the normal phases of weapons development. That means establishing requirements, procuring parts, building the plane, and making sure the bloody thing flies. That means test ranges, production facilities, technical experts, trained users and maintainers, and the organizational support to manage the operation. In addition, AI-specific systems may need to meet other military standards, such as radiation hardening so microchips can withstand radiation from nuclear explosions. Few civilian AI applications require radiation-hardened microchips (civil space programs are the main exception), so facilities for the mass production of rad-hardened AI chips (or for rad-hardening commercial AI chips) may be needed, too. All of those may be targets for intelligence collection and analysis, or, if need be, counter-proliferation through the interdiction of specialized parts or sabotage.
If the AI combat core cannot be meaningfully separated, arms trafficking of the F-AI is probably not a concern. Trade in major arms systems often gets significant public attention, as with the recent American decision to sell MQ-1C drones to Ukraine. So too would an attempted straw purchase of an F-AI. As a physically large platform, the F-AI no doubt would require a wide variety of maintainers, and someone would probably notice if one went missing. Even if an adversary managed to pull off an Ocean’s Eleven-style heist to nab an F-AI, ordinary export control measures would still matter: law enforcement investigations, verification of end-users and end-uses when the F-AI is transferred, and inspection of shipments for contraband.
Of course, arms trafficking could still be an issue for smaller autonomous systems. In February 2021, Israel announced it was investigating twenty suspects for building and trafficking loitering munitions, which are not necessarily autonomous weapons but are “on the forefront of autonomous lethality.” Normal export control measures are applicable here too (the Israeli case was seemingly a normal police investigation). The complexity comes if AI-based autonomy changes the export control rules: who can buy the weapon, whether buyers can re-transfer it, and the types of approvals needed. If it does, technical verification becomes a major challenge.
The presence of specialized AI chips might provide some indicators, but probably no smoking gun. What if the weapon uses AI to help guide human targeting, but a human still has to decide what to destroy? Other aspects of weapon design and specifications may also provide clues. An armed, unmanned platform without communication receivers or software to allow remote operation is likely, but not necessarily, capable of autonomous strikes. The details are likely to be idiosyncratic, and the burden falls on law enforcement, export control, and intelligence personnel to make judgments on the likelihood of autonomy.
Now What?
The United States is unlikely to field an F-AI any time soon (nor are adversaries and allies), but numerous states are building, developing, and increasingly deploying AI-based autonomous systems. Overall, the United States needs a nuanced, risk-informed approach to non- and counter-proliferation, recognizing that autonomous weapons comprise a wide range of systems. This analysis points to several considerations for the United States and its allies.
First, system design is a critical factor in proliferation risk. If the AI and autonomy features cannot be meaningfully separated from the weapon as a whole, then proliferation risks are reduced, likely drastically. As part of broader test and evaluation activities, weapon designers need to consider how readily new autonomous features can be manipulated. Designers also need to consider how readily AI-based features can be protected or destroyed in the event of system capture, and how readily an adversary could extract and use them. Best practices will need to be codified into clear standards that apply across the autonomous weapons ecosystem.
Second, nonproliferation policy should depend on a weapon’s relative military value, particularly the reliability of the system. In some cases, the United States may want to encourage the proliferation of reliable systems: provide technical assistance to improve reliability, provide already-developed AI systems, or provide a complete weapon. Conversely, the United States should be very careful with systems that could affect delicate military balances. That means modeling, simulations, war games, and other analyses to figure out where autonomous systems are really most impactful. In other cases—particularly autonomous nuclear weapons and massive anti-personnel drone swarms—the United States should even consider binding limitations on its own development. Autonomous nuclear weapons create potentially catastrophic risks for the United States, while massive drone swarms represent a potential major asymmetric threat, with risks akin to those of traditional weapons of mass destruction.
Third, cyberwarfare capabilities should play a key role in any non- and counter-proliferation efforts. On defense, AI combat core systems must be well-guarded against cyber theft, and networks protected from attempted attacks. This may create trade-offs: a separate AI combat core may be easier to guard, while a highly integrated system may need to be shared broadly with system designers. On offense, cyber-attacks may be used to exfiltrate data on adversary progress, poison AI training data, or sabotage adversary systems. If the real military value of AI stems from decision-making speed, then subtly slowing an adversary’s algorithms might have meaningful military effects.
Finally, the United States needs to engage better with the global conversation on autonomous weapons. The United States has largely opposed growing global calls to ban autonomous weapons, emphasizing the applicability of existing laws of war. The United States is right to oppose broad bans on autonomous weapons, especially restrictions on defensive systems and low-risk autonomous weapons like radar-seeking loitering munitions. But the United States also may need to work with, or at least minimize opposition from, ban proponents to build global non-proliferation efforts. To do so, the United States should consider supporting formal discussions on a treaty, but only on the condition that a complete ban on all autonomous weapons is taken off the table. The International Committee of the Red Cross’s statement on autonomous weapons, with its emphasis on predictability and targeting, provides a risk-based starting point for negotiations. A treaty process would also serve as a vehicle to advance non-proliferation in areas where autonomous weapons pose significant threats to American security, such as autonomous nuclear weapons and drone swarms.
Ultimately, preventing autonomous weapons proliferation is likely to be a vexing challenge in which the technical and geopolitical context will play a major role. But if AI and autonomous weapons are the future of warfare, then the United States has no choice but to grapple with these complexities.
Thank you to Nick Colosimo for providing valuable feedback on the article.
Zachary Kallenborn is an author and analyst who specializes in WMD terrorism, unmanned systems, drone swarms, and homeland security. Zachary is a Policy Fellow at the Schar School of Policy and Government, a Research Affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), an officially proclaimed U.S. Army "Mad Scientist," and a Senior Consultant at ABS Group.
Image: DVIDS