The congressionally appointed National Security Commission on Artificial Intelligence recently concluded that “we can expect the large-scale proliferation of AI-enabled capabilities.” The recent Libya conflict seems to validate that conclusion about artificial intelligence (AI), with soldiers loyal to Libyan Gen. Khalifa Haftar employing a Turkish-made Kargu-2 drone that “hunted down and remotely engaged” retreating forces, according to a United Nations report. It’s not clear whether the Kargu-2 employed its autonomous capabilities, but the Turkish government acknowledged it can field that capability. Likewise, Russia has employed the KUB-BLA loitering munition and reportedly the Lancet 3 during its invasion of Ukraine. Again, the details are murky—some analysts are skeptical that the KUB-BLA possesses AI-enabled autonomous capabilities, and the claims of Lancet 3 usage come from Rostec, the Russian state-owned defense conglomerate that includes the Lancet 3 manufacturer, not from independently verified battlefield images. Adding to the confusion: in each case, autonomous operation is clearly an option, even if it was not exercised, which makes verification quite difficult.
What’s far less murky is the need for the United States to think through policy regarding non- and counter-proliferation of autonomous weapons. Such a policy needs to be nuanced and risk-informed, based on a specific weapon’s military value, effects on regional and global competition, and ease of acquisition. In some cases, the United States should treat autonomous weapons as just another tool in the foreign policy chest: share the weapons broadly to bolster allies and weaken adversaries. In other cases, the United States should aggressively pursue non- and counter-proliferation, even to the point of adopting binding restrictions on American use of certain autonomous weapons.
Understanding Proliferation Risk
Strictly speaking, an autonomous weapon is simply a weapon that selects and engages targets on its own. In principle, virtually any weapons system could be made autonomous. Loitering munitions, landmines, active protection systems, and counter rocket, artillery, and mortar systems are all in use and may fall under the autonomous weapons category depending on the details. Likewise, China recently unveiled unmanned motherships capable of launching drone swarms, Russia is working on undersea swarms for anti-submarine warfare, and India has demonstrated a seventy-five-drone swarm capable of autonomously striking soldiers and vehicles. The growth of AI is a key enabler, allowing better object detection and improved decision-making, and AI-based autonomous weapons are a critical emerging issue. However, AI is not strictly necessary for a system to be autonomous, though this piece focuses on AI-based systems as the topic du jour.
The broad range of systems means that proliferation concerns vary widely. The United States probably doesn’t need to worry too much about the proliferation of defensive systems; it might even encourage proliferation to protect critical infrastructure against multi-drone attacks.
The will-they-won’t-they dynamic of the United States’ F-35 sale to the United Arab Emirates illustrates that the details matter. Experiments at the Defense Advanced Research Projects Agency (DARPA) found that an AI-piloted F-16 dominated a human pilot in dogfights. The technology is still nascent, and a simulation isn’t reality, but a future operational system of similar capability could greatly affect regional balances of power. So, exports and broader proliferation concerns should be based on careful assessments of existing balances of power and how autonomous weapons may shape them (or not).
In general, the military value of AI-enabled autonomous systems comes from mass and speed. In theory, removing humans from the loop reduces system cost, allowing actors to field many more systems and potentially overwhelm large, expensive platforms. Or a state may create broad battle networks, distributing a large number of cheap autonomous weapons over an area. AI also may allow for much faster tactical decision-making, more quickly closing the sense-to-decision loop. The affordability of autonomous weapons suggests they likely favor defense, but the applications are so broad that definitive conclusions are difficult.
System reliability will be a critical factor too. Humanitarians like to highlight the risks of autonomous error—an autonomous weapon based on machine vision can be easily tricked, perhaps killing civilians. But reliability also has significant military consequences. A weapon that fails to destroy its target wastes resources and puts soldiers at risk. Failures can have strategic effects, too, such as the accidental escalation of a crisis or a failed attempt to demonstrate strength. Although weapons designers will surely aim to build reliable autonomous weapons, the unpredictability of AI, data dependence, and compute restrictions will limit how reliable such systems can be. States will also differ in their risk tolerance: the United States gave up chemical weapons decades ago, in part, because they were too unreliable; the government of Bashar al-Assad has used them extensively throughout the ongoing Syrian Civil War. For an authoritarian regime seeking to suppress a domestic opposition group, some unreliability may be more feature than bug.
Building simple AI-based autonomous weapons is almost trivial—instructions can be found online to build autonomous paintball gun turrets. But building a robust system that can accommodate the numerous complexities of the battlefield is much more difficult. Recognizing people is one thing, but what about tanks or armored personnel carriers? What if it’s raining, snowing, foggy, or just overcast? What if the target is partially obstructed? What if a tank has a big “Z” painted on it? If states adopt untested and unreliable autonomous weapons that do not adequately account for these factors, the United States may actually want to encourage the proliferation of more reliable systems. That could include providing technical assistance to assess and improve system reliability, providing well-trained AI algorithms, or providing a complete weapon.
The likelihood of AI-based autonomous weapons proliferating will vary too. To explore the proliferation pathways, consider a hypothetical autonomous fighter jet, the F-AI.
The year is 2050 and the United States is fielding a new fighter jet, the F-AI. The F-AI is almost entirely autonomous: it can identify and engage adversary fighters, avoid adversary air defenses based on external data feeds, carry out human-specified missions, and perform basic activities like planning routes, aerial refueling, and landing. Although the F-AI includes some system upgrades over the last-generation air dominance fighter, the main change is the AI combat core. The AI combat core allows highly precise autonomous movement during combat, anticipates enemy movement, and selects and fires a range of munitions. DARPA’s AI dogfight experiments proved prescient, and the F-AI utterly dominates the sky. Naturally, the United States does not want the fighter jet to fall into the wrong hands. The United States would need to worry about three things: system capture, illicit trafficking, and native development.
In 2011, Iran claimed to have captured a Lockheed Martin RQ-170 Sentinel drone by spoofing its GPS navigation and jamming its communications. The United States denies that Iran did so, but the possibility needs to be considered, especially as systems incorporate greater autonomy. The brittleness of AI, in which a single-pixel change can cause a system to see a dog instead of a stealth bomber, suggests system capture is a risk for the F-AI too. Even without such tricks, the system could simply be seized in the course of ordinary combat.
System design will be a major factor in proliferation risk. Even assuming the F-AI is captured reasonably intact, the adversary breaks whatever cybersecurity measures are in place, and the software does not wipe itself, the code may still be useless for supporting an adversary’s indigenous development. The combat core may be optimized to the F-AI’s particular parameters and components, such as thrust-to-weight ratios, turn rates, weapon system ranges and types, and flight controllers; even if the AI combat core can be completely extracted, it may be wholly useless in a new system. System design will also be a major factor in illicit diversion risk. If the AI combat core can be meaningfully separated, arms trafficking becomes a concern: an adversary or otherwise interested state could attempt to steal just the AI control system. Like a cyber weapon, the combat core might simply be smuggled out on a hard drive.
What if a state tries to build its own F-AI? The dual-use dilemma is likely to be quite strong, as civilian and military AI work may look quite similar from the outside, drawing on similar equipment, hardware, and talent. Now, the state may shout its development from the rooftops to signal its technological prowess, but let’s say the state decides to hide the program, or at least how far along it is.
Any autonomous weapon system must still go through the normal phases of weapons development: establishing requirements, procuring parts, building the plane, and making sure the bloody thing flies. That means test ranges, production facilities, technical experts, trained users and maintainers, and the organizational support to manage the operation. In addition, AI-specific systems may need to meet other military standards, such as radiation hardening so that microchips can withstand the effects of nuclear explosions. Few civilian AI applications require radiation-hardened microchips (mostly civil space programs), so facilities for the mass production of rad-hardened AI chips (or for rad-hardening commercial AI chips) may be needed, too. All of those are potential targets for intelligence collection and analysis or, if need be, counter-proliferation through the interdiction of specialized parts or sabotage.