How to Slow the Spread of Lethal AI

Today, it is far too easy for reckless and malicious actors to get their hands on the most advanced and potentially lethal machine-learning algorithms.

Technology reporter Paul Mozur sounded shocked as he described his firsthand experience of first-person view (FPV) drones in Ukraine during a recent appearance on The Daily podcast. In the interview, Mozur recounts being taken to a park just outside Kyiv by a group of young men who had started an autonomous drone company. He describes the company’s CEO getting on a motorcycle and speeding off down a dirt road, one of the firm’s AI-powered autonomous drones in hot pursuit thanks to its onboard tracking algorithm. After a brief chase, one of the CEO’s teammates switches off the autopilot, and the drone ascends into the air to the young men’s laughter. “It’s a funny moment,” Mozur recalls, clearly alarmed, “but the darker reality of it is that if this was an armed drone with a shell, and if they hadn’t hit that button at the end, their CEO would have been a goner.”

Many Ukraine watchers and longtime AI observers would not have shared Mozur’s shock. Both Ukraine and Russia have relied heavily on FPV drones—small, unmanned aerial vehicles guided by a pilot watching a video feed through goggles or on a screen—for reconnaissance, anti-personnel, anti-armor, and anti-artillery operations during the current war. FPVs have played a role in the destruction of thousands of pieces of Russian equipment. It was only a matter of time before AI entered the picture.

AI-enabled autonomous weapons threaten to destabilize the international system. Their significant cost advantages, the widespread availability of the algorithms that power them, and the tactical problems they solve will incentivize their adoption by all manner of actors. These weapons have the potential to save soldiers’ lives. Still, their spread will also empower rogue states, criminal networks, terrorists, and even private corporations and citizens long locked out of the market for precision weaponry. The United States must do what it can to slow their spread.

The decentralization and democratization of warfare that FPVs enable are already starting to play out in Ukraine, including in Kyiv’s ongoing Kursk offensive. First, FPVs help avoid the massive costs of building and maintaining a highly trained surveillance and targeting bureaucracy. Second, FPV drones themselves are significantly cheaper than traditional artillery. Unguided artillery shells cost between $800 and $9,000, GPS-guided shells cost around $100,000, and Javelin anti-tank missiles can carry price tags of over $200,000, while the typical FPV costs around $400. For a weapon that functions as the ultimate guided artillery shell, this cost differential is decisive.

However, an FPV’s explosive payload is much smaller than that of a typical round of heavy artillery, which can deliver 10 kg or so of explosive ordnance with a blast radius of approximately 50 meters. In their current state, FPVs can match that destructive power only when dozens of them strike the same target, each guided by its own pilot. For this reason, FPVs have not completely supplanted artillery, though they have increasingly complemented it. For instance, FPVs have finished off enemy troops escaping fortified positions partially destroyed by artillery fire. Over time, FPVs’ substantial cost advantage will offset their disadvantage in destructive power, especially once “swarm” technology reaches maturity.
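To see how the math nets out, consider a back-of-the-envelope sketch in Python using the cost figures cited above. It takes “dozens of FPVs per heavy-artillery-equivalent strike” at face value; the bracketing values of 24 and 36 drones are illustrative assumptions, not reported figures.

```python
# Back-of-the-envelope cost comparison, using the figures cited above.
# "Dozens" of FPVs per artillery-equivalent strike is the article's
# estimate; 24 and 36 are illustrative assumptions that bracket it.

FPV_COST = 400                # USD, typical FPV drone
GUIDED_SHELL_COST = 100_000   # USD, GPS-guided artillery shell

for fpvs_per_strike in (24, 36):
    salvo_cost = fpvs_per_strike * FPV_COST
    print(f"{fpvs_per_strike} FPVs: ${salvo_cost:,} "
          f"({salvo_cost / GUIDED_SHELL_COST:.0%} of one guided shell)")
# -> 24 FPVs: $9,600 (10% of one guided shell)
# -> 36 FPVs: $14,400 (14% of one guided shell)
```

Even under these rough assumptions, a salvo large enough to match a heavy shell’s destructive power costs roughly a tenth of a single GPS-guided round.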

Throughout the war, the Ukrainians have had to operate with a severe shortage of artillery shells. In March 2024, Ukraine was firing 2,000 shells per day, roughly a fifth of the Russian rate. FPVs, many of them made by volunteers or soldiers using simple electronics and commercial components, have helped close this significant firepower gap. FPVs’ low cost also compensates for their relatively low success rate of 50 to 80 percent in destroying targets; Javelin anti-tank missiles enjoy a success rate of approximately 90 percent. But since an FPV’s price tag is a minuscule fraction of the cost of the average Javelin, militaries can afford to use more of them, netting the same number of successful strikes at a fraction of the cost.
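These figures make the cost-effectiveness argument easy to quantify: dividing unit cost by success rate gives the expected spend per target destroyed. The sketch below uses the numbers cited above, with the Javelin priced at its reported upper bound of $200,000.

```python
# Expected cost per successful strike: unit cost / success rate,
# using the figures cited above.

weapons = {
    "FPV (50% success)":      (400, 0.50),
    "FPV (80% success)":      (400, 0.80),
    "Javelin (~90% success)": (200_000, 0.90),
}

for name, (unit_cost, success_rate) in weapons.items():
    print(f"{name}: ${unit_cost / success_rate:,.0f} per successful strike")
# -> FPV (50% success):      $800 per successful strike
# -> FPV (80% success):      $500 per successful strike
# -> Javelin (~90% success): $222,222 per successful strike
```

Even at its worst-case success rate, an FPV destroys a target for around $800 on average, versus more than $200,000 for a Javelin.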

Compounding this cost advantage is an increase in tactical flexibility: FPVs’ high maneuverability enables forces to harass and destroy targets beyond the reach of traditional artillery. Ukrainian FPVs have been used to dive into tunnels to attack enemy tanks, chase down speeding vehicles, infiltrate buildings through doorways, attack enemy trenches that are immune to vertical bombing, and pursue and eliminate fleeing troops.

Finally, FPVs have significantly compressed kill chains: the detection, selection, engagement, and elimination of targets can all be carried out by a single operator in a relatively short timeframe. This capability is especially valuable in hostile environments. In Ukraine, neither side has gained uncontested control of the air, dramatically dampening the effectiveness of traditional airstrikes. Contrary to analysts’ early expectations, Ukrainian integrated air defenses, bolstered by Western aid, have been able to deter Russian near-border and cross-border aircraft attacks. Still, continuing air and missile attacks have discouraged troops on both sides from holding fixed positions for too long. FPVs have helped the clashing armies compensate for this constraint by pursuing and destroying moving units, vehicles, and personnel.

These documented cases all involve human pilots, but AI is beginning to play a more prominent role. When developers perfect algorithms that can effectively control swarms rather than individual drones, massively increasing FPVs’ destructive potential, AI’s role will become larger still. The recent successful test of swarm technology by Swarmer, a software company based in Wilmington, Delaware, is a significant step in this direction. Swarms are superior to individually operated drones because of drone-to-drone communication: sensory information gathered by one drone is transmitted directly to the entire group, which can adjust its behavior without further input from the operator or commanders. The Defense Advanced Research Projects Agency (DARPA) is currently heavily invested in developing swarm technology to perform reconnaissance and ground-troop protection in hostile urban environments. Its research agenda specifically envisions autonomous swarms, suggesting that AI will play a crucial role.
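To make the operational difference concrete, here is a deliberately simplified sketch of that drone-to-drone information sharing: an illustrative toy, not Swarmer’s or DARPA’s actual architecture. The class names, the broadcast method, and the target string are all invented for the example.

```python
# Toy model of swarm retasking: one drone's detection propagates to
# every peer, and each peer retasks itself without operator input.

from dataclasses import dataclass, field

@dataclass
class Drone:
    drone_id: int
    task: str = "patrol"
    known_targets: list[str] = field(default_factory=list)

    def on_detection(self, target: str) -> None:
        # Each drone folds the shared report into its own picture of
        # the battlefield and retasks itself accordingly.
        self.known_targets.append(target)
        self.task = f"converge on {target}"

class Swarm:
    def __init__(self, drones: list[Drone]) -> None:
        self.drones = drones

    def broadcast(self, reporter: Drone, target: str) -> None:
        # Sensory information gathered by one drone is transmitted
        # directly to the entire group, reporter included.
        print(f"drone {reporter.drone_id} reports: {target}")
        for drone in self.drones:
            drone.on_detection(target)

swarm = Swarm([Drone(i) for i in range(5)])
swarm.broadcast(swarm.drones[0], "vehicle at grid 41-07")  # hypothetical target
print({d.drone_id: d.task for d in swarm.drones})
# All five drones now converge on the reported target.
```

The point of the toy is the absence of a human between detection and retasking: once one drone reports, the whole group’s behavior changes automatically.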

FPVs are not the only class of AI-powered autonomous weapons. Developers in Ukraine have also begun testing automated machine guns that can identify and aim at targets automatically, requiring a soldier only to press a button on a console to take the shot. Notably, the soldier in question can sit in a bunker at some distance from the gun, protected from counterfire. Israel has used similar weapons platforms at the Gaza border and West Bank checkpoints for several years. Another example is the Collaborative Combat Aircraft (CCA), a class of autonomous fighter planes currently in development by the U.S. Department of Defense. These AI-powered, uncrewed aircraft will operate alongside crewed fifth- and sixth-generation fighters, receiving and implementing orders for a variety of missions, including electronic warfare, reconnaissance, and dogfighting. The direction of travel is clear: the future of warfare is increasingly automated, and AI will be front and center in this new world.

As the Ukrainian case makes clear, AI-powered autonomous weapons have the potential to significantly lower the barriers to acquiring high levels of precision firepower. Whereas traditional, high-yield, GPS-guided munitions will likely remain prohibitively expensive to acquire and operate in large quantities for all but the wealthiest states, AI-powered drones will not. Though the United States and its allies are surely grateful for this fact in Ukraine, they may be less so when their adversaries begin to follow Kyiv’s example. Russia’s and Iran’s use of Shahed-136 drones to wrest back momentum on the frontlines in Ukraine, devastate Ukrainian civilian infrastructure, and terrorize Israeli civilians at long range attests to this. As the examples of both the Ukraine war and the metastasizing conflagration in the Middle East highlight, the lethal AI revolution is quickly overturning long-established military power relations. These are precisely the volatile geopolitical conditions under which disastrous miscalculations occur.

Even more concerning, in the age of lethal AI, disastrous escalation need not involve any geopolitical calculation whatsoever. Ukrainian military officials have highlighted their efforts to immunize their drones against Russian electronic warfare by handing the targeting and engagement process over to onboard AI modules, then severing telecommunications once the operator has locked onto a selected target. Yet researchers have expressed serious doubts about manufacturers’ assurances that their algorithms can discriminate between friendly and enemy, or military and civilian, targets. Given these technical deficiencies, the risk of unintended escalation seems unacceptably high.

The escalation problem of uncontrolled lethal AI systems extends beyond drone targeting. Secretary of the Air Force Frank Kendall has admitted that the decisive advantage of fully autonomous systems over those with a “human in the loop” is likely too great for military powers to ignore completely. Despite almost a year of negotiation and one of Henry Kissinger’s dying wishes, the world’s AI superpowers, the United States and China, have still not agreed to ban autonomous command and control of nuclear weapons. Though this lack of public commitment may be mere posturing, the threat is too grave to dismiss out of hand. It is risky enough for the U.S. and China each to integrate AI into military command-and-control systems across domains. It is a recipe for disaster if scores of nation-states, and eventually non-state actors, field an array of weapons systems launched, piloted, and fed intelligence in split seconds by integrated AI systems, with no opportunity for meaningful human deliberation.