AI Weapons Are Here to Stay

U.S. Marine Corporal Workman controls a Multi-Utility Tactical Transport (MUTT) as he covers a position on a hillside as part of Rim of the Pacific (RIMPAC) 2016 exercises held at Camp Pendleton, California, United States, July 13, 2016. REUTERS/Mike Blake


Once the AI genie got out of the bottle, it wasn't going back in.

Many have been debating the morality of developing weapons with artificial intelligence. Will this lead to a taboo against their use?

The debate around the ethics of AI weapons has involved everyone from advocacy groups to government officials to Google engineers. Many agree that AI weapons raise significant ethical concerns, which raises the question: will these concerns, and the efforts of anti-AI-weapons advocacy groups, result in a ban or a strong taboo on their use? Some seem to think that an international agreement will be enough to stop their adoption by the world's militaries. However, whether a taboo develops around the use of AI weapons depends on something much more straightforward: their effectiveness on the battlefield.

In April, Google employees very publicly protested the company's participation in a Pentagon program that used AI to interpret images and improve the targeting of drone strikes. Certainly, the risks and ethical concerns of AI weapons are real, as they are with any new technology of war. Opponents of AI weapons typically point to the ethical problem of a computer algorithm both selecting and eliminating human targets without any human controlling the process.

However, the risks associated with AI weapons stretch beyond the ethics of war. Some have pointed to crisis instability if AI weapons were to proliferate throughout the world. If two states involved in a crisis have access to weapons capable of such rapid destruction, the first-mover advantage will likely push those states toward war rather than away from it. It is often argued, for instance, that a first-mover advantage helped cause World War I: the rapid mobilization of troops and advances in weaponry led military planners to believe that whoever moved first would have an insurmountable advantage. If you think your adversary is getting ready to move, you have a strong incentive to move before they do. AI could create similar incentives.

Most assume that the taboo around chemical weapons exists because they are a particularly terrible weapon of war and the international community has agreed to prohibit their use. But prohibition by the international community isn't sufficient to stop a weapon's use. Setting aside whether a ban on AI weapons is practical, or whether it would even be effective, the determinant of their use or non-use is much more straightforward: states will use them if they are effective on the battlefield.

The development of previous tools of war demonstrates as much. When new weapons are developed, calls to ban their use often follow. In The Social History of the Machine Gun, John Ellis explains why the United States was unique in its early adoption of the machine gun, how the weapon eventually supplanted the feudal conviction that brave men, not fancy weapons, win wars, and how it became ubiquitous on the battlefield.

Similar attempts to ban weapons and set rules for the appropriate conduct of war provide examples from which we can infer what to expect for AI weapons. In 1899, delegates from around the world convened at The Hague to discuss the rules and laws of war. Of particular concern was a new weapon then in development: chemical weapons. They weren't yet operational for mass use at the time of the convention, but the delegates were concerned about their use nonetheless.

The convention would eventually call for a ban on asphyxiating, poisonous, and other gases. Only two countries opposed the ban: the United States and, in support of America, the United Kingdom. The United States argued against prohibiting chemical weapons on the grounds that states should use all means at their disposal to end a war as quickly as possible; banning chemical weapons, the Americans assumed, would only extend a conflict and ultimately cause more people to die. The U.S. Civil War, which had ended only thirty-four years earlier, was also fresh in memory: a war with substantial casualties that had been expected to end quickly but didn't.

Alfred T. Mahan, the U.S. delegate to the convention in 1899, said, “the objection that a warlike device is barbarous has always been made against new weapons, which have nevertheless eventually been adopted.” So why does a taboo form around some barbarous weapons and not others? Richard Price observed, “Throughout history, numerous weapons have provoked cries of moral protest upon their introduction as novel technologies of warfare. However, as examples such as the longbow, crossbow, firearms, explosive shells, and submarines demonstrate, the dominant pattern has been for such moral qualms to disappear over time as these innovations became incorporated into the standard techniques of war.”

Ultimately, a taboo did form around the use of chemical weapons. While the taboo doesn't prevent their use entirely (they have been used multiple times, most recently by the Assad regime in Syria and before that by Saddam Hussein in the Iran-Iraq War), it certainly has a deterrent effect on their widespread use. But what is driving this limited use? Is the "taboo" actually what prevents some states from using them while failing to stop others?

The argument that the taboo alone stops the widespread use of chemical weapons is made far less convincing by the fact that they are not a particularly effective or useful tool on the battlefield. Creating gas clouds of sufficient concentration to be effective is difficult, and targeting gets even harder in the fog of war and under various levels of “friction.” A weapon that cannot hit discrete targets seems especially useless in the era of precision munitions. Furthermore, there is no guarantee that your own forces won't be affected if the wind picks up or changes direction. These limitations make the weapon attractive only to leaders and regimes that are already in a desperate position.

Now contrast that with the development of submarines. Submarines were first developed for widespread use around the same time as chemical weapons, and they were viewed similarly: a dishonorable weapon, lurking unseen below the surface and killing its targets anonymously. This was especially the case with the rise of unrestricted submarine warfare. Calls were also made to ban the use of submarines, and the 1936 London Protocol sought to limit unrestricted submarine warfare.

So why did chemical weapons develop a taboo while submarines did not, and what does this tell us about the potential development of a taboo around AI weapons? All three technologies of war considered here created significant ethical dilemmas and are still viewed by many as unethical. However, of the two historical examples, the ineffective tool developed a taboo while the effective tool did not. So, will AI be an effective tool of war?

It seems highly likely that it will. Although one should avoid terms like "revolution in military affairs," it's easy to see the promise of this new type of weapon system. One shouldn't, however, expect an upheaval that ends the "modern system" of warfare: while AI weapons are likely to advance military effectiveness, the subsequent development of countermeasures will probably prevent them from upending that system.

One of the promises of military AI that seems to guarantee its adoption is its broad applicability. AI can increase effectiveness and efficiency well beyond combat operations: it can improve supply lines, enhance the training of new soldiers, and speed up intelligence gathering and processing.

But AI's potential in combat operations seems especially promising. AI is not a wholly revolutionary idea being applied to the military domain; it is merely the next logical step in the digitization and mechanization of the modern battlefield. The adoption of electronic and robotic weapon systems by modern militaries accelerated rapidly after 9/11. According to one article, “Already 90 states and non-state groups possess drones, and 30 have armed drones or programs to develop them.” The notable example of ISIS adopting drones demonstrates the potential of such weapons.

Finally, AI weapons will be critical to the Third Offset Strategy being pursued by the U.S. military. Machine learning, human-machine teaming, and drone swarms are all potential capabilities that could upset the balance of power between states. We already see some of these tactics on the battlefield: Russia, for example, claimed that a drone swarm attacked one of its bases in Syria.

The early adoption of AI-like capabilities on the battlefield, the sheer number of states investing in the technology, and current weapon systems that offer insight into its potential all indicate that AI weapons will be highly effective tools of war. Therefore, while a robust discussion of their ethics and limits should be pursued, that discussion is unlikely to force the development of a taboo around their use.