Autonomous Weapons Are Coming: This Is How We Get Them Right
Fully autonomous weapons are not merely inevitable; they have been in America's inventory since 1979.
Commercial firms are already deploying autonomous air taxis and ground vehicles built on ever more effective, precise and cheaper sensors, which have obvious applications in improving the hunting capability of autonomous drones. Aerialtronics recently put on sale an AI-driven camera that measures 4 inches by 4 inches by 3 inches and weighs only 1.5 pounds, yet pairs a 30x-magnification HD camera with an integrated forward-looking infrared sensor and can fuse the two images for better target identification. In March 2018, researchers announced they had developed a 3D-printed hyperspectral imager light enough to mount on a small drone, for only $700. Hyperspectral imagery can characterize the objects in a scene with great precision and detail. Google has released its MobileNets family of lightweight computer vision models, which can identify objects, faces and landmarks while running on modest mobile hardware. Each of these technologies can be applied to improve autonomous targeting.
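To give a sense of how accessible this capability has become, here is a minimal sketch, assuming TensorFlow 2.x, that loads a pretrained MobileNetV2 classifier and labels a single frame; the file name "frame.jpg" is hypothetical.

```python
# Minimal sketch: classify a single frame with a pretrained MobileNetV2
# via TensorFlow's Keras API. "frame.jpg" is a hypothetical file name.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)

model = MobileNetV2(weights="imagenet")  # ~14 MB of weights, sized for edge hardware

img = tf.keras.utils.load_img("frame.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

# Print the top three ImageNet labels with confidence scores.
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {score:.2f}")
```

That an off-the-shelf model this small can label imagery in real time on embedded hardware is precisely why the targeting question is no longer hypothetical.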
It is too late to argue about whether weapons should be autonomous. Further, rapid technological advances mean their widespread employment on the battlefield is inevitable. With some exceptions in the area of weapons of mass destruction, if weapons are practical and affordable and advance a nation's interests, they are adopted. Even a papal bull could not stop the spread of the crossbow.
If we really want to fulfill our ethical responsibilities concerning these weapons, it is essential that we expand the discussion beyond whether a human is "in the loop" or "on the loop." These weapons are already present and proliferating rapidly. Rather than continuing to debate supervision models that cannot cope with time-critical engagements, we should focus on establishing the procedures and parameters that maximize the probability that autonomous systems will act in accordance with moral and legal constraints as well as the user's intent.
Each conflict will present a unique set of terrain, weather, opponents, political conditions, rules of engagement and strategic objectives. Therefore, the guidance for each must be carefully considered and tested in simulations and exercises. For instance, the threat to naval forces in the Persian Gulf is vastly different from the threat in a conflict with China. Thus, an Aegis-equipped ship operating in the confines of the Persian Gulf will set different air and sea engagement parameters than a ship operating far out along the Second Island Chain against a potential Chinese threat. Similarly, the guidance for that same ship will change as it operates closer to China's shore. As engagement times decrease, autonomous defensive systems will become more important.
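To make the idea of theater-specific guidance concrete, the sketch below encodes engagement parameters as a simple data structure. Every field name, threshold and value is invented for illustration; none is drawn from Aegis doctrine or any real system.

```python
# Illustrative only: theater-specific engagement parameters as data.
# All fields and values are hypothetical, not drawn from any real system.
from dataclasses import dataclass

@dataclass(frozen=True)
class EngagementParameters:
    theater: str
    max_engagement_range_nm: float  # hypothetical range limit (nautical miles)
    min_target_confidence: float    # confidence required before engaging (0-1)
    auto_engage_air: bool           # may the system engage air tracks unaided?
    auto_engage_surface: bool       # may it engage surface tracks unaided?

# Confined waters and short engagement timelines argue for more automation;
# open-ocean operations leave more room for human review.
PERSIAN_GULF = EngagementParameters("Persian Gulf", 25.0, 0.90, True, False)
SECOND_ISLAND_CHAIN = EngagementParameters("Second Island Chain", 150.0, 0.95, False, False)
```

Treating the guidance as explicit, versioned data rather than buried assumptions is what makes it testable in simulations and exercises before a deployment.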
Nor should the discussion be limited to defensive weapons. Offensive systems under development will have capabilities far beyond those of the decades-old Harpy. Because these systems will operate in communications-denied environments, careful consideration must be given to the minimum sensor correlations necessary to confirm a target before the weapon decides to attack, and the parameters for different types of targets in different situations will have to be thought through (one possible formalization is sketched below). While some will argue that we should not allow autonomous offensive weapons, the increasing range and capability of new weapons make it impossible to define a weapon as purely offensive. If an anti-air or anti-ship missile is considered defensive when it engages a target 100 miles away, why is it offensive when a smart, autonomous weapon kills the same target at its home airfield or in port?
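One way to make "minimum sensor correlations" concrete is as an N-of-M confirmation rule: the weapon holds fire unless enough independent sensors agree on the target above a confidence threshold, with the required number varying by target type. The sketch below is purely illustrative; the sensor names, target classes and thresholds are all invented.

```python
# Purely illustrative N-of-M sensor-confirmation rule. Sensor names,
# thresholds and target classes are invented for this sketch.
REQUIRED_CONFIRMATIONS = {
    "air_defense_radar": 2,  # e.g., RF emission plus imagery
    "warship_in_port": 3,    # stricter: imagery plus IR plus hyperspectral
}

def confirm_target(target_class: str, detections: dict[str, float],
                   min_confidence: float = 0.9) -> bool:
    """Return True only if enough independent sensors report the target
    above the confidence threshold; otherwise the weapon must hold fire."""
    required = REQUIRED_CONFIRMATIONS.get(target_class)
    if required is None:
        return False  # unknown target class: never engage
    confident = sum(1 for score in detections.values() if score >= min_confidence)
    return confident >= required

# Example: two confident, independent detections of an air-defense radar
# meet the two-sensor requirement; the weak IR return does not count.
print(confirm_target("air_defense_radar",
                     {"rf_sensor": 0.97, "eo_camera": 0.93, "ir_camera": 0.72}))
```

The hard questions are not in the code but in the table: how many confirmations, from which sensors, at what confidence, for which targets. Those are exactly the parameters that deserve careful study before launch.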
Aegis crews already go through the process of establishing engagement parameters before they deploy. The process they use can serve as an initial template for developing guidance for each autonomous weapon system as it is fielded. And like Aegis guidance, that guidance must be updated regularly based on experience and new system capabilities. But the process needs to extend beyond that. Even in its automatic mode, the Aegis system is monitored by a crew that will intervene if it perceives a malfunction. Yet the crew knows it may not be fast enough to interrupt the kill chain, and thus works very hard to get the processes and programs right before activating the system. We have to assume tomorrow's autonomous weapons will often operate under conditions that prohibit human oversight after launch.
Just as important as getting the weapon's AI code correct is training operators on the decision process for putting the weapon in an autonomous mode. What criteria allow an operator to shift the system to fully autonomous operation? When should the operator change the autonomy guidance provided to the weapon, or take it out of autonomous mode altogether? What are the key indicators that the tactical situation is changing and may require a change in the concept of employment? How do these weapons change a commander's responsibility for the actions they take after launch?
The fact is, autonomous weapons are already being fielded around the world; we are no longer dealing with a theoretical question. Rather than continuing to debate whether autonomous systems will be employed or what level of human supervision they will receive after launch, we need to focus our intellectual energies on how we will refine the guidance provided to operators and systems prior to launch. Only through careful study, experimentation and testing will we have reasonable confidence that, once launched, our autonomous systems will engage within the ethical, legal, operational, strategic and political parameters we desire. It's time to get on with it.
Dr. T. X. Hammes served 30 years in the U.S. Marine Corps and is currently a Distinguished Research Fellow at the U.S. National Defense University. The views expressed are his own and do not reflect the official policy or position of the National Defense University, the Department of Defense, or the U.S. government.