Is the Pentagon Prepared for Lethal Robotic Weapons Systems Unconstrained by Ethics?
This could put U.S. forces at a disadvantage.
Weaponized artificial intelligence (AI) capability is an ominous-sounding term Pentagon leaders use to describe the serious and growing risks posed by technologically advanced adversaries increasingly capable of building AI-enabled, lethal robotic weapons systems unconstrained by ethics, moral consideration or human decisionmaking.
The concern centers on a single question as countries such as Russia and China field capable and fast-evolving AI-empowered robots, drones and weapons systems potentially detached from human decisionmaking: can U.S. forces, which keep a human in the decision loop, still compete?
Citing the possibility that “an authoritarian regime, like Russia,” could develop a weaponized AI capability, Lt. Gen. Michael Groen, commander of the Pentagon’s Joint Artificial Intelligence Center, said U.S. and friendly forces may not be able to use comparable capability, particularly if a potential adversary attacked with AI-empowered weapons absent ethical or humanitarian concerns. This could put U.S. forces at a disadvantage in a scenario requiring decisions about the use of lethal force, given Pentagon doctrine’s stipulation that a human must always be “in the loop.”
However, Groen suggested that advanced weapons developers, scientists and futurists are now working with an international group of like-minded allies interested in accommodating ethical concerns while still prevailing in combat. Part of this effort appears to be doctrinal and technological: integrating the best available technical capability with tactics sufficient to repel an AI-driven enemy attack.
“We think that we actually gain tempo and speed and capability by bringing AI principles and ethics right from the very beginning, that we’re not alone in this,” Groen said to reporters, according to a Pentagon transcript. “We currently have an AI partnership for defense with sixteen nations that all who embrace the same set of ethical principles have banded together to help each other think through and to work through how do you actually develop AI in this construct.”
Referring to what he called an “ethical baseline,” Groen said that there are ways to engineer and use highly effective, AI-enabled weapons in alignment with established ethical parameters.
One such possibility, now under consideration by scientists, engineers and weapons developers at the Pentagon, is to architect AI systems able to instantly employ “defensive” or “non-lethal” attacks against non-human targets such as incoming anti-ship missiles, mortars, rockets or drones.
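To illustrate the general concept, the sketch below (in Python, with entirely hypothetical names, target categories and thresholds) shows how a defensive system might gate autonomous engagement behind a check that a track is a non-human, inbound threat, referring anything else to a human operator. It is a simplified illustration of the idea Groen describes, not an actual Pentagon system or algorithm.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical illustration only: the track types, threshold and labels
# are assumptions for the sake of the example, not a real system.

class TrackType(Enum):
    ANTI_SHIP_MISSILE = auto()
    MORTAR = auto()
    ROCKET = auto()
    DRONE = auto()
    MANNED_AIRCRAFT = auto()
    UNKNOWN = auto()

# Only non-human, inbound threats are eligible for autonomous defensive fires.
AUTONOMOUS_ENGAGEMENT_OK = {
    TrackType.ANTI_SHIP_MISSILE,
    TrackType.MORTAR,
    TrackType.ROCKET,
    TrackType.DRONE,
}

@dataclass
class Track:
    track_id: str
    track_type: TrackType
    inbound: bool
    classification_confidence: float  # 0.0 to 1.0

def engagement_decision(track: Track, min_confidence: float = 0.95) -> str:
    """Gate autonomous defensive engagement behind target-type and
    confidence checks; everything else goes to a human operator."""
    if (
        track.inbound
        and track.track_type in AUTONOMOUS_ENGAGEMENT_OK
        and track.classification_confidence >= min_confidence
    ):
        return "AUTONOMOUS_DEFENSIVE_ENGAGEMENT"
    return "REFER_TO_HUMAN_OPERATOR"

if __name__ == "__main__":
    print(engagement_decision(Track("T1", TrackType.ROCKET, True, 0.98)))
    print(engagement_decision(Track("T2", TrackType.MANNED_AIRCRAFT, True, 0.99)))
```

In this framing, the ethical constraint is not bolted on afterward; it is the outermost condition on whether the system may act at all.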
Another interesting nuance: given the pace and procedural efficiency with which AI-enabled sensors, fire control, weapons systems and data analysis can operate, keeping a human in the decision loop may not necessarily slow down a decision about whether to attack. Advanced AI analysis can, for instance, compare data regarding an incoming attack against a vast historical database and make instant determinations about which course of action might best address the situation. Advanced algorithms could draw upon a historical record of how similar attack circumstances were addressed in the past, weighed against a host of impactful variables such as weather, terrain, range and available “effectors.” Done this way, as demonstrated during the Army’s Project Convergence exercise last fall, the kill chain can be completed in a matter of seconds with human decisionmakers still operating “in the loop.”
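A rough sketch of this kind of decision support, again with hypothetical data structures and a deliberately crude similarity-weighted scoring rule, might compare the current situation’s weather, terrain and range against past engagements and recommend the most promising available effector for a human decisionmaker to approve or reject.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: the variables, records and scoring rule are
# illustrative assumptions, not an actual fire-control algorithm.

@dataclass
class Engagement:
    weather: float      # 0 = clear, 1 = severe
    terrain: float      # 0 = open, 1 = complex
    range_km: float
    effector: str       # e.g. "interceptor", "jammer"
    success: bool

@dataclass
class Situation:
    weather: float
    terrain: float
    range_km: float
    available_effectors: List[str]

def similarity(e: Engagement, s: Situation) -> float:
    """Crude inverse-distance similarity over a few normalized variables."""
    d = (
        abs(e.weather - s.weather)
        + abs(e.terrain - s.terrain)
        + abs(e.range_km - s.range_km) / 100.0
    )
    return 1.0 / (1.0 + d)

def recommend_effector(history: List[Engagement], s: Situation) -> str:
    """Score each available effector by similarity-weighted past success
    and return the top candidate for a human decisionmaker to approve."""
    scores = {}
    for effector in s.available_effectors:
        relevant = [e for e in history if e.effector == effector]
        if not relevant:
            continue
        weighted = sum(similarity(e, s) * (1.0 if e.success else 0.0) for e in relevant)
        total = sum(similarity(e, s) for e in relevant)
        scores[effector] = weighted / total
    return max(scores, key=scores.get) if scores else "NO_RECOMMENDATION"

if __name__ == "__main__":
    history = [
        Engagement(0.2, 0.1, 15.0, "interceptor", True),
        Engagement(0.8, 0.3, 40.0, "interceptor", False),
        Engagement(0.7, 0.2, 35.0, "jammer", True),
    ]
    now = Situation(weather=0.75, terrain=0.25, range_km=38.0,
                    available_effectors=["interceptor", "jammer"])
    # The recommendation is advisory; a human still authorizes any engagement.
    print(recommend_effector(history, now))
```

The point of the sketch is that the time-consuming work, sifting precedent across many variables, can be done by the machine in milliseconds, so the human approval step adds little delay to the overall kill chain.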
“If we can make good decisionmaking and have informed decision-makers, we think that is the most significant application of artificial intelligence,” Groen said. “And then we'll continue to go from there into other functions. And the list is endless from every, you know, moving logistics successfully around the battlefield, understanding what's happening based on historical pattern and precedent, understanding the implications of weather or terrain, on maneuvers, all of those things can be assisted by AI.”
Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a Master's Degree in Comparative Literature from Columbia University.
Image: Reuters