Laser weapon operations pose a friendly fire risk: a lethal laser beam can unintentionally harm nearby friendly forces (aircraft, ships, etc.) or civilian entities in the vicinity. Deconfliction planning is therefore a critical function in the laser weapon kill chain, ensuring that the "coast is clear" and that the beam path is free of friendly and civilian assets. NPS studies are developing concepts for ensuring and managing deconfliction across different military laser weapon applications (Kee et al. 2020; Clayton et al. 2021). In time-critical tactical operations, laser weapon employment will require a cognitive laser approach to ensure proper deconfliction.
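The core geometric idea behind a beam-path deconfliction check can be illustrated with a simple sketch. The code below is not drawn from the NPS studies; it is a hypothetical, simplified 2-D example (flat-earth coordinates in kilometers, an assumed `keep_out_km` safety radius, and made-up asset names) that flags any tracked asset lying too close to the line between the weapon and its target.

```python
import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to the line segment a-b (2-D coordinates)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:  # degenerate segment: a and b coincide
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def beam_path_clear(origin, target, assets, keep_out_km=1.0):
    """Return (clear, violators) for assets within keep_out_km of the beam path.

    assets is a list of (name, (x, y)) tuples; all distances in kilometers.
    """
    violators = [name for name, pos in assets
                 if point_segment_distance(pos, origin, target) < keep_out_km]
    return (not violators), violators

# Hypothetical tactical picture: beam fired from the origin toward (10, 0)
assets = [("friendly_helo", (3.0, 0.4)),     # 0.4 km off the beam line
          ("fishing_vessel", (8.0, 5.0))]    # well clear
clear, violators = beam_path_clear((0.0, 0.0), (10.0, 0.0), assets)
# clear is False; violators is ["friendly_helo"]
```

A fielded system would of course work in three dimensions, propagate track uncertainty, and account for beam divergence and overflight beyond the target, but the check above captures the basic "is the path clear?" decision that must run inside the kill chain's timeline.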
The realization of a cognitive laser requires advances in human-machine teaming research to ensure the effective and safe employment of AI methods. Several studies at NPS are researching different aspects of applying AI to the tactical domain. Jones et al. (2020) studied the air and missile defense kill chain to show that human-machine teaming arrangements can adapt in response to the threat situation timeline. The threat dictates how much time the operator has to react, and this timeline can be incorporated into the design of AI-enabled automated decision aids. Burns et al. (2021) are embarking on a research project to map specific AI methods to the specific functions of the kill chain. Tactical kill chains (including laser weapon kill chains) require a variety of cognitive skills and decisions. These include data fusion, assessment, knowledge discovery, addressing uncertainty, developing course-of-action alternatives, predicting system performance, weighing risks, and gaming second- and third-order strategies.
A wide variety of AI methods will be needed to support these kill chain functions. Cruz et al. (2021) are studying the potential safety risks and failure modes that may be introduced as AI and automation are adopted in the tactical domain. Safety risks may be inherent to the AI systems and their decision recommendations, may come in the form of cyber vulnerabilities as AI is introduced into tactical systems, or may arise from the interactions of humans with intelligent machines. Peh (2021) is taking a deep dive into the complex dynamics of trust between humans and AI systems, researching methods to engineer trustworthy AI decision aids for tactical operations. Peh's research mission is to achieve an effective trust balance that avoids both over-trust (humans blindly accepting AI recommendations) and under-reliance (humans disregarding them).
Two emerging technologies are pairing up to provide new capabilities for the warfighter of the future: laser weapons and AI. Laser weapons are becoming an operational reality for defending ships and fleets, but they also pose an operational challenge in the form of decision complexity. AI is the necessary companion that can tackle this decision complexity and support the human-machine teaming needed to operate laser weapons effectively and safely. A cognitive laser solution marries these two emerging technologies. The cognitive laser concept opens a diverse and challenging field of research for innovations in the application of AI methods to laser weapon operations and to the military tactical domain in general.
Dr. Bonnie Johnson is a senior lecturer of systems engineering at the Naval Postgraduate School. She was previously a senior systems engineer in the defense industry from 1995 to 2011, working on naval and joint air and missile defense systems. A graduate of Virginia Tech with a bachelor of science in physics and of Johns Hopkins with a master of science in systems engineering, Dr. Johnson received her PhD in systems engineering from the Naval Postgraduate School.
This article first appeared at the Center for International Maritime Security in August 2021 and is being republished here due to reader interest.
Image: U.S. Navy Flickr.