Key point: AI in warfare is increasingly a source of global concern. Yet it looks like Israel went ahead with a bomb that can recognize and pick its targets... doesn't that seem like a bad idea?
An Israeli company has unveiled a smart bomb that is truly smart.
The SPICE-250 glide bomb has the ability to autonomously recognize and select targets. But how safe is a bomb that can pick its own targets?
This first appeared in 2019 and is being reposted due to reader interest.
Israeli manufacturer Rafael calls this Automatic Target Recognition (ATR), which relies on electro-optic sensors (which convert light into imagery) and artificial intelligence. “The newly-unveiled ATR feature is a technological breakthrough, enabling SPICE-250 to effectively learn the specific target characteristics ahead of the strike, using advanced AI and deep-learning technologies,” according to a Rafael announcement. “During flight, the pilot selects the target type to be attacked and allocates a target to each weapon. The weapons are launched towards the vicinity of the targets, using their INS [inertial navigation] for initial navigation. When approaching the target area, the weapons use the ATR mode for detection and recognition of the targets. Each weapon homes-in on the pre-defined target, either autonomously or with a human-in-the-loop, aided by the ATR algorithm.”
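The workflow Rafael describes—fly to the target area on inertial navigation, then match what the sensor sees against a pre-loaded target signature—can be illustrated with a toy sketch. This is not Rafael's algorithm; every name, feature vector, and threshold below is invented for illustration, and real ATR systems use deep-learning models over imagery rather than hand-built vectors.

```python
# Toy sketch of template-based target recognition (illustrative only).
# Feature vectors stand in for 3-D model / terrain signatures.
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recognize(scene_objects, target_template, threshold=0.9):
    """Return the best match above threshold, or None (no engagement)."""
    best_name, best_score = None, threshold
    for name, features in scene_objects.items():
        score = cosine_similarity(features, target_template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Invented signatures: one near-match, one piece of background clutter.
template = [0.9, 0.1, 0.4]
scene = {
    "bridge":  [0.88, 0.12, 0.41],  # close to the template
    "hilltop": [0.10, 0.90, 0.20],  # clutter, should be rejected
}
print(recognize(scene, template))  # prints "bridge"
```

The key design point is the threshold: below it the weapon recognizes nothing and does not engage, which is exactly where the quality of the sensor data and the model becomes decisive.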
The SPICE-250 is a glide bomb with a range of 75 kilometers (47 miles) and armed with a 75-kilogram (165-pound) warhead. A single F-16 can carry sixteen of these weapons.
The SPICE-250 uses terrain data, 3-D models and algorithms to identify targets amid the surrounding clutter of objects and terrain in the kill zone, Rafael deputy marketing manager Gideon Weiss told IT magazine Insight Analytics. A two-way data link and video stream enable a pilot to retarget the weapon until just seconds before impact.
Yet most significant is that if the primary target cannot be hit, the SPICE-250’s AI algorithms can select a secondary target. “This goes into the area of user-defined policies and rules of engagement, and it is up to the users to decide on how to apply the weapon, when and where to use it, and how to define target recognition probabilities and its eventuality,” Weiss said.
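What "user-defined policies" might mean in practice can be sketched as a simple decision rule: the operator, not the weapon, sets how confident recognition must be before the weapon may engage autonomously, fall back to a secondary target, defer to a human, or abort. The function and thresholds below are hypothetical, purely to make the policy idea concrete.

```python
# Hypothetical engagement policy (illustrative only): operator-set
# confidence floors decide autonomy vs. human review vs. abort.
def select_engagement(primary_score, secondary_score,
                      autonomous_floor=0.95, review_floor=0.80):
    """Map recognition confidence scores to an engagement decision."""
    if primary_score >= autonomous_floor:
        return "engage primary"
    if secondary_score >= autonomous_floor:
        return "engage secondary"
    if max(primary_score, secondary_score) >= review_floor:
        return "refer to human-in-the-loop"
    return "abort"

print(select_engagement(0.97, 0.50))  # prints "engage primary"
print(select_engagement(0.60, 0.96))  # prints "engage secondary"
print(select_engagement(0.85, 0.70))  # prints "refer to human-in-the-loop"
print(select_engagement(0.30, 0.20))  # prints "abort"
```

Where those floors are set is precisely the policy question the rest of this article raises: a low autonomous floor means more strikes without a human judgment call.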
And that’s important, because as Rafael emphasizes, the SPICE-250 was designed to find targets “without depending on GPS navigation in GPS-denied environments.” Given the enormous efforts that Russia, China and other nations are devoting to GPS jamming and spoofing, the world is entering an era where people and weapons can no longer rely on satellite navigation.
Which means the SPICE-250—and similar weapons that are inevitably coming—will be on their own in some situations. The problem is that smart devices aren’t always smart: heat-seeking missiles have homed in on the engines of friendly jets, computers have mistaken the Moon for a Soviet ICBM strike, and facial-recognition software has mistaken Congressmen for wanted criminals.
Smart bombs have come a long way since the first laser-guided weapons of the Vietnam War, especially with the advent of GPS guidance and AI autonomy. But how well they function autonomously, without a human in the loop to make judgment calls, will depend on the quality of their sensor data and AI algorithms.
These are the same concerns that already spur the “killer robot” wariness about autonomous aircraft and tanks. It is more than plausible to imagine scenarios where a smart bomb, bereft of GPS targeting and human guidance, confuses one hilltop for another and hits the wrong target, or mistakes a school bus for a tank.