The Third Offset Must Update Asimov's Laws of Robotics

July 21, 2016

Applying sci-fi ethics to modern warfare.

Things tend to happen in threes. Though they make an unlikely triumvirate on the surface, it would appear that Asimov's laws of robotics and the UN Convention on Certain Conventional Weapons (CCW) will outflank the Third Offset, the nation's search for its next silver bullet in warfighting: robotics. Many nations will agree with the CCW on moral grounds, even as they reject Asimov on semantic ones. The debate might be perceived as strictly academic, or even rhetorical, but it is worth having for the sake of a good cautionary tale. Because, whether we like it or not, killer bots are coming to a theater of operations near you.

Before we get deep in the weeds, let's get some clarity. First, let's outline Asimov's robotic laws. The Three Laws of Robotics are a set of rules devised by the science-fiction author Isaac Asimov. They were introduced in his 1942 short story "Runaround," although they had been foreshadowed in earlier stories. The Three Laws are quoted as being from the "Handbook of Robotics, 56th Edition, 2058 A.D." They read as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
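Read as code rather than prose, the Three Laws form a strict priority ordering: each law may be overridden only by the laws above it. Here is a minimal sketch of that veto chain; every predicate on the hypothetical Action object is an assumption, and each one hides an unsolved prediction problem.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, pre-scored by a (very hard to build) predictor.
    All fields are hypothetical predicates, not any real API."""
    harms_human: bool = False
    allows_human_harm: bool = False        # harm through inaction
    disobeys_order: bool = False
    obedience_would_harm_human: bool = False
    endangers_self: bool = False
    needed_to_obey_order: bool = False
    needed_to_protect_human: bool = False

def permitted(a: Action) -> bool:
    # First Law (absolute): no harm to a human, by action or inaction.
    if a.harms_human or a.allows_human_harm:
        return False
    # Second Law: disobedience is vetoed unless obedience would break the First Law.
    if a.disobeys_order and not a.obedience_would_harm_human:
        return False
    # Third Law: self-endangerment is vetoed unless the higher laws demand it.
    if a.endangers_self and not (a.needed_to_obey_order or a.needed_to_protect_human):
        return False
    return True
```

The gate itself is trivial; the entire burden falls on the predicates, which is one reason the laws travel better as fiction than as firmware.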

Now, let’s compare and contrast these laws with the United States’ offset strategies.

The First Offset Strategy: President Eisenhower, when confronted with Soviet aggression in Europe, asked the military to find a way to deter the Russians without having to match them man for man or tank for tank. We held atomic dominance, so we miniaturized that edge into a tactical nuclear capability fielded down to the battalion level, where mere light colonels could engage in fission fighting if need be. The advantage lasted less than a decade.

The Second Offset Strategy: Eventually the Soviets achieved nuclear parity, and we needed to regain the advantage against all those tanks, planes, ships and men. So we sought a qualitative edge (though Stalin supposedly quipped, "Quantity has a quality all its own"). Thus we developed precision-guided munitions, synced to a deep battle network and uplinked to space-based systems, which let us seek out and destroy the enemy before he knew what hit him, as the 1991 Gulf War demonstrated. Now great-power rivals are contesting us in these high-tech domains and have aggressively pursued cyber and space. So we're back to the drawing board.

The Third Offset Strategy: The U.S. Department of Defense's Defense Innovation Initiative was announced in November 2014. A core component of the initiative is the formation of a Long-Range Research and Development Planning Program that will purportedly target several promising technology areas, including robotics and system autonomy, miniaturization, big data and advanced manufacturing, while also seeking to improve the U.S. military's collaboration with innovative private-sector enterprises.

Robert Work, the deputy secretary of defense, described the best way to approach this next offset rather succinctly in a recent speech: "only the Germans put everything together into an operational concept called Blitzkrieg," Work said, referring to the German use of radios, planes and other technology. "Now we were all fast followers. As soon as we saw it we all said 'God, why didn't we think of that!' By 1944 we were out Blitzkrieging the Germans."

Finally, what is the focus of the third CCW meeting of experts? The United Nations Convention on Certain Conventional Weapons prohibits or restricts certain types of conventional weapons of concern. At issue in its third meeting of experts on lethal autonomous weapons are systems that would be capable of selecting targets and using force, lethal or otherwise, without any human input or interaction. These are often referred to as "human-out-of-the-loop" weapons.

As for assimilating Asimov's laws into the Third Offset: the military, by default, will replace the word "human" with "friend" in the robotic law, a highly complicated IFF (Identify Friend or Foe) methodology.

1. A robot may not injure a friend or, through inaction, allow a friend to come to harm.

This concept will likely be employed on the battlefield, where "good guys" glow blue on a commander's flat screen and "bad guys" radiate red as drones do the dirty work, with the caveat that an enemy can and will hack, corrupt or recruit these unmanned machines to do its bidding. And make no mistake: there are some really bad actors out there, right now, programming swarms of killer bots to seek out and sting Americans like bees at a picnic.
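In software terms, the substitution is a one-word change with enormous consequences: the protected class becomes whatever the IFF pipeline says it is. Here is a minimal sketch of the modified First Law as an engagement gate, assuming a hypothetical Affiliation label produced by some upstream classifier:

```python
from enum import Enum

class Affiliation(Enum):
    FRIEND = "friend"
    FOE = "foe"
    UNKNOWN = "unknown"

def may_engage(affiliation: Affiliation, human_authorized: bool) -> bool:
    """Modified First Law: a robot may not injure a friend.

    An UNKNOWN track defaults to friend, the conservative choice, and it
    is exactly this label that an adversary will try to spoof or corrupt.
    """
    if affiliation is not Affiliation.FOE:
        return False           # never engage a friend or an unclassified track
    return human_authorized    # human still in the loop, for now
```

Note that the gate is trivial; the classifier feeding it is not, and a hacked classifier inverts the law.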

Sure, one can be confident that the Department of Defense does not envision deploying fully autonomous killer drones anytime soon; surveillance, logistical delivery and medevac are the more practical applications in the near term. That said, the recent (and perhaps historic) use of a bomb-disposal robot as an explosive weapon to kill a suspect in the Dallas police shootings sheds a bright light on the problem.

What may have caught one's eye in the description of the Third Offset is the convergence of three things: the need for a new warfighting advantage, an earnest attempt by the Department of Defense to recruit from the outside (Silicon Valley) instead of the inside (DARPA), and the inevitable outcry over programming machines to kill, with its likely erosion of the human interface. Before that outcry arrives, let's discuss the merits of each side, starting with Asimov.

Asimov promotes the philosophical view of John Locke, who held that, deep down, man knows right from wrong. The military (any military) doesn't have the luxury of divining the intentions of others, and thus must take a more Hobbesian view to deter potential adversaries from further aggression; after all, it is the threat to break things and kill people that is the real deterrent.

To be fair, the esteemed science-fiction writer's laws of robotics are more a literary device than an actual operational codex for the millennials currently being recruited by the Pentagon; a clash of cultures, to be sure. And to add insult to injury, Asimov, later in life and perhaps feeling an ethical tug after witnessing his hopeful edict being challenged for real, added a fourth law, his so-called Zeroth Law of robotics, which outranks the original three:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

But the Zeroth Law is so abstract and difficult to get one's head around that, as a practical matter, it is best left on the shelf.

To be clear, the mere mention of "harm no human" is a nonstarter for military doctrine. More appropriately, the definition of an "enemy" (human or otherwise) is key to developing stratagems and tactical advantages in an offset strategy, as it has been since the days when early hominids racked their primitive brains to gain an edge over the prowling, and now extinct, saber-toothed tiger.

So one hint for answering the question at hand, whether the Third Offset is compatible with the CCW and Asimov's laws of robotics, may reside in the way we identify friend from foe (IFF).

IFF is a longstanding protocol used by militaries to quickly verify targets and avoid “blue-on-blue” or friendly fire casualties. It’s certainly more sophisticated now than during the Napoleonic Wars, when British infantry dressed in red coats and the French wore blue.
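Modern IFF modes replace uniforms with cryptography. Actual military systems (Mode 5, for instance) use dedicated transponders, waveforms and key management; the Python sketch below shows only the general shape of a keyed challenge-response, with all names hypothetical:

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    """Interrogator sends a random nonce, so replaying an old reply won't work."""
    return os.urandom(16)

def respond(shared_key: bytes, nonce: bytes) -> bytes:
    """Friendly transponder proves it holds the key without revealing it."""
    return hmac.new(shared_key, nonce, hashlib.sha256).digest()

def is_friend(shared_key: bytes, nonce: bytes, reply: bytes) -> bool:
    """Interrogator checks the reply in constant time."""
    return hmac.compare_digest(respond(shared_key, nonce), reply)

# Usage: both platforms were issued shared_key before the mission.
shared_key = os.urandom(32)
nonce = make_challenge()
assert is_friend(shared_key, nonce, respond(shared_key, nonce))
```

The red-coat problem survives intact: a silent transponder looks exactly like a foe, and a captured key looks exactly like a friend.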

In this new paradigm we see the ever-increasing deployment of armed drones, their triggers pulled by over-the-horizon desk pilots safely ensconced thousands of miles away. Now imagine a future where unmanned combat platforms (think Skynet from James Cameron's sci-fi film The Terminator) have the green light to pick and execute targets based on predesignated criteria.

The ethical challenge in pursuing the robotic aspect of the Third Offset is deciding who gets to decide. A formalized kill list (the so-called disposition matrix) is an example of how far the current administration has gone to establish protocols for armed-drone combat. These meetings are usually conducted in the White House on a Tuesday, known to insiders as Terror Tuesday, where a matrix of criteria is cross-indexed against individuals suspected of waging a terror war on the United States and her allies. So what's in the forecast for armed drones? A recent online article from Foreign Policy magazine reports:

“Dozens of countries now have fleets of unmanned aircraft, and at least nine governments have armed drones: China, France, Iran, Israel, Nigeria, Pakistan, South Africa, Britain, and the United States. The report said Washington’s reliance on secretive drone strikes could lead other countries to cite the U.S. raids as justification for their own unilateral strikes, without a clear basis in international law.”

The next logical step in this symbiotic relationship between man and machine would be to program killing machines to make their own selections based on input from a higher authority, with specific guidelines and criteria. Think of these hunter robots deploying on search-and-destroy missions.
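In code, that step moves the human signature from the trigger to the criteria: the higher authority authors the rules of engagement before launch, and the machine applies them on its own. A minimal sketch, with every criterion and field hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Track:
    affiliation: str              # from the IFF pipeline: "friend" / "foe" / "unknown"
    inside_engagement_zone: bool
    emitting_hostile_radar: bool  # stand-in for any observed hostile act

@dataclass
class MissionCriteria:
    """Predesignated rules of engagement, signed off by a human authority
    before launch: the only human input the platform receives."""
    zone_only: bool = True
    require_hostile_act: bool = True

def select_targets(tracks: list[Track], roe: MissionCriteria) -> list[Track]:
    """The machine picks its own targets, constrained by human-authored criteria."""
    selected = []
    for t in tracks:
        if t.affiliation != "foe":
            continue  # the modified First Law: never a friend, never an unknown
        if roe.zone_only and not t.inside_engagement_zone:
            continue
        if roe.require_hostile_act and not t.emitting_hostile_radar:
            continue
        selected.append(t)
    return selected
```

Every argument in this essay lives in those few if-statements: who writes them, who audits them, and who answers when they are wrong.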