The Third Offset Must Update Asimov's Laws of Robotics

July 21, 2016 Topic: Security Region: Americas Blog Brand: The Buzz Tags: Robotics, Science Fiction, Defense, Technology, Drones


Applying sci-fi ethics to modern warfare.

A declared “foe” downloaded into the mind of a machine is what we will face sooner than we are willing to admit in polite company, even as we delight in having, at our fingertips, autonomous parallel parking or Amazon delivering our precious parcels. And what about the risk of collateral damage? How do we decide to which binary category, friend or foe, a neutral party belongs? But that is a complex question of its own, beyond the scope of this discussion.

So, in answer to the question: yes, we can assimilate Asimov’s Laws of Robotics into the Third Offset Strategy, provided we replace the word “human” with “friend” and, to appease the United Nations Convention on Certain Conventional Weapons (CCW), we do not forgo the human trigger puller.
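The two conditions above can be made concrete. What follows is a minimal illustrative sketch, not anything drawn from the Third Offset Strategy itself: the categories, function names, and the `human_authorized` flag are all hypothetical, invented only to show the logic of a First Law reworded from “human” to “friend” while retaining a human trigger puller.

```python
from enum import Enum

class Category(Enum):
    """Hypothetical identification categories for a target."""
    FRIEND = "friend"
    FOE = "foe"
    NEUTRAL = "neutral"

def may_engage(target: Category, human_authorized: bool) -> bool:
    """First Law, reworded: the machine may never injure a FRIEND.
    CCW constraint: even a declared FOE requires a human trigger puller."""
    if target is Category.FOE and human_authorized:
        return True
    # Friends and neutrals are never valid targets,
    # and no engagement proceeds without human authorization.
    return False

print(may_engage(Category.FOE, human_authorized=True))     # True
print(may_engage(Category.FOE, human_authorized=False))    # False
print(may_engage(Category.NEUTRAL, human_authorized=True)) # False
```

Note that the sketch deliberately has no rule for reclassifying a neutral party; as the previous paragraph observes, that question is the hard part, and no amount of clean code answers it.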

This is not the clean answer most hoped for. Even if our best and brightest adhere to Asimov’s robotic laws and refrain from building automated killing machines, you can bet that other nations and the occasional bad actor will. After all, it is neither as hard nor as costly as developing nukes. Perhaps far off in our future is the prospect that autonomous machines themselves will have the intellectual capacity and sense of self to decide whether the human race, as a whole, is friend or foe. That is the scary part.

JG Randall is a former Marine. He is currently a stockbroker.

Image: Research robot in the MIT Museum. Wikimedia Commons/@Daderot.