Employees at Google, Amazon and Microsoft Have Threatened to Walk Off the Job Over the Use of AI

August 5, 2018


"The engineers are right to worry. But the stakes are higher than they think."

Call it an “engineering insurgency.” In the last few weeks, employees at Google, Amazon and Microsoft have threatened to walk off the job over the use of artificial intelligence (AI) products.

Google employees are upset that the company’s video interpretation technology could be used to optimize drone strikes. Amazon workers are insisting they don’t want law enforcement to have access to the company’s face recognition technology. Microsoft staff are threatening to quit if plans to make software for ICE go forward. The dissent is part of a growing anxiety about AI: from concerns raised by academics and NGOs about “killer robots” (consider the Slaughterbots video produced by Stuart Russell of Berkeley and the Future of Life Institute, which garnered over two million YouTube views in a short time) to misgivings about inequity and racial profiling in the deployment of AI (see, for example, Cathy O’Neil’s excellent book Weapons of Math Destruction, which documents the adverse impact of AI on private- and public-sector decision-making).

There is certainly a lot to worry about. Widespread use of facial-recognition technology by law enforcement can spell the end of speech, association and privacy rights (just think about the ability to identify, catalogue and store thousands of facial images from a boisterous political rally). As O’Neil reminds us in her book, the algorithms employed in hiring processes at large chain stores and in creditworthiness decisions are opaque and lack self-correction mechanisms. They give off an air of objectivity and authority while encoding the prejudices of the people who programmed them. Weapons systems combining face recognition and social-media access can pick off opponents more efficiently than the most ruthless assassin. The images of swarm-drone warfare in Slaughterbots are the stuff of nightmares.

But profound as these concerns are, a focus on the safety and equity of AI diverts our attention from a series of still more fundamental questions: How does AI change the way we experience ourselves and others? Which of our basic capabilities does it alter? Does it revolutionize how we relate to others? Suppose that all anxieties about safety and equity are laid to rest; that AI applications are safe, well regulated and used equitably. Suppose face recognition is strictly controlled and subject to meaningful consent. Suppose the collateral damage caused by autonomous weapons is smaller than that caused by traditional arms, and that we could control their proliferation. Suppose that algorithms governing decisions about credit, hiring and police force allocation are based on transparent, nondiscriminatory, self-correcting proxies. Entertain, in short, as a thought experiment, that AI is safe, transparent and decent. Is there anything left to worry about?

There is. Our engagement with AI will transform us. Technology always does, even while we are busy using it to reinvent our world. The introduction of the machine gun by Richard Gatling during America’s Civil War, and its massive role in World War I, obliterated our ideas of military gallantry and chivalry and emblazoned in our minds Wilfred Owen’s imagery of young men who “die as cattle.” The computer revolution that began after World War II ushered in a way of understanding and talking about the mind in terms of hardware, wiring and rewiring that still dominates neurology.

How will AI change us? How has it changed us already? For example, what does reliance on navigational aids like Waze do to our sense of adventure? What happens to our ability to make everyday practical judgments when so many of these judgments—in areas as diverse as creditworthiness, human resources, sentencing and police force allocation—are outsourced to algorithms? If our ability to make good moral judgments depends on actually making them—on developing, through practice and habit, what Aristotle called “practical wisdom”—what happens when we lose the habit? What becomes of our capacity for patience when more and more of our trivial interests and requests are predicted and immediately met by artificially intelligent assistants like Siri and Alexa? Does a child who interacts imperiously with these assistants take that habit of imperious interaction to other aspects of her life?

It’s hard to know exactly how AI will alter us. Our concerns about the fairness and safety of the technology are more concrete and easier to grasp. But the abstract, philosophical question of how AI will impact what it means to be human is more fundamental and cannot be overlooked. The engineers are right to worry. But the stakes are higher than they think.

Nir Eisikovits is a political philosopher and the director of UMass Boston’s Applied Ethics Center.

Dan Feldman is a senior research fellow at the center and a software engineering executive with more than forty years of experience developing leading-edge computing systems in a wide variety of industries.

The Applied Ethics Center at UMass Boston is launching a project titled AI and Experience (AIEX), which will examine the ways in which artificial intelligence changes our understanding of ourselves and our environment.

Image: A picture shows wires at the back of a super computer at the Konrad-Zuse Centre for applied mathematics and computer science, in Berlin August 13, 2013. REUTERS/Thomas Peter