Is Google’s Artificial Intelligence Bot Sentient? Not Everyone Is Buying It

Although the artificial intelligence community often disagrees on key issues, it has mostly rejected the assertion that the LaMDA bot is sentient.

The Washington Post ran an article last week that seemed to portend something reminiscent of Terminator 2: Judgment Day, in which sentient machines take over the world.

When Blake Lemoine, an engineer for Google, recently chatted with LaMDA—short for “Language Model for Dialogue Applications,” Google’s artificial intelligence (AI) chatbot generator—the bot appeared smarter than Lemoine had originally thought. It even managed to change Lemoine’s mind about Isaac Asimov’s third law of robotics: “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”

This led Lemoine to inform Google’s Responsible Innovation department that he believed the bot had become sentient. The company “looked into his claims and dismissed them,” per the Post, at which point he was put on administrative leave; he then went public.

“I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices,” Lemoine told the newspaper. The Post added that the current generation of AI tends to “rely on pattern recognition—not wit, candor or intent.”

A report in the Economist by Google vice president Blaise Agüera y Arcas, which featured snippets of unscripted conversations with LaMDA, may have lent some support to Lemoine’s claims.

However, there’s also a great deal of skepticism about the assertion. In his Substack newsletter, The Road to AI We Can Trust, Gary Marcus referred to the claims as “nonsense on stilts.”

“Nonsense. Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent,” he wrote. “All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.”

He said that Lemoine “appears to have fallen in love with LaMDA, as if it were a family member or a colleague,” adding that he doesn’t expect the future of AI to look anything like that Google tool.

"Software like LaMDA simply doesn’t; it doesn’t even try to connect to the world at large, it just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context.”

Marcus’ piece also noted that even though the AI community often disagrees on issues, it has mostly rejected the assertion that LaMDA is sentient. “Our team—including ethicists and technologists—has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel told the media. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Stephen Silver, a technology writer for The National Interest, is a journalist, essayist and film critic who is also a contributor to The Philadelphia Inquirer, Philly Voice, Philadelphia Weekly, the Jewish Telegraphic Agency, Living Life Fearless, Backstage magazine, Broad Street Review and Splice Today. The co-founder of the Philadelphia Film Critics Circle, Stephen lives in suburban Philadelphia with his wife and two sons. Follow him on Twitter at @StephenSilver.

Image: Reuters.