Large Language Models Are Small-Minded

Tamping down unreasonable fears will allow us to attend to the serious matters of the economic and social impacts of the latest advances in artificial intelligence.


After the initial near-euphoria about Large Language Models, or LLMs, that power generative artificial intelligence (AI), the mood has gone sour. The spotlight shines now on doomsday scenarios where LLMs become self-aware, go out of control, and extinguish humanity.

Fear of sentient robots is hardly new. In an 1899 short story, "Moxon's Master," Ambrose Bierce conjured a robot created by an inventor named Moxon. It looked like a person, if a dour one, but it wasn't smart enough even to beat Moxon at chess. And when it lost, the robot revealed deep wells of uncontrolled emotion: it murdered Moxon.


This fear has maintained its popularity ever since in books, plays, and movies. Some bad robots appeared simply as machine systems, like homicidal HAL in Stanley Kubrick’s classic 2001: A Space Odyssey. Some robots look human, like the Terminators. And beyond the murderous robots, there are sometimes big networks of robotic systems, such as in The Matrix, whose aim is to enslave humanity. Even Isaac Asimov, who tried to rein in robots with three laws that forbade doing harm to humans, worried that robots could circumvent such strictures.

ChatGPT and Bard are two prominent chatbots powered by LLMs that amaze with sophisticated answers to questions. These systems have sparked a huge wave of investment in new LLM-powered services. And they have unleashed a torrent of anxiety about how their proneness to “hallucinate” (make stuff up) might create havoc with fake news, stolen elections, massive job losses, undermined trust in business, or even destabilization of national security. The worst fears concern the potential for the machines to become sentient and subjugate or exterminate us. A chorus of leading voices from the worlds of high tech and politics has made a case, best summed up by Henry Kissinger, that current advances in AI have put the world in a “mad race for some catastrophe.”

Our assessment is that the furor over the extinction prophecy has gotten the better of us and is distracting from the important work of learning how to use an extremely valuable but inherently error-prone technology safely.

The core of ChatGPT is a huge artificial neural network of 96 layers and 175 billion parameters, trained on hundreds of gigabytes of text from the Internet. When presented with a query (prompt), it responds with a list of the most probable next words. A post-processor chooses one of the words according to their listed probabilities. That word is appended to the prompt, and the cycle repeats. What emerges is a fluent string of words that are statistically associated with the prompt.
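To make the cycle concrete, here is a toy sketch of our own in Python. A tiny table of word-frequency counts stands in for the real network's billions of parameters (an assumption made purely for illustration), but the loop is the one just described: list the probable next words, pick one in proportion to its probability, append it to the prompt, and repeat.

```python
# A toy stand-in for the generation cycle described above. A tiny bigram
# frequency table plays the role of the real network's 96 layers and
# 175 billion parameters; the loop itself is the same: list probable next
# words, sample one in proportion to its probability, append, repeat.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which words follow which word, a crude statistical model of language.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(prompt_word: str, length: int = 8) -> str:
    """Repeatedly sample the next word according to its observed probability."""
    output = [prompt_word]
    for _ in range(length):
        candidates = next_words.get(output[-1])
        if not candidates:               # no observed continuation: stop early
            break
        words = list(candidates)
        weights = [candidates[w] for w in words]
        output.append(random.choices(words, weights=weights)[0])
    return " ".join(output)

print(generate("the"))  # a fluent-looking string nobody has checked for truth
```

Even in this miniature form, the point stands: nothing in the loop checks whether the output is true; it only checks that each word is statistically likely to follow the last.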

These strings of words are drawn from multiple text documents in the training set, but the strings do not appear in any single document. ChatGPT has no way to verify whether a response is truthful. Responses that make no sense are called “hallucinations,” though they are nothing more than statistical inference from the training data.

Despite their unreliability, LLMs can be useful for amusement and for initial drafts of documents, speeches, research projects, and code. The smart thing is to use them for these purposes but not in any application where harm can result from invalid answers. In fact, it is not hard to imagine harnessing the machine impartiality of ChatGPT to solve contentious problems. For example, we think a robotic approach to gerrymandering would be a great way to build confidence in AI. Task competing LLMs with designing congressional districts that look like simple geometric forms rather than exotic reptiles. The main guidance would be that the districts would have to be as balanced as possible between the registered voters of the two major parties. Our bet is that bots will succeed wildly where humans have failed.
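To make the idea concrete, here is a purely hypothetical sketch of the kind of scoring rule the competing bots could be judged against. The district names, voter counts, and the choice of the Polsby-Popper compactness measure are our illustrative assumptions, not features of any existing redistricting system.

```python
# A hypothetical scoring rule for machine-drawn districts: reward simple,
# compact shapes and near-even registration between the two major parties.
# All names and numbers below are invented for illustration only.
import math
from dataclasses import dataclass

@dataclass
class District:
    name: str
    registered_a: int    # registered voters, party A
    registered_b: int    # registered voters, party B
    area: float          # square kilometers
    perimeter: float     # kilometers

def imbalance(d: District) -> float:
    """Fraction by which one party's registration exceeds an even split."""
    total = d.registered_a + d.registered_b
    return abs(d.registered_a - d.registered_b) / total

def compactness(d: District) -> float:
    """Polsby-Popper score: 1.0 for a circle, far lower for exotic reptiles."""
    return 4 * math.pi * d.area / (d.perimeter ** 2)

def plan_score(plan: list[District]) -> float:
    """Higher is better: compact shapes and balanced registration."""
    return sum(compactness(d) - imbalance(d) for d in plan) / len(plan)

plan = [
    District("District 1", 102_000, 98_000, area=1500.0, perimeter=160.0),
    District("District 2", 95_000, 105_000, area=1400.0, perimeter=150.0),
]
print(f"plan score: {plan_score(plan):.3f}")
```

A bot's task would then be to search for the plan with the highest score, a well-posed optimization problem rather than a partisan negotiation.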

What about the fears of sentience? Can LLMs eventually absorb so much text that they possess all human knowledge and are smarter than any of us? Are they the end of history? The answer is a clear no. The claim that all human knowledge can eventually be captured in machines makes no sense. We can only put into machines knowledge that can be represented by strings of bits. Performance skills such as sports, music, master carpentry, and creative writing are prime examples of knowledge that cannot be precisely described and recorded; descriptions of a skill do not confer the ability to perform it. And even if performance skill could be represented, it resides in forms that are inaccessible for recording—our thoughts and reflections, our neuronal memory states, and our neuromuscular chemical patterns. The sheer volume of all such nonrecorded—and unrecordable—information goes well beyond what might be possible to store in a machine database. Whatever functions LLMs can perform are small compared to human capabilities.

In addition to this, statistical inference is surely not the whole story of human cooperation, creativity, coordination, and competition. Have we become so mesmerized by Large Language Models that we do not see the rest of what we do in language? We build relationships. We take care of each other. We recognize and navigate our moods. We build and exercise power. We make commitments and follow through with them. We build organizations and societies. We create traditions and histories. We take responsibility for actions. We build trust. We cultivate wisdom. We love. We imagine what has never been imagined before. We smell the flowers and celebrate with our loved ones. None of these is statistical. There is a great chasm between the capabilities of LLMs and those of human beings.

And beyond LLMs, there is no sign on the horizon of a more advanced technology that comes close to genuine intelligence.


So, let’s take a sober attitude toward LLMs, starting by curbing the sensational talk. What if we use the phrase “statistical model of language” instead of “Large Language Model”? Notice how much less threatening, even silly, the extinction prophecy sounds when expressed as, “Humanity goes extinct because of its inability to control statistical models of language.”

Tamping down unreasonable fears will allow us to attend to the serious matters of the economic and social impacts of the latest advances in artificial intelligence, and of LLMs’ penchant for inaccuracy and unreliability. Let us also address the geopolitical stresses between the United States, China, and Russia, which could be exacerbated by an unbridled military arms race in AI that might make going to war seem more thinkable—and which would actually heighten the risks of nuclear escalation by the side losing a machine-based conflict. In this respect, we concur with Kissinger that advanced AI could catalyze a human catastrophe.

Above all, as with previous periods that featured major technological advances, the challenge now is to chart a wise path around fear and hype.

John Arquilla and Peter Denning are distinguished professors at the U.S. Naval Postgraduate School. John Arquilla’s latest book is Bitskrieg: The New Challenge of Cyberwarfare (Polity, 2021). Peter Denning most recently co-authored Computational Thinking (MIT Press, 2019).

The views expressed in this article are solely theirs.
