Sam Harris's Guide to Nearly Everything

February 23, 2011


Mini Teaser: Contrary to Harris’s latest screed, there is no such thing as a science-based universal morality. And abolishing religion will do nothing to rid mankind of its ills.

by Scott Atran

Sam Harris, The Moral Landscape: How Science Can Determine Human Values (New York: Free Press, 2010), 304 pp., $26.99.

FOR SAM Harris, morality is “an undeveloped branch of science” that is all about separating lies from truth. Evil stems from lies and willful blindness to facts and reason. Good comes from rational, evidence-based standards for debunking lies and evaluating truths about the human condition. In this worldview, “Only a rational understanding of human well-being will allow billions of us to coexist peacefully, converging on the same social, political, economic, and environmental goals.”

But here’s the rub: the road to redemption is blocked by religious conservatives who “believe that values must come from a voice in a whirlwind.” Then, seeping from “the ivory tower,” come “secular liberals,” with their “multiculturalism, moral relativism, political correctness” borne of collective guilt “for the crimes of Western colonialism, ethnocentrism, and racism,” which leads to cowardice in the face of dogmatic bullies. So blow ye the trumpet and sound the alarm: if we don’t act soon in the ways this man suggests, then Western civilization could well succumb: “The juxtaposition of conservative dogmatism and liberal doubt . . . has hobbled the West in its generational war against radical Islam; and it may yet refashion the societies of Europe into a new Caliphate.”

Over the last few centuries, many scientists and scientifically minded thinkers have expressed the hope that science might lead to a more peaceful, prosperous and happier world. In The Impact of Science on Society, Bertrand Russell wrote:

There are certain things that our age needs, and certain things that it should avoid. It needs compassion and a wish that mankind should be happy; it needs the desire for knowledge and the determination to eschew pleasant myths; it needs, above all, courageous hope and the impulse to creativeness.

Science, Russell argued, could help determine what is needed for happiness and what should be avoided. Religion, especially Christianity, should be shunned for the great harm it has done humankind and because it “encourages stupidity.”

Harris believes that recent advances in understanding the human brain now more reliably point the way to a “science of human flourishing,” that is, “a global civilization based on shared values” where religion and other forms of false and irrational beliefs that are responsible for cruelty and injustice in the world are banished forever. Today, though, Islam is Public Enemy Number One.

Perched on high with a “privileged view of the ‘culture wars’” in the wake of his religion-bashing best seller, The End of Faith, and armed with a freshly minted PhD in neuroscience, Harris leads the charge in ways that Russell, a scientific thinker of great insight and nuance (albeit with a fundamentally flawed view of how children learn language and knowledge of the world), would not likely have ventured.

For the method of good science is doubt; the religion of the sanctimonious is certainty. Yet for Harris, “the primacy of neuroscience and the other sciences of mind on questions of human experience cannot be denied.” And neuroscience, or rather Harris’s own two dissertation experiments on a few dozen people who live around UCLA, tell us that (in all times, places and contexts) “the division between facts and values is intellectually unsustainable.” One neuroimaging study purportedly slam-dunks the conclusion that religious beliefs are simply false beliefs about “the nature of reality.” Since “it seems clear that as societies become more prosperous, stable, and democratic” they stop promoting religion, it follows that, “contrary to the opinions of many anthropologists and psychologists, religious commitment ‘is superficial enough to be readily abandoned when conditions improve to the required degree.’” And “clearly, religion is largely a matter of what people teach their children to believe about the nature of reality,” so unlearning religion just requires reeducation.

Despite expressions like “clearly” and “cannot be denied,” which here obfuscate complicated matters that Harris deems irrelevant, ridicules or simply ignores, this work contains precious little science. There is, however, much playacting at science to justify a peculiar sort of Brave New World where atheism will help do away with female genital mutilation and lie detectors will preclude pleading the Fifth Amendment.

There is also much that is politically pernicious here: you don’t have to read Machiavelli to understand how Osama bin Laden and Glenn Beck seem to need one another to rile audiences to their sides, and now Sam Harris means to bring even the more sober and saner members of our society squarely into the ideological fray by pretending to be heavens above it. There is, however, one consolation: because the hysteria around 9/11 has abated a bit, this book is not likely to have the enormous success of Harris’s earlier effort (it is also less lively and more plodding in its preaching).

HARRIS BELIEVES that experts who think like he does can guide the way to the greatest good for mankind. He thus pretends to refute Enlightenment thinker David Hume’s famous distinction between what is, which is the province of science, and what ought to be, the province of morality. To do this, Harris adopts a version of utilitarianism (specifically, hedonic utilitarianism), a philosophy made influential over two centuries ago by English philosopher Jeremy Bentham, an early advocate of gender equality, welfare economics, animal rights, and the “principle of utility” or “the greatest good for the greatest number.”

From Bentham’s vantage, if grown men kicking around an inflated pigskin gave more pleasure to more people than poetry or participating in politics, then society should devote more of its resources to kicking inflated pigskins around than to these other pursuits. British social theorist John Stuart Mill redubbed the principle of utility “the principle of greatest happiness.” Mill also thought that one ought always act so as to produce the greatest happiness for the greatest number. But only “within reason,” meaning that the opinions of educated intellectuals—like scientists and moralists—should be given more weight than the pleasures of the masses.

But even forgetting over two millennia of unresolved philosophical debate about morality, there are huge problems with utilitarianism, none of which Harris seriously addresses. How, even in principle, should we maximize utility, for whom and for how long? By considering foremost our own pleasures and preferences, some average of everyone’s or those decided by moral experts? Should strangers have equal weight with family and friends; should minorities be compelled to accept the decisions of a majority; should concern for the nation be subject to concerns voted for at the United Nations; should we seek to maximize global respect for human rights? And which rights: to bear arms and hunt (no, for Harris), to health care (yes), to abortions (yes), to extramarital affairs (no)?

As economists, political scientists, game theorists, psychologists and philosophers have long noted, there are intractable problems with any general standard of happiness. Harris does not have an answer to these utility-maximization quandaries, however much he may protest to the contrary. He just cites some easy cases of bad behavior—Nazis, mutilators of female genitalia, child abusers, suicide bombers—waves his hands, and assures us that only fools or those ignorant of his neuroimaging experiments could deny that morality is measurable and that goodness can be objectively determined with the help of analogies to health and chess.

Nobel Prize–winner Daniel Kahneman studies what gives Americans pleasure—watching TV, talking to friends, having sex—and what makes them unhappy—commuting, working, looking after their children. So this leaves us where . . . ? For Harris, like “good health,” it may be hard to pin down “well-being” precisely (and standards can change over time), but most people know it when they experience it, so that a scientific community of (honest, unbiased, Harris-like) “moral experts” can safely provide society with a consensus on how well-being may be best achieved. You can bet the bank that Harris’s committee of moral experts would not come down on the side of the French or Mark Twain. (“The only way to keep your health is to eat what you don’t want, drink what you don’t like, and do what you’d rather not.”)

Also, as in a chess match, there may be several good or bad strategies depending on previous choices and developments (“peaks and valleys” in Harris’s moral landscape). But, the author warns, the varying objective contexts that lead to different moral equilibriums mustn’t be confused with cultural traditions and preferences, which Harris generally ignores or derides as fodder for “moral relativism.”

MORAL RELATIVISM is the notion that there is no universal moral standard by which to judge others: that we should tolerate the behavior of others if it makes sense relative to their cultural traditions, no matter how counter to our own moral standards. Moral relativism is obviously a bad thing for Harris, and he closely allies it with “cultural relativism,” another taboo that accepts the inherent value of less modernized societies. As Harris explains, it is largely the fault of American anthropologists:

Robert Edgerton performed a book-length exorcism on the myth of the “noble savage,” detailing the ways in which the most influential anthropologists of the 1920s and 1930s—such as Franz Boas, Margaret Mead, and Ruth Benedict—systematically exaggerated the harmony of folk societies and ignored their all too frequent barbarism.

Hogwash. Boas instituted a “four-field” regime of science in anthropology (linguistics, archaeology, human biology, social science), and his students, Benedict and Mead, were implacable foes of fascism and Stalinism as well as of brutality in any society. Like British social anthropologists who worked for the colonial regime, these American cultural anthropologists believed that each society had its own moral system, which kept the group together as a functioning whole (there are theoretical problems with this organismic view, but that’s another story). Unlike the British colonial anthropologists, whose knowledge aimed, in part, to allow the British Empire to better manage cultural diversity, American anthropologists argued that understanding cultural diversity better allowed the world community to make informed moral choices. After all, other cultures could have moral values that might help us better understand the consequences of our own actions. You can see where Harris is going. By buying into cultural relativism, soft-sided intellectual apologists are excusing religion’s horrors and many other evil belief systems to soothe their politically correct souls.

I previously debated Harris (at the 2006 and 2007 “Beyond Belief” conferences at the Salk Institute), arguing that both religion and science have helped to accomplish good as well as bad things in the recent history of our species. And in full disclosure, Harris briefly takes aim at my research in his book. Indeed, I too am labeled a “moral relativist.” Moral bankruptcy is perhaps evident in my argument that torture is wrong, whether practiced by the previous American administration or the current Iranian regime. I guess I cater to cultural relativism in suggesting that we should understand Taliban culture in order to better engage them so that we can leave Afghanistan. Actually, it’s not because I think the Taliban have their own moral goodness, but because our continued presence in the region is destabilizing nuclear Pakistan, which is becoming a true strategic menace, and because no matter how creepy and cruel Taliban views and practices are (like those of our Saudi or Uzbek allies), the Taliban, unlike al-Qaeda, are interested in their homeland, not ours.

WHAT HARRIS really aims to “prove” is that humankind can (or should) be on an inexorable path to moral enlightenment. It is one that moves away from religion and toward science—dropping the sacred in favor of the rational. And if we only show people how ridiculous and “unbelievable” their hallowed beliefs are, they will discard them with alacrity. What Harris misses is that the “truth” he would now deem unquestionable in fact bubbled up from the primordial ooze of the sacred—sometimes to the worst of ends, but sometimes to the best.

Harris tells us: “I find reasons for hope” because “moral progress seems to me unmistakable. . . . Consider the degree to which racism in the United States has diminished in the last hundred years.” Yet it was not utilitarianism or science that drove America’s nineteenth-century abolitionist movement, observes Columbia University historian Simon Schama in The American Future, or the twentieth century’s civil-rights movement. It was a religious reckoning against “the national sin.” Secular intellectuals later helped to rouse support for civil rights, but it was the black churches and the inspiration to sustained struggle and sacrifice from preachers like Martin Luther King Jr. and his forebears that began creating a color-blind America, or at least a rainbow with no hard lines.

Recent work by teams of anthropologists, psychologists, political scientists and behavioral economists indicates that every cultural group entertains “sacred” and transcendent values that belie the logic of consequentialism, defying cost-benefit calculations and motivating costly commitments that involve undertaking actions independently of, or all out of proportion to, prospects of success. Thus, taking on the mightiest empire against all odds, the signers of the Declaration of Independence concluded: “with a firm reliance on the protection of divine Providence, we mutually pledge to each other our Lives, our Fortunes and our sacred Honor.” That is no more a factually driven sentiment than: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”

No reasonable study of human history up to the time of the American Revolution would have supported such an outlandish declaration. Indeed, human rights—including equality before the law and freedom to pursue happiness—are anything but inherently self-evident and natural in the life of our species. Cannibalism, infanticide, slavery, racism and the subordination of women are vastly more prevalent across cultures and over the course of history. It wasn’t inevitable or even reasonable that conceptions of freedom and equality should emerge, much less prevail among genetic strangers. These, when combined with faith and imagination, were originally legitimized by their transcendent “sacredness.”

I have no doubt that science produces knowledge, the validity of which is independent of human observation. What I do doubt is “that in ethics, as in physics, there are truths waiting to be discovered,” to use Harris’s words from 2004. Human rights weren’t discovered but invented for social engineering of a kind unprecedented in human history. The American and French Republics began to render real the fictions of individual and equal rights through new mores, laws and wars, and not through independent scientific discoveries.

There is an irony of history that completely escapes Harris and other new atheists in their evangelical quest for a global morality rooted in scientific truth. As philosopher John Gray of the London School of Economics convincingly argues, it is universal forms of monotheism, such as Christianity and Islam, merging Hebrew tribal belief in one God with Greek faith in universal laws applicable to the whole of creation, that originated the inclusive concept of Humanity in the first place.

Universal monotheisms created two new concepts in human thought: individual free choice and collective humanity. People not born into these religions could, in principle, choose to belong (or remain outside) without regard to ethnicity, tribe or territory. The mission of these religions was to extend moral salvation to all peoples, whether they liked it or not. Secularized by the European Enlightenment, the great quasi-religious isms of modern history—colonialism, socialism, anarchism, fascism, communism, democratic liberalism and accompanying forms of messianic atheism—have all tried to harness industry and science to continue on a global scale the Stone Age human imperative “cooperate to compete” (against the other isms, that is). These great secular isms, often relying on the science of the day to justify their moral values, have produced both massive killing to save the mass of humanity and great progress in human rights (lest I be accused of moral relativism again, my own preference is for a form of democratic liberalism less prone to wars of choice).

Harris’s own messianic moral absolutism, based on devotion to “truth,” leads to some rather nutty proposals that defy common sense and are justified by made-up history that is patently untrue. For example, he writes that with a proper lie detector in hand, “civilized men and women might share a common presumption: that wherever important conversations are held, the truthfulness of all participants will be monitored.” And this mind-reading technology will help “raise the quotient of justice in our world.” He ignores what life actually teaches—and what all great novelists have known—that people sometimes require big and small lies to survive.

In the spirit of Big Brother, Harris’s truth machines would ensure conformity to the law, as determined by the recommendations of the scientific community of moral experts:

In fact, the prohibition against compelled testimony itself appears to be a relic of a more superstitious age. It was once widely believed that lying under oath would damn a person’s soul for eternity, and it was thought that no one, not even a murderer, should be placed between the rock of Justice and so hard a place as hell.

Well, no. In fact, protection against compelled testimony was an Enlightenment concept, and the United States was the first country in the world to adopt it as a shield against injustice. It was to be a buckler in the face of tyranny for people of various political faiths.

HARRIS DISDAINS the role of “merely ‘sacred values’” without bothering to understand them. He also misconstrues the preposterous nature of core religious beliefs as forever false ideas that people are brainwashed into buying. Harris shares with religious dogmatists the illusion that religious beliefs are fixed, truth-evaluable propositions. And to back up his point, Harris touts an experiment he performed that went like this: Our author brought a group of Christians and a group of nonbelievers into the lab. He and his colleagues then asked the subjects to indicate whether certain statements—one set “religious” and one “factual” in nature—were true or false. The same area of the brain lit up whether the subject believed the religious statement or the fact to be true. So, in Harris’s words, the experiment shows “that the difference between belief and disbelief is the same, regardless of what is being thought about.” What’s more, the line between facts and values dissolves. This means that if the truth-knowers like Harris and his ilk could reeducate all the religious believers, they would summarily drop their illusions.

Now, most psychologists and neurobiologists will tell you that neuroscience is still in a very early stage of scientific development; that neuroimaging—which simply tracks blood flow (or blood oxygen levels) to areas of the brain—is a blunt instrument for testing cognitive theories mostly developed by other means. To claim that neuroimaging is a science of mind is like saying physics is a science of meter readings. And to say that facts and values, or secular and religious truths, are the same because their mental processing lights up some common area of the brain is as if readings from a rudimentary light meter showing no difference between a light bulb and the sun proved that the distinction between light bulbs and the sun “is unsustainable” and “does not exist.”

What Harris neglects to point out is that numerous experiments in numerous cultures indicate that core religious beliefs are “counterintuitive”; that is, they are logically impossible to understand based on the meaning of the words alone and therefore, by their very nature, can’t be captured in his study—phrases like “God is bodiless but sentient,” “He is one in three” and so on. Aristotle first observed that such notions, like “the four-footed wind,” violate the natural logic responsible for keeping track of the meaning of words in language. Their meaning can never be fixed and remains open to interpretation (therein also lies the flexibility of religious beliefs to survive over time). This does not entail that religious belief “therefore, cannot actually influence a person’s behavior,” a ridiculous claim that Harris attributes to me. It implies that the relationship between belief and behavior can vary enormously according to context (think early Christianity, which won over the marginalized masses of the Roman Empire through charitable works, versus the militarized Christianity that developed after Constantine’s conversion).

Anthropologists and psychologists have shown that people attend to, and remember, things that are unfamiliar and strange, but not so strange as to be impossible to assimilate. Ideas about God or other supernatural agents tend to fit these criteria. All the world’s cultures have religious myths that are attention arresting because they are counterintuitive. Still, people in all cultures also recognize these beliefs to be counterintuitive, whether or not they are religious believers. In our society, Catholics and non-Catholics alike are manifestly aware of the difference between Christ’s body and ordinary wafers and between Christ’s blood and ordinary wine. Catholics are no more crazed cannibals for their religious beliefs than Muslims are sick with sex when they invoke the pretty girls floating in paradise.

Religion is psychologically “catchy”—cognitively contagious—because its miraculous and supernatural elements grab attention, stick in memory, readily survive transmission from mind to mind and so often win out in the cultural competition for ideas. Like other human productions that are easy to think about and good to use, religious beliefs spontaneously reoccur across cultures in highly similar forms despite the fact that they do not directly evolve by natural selection and are not innate in our minds.

The religious and ideological rise of civilizations—of larger and larger agglomerations of genetic strangers, including today’s nations, transnational movements and other “imagined communities” of fictive kin—seems to depend upon what English empirical philosopher Thomas Hobbes referred to as the “privilege of absurdity” and what religious philosopher Søren Kierkegaard deemed the “power of the preposterous” (as in Abraham’s willingness to slit the throat of his most beloved son to show commitment to an invisible, no-name deity). Humankind’s strongest social bonds and actions, including the capacity for killing and allowing oneself to be killed, are borne of commitment to causes and courses of action that anthropologist Roy Rappaport describes as “sacred” and “ineffable”; that is, fundamentally immune to logical assessment for consistency and to empirical evaluation for likely costs and consequences.

Harris ignores or disdains other, far more data-driven analyses of religion across history and cultures as imagined worlds that try to find a way through humankind’s logically and empirically unsolvable existential dilemmas, including death, deception, catastrophe, loneliness and inequality; that psychologically attempt to manage the contradictory yearnings, impulses and needs of human nature; and that attempt to maintain lasting cohesion among genetic strangers in the face of constant opportunities for defection and social dissolution. Religion may not be, or no longer be, necessary for any of this. Yet its creative role in getting us out of the caves and begetting civilization is evident (archaeological research by the University of Michigan’s Joyce Marcus and others shows growing ritual complexity predicting the formation of increasingly larger and more complexly organized societies). All that is sacred, of course, hasn’t stopped us from batting one another over the head with clubs. To be sure, the in-group solidarity that religion promotes often exacerbates differences with outside groups, underscoring hostility and conflict. But religion does not require war, nor is it responsible for most of history’s violent conflicts and war deaths. In the Encyclopedia of Wars, Charles Phillips and Alan Axelrod surveyed nearly one thousand eight hundred violent conflicts throughout history, and fewer than 10 percent were religious. Religious motives accounted for few of the 100 to 150 million deaths in twentieth-century wars (mostly caused by World Wars I and II, Russia’s and China’s civil wars, along with Stalin’s and Mao’s purges).

But what we do need to understand is how the religious and the sacred play a part in conflicts today. Neuroimaging research, directed by Emory University’s Gregory Berns, suggests that sacred values are preferentially processed in those parts of the brain that deal with rule-governed behavior (rather than cost-benefit analyses) and that they are associated with greater emotional activity consistent with sentiments of “moral outrage.” This accords with experimental and survey research, led by Jeremy Ginges of the New School, in conflict situations between Israelis and Palestinians, Hindus and Muslims in Kashmir, Iran and the West, and between pro-life and pro-choice groups in America. Results reveal that material incentives (money, foreign aid) or disincentives (sanctions, collective punishment) offered by a group proposing that norms associated with sacred values be relaxed or abandoned generate moral outrage and increase people’s readiness to support violence. Such sacred values appear to be somewhat immune to the rationality of realpolitik or the marketplace.

Negotiation work with Middle East leaders by political scientist Robert Axelrod and me suggests that a rational, “business-like” approach may well backfire when sacred values are in play. For example, offering to provide material benefits in exchange for giving up a sacred value (think rights over Jerusalem) actually makes settlement more difficult because people see the offering as an insult rather than a compromise, and so become even more disgusted, angry and violent toward the other side. But the fact that sacred values are often associated with core religious beliefs means that they may be open to interpretation. Although people usually cannot be persuaded to “give and take” on sacred values, any more than they can be convinced to give up a piece of themselves or their child, they sometimes can be prodded into “reframing” sacred values through reinterpretation (Jerusalem is less a place than a portal to heaven, and open access to the portal is sufficient here on earth).

IN ONE of the world’s best-selling works, The God Delusion, Richard Dawkins, a world-class evolutionary biologist who strongly endorses Harris’s book (and to whom Harris repays homage), writes of the slavish gullibility of jihadis, which he also finds in children and Bible believers:

Suicide bombers do what they do because they really believe what they were taught in their religious schools: that duty to God exceeds all other priorities, and that martyrdom in his service will be rewarded in the gardens of Paradise. And they were taught that lesson not necessarily by extremist fanatics but by decent, gentle, mainstream religious instructors, who lined them up in their madrasas, sitting in rows, rhythmically nodding their innocent little heads up and down while they learned every word of the holy book like demented parrots.

What’s the scientific evidence presented for the idea that jihadis—or children for that matter—are robotic learners? None. The counterevidence is overwhelming but willfully ignored. Certainly madrassas exist that do shun secular education and encourage rote learning. But terrorist groups rarely draw from their students, because those students lack the social, linguistic and technical skills (computers, GPS, timers) needed to carry out operations successfully in hostile territory. Most madrassas cater to poorer elements of society. Stephen Burgess of the U.S. Air War College finds that in Pakistan, for example, the state spent a whopping 239 times more on military and security than on education and health even before 9/11, so that charity-funded madrassas have provided the only education for much of the rural poor population. No global jihadist group worth its salt is really interested in such people.

Very few suicide bombers ever attended a madrassa, apart from the poor rural madrassas of the Taliban and a few associated with Indonesia’s Jemaah Islamiyah (mostly three elite madrassas, but also occasionally involving alumni from up to fifty, out of some thirty thousand madrassas in the country—far less than 1 percent). Indeed, none of the 9/11 pilot-bombers or Madrid train bombers and just one of the London Underground bombers (only briefly before the operation) spent time in these religious schools. True, the 2009 Christmas Day airline underwear bomber, a secularly educated university student, did attend a radical madrassa in Yemen for a few weeks. But this hardly confirms what Dawkins (or Harris) suggests.

This is also a demonstration that great scientists, like Dawkins, can be fundamentally antiscience when it comes to moral matters: Isaac Newton, perhaps history’s greatest scientist, not only believed in witches and demons but also, as his letters at St. John’s College in Cambridge reveal, treated students and colleagues reprehensibly in personal and ethical matters; William Shockley, the Nobel Prize–winning physicist who coinvented the transistor, argued that “the application of scientific ingenuity to the solution of human problems” clearly and undeniably demonstrated that the civil-rights movement, by empowering inferior races, would lead to an overall degradation of human intelligence; and Dawkins himself wonders whether science should seriously study prospects for banning fairy tales, like Harry Potter, if they involve “bringing up children to believe in spells and wizards.”

According to Harris, Dawkins and other prominent neoatheists (Christopher Hitchens and Daniel Dennett round out the self-styled “Four Horsemen of the Apocalypse”), science education is a natural antidote to sacred terror. But independent studies by Oxford sociologist Diego Gambetta, forensic psychiatrist Marc Sageman, and journalist and political scientist Peter Bergen indicate that a majority of al-Qaeda members and associates went to college, that their college education was mostly science-oriented, and that engineer and medical doctor are the professions most represented in al-Qaeda. Much the same has been true for Hamas.

Now, “Consider the thinking of a Muslim suicide bomber,” as Harris, who appears never to have met a jihadi in his life, must know so well:

If one fully accepts the metaphysical presuppositions of traditional Islam, martyrdom must be viewed as the ultimate attempt at career advancement. . . . We know quite a lot about how such people think—indeed, they advertise their views and intentions ceaselessly—and it has everything to do with their belief that God has told them, in the Qur’an and the hadith, precisely what the consequences of certain thoughts and actions will be.

So much for fantasy; here are some facts. Examination of available cases of Muslim suicide bombers shows that few ever had a traditional religious education. Most jihadis, including suicide bombers, are “born again” into radical Islam in their late teens or early twenties, with little knowledge of the hadith and Koran. The idea that suicide bombing is a preferred form of jihad is restricted to a very small minority of Muslims (invoking jihad against infidels as the “sixth pillar of Islam”—on par with the five traditional pillars of belief in God, prayer, alms for the poor, fasting at Ramadan and pilgrimage to Mecca—is considered heretical by most religious Muslims). Most data-driven studies find that religion is not a highly significant predictor of who becomes a terrorist. Other factors, including friendship and family networks (Marc Sageman), perceived foreign meddling and occupation (the University of Chicago’s Robert Pape), and a sense of “national humiliation” (Ariel Merari, retired from Tel Aviv University), prove far more significant. Religion is not even close to the most powerfully predictive cause. And yet Harris wants to rid the world of Islam.

An analogy illustrates Harris’s distortions: Most fatal moving-vehicle accidents involve cars, particularly cars whose drivers happen to be drunk, sleep deprived or talking on a cell phone. One could propose banishing all cars (or all drivers) because they “clearly cause” most accidents, as Harris might put it. Although the facts suggest that it would be wiser to focus on drunk drivers, someone hell-bent on banning cars might reply: “This guy who finds drinking to be the most significant factor in fatal accidents is making the bizarre claim that no cars are involved in car accidents and that we would do better prohibiting all drinking, even hot toddies for people sick in bed with flu.” But, of course, that interpretation would be as silly as saying that I propose soccer be outlawed because research shows that participation in the sport and other action-oriented activities is a good predictor of which radicals bunch into violence.

Harris’s proposal to ban Islam (and wage war on it, if necessary) to stop suicide bombing would be terribly ineffective and wasteful if put into practice, given that the actual number of jihadist terrorists is some few thousand out of well over a billion Muslims. That’s about one per one hundred thousand, far from the “10 percent of Muslims are terrorists” that Glenn Beck baldly asserted on his radio show last December. (To put the actual threat in a bit of perspective: the odds of an American getting killed on an airplane by a terrorist are about 10 million to 1, or less probable than death by lawn mower.) Even if 10 percent of Muslims were to buy into jihadist ideology, then, given the need to prioritize the use of available resources in combating terrorism, the Harris-Beck proposal to end the current scourge of suicide bombing by ridding the world of Islam could prove harmful to the well-being of our society and most others. It could also help to resuscitate bin Laden’s flagging viral movement, which is rooted in a superficial but powerful message that political leaders and other influential people in the West want to kill Islam.
