On December 10, the United Nations’ Universal Declaration of Human Rights (UDHR) will mark its seventy-fifth anniversary. Adopted by the UN General Assembly on that date in 1948, the document enshrines fundamental civil, political, social, economic, and cultural rights. Its preamble begins with a “recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family,” followed by thirty articles detailing these rights.
The UDHR provided what scholars Margaret Keck and Kathryn Sikkink call a “common language” for later transnational human rights activism. In Seyla Benhabib’s words, it serves as “the closest document in our world to international public law.” Jack Donnelly similarly observed that the UDHR sets “the basic parameters of the meaning of ‘human rights’ in contemporary international relations,” marking its foundational status.
Yet the promise of universal human rights is threatened from all sides. As we approach the UDHR’s seventy-fifth anniversary, the Russian invasion of Ukraine—with alleged Russian war crimes including torture, rape, and the systematic abduction of Ukrainian children—is nearing two full years of large-scale conflict. The October 7 attack by Hamas on Israel, furthermore, echoes this brutality in gut-wrenching ways, while Israel’s retaliation in Gaza has escalated rapidly, including an Israeli airstrike on a refugee camp in Jabalya. For over a decade, an intensifying sense of hopelessness and pessimism has pervaded the cause of human rights, from Egypt to China and even the United States.
The intellectual environment increasingly reflects this downcast perspective. The New York Times—just weeks before the Hamas attack—reported on an uptick in public doubt among think tank analysts, economists, and diplomats that there truly is a universal set of values underpinning international human rights norms and laws. These doubts come on the back of speculations and assertions that the “liberal” or “rules-based” international order founded in part upon these values is fraying. The tumult of world politics and the perceived shift in the global distribution of power appear to undermine the universalist idea. Readers of the National Interest will intuitively recognize that longstanding debates over whether the international system underwent “great transformations” in the post-World War II era have intensified.
What are we to make of these claims and characterizations? Are universal human rights a bankrupt idea? Are the values that underpin such rights a mere illusion?
An idea in cognitive science—in the modern study of the architecture of the human mind—challenges these doubts: the values undergirding human rights, it indicates, are rooted in human nature. More specifically, the character of moral psychology is such that human rights are its optimal expression—not inevitable social constructs but the result of distilling shared cognitive resources into a social and political idea.
Quiet work done in cognitive science provides reason to believe that human beings possess a cognitive system responsible for the distinctive moral qualities of human life. This system provides the building blocks of human rights, with such rights representing the clearest view of this capacity to date.
The compliance—or lack thereof—of organizations, governments, and other actors with human rights is not, in this view, the proper metric to gauge the accuracy of the idea of human rights. Instead, we should look to those conditions in which our moral cognition has been put to the most sincere, rigorous, and sustained test by a representative sampling of humanity.
The drafting of the UDHR fits these conditions. On its seventy-fifth anniversary, we should reflect on its significance to human nature and international human rights. From this, we take away a central lesson about the future of human rights: they have always existed in a conflicted world; the point, especially if one wishes to rescue them, is to understand why.
This idea in cognitive science—“Universal Moral Grammar”—sounds like a lofty one, exactly the loftiness with which some proponents of universal human rights have become disillusioned. However, research in this domain started with exactly zero connections to international conventions like the UDHR. It instead operates on an apolitical, scientific, and bland assumption: that the study of moral cognition should proceed in the same way as the study of other aspects of human biology. If we are prepared to assume that capacities like vision or hearing occupy distinctive roles within the human mind, why would we forgo this assumption in the study of moral cognition? Put another way, why would our capacity to hear be “grown” biologically, but our ability to morally evaluate be “learned” through culture?
Given the importance of morality in human social life, we are prone to inflate differences between individuals or groups; not only do we conflate morality per se with cultural practices, but we also perceive moral diversity in a way we would not with other cognitive mechanisms—akin to viewing near- and far-sighted individuals as possessing two radically distinct visual systems. Few would accept this. The argument here is that morality is fundamentally no different.
This assumption is “bland” because we do not get excited by visual or auditory judgments. Morality is thought of differently—its role in human life is fundamental to organizing institutions, distributing resources, and interacting with one another. It is both commonplace and, at times, visceral. As philosopher and legal scholar Matthias Mahlmann puts it, there is a “mental space that has a normative dimension…a specific mental domain of morality….” Morality, as a human cognitive characteristic, exists.
The evidence is all around us. Virtually everyone in possession of the basic facts has a moral reaction to the October 7 Hamas attack and the subsequent Israeli response. Indeed, these moral judgments often feel as if they exist more in the gut and the heart than in the intellect—perhaps feeling more like facts than preferences. Individuals differ in their moral approval of these actions, but this does not detract from the normative dimension of their responses.
How individuals acquire this moral sense cannot be reduced merely to cultural particularities. As philosopher Susan Dwyer recognized, individuals do not learn the structure of moral dilemmas—the morally salient aspects of people, actions, and objects interacting with one another—but instead intuit it. We intuitively frame the world in normative terms.
Moreover, the various qualities individuals infer from or impose on moral dilemmas are not explicitly learned during development. Many scenarios involve interactions between people that can have multiple possible outcomes. Yet when tested, both adults and young children attribute a “presumption of innocence,” or a good intention, to those performing the actions—despite not being told such intentions are present. (One can find this presumption embodied in Article 11 of the UDHR.) Humans may also possess an “acute sensitivity” to the legally defined actions constituting harmful battery “as a property of the human mind,” as legal scholar John Mikhail argues.
Relatedly, social psychologist Daniel Sznycer and legal researcher Carlton Patrick find experimental evidence indicating that criminal law originates, in part, in an innate “valuation grammar” of the mind, finding that “multiple types of lay justice intuitions vary in lockstep” across cultures and over long periods of time with respect to criminal legislation. Finally, and more broadly, International Relations scholar David Traven articulates connections between cognitive moral architecture and the laws and norms of war, arguing they “are a by-product of an evolved cognitive system in a changing contextual environment.”
Research such as this goes to show, as Sydney Levine, Alan Leslie, and Mikhail note, that how we cognize morality goes beyond “heuristics and biases.”
There is a complexity to our moral judgments that is frequently underappreciated—something the philosopher John Rawls observed in a substantive analogy to Noam Chomsky’s work on linguistics in his classic A Theory of Justice. There is an informational gulf between the content of our moral judgments alone and the capacity to morally evaluate, a gulf that cannot be traced back to moral education or culture.
The outline of an explanation for this remarkable ability thus posits that human beings are naturally endowed with a cognitive mechanism that is principally grown, not learned.
Recognizing even this requires a tricky distancing from ordinary life: we cannot simply pick our favorite examples of moral good or evil and move from there to understand morality. Nor can we ask individuals their opinions on social and political issues (e.g., “Do you approve or disapprove of the United States’ support for Israel?”). We cannot even begin with clichéd ethical taxonomic categories, like the ethic of “community” contrasted with the ethic of “individualism.” These all unintentionally recruit far more cognitive action than is desired in the study of moral cognition.
The goal, as Mikhail puts it with a reference to Rawls, is to pinpoint those moral judgments that allow our moral capacity “to be displayed without distortion,” namely, those judgments made under conditions of sincere, rigorous, and sustained deliberation among culturally diverse individuals.
From Moral Cognition to Human Rights
All well and good, one might say, but what’s the point? Many people, after all, see the contestation of human rights and their uneven compliance as undermining the idea that they, or their moral underpinnings, could be universal.