The History of Fake News

Why can’t America reliably separate out fact, falsehood, opinion and reasoned analysis?

Roosevelt died, in all probability, ignorant of the map’s true provenance. But whether or not he, like Assistant Secretary Berle in Foggy Bottom, personally harbored suspicions about the map’s authenticity, on that sunny day in October 1941 Roosevelt needed it to be true, and that was what mattered. Thus, although the British put forward the fake news, Roosevelt was a willing political vessel for it. Several commentators have observed that fake news is not a fraud perpetrated on the unsuspecting, but rather a willful belief. A shrewd political operator, Roosevelt was no novice to narrative shaping, but he was likely willing to suspend disbelief in service of his policy goals. Indeed, the mud of deception often slides into self-deception.

One commentator asserted that the purpose of fake news “is not to pose an alternative truth . . . but to destroy truth altogether, to set us adrift in a world of belief without facts, a world where there is no defense against lies.” Actually, the purpose of fake news isn’t to destroy truth; it is to manipulate, to weaponize information, at times made out of whole cloth, to achieve political or societal goals. America is no more “post-truth” than it is “post-gravity”; it’s just that the terrible repercussions will take longer to drop. Information alchemy is about weaving straw into golden political outcomes. Several commentators have suggested that, during the 2016 presidential election, Russian president Vladimir Putin sought to engender a general crisis of civic confidence in the American electoral system. That’s a nice byproduct from his point of view, but even he knew full well that he could not destroy American democracy—he just wanted to manipulate it toward his own ends.

Likewise, the saga of the fake map wasn’t a British assault on truth as such; it wasn’t intended to cloud the American people in an epistemological fog in which it was unclear who were the aggressors in Europe. The British needed a political—and by extension military—outcome, and they assessed that the best way to achieve it was through bespoke disinformation.

In today’s deluge of information and disinformation, enabled in part by social media as news propagation outlets, the solution most proffered is “consider the source” as a way to separate wheat from chaff. Media outlets are trying to outcompete one another to earn a reputational halo. But, in the case of the fake map, Little Bill was a usually reliable source, and, if the British couldn’t be trusted, who could be? Indeed, the fall comes hardest when we are betrayed by trusted friends and those we admire. CIA’s own webpage homage dedicated to Stephenson is notably silent on the specifics of his greatest deception.

CIA has matured immeasurably from the heady and freewheeling days of the OSS, partly through the progression of intelligence officers from glorious amateurs to seasoned professionals, and partly in response to lessons learned from mistakes. Professional intelligence analysts are put through a rigorous analytical training pipeline that includes how to structure analysis, how to weigh sources, and how to consider competing hypotheses. They are taught that not every analytical conclusion is equally valid, and that nuances matter. They are taught to figuratively interrogate sources, to consider the source’s purpose in providing information, and to ask who the intended audience was. On the operational side, most raw intelligence generated by CIA’s case officers bears a health warning, a sort of caveat emptor, reminding analysts of what they should already know: “The source of the following information knew their remarks could reach the U.S. Government and may have intended to influence as well as inform.”

And in fact, many of the tools that intelligence analysts use every day are borrowed from the practice of history, with critical thinking and a skeptical mind at the top of the list. The analytical cadre of Donovan’s nascent intelligence bureaucracy was staffed with the best minds from leading universities, raising the question of whether Donovan, in his haste to please his intelligence consumer in chief and scoop his rivals, even paused to subject what would be considered raw liaison intelligence to any analysis at all.

Not everyone needs to be professionally trained as an intelligence officer or historian to wade through sources, but Hugh Trevor-Roper was both. In applying his craft to a primary source, he listed three questions that should be asked of every document: Is it genuine? Was the author in a position to know what he was writing about? And why does this document exist? Answers to these questions are the handmaidens of trusting information and halting the malign influence of fake news. Perhaps, before passing the map to Roosevelt, Donovan should have heeded the wise counsel of a different British subject, the historian E.H. Carr, who commanded: “interrogate documents and . . . display a due skepticism as regards their writer’s motives.” Indeed, what intelligence analysts have in common with historians is that the best of the bunch are skeptics.

One practical way that skepticism ought to manifest itself in considering the source was offered by historian and strategist B. H. Liddell Hart: “On every occasion that a particular recommendation is made, ask yourself first in what way the author’s career may be affected.” Or, as the Romans may have inquired, “cui bono?” Who benefits? Maybe this level of skepticism sounds paranoid, but as the aphorism goes, you’re only paranoid if there is no plot. Or, applied to the twenty-first-century information wars, only if there is no deception.

While considering the source is necessary, it is not sufficient—it’s a shortcut that too often turns into a handicap. Fact-based, objective reporting and analysis is surely the gold standard, but information consumers also have a role, even a civic obligation, to take some responsibility for what they allow themselves to accept as truth. It is this shortcut, crossed over into handicap, that manifests in demands that Facebook or Twitter do a better job of curating information on their platforms. Such demands elide the individual’s responsibility for skepticism and critical thought in the evaluation of evidence and argument. For the same reason that diet pills don’t work, it’s just not that easy; seeing results is going to take some discipline. Social-media sites, among others, are appropriately required to weed out extremist or illegal content, but filtering information is a more challenging feat. It would be convenient if they could run an algorithm and block bots and trolls, but disingenuous information, and especially fraudulent analysis of facts, would still remain. There is no Internet filter or setting that can remove conspiracy theory from the digital public square. Moreover, that might not be desirable in any case. It may be worth considering whether technological convenience, rapidly morphing into dependence past the point of no return, has a causal relationship to America’s contemporary intellectual helplessness.

Perhaps technology companies will develop a genius algorithm to filter out Russian bots and disable some troll accounts, but this will not stop overly credulous people from retweeting, sharing and reposting “news” that bears as much resemblance to reality as Cheez Whiz does to cheese. Despite significant strides, artificial intelligence remains ineffective against intellectually dishonest analysis, non sequitur conclusions and ideological spin. It is therefore dubious to hope that social-media sites will become guardian curators of fact-based knowledge and objective journalism. But there is no need to rely on technology companies to solve the problem of fake news: the do-it-yourself tools are readily available.

How does one begin to learn to discern fake news? By rediscovering the broad civic applicability of the historical method. It starts with modifying the national epistemological approach to acquiring knowledge, and, applied across the population of the United States, the impact could be profound.

Quite when America started deviating from critical thinking is unclear, but a test of American college students, the Collegiate Learning Assessment Plus (CLA+), shows that, in over half of the universities studied, there is no increase in critical-thinking skills over a four-year degree. The reasons for this are far from clear, but the pursuit of knowledge has become more argumentative, opinion-based and adversarial than illuminating. Research papers are reminiscent of watching a prosecutor lay out a criminal case on Law and Order.

The CLA+ findings track with an informal survey of professors’ experiences at over a dozen American (and some international) universities. In the main, here is how a research paper usually unfolds: Students set out a thesis about which they know very little at the outset, but about which they already seem to have well-developed or even passionate opinions, as if they have skin in the game, as if their thesis is personal and deeply held. They spend the rest of the paper proving the validity of these opinions, like a court case, beyond a reasonable doubt. They comb through material, hunting for the key nuggets of evidence that support their thesis and ignoring those equally important discoveries that don’t support their narrative. In the worst cases, logical fallacies are waved away because a conclusion “feels right” or “should be true.” Once enough similarly suggestive nuggets are accumulated, they are listed like items entered into evidence, often devoid of argumentation or theoretical framework. Moving on to their conclusions, the students restate their strongest bits of evidence and pronounce their thesis proved; case closed. Rediscovering the historical method and teaching the difference between argument and assertion offers promise.