How America's Generals Can Learn from Their Errors
Can they do it?
SECRETARY OF Defense James Mattis reportedly said: “I don’t lose any sleep at night over the potential for failure. I cannot even spell the word.” To paraphrase Trotsky, American generals may not be interested in failure, but failure is interested in them. In recent decades, the United States has suffered a number of stalemates and defeats in Vietnam, Iraq and Afghanistan. Despite the recurrent experience of military fiascos, there is a puzzling discrepancy in how U.S. officials think about past versus future loss. When leaders learn from historical cases, debacles often loom large and powerfully shape policy. But when officials plan prospective operations, they tend to neglect the possibility of disaster. As a result, military planners focus too much on avoiding a repeat of prior reversals, and not enough on the possibility that the new strategy will itself unravel.
One solution is to take inspiration from the business realm, where the school of “intelligent failure” encourages a healthier relationship with loss. By adopting the right set of tools, the military can become more adaptable and resilient.
CIVILIAN AND military officials in the U.S. national security community tend to learn more from past failure than from past success. By failure we mean military operations that did not achieve their intended core aims, or where the balance sheet was skewed toward costs rather than benefits. Leaders, for example, often draw historical analogies with prior policies to clarify the strategic stakes in a current issue or to suggest the optimum path forward. Strikingly, these analogies are overwhelmingly negative (do not repeat past errors), rather than positive (copy past successes). No more Munichs. No more Vietnams. No more Iraqs. And so on.
What’s more, failure is the primary catalyst for organizational or doctrinal change. It often takes a fiasco to delegitimize standard procedures. For example, America’s negative experience in Vietnam in the 1960s and 1970s, as well as in Lebanon in 1982–1984, spurred the Weinberger-Powell doctrine, which outlined a set of principles to assess the wisdom of military operations. More recently, the desire to avoid a repetition of the Iraq War lay at the core of the Obama doctrine.
The tendency to learn more from failures than successes is rooted in what psychologists call “negativity bias,” a core predisposition of the human mind in which bad is stronger than good. Negative factors loom larger than positive factors in almost every realm of psychology, including cognition, emotion and information processing, as well as memory and learning. Bad events are recalled more easily than good events, lead to more intense reflection and “why” questions, and have a much more enduring impact. “Prosperity is easily received as our due, and few questions are asked concerning its cause or author,” observed David Hume. “On the other hand, every disastrous accident alarms us, and sets us on enquiries concerning the principles whence it arose.”
Recent failures are especially salient because of the “availability heuristic,” where people recall vivid events that just happened, and then mistakenly think these events are representative or likely to reoccur. For example, the purchase of earthquake insurance tends to increase immediately after an earthquake and then drop off as people forget the disaster.
GIVEN THAT past failure is salient in memory and learning, we might expect that planning for future military operations would also highlight the possibility of loss. But, in fact, the opposite happens. When considering prospective uses of force, officials tend to downplay the possibility of disaster and focus instead on the opening steps of a strategy for victory. Put simply, past failure is illuminated in bright lights whereas future failure is hidden.
U.S. military war games, for example, often neglect the potential for loss. A 1971 review of education in the U.S. Army discovered that war games and other exercises were, “generally euphoric in nature—the U.S. Army always wins with relative ease.” By 2001, the war games were more sophisticated, but the outcome was the same. According to a study by Robert Haffa and James Patton: “the good guys win convincingly and no one gets hurt.”
When war games do provide a cautionary tale, the results may simply be ignored. In the early 1960s, before the United States sent ground troops to Vietnam, the RAND Corporation ran a series of war games called SIGMA to simulate a multi-year U.S. campaign in Southeast Asia. Chairman of the Joint Chiefs Maxwell Taylor led the communist side to a crushing victory. Despite this outcome, the United States pressed ahead with an intervention in Vietnam and Taylor maintained confidence that the United States would win—perhaps because the communists would lack the benefit of his leadership.
Preparation for real war may also neglect the possibility of failure. Planning for the Iraq War, for example, was overly optimistic about the stabilization phase and the possible risks of disorder or insurgency. The special inspector general for Iraq reconstruction concluded that, “when Iraq’s withering post-invasion reality superseded [official] expectations, there was no well-defined ‘Plan B’ as a fallback and no existing government structures or resources to support a quick response.”
Why do officials downplay the possibility of future failure? Psychologists have found that mentally healthy people tend to exhibit the psychological bias of overconfidence, by exaggerating their perceived abilities, control over events and likely upcoming success. Positive illusions in war games and military strategy are consistent with the well-established “planning fallacy” or the tendency to adopt optimistic estimates of the time and costs required to complete future projects. The Sydney Opera House was supposed to cost AUD$7 million and be completed in 1963, but it was actually finished a decade late at a cost of AUD$102 million. Interestingly, people tend to be too optimistic about the success of their own projects, but more realistic when assessing other people’s projects.
Overconfidence varies, for example, with national culture. Americans are particularly prone to positive illusions because self-confidence, a “can-do spirit” and winning are all valued traits in the United States. On the eve of D-Day, General George Patton told U.S. soldiers, “the very idea of losing is hateful to an American.” Studies suggest that Americans are more likely than Chinese people to believe they can control the environment and actualize their plans. Henry Kissinger noted that U.S. officials tend to see foreign policy problems as “soluble,” and pursue “specific outcomes with single-minded determination.” By contrast, Chinese officials are comfortable handling extended deadlock, and “believe that few problems have ultimate solutions.”
U.S. military culture reinforces overconfidence by lauding success and stigmatizing failure. At the heart of the military’s ethos is a commitment to achieve the mission. As Douglas MacArthur put it: “There is no substitute for victory.” Colin Powell declared that, “perpetual optimism is a force multiplier.” James Stavridis, the former Supreme Allied Commander in Europe, told me, “U.S. military culture is not particularly compatible with failure planning.”
Men also tend to be more overconfident than women. In a wide variety of domains, from salary negotiations to performance on game shows, men are more positive about their own skills, quicker to congratulate themselves for success and more likely to ignore external criticism in self-evaluations.
High-stakes issues can exacerbate overconfidence. Rationally, as the potential costs of a decision rise, greater consideration ought to be paid to the risks. But this is not always the case in planning for the use of force. At the operational level—for example, when refueling a ship—the U.S. military focuses on potential hazards through a process known as Operational Risk Management. But at the strategic level, the willingness to confront scenarios of failure may actually decline. Once officials have chosen to place American lives on the line, identifying flaws in the plan can trigger cognitive dissonance and a desire to avoid second-guessing. According to the journalist George Packer, any U.S. official who raised problems with the Iraq invasion plan risked “humiliation and professional suicide.”
In addition, downplaying the potential for future failure may serve organizational interests. The armed services compete for resources, prestige and autonomy, which encourages them to confidently predict they can handle any mission.
These various causes of overconfidence matter because U.S. national security officials may check most or all of the boxes: they are often male, American and in the military, with organizational incentives to play up the odds of success, and they face high-stakes issues.
Overconfidence can also be retrospective. In general, historical debacles loom large but there is an exception when people consider their own personal responsibility for failure. To protect their self-image (and their image in the eyes of others), people often adopt hagiographic autobiographies, downplay their role in causing negative outcomes and blame external forces. The “attribution error” in psychology predicts that people explain triumphs in terms of their own talents and wise strategy, and disasters in terms of environmental factors and random events beyond anyone’s control.
IN SUMMARY, historical debacles are highly salient and preparation for the use of force is often based on avoiding the last big mistake—with the caveat that any personal role in loss is downplayed. At the same time, the potential for future failure tends to be discounted in war games and military planning. Therefore, as the temporal focus shifts from past to future, the salience of loss declines.
For example, the desire to avoid a repeat of the Iraq War profoundly influenced the U.S. strategy in Libya in 2011. The Obama administration tried to make Libya the opposite of Iraq by constructing a broad international coalition, gaining support from the United Nations and the Arab League, and rejecting nation-building with U.S. ground forces. But the U.S. war plan in Libya also neglected the possibility of future loss, including the collapse of civil order following a rebel victory. In 2014, Barack Obama said: “we [and] our European partners underestimated the need to come in full force if you’re going to do this.” The lessons of Iraq were extremely prominent—except for the lesson that the war plan itself might be too optimistic.
How do we reconcile the visibility of past failure with the invisibility of future failure? The answer is that people are assessing different domains. In general, when people survey the world around them, including past events, they highlight negative information. But there is an exception when people judge their own personal capabilities and future prospects, when they are prone to overconfidence.
Are these biased ways of thinking about failure dangerous? Fixating on past negative experiences may produce under-learning, where officials suppress painful memories. For example, following the Vietnam War, the U.S. military tried to forget the entire experience of battling guerrillas, and failed to institutionalize the lessons learned. Another danger is over-learning, where officials try to avoid a repeat experience at any cost. Negative memories of the humanitarian mission in Somalia in 1992–1994 deterred Washington from acting to stop the genocide in Rwanda in 1994. Under-learning and over-learning might seem to be contradictory effects, but they are similar to an individual’s experience of trauma producing either amnesia or phobia.
Downplaying the possibility of future loss in war games and military planning is also hazardous. Of course, confidence is integral to military leadership and can encourage officials to persevere in the face of adversity, sometimes producing ultimate success. But overconfidence may also raise the odds of conflict. The belief in an easy victory can lead officials to initiate a campaign of choice like the Iraq War that might otherwise be avoided. Furthermore, the assumption of rapid success means that policymakers may be unprepared for negative contingencies like an insurgency and forced to improvise. The risk of being unready for a worsening mission is particularly noteworthy given that recent wars in Vietnam, Afghanistan and Iraq all deteriorated unexpectedly.
ONE SOLUTION to these problems is to take inspiration from the school of “intelligent failure” in the business world. Of course, the military and corporate realms are very different. When planning for war, the stakes are higher—life and death. But there are also important similarities. In both the military and business arenas, decisionmaking occurs in a highly competitive and uncertain environment, where leaders face the possibility of psychological bias. And business decisions can certainly have a major impact. One-third of the companies in the Fortune 500 in 1970 no longer existed just thirteen years later.
The school of intelligent failure promotes the effective anticipation and handling of negative experiences. In 2008, the nonprofit group Engineers Without Borders Canada created an annual “failure report” to document ventures that did not work. The following year—with consummate timing given the Great Recession—entrepreneurs set up a conference known as “FailCon” to exchange stories about mistakes that they (and other start-ups) had made. In 2011, the Harvard Business Review devoted an issue to the art of failing well: “Failure is inevitable and often out of our control,” noted the editors. “But we can choose to understand it, to learn from it, and to recover from it.” Groups like “Fail Forward” help organizations to handle failure more effectively and become more resilient. Entrepreneurs have even distributed “failure resumes,” or descriptions of projects that tanked. Meanwhile, a number of recent books like Megan McArdle’s The Upside of Down argue that failure is essential to success in business, and that the wreckage of loss can provide the raw material for gains.
Of course, the goal of intelligent failure is not loss itself. Rather, the objective is to encourage a growth mindset where negative outcomes—if dealt with appropriately—can generate innovation and hardiness. We can consider specific lessons and tools that are relevant for military scenarios.
LOSS IS a major opportunity—and sometimes the only real chance—for big institutions to make fundamental reforms. Organizations are often slow to evolve in response to changing environments, partly because vested interests try to maintain the status quo. Success may encourage businesses to stick to a winning formula, triggering complacency and risk aversion. For example, one study of the airline and trucking industries found that past success led to greater confidence in the validity of current strategies, persistence in the face of environmental change and a subsequent decline in performance. Although the negativity bias and the tendency to fixate on past loss are dangerous, they can also be harnessed for progressive reform. Failure is disruptive. It draws attention to issues, encourages experimentation and allows organizations to break through the barriers to change.
National security officials should therefore seize the upside of down. Difficult military campaigns are a critical chance for leaders to challenge accepted wisdom, think creatively and even embrace a radical new approach. Following World War I, the winning coalition of Britain and France expected the next war to resemble the last, and favored a defensive doctrine based on fortifications like the Maginot Line. By contrast, the defeated state, Germany, was more open to innovation and created a new model of armored warfare known as Blitzkrieg.
OFFICIALS MUST systematically investigate the causes of good performance as well as bad. A major problem in business is the tendency to examine the data only when things go wrong. In the wake of success, enterprises often decline to ask tough questions, and simply move on to the next challenge. But a positive outcome does not mean the process was fundamentally sound. Perhaps the organization got lucky because a competitor unexpectedly made a misstep. Or success might be due to factors the organization would not want to replicate. The animation studio Pixar, for example, had a string of movie hits in the 2000s, but rather than simply celebrate each accomplishment, the studio engaged in tough reviews of what to copy and what to avoid.
The U.S. military can also interrogate success with the same urgency as failure. In 2012, the Joint Staff issued a major report on learning lessons but chose to examine only the struggling U.S. counterinsurgency missions in Iraq and Afghanistan. The report could also have explored more successful missions from the post-Cold War era, such as the Gulf War in 1991 and the air campaigns in the former Yugoslavia in 1995 and 1999. These earlier cases could inform several of the core issues highlighted in the report, such as improving coalition operations.
IN BOTH the business and military realms, there is the danger of under-learning, or avoiding difficult experiences because they are acutely painful. Therefore, loss must be confronted. The review of army education in 1971 concluded: “A strong element of every curriculum should be historical studies which frankly analyze unsuccessful American military efforts…[including]…an objective discussion of what we did, what went wrong, and why.” The challenge is that organizations also face the opposite danger of over-learning, where the most recent debacle dominates analysis.
The answer is to situate the last failure in a broader sample of cases, including successes (see above), additional cases of failure, as well as the experience of other actors. For instance, the U.S. military tends to disdain learning from other countries. During the 1960s, the Joint Chiefs of Staff dismissed the relevance of the earlier French defeat in Vietnam: “The French also tried to build the Panama Canal.” However, the experience of other countries that faced insurgent adversaries, like Israel, India or the Soviet Union in Afghanistan, may offer valuable instruction.
THE SCHOOL of intelligent failure encourages senior business figures to set the tone by acknowledging their own personal failures and how they learned from them, and by making clear that mistakes are opportunities for improvement that can allow for—and even facilitate—career advancement. After taking over at Ford in 2006, Alan Mulally asked managers to color-code their reports: green meant good, yellow meant caution and red meant there were problems. Unsurprisingly, the first set of reports was entirely green. Mulally reminded the managers that the company was losing billions of dollars. When a brave manager offered the first report coded yellow, the room went quiet, until Mulally broke into applause. Reports were soon submitted in the full spectrum of colors. Indeed, personal knowledge of loss is deemed so valuable that some venture capitalists will not invest in a new project unless the entrepreneur has experienced failure.
Leaders in the national security community can also emphasize that failure is not an automatic indication of unfitness. After all, loss could result from environmental factors or a willingness to take risks, whereas a string of apparent successes could be a sign of incremental innovation. Therefore, rather than declaring that they cannot spell the word “failure,” generals might discuss their own mistakes and the lessons they learned, host a convention on failure or even issue a failure CV.
BUSINESSES ARE often tempted to use pilot projects to deliver a satisfying success, by introducing a new product in an optimum scenario with hand-picked staff and savvy customers. However, the goal of pilot projects is not to attain a pleasing triumph, but to reveal new knowledge. Pilots may be more instructive when the conditions are deliberately challenging, and businesses can discover what does not work.
War games have a similar goal to pilot projects: discovering novel information before launching a larger endeavor. The aim is not to engineer a satisfying win but to generate original insights. Oftentimes, new knowledge is more likely to arise by exploring negative scenarios of stalemate and loss.
ONE SOLUTION to overconfidence and the planning fallacy is task segmentation, or breaking down a project into constituent parts. Researchers have found that when people are asked to estimate the time required to complete an overall task, they tend to be too optimistic. But when they are asked to estimate the time required to complete each individual subtask, and then sum the parts together, the estimates are more realistic. People are less likely to miscalculate the time needed for short tasks, and working through a checklist of jobs means that no assignment is ignored in the overall appraisal.
Military planners can employ task segmentation to create a more realistic estimate of the operational timeline. For example, planners originally intended to reduce U.S. troop levels in Iraq to thirty thousand by September 2003, just four months after major combat operations ended. If planners had unpacked the process of post-conflict stabilization into separate subtasks and made plausible estimates of the time needed for the completion of each element, the overall estimate might have been more accurate.
Another tool to ward off overconfidence in the business realm is a pre-mortem where actors imagine that a future project has failed and then analyze the most likely causes. As Thomas Gilovich wrote, “The idea is to suppose that your idea bombed. What would you be saying to yourself right now about how or why you should have foreseen it?”
This approach helps officials to clarify exactly what success and failure look like, and how to translate mission goals into measurable metrics. It readies planners to anticipate and detect failure early on. It also empowers skeptics to raise doubts and forces even zealous champions of a policy to think through potential hazards. Simply asking planners to review a strategy for flaws may produce a half-hearted effort, because people adopt a tunnel-vision mindset and act like partisans of the chosen course of action. By contrast, planners engaged in a pre-mortem step outside the project, are less emotionally attached to its success and are more willing to actively search for problems.
The pre-mortem also encourages officials to plan for potential failure and create a plan B if the campaign goes off the rails. Thinking through contingencies and exit strategies in case of disaster is not defeatist, any more than building a fire escape is defeatist. And if officials have a strategy to mitigate loss in their back pocket, they are better positioned to take risks.
TO COUNTERACT positive illusions, businesses may employ a devil’s advocate, or someone who is tasked with critiquing the prevailing plan of action and finding disconfirming information, before the enterprise is committed to an unsound endeavor. Another approach is a murder board where an entire committee seeks to identify flaws in a plan and “kill” it. The murder board idea originated with the Central Intelligence Agency after World War II, and the Joint Chiefs have also employed this tool to evaluate projects. But murder boards are used inconsistently. For example, the assumptions behind the Iraq invasion plan were not subject to appropriate critical vetting.
These structures will only work in a hierarchical organization like the U.S. military if the devil’s advocate or the murder board is empowered to critique senior officials, who, in turn, are willing to listen. One solution is to bring in outsiders. Researchers have found that the best way to check hubristic CEOs is an independent and active board of directors. In the military, the same role can be played by retired generals or other respected external figures.
Furthermore, since people tend to be realistic when assessing other actors’ projects and overconfident about their own, it may be helpful to ask allies to stress test a war plan. Before the invasion of Iraq, for example, Washington could have invited British officials to systematically interrogate the strengths and weaknesses of the U.S. strategy.
OFFICIALS IN the national security community often think about failure in biased and dangerous ways. Leaders fixate on past loss when learning and neglect future loss when planning. Of course, there is variation among different actors; for example, civilian leaders in the White House may be more sensitive to domestic political pressures. But the broad pattern of biases holds true across institutions.
The military needs a paradigm shift. One answer is to study the school of intelligent failure in business, which encourages people to view loss as the midwife of progressive change. A culture of intelligent failure can be embedded in all parts of the U.S. military, including the hiring and promotion process (where candidates are evaluated on their capacity to learn from past negative experiences); training programs (which distinguish between acceptable and unacceptable errors); and leadership roles (where senior figures describe their own experiences with loss). Increasing the chance of success means engaging honestly and openly with the possibility of failure.
Dominic Tierney is associate professor of political science at Swarthmore College, a Templeton Fellow at the Foreign Policy Research Institute, a contributing editor at The Atlantic and the author of four books, most recently, The Right Way to Lose a War: America in an Age of Unwinnable Conflicts (Little, Brown, 2015).