But matters are more complicated than this statement suggests. The attribution problem
actually entails three distinct kinds of potential error, each associated with a different challenge.
– There is a false alarm if a state perceives an attack when no attack occurred. In 2008, a worm gained access to U.S. war planning materials. The prime suspect was Russian foreign intelligence. But others, noting the worm’s relative unsophistication, argued that it could have ended up on Department of Defense networks without malicious intent. This may, then, have
been a false alarm.
– There is detection failure if a state fails to perceive an attack that did occur. The Stuxnet worm caused centrifuges to malfunction at the Iranian nuclear facility at Natanz for more than a year. The Iranians, though, believed the failures were the result of engineering incompetence or domestic sabotage. This was a case of detection failure by Iran.
– And there is misidentification if a state assigns responsibility for an attack to the wrong adversary. The hack of Democratic National Committee servers during the 2016 U.S. presidential election was initially attributed to a lone Romanian hacker who went by the moniker Guccifer 2.0. Later, U.S. authorities determined the hack was the work of Russian security agencies who tried to cover their tracks by pretending to be Guccifer 2.0.
Policy arguments regarding the benefits of improved attribution typically do not distinguish among these three dimensions. But our analysis shows that innovations in technology or intelligence that affect different dimensions of attribution can have critically different impacts on deterrence.
Reducing detection failure, for example, has competing effects. On the one hand, improved detection increases our ability to retaliate. On the other hand, improved detection may entail discovering more attacks that are hard to attribute to a specific adversary, increasing concerns about misidentification.
As a consequence of that second effect, such technological progress could backfire—making us more reluctant to retaliate and our adversaries more aggressive. The same sort of logic applies to many kinds of improvements in attribution. For a technological innovation to strengthen deterrence, it must make us more willing to retaliate—for instance, by simultaneously improving
detection and identification or by reducing false alarms.
Perhaps most surprisingly, sometimes getting worse at attribution can actually improve deterrence. For instance, if we are reluctant to retaliate following certain types of attacks because they are so difficult to attribute to a specific adversary, then it is better not to detect them at all. By not detecting such attacks, we make attribution more certain and retaliation more attractive following those attacks that we do detect. This strengthens deterrence even while worsening attribution.
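The logic above can be sketched with a toy calculation. This is our own illustration, not the authors' formal model: the detection counts, attribution-confidence values, and retaliation threshold are all assumed numbers chosen for illustration. The idea is that when a defender cannot tell, for any individual detected attack, whether it belongs to an easy- or hard-to-attribute class, its confidence on any given detection is the pooled average across everything its sensors pick up. Detecting more hard-to-attribute attacks can then drag that average below the point at which retaliation is worthwhile.

```python
def pooled_confidence(detections):
    """Average attribution confidence across all detected attacks.

    detections: list of (count, confidence) pairs, one per attack class,
    where confidence is the probability the prime suspect is responsible.
    Assumes the defender cannot distinguish the classes per detection.
    """
    total = sum(n for n, _ in detections)
    return sum(n * c for n, c in detections) / total

# Assumed: retaliation is worthwhile only above this confidence level.
THRESHOLD = 0.7

# Hypothetical sensor A detects only attacks with clear forensic signatures.
conf_a = pooled_confidence([(10, 0.85)])
# Hypothetical sensor B also detects stealthy, hard-to-attribute attacks.
conf_b = pooled_confidence([(10, 0.85), (15, 0.30)])

# Better detection (B) lowers pooled attribution confidence below the
# retaliation threshold, so the defender holds fire on every detection.
assert conf_a > THRESHOLD > conf_b  # 0.85 > 0.7 > 0.52
```

Under these assumed numbers, the "improved" sensor leaves the defender unwilling to retaliate at all, which is exactly the backfire the essay describes: an adversary anticipating no retaliation is deterred less, not more.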
The hypothetical scenario that began this essay, about stolen military secrets, illustrates the risks of an overly narrow, muscular approach to cyber deterrence. Focusing just on China and Russia, and swinging a big cudgel in response to every cyberattack, tempts other adversaries to be more aggressive in cyberspace, risks retaliation against the wrong party, and can escalate into a potentially catastrophic conflict. A controlled, confident approach that defends the national interest aggressively when the right information is available, while acknowledging that we cannot deter or retaliate against every cyberattack, is the right path forward in our transformed strategic landscape.
Sandeep Baliga is the John L. and Helen Kellogg Professor of Managerial Economics and Decision Sciences at the Kellogg School of Management, Northwestern University.
Ethan Bueno de Mesquita is the Sydney Stein Professor and Deputy Dean at the Harris School of Public Policy at the University of Chicago.
Alexander Wolitzky is Associate Professor of Economics at the Massachusetts Institute of Technology.
This essay is based on results from their technical paper on the topic.