Most people have been thinking the wrong way about cybersecurity. As a concept, cybersecurity remains too dependent on understandings of “threat” or “cost” that privilege the intentional, the direct, and the physical. This is in keeping with Western, and in particular North American, culture. Moreover, we’ve underestimated the net costs of our interactions across cyberspace. As a result, we’ve ended up with bad strategy and bad policy, including an increased risk of harm at the individual level as well as in international politics.
But first, some definitional issues.
“Cyber” means networked computers, or any technology enabled by networked computers, such as machine learning and artificial intelligence, robotics, autonomous weapons, the internet of things, and so on.
Cyberspace and the internet are closely related, but in this definition they are not the same thing. The internet is the web of digital connections enabled by satellites, hard lines ranging from old copper to new fiber, and mobile phone and radio spectrum signals. Cyberspace is the internet plus its human and non-human agents, along with the myriad cyber-enabled devices connected to it. Many of those devices are computers, but increasingly, devices with computers embedded in them, such as cars and refrigerators, connect people to one another and to the internet.
Defining security is more complicated, but let’s start by recalling that the English word comes from the Latin securitas, literally “freedom from care or anxiety.” In this sense, we can usefully think of security in three dimensions: physical, economic, and ideational (roughly corresponding, historically, to soldiers, merchants, and priests). That’s a key move for two reasons. First, it allows us to think more broadly about power—or in political terms, the ability to get someone to do something that person otherwise wouldn’t consent to do. Second, it allows us to ask an important question: When it comes to cyberspace and our interactions there, are we thinking about the relationship of these dimensions and power as we should?
A Ladder, Mandala, or Dao?
For example, in Western culture extending from the Greeks and especially the Romans, we inherit an often useful “ladder” concept made most famous in the work of political philosopher Thomas Hobbes. At the lowest rung of this imaginary ladder resides the power of ideas and concerns about identity. At the next rung up, we see concerns over poverty and wealth. Finally, at the highest rung, we think of the use of physical violence—in particular lethal violence—as the highest representation of power. Hobbes would have approved of Sting’s sentiment in the Cold War-era song “Russians”: “We have the same biology, regardless of ideology.” If we map this ladder onto interstate politics, we clearly see that when we wish to “get things done,” we often disparage diplomacy and economic sanctions in favor of the use of armed force. Persuasion, bribes, and the like are reduced to poor stepchildren of the ultima ratio: organized physical violence.
Notably, Hobbes was translating Thucydides’s Peloponnesian War from ancient Greek at a time when Europe and England were in the midst of two devastating religious wars: the Thirty Years’ War (1618–48) and the English Civil War (1642–51). Hobbes’ argument, therefore, that rational people invariably fear death above all else must have made sense, then and there, as a kind of Archimedean point in a nascent positive social science. But Hobbes’ vision remains too limiting for us in how we think about power, coercion, and security. His ideas about power are necessary, certainly, but not sufficient. We know historically that a persuasive idea or a bribe can accomplish a goal when a credible threat to kill may actually be counterproductive.
A better way to think of power is along the lines of a mandala or Dao, imaginaries derived from Eastern conceptions of power. Instead of a low-to-high hierarchy, the three dimensions of power rotate around one another, becoming more or less effective depending on circumstances.
This is where cyberharm as a useful concept comes into its own as compared to terms like cyberwar, cyberattack, and more broadly, cybersecurity. Cyberharm is defined simply as injury that arises from human and non-human interaction in cyberspace. Unlike most contemporary framings of cybersecurity, cyberharm remains agnostic about whether the harm is physical, economic, or ideational. Cyberharm can be direct or indirect, intentional or inadvertent, consensual or nonconsensual. Hundreds of millions of people worldwide consent, second by second, for example, to self-injury in cyberspace. They suffer cyber-enhanced anxiety, depression, distraction, sleep deprivation, bullying, body dysmorphia, radicalization, polarization, conversion, and commercialization. They also consent to the transfer of intimate information about their assets, debts, locations, relationships, and desires to corporations and governments. Thus, cyberharm can happen even when systems are “secure.”
As always, a key question is whether the perceived or expected benefits of interaction in cyberspace outweigh the harm. Given that North American culture has long confused technical innovation with “progress,” that’s not an easy calculation to make, especially as more and more of our lives are enacted or performed in cyberspace. We’ve started to recognize that cyberharm at the individual level is material to our mental, financial, and social well-being.
If we extend the concept of cyberharm to great power competition, we see that over-emphasizing threats to physical infrastructure and, to a lesser extent, threats to the global digital finance system over the threat of ideas has put us at risk. Our ladder rightly warns us that a major power outage or threat to our water supply infrastructure could cause fatalities. We already have at least one fatality attributable to cyberharm: in 2020, a ransomware attack on a German hospital forced a woman in need of emergency surgery to be diverted to a more distant facility, and she died. Undermining our confidence in our financial system could also wreck our economies.
In 2015 and 2016, the Russian Federation used disinformation, inadvertently assisted by three American tech companies (Facebook, Google, and Twitter), to manipulate Britain into leaving the European Union, and to deny the U.S. presidency to Hillary Clinton in favor of Donald J. Trump, a vocal opponent of U.S. support of international organizations and of NATO. Election interference is still not considered an act of war (and in fairness, the United States and its allies have a long history of interfering in democratic elections; it’s just that the FSB proved much better at it), but democracies worldwide remain incredibly vulnerable to such manipulation wherever broadband and social media penetration are high.
Finally, this is not an argument that violence and economic threats have become irrelevant. Rather, cyberharm enables us to recall that ideas like Marxism, capitalism, Protestantism, nationalism, patriotism, and Salafism have always represented real power, just as militaries, gold, diamonds, and scarce commodities have. In today’s internet age, idea entrepreneurs have gained the power to achieve political outcomes formerly associated only with great wealth or large-scale violence.
Ivan Arreguin-Toft, Ph.D., currently teaches war and cybersecurity strategy and policy at Brown University’s Watson Institute for International and Public Affairs, where he also serves as Director of the Security Track for the undergraduate International and Public Affairs concentration. He was a founding member of the Global Cyber Security Capacity Centre at Oxford University’s Martin School, where he served as Associate Director of Dimension 1 (cybersecurity policy and strategy) from 2012 to 2015.