Achieving True Cybersecurity Is Impossible
Cybersecurity is not a switch, and automating our defenses—computer network defense, national defense—is as likely to destroy us as save us.
Cybersecurity the way we like to think of it is actually impossible to achieve. That’s not to say we shouldn’t try hard to achieve it. Nor is it the same thing as saying that our costly efforts to date have been wasted. Instead, if our aim is to make our interactions in cyberspace more secure, we need to recognize two things.
First, part of our troubles has to do with a culture that defines things like success, victory, and security as dichotomous rather than continuous variables. Think of a switch that’s either on or off. Second, speed is hurting us, and calls to replace humans with much faster and “objective” machines will continue to gain momentum, putting us at extreme risk without increasing either our security or prosperity. Let me explain.
[Cyber]security Is Not a Switch
In my time in Norway a few years ago, I had the great fortune to be hosted by the Norwegian Institute for Defense. As I toiled to recover the history of Norway’s experience under occupation by the Third Reich, I was able most days to join my Norwegian colleagues for a communal lunch. My colleagues did me the great courtesy of carrying on most conversations in flawless English. As an American academic accustomed to research abroad, I anticipated that sooner or later I’d encounter a classic opening sentence of the form, “You know, the trouble with you Americans is…” And after a month or so my unfailingly polite and generous colleagues obliged. But what ended that sentence has stuck with me ever since, and it underlines a core value of study abroad: “You know, the trouble with you Americans is, you think every policy problem has a solution; whereas we Europeans understand that some problems you just have to learn to live with.”
The idea that part of our mission was research intended to support policies that solved problems was never something I’d thought of as varying by culture. But as I reflected more and more on the idea, I realized that insecurity—and by extension cyber-insecurity—would be something we Americans would have to learn to live with.
This “switch” problem is mainly due to the relentless infiltration of market capitalist logic into problem framing and solving. For example, corporations hire cybersecurity consultants to ensure that corporate profit-making operations are secure from hacking, theft, disruption, and so on. When corporations pay money to someone to solve a problem, they expect a “deliverable”: some empirical evidence that corporate operations are now “secure.” It should go without saying that this same infiltration of corporate logic—the largely North American idea that governance would be more “effective” if run on profit-making principles—has seriously degraded governance itself as well.
Cybersecurity is not a switch. It isn’t something that’s either “on” or “off,” but something that we can approach if we have a sound strategy. And progress toward our shared ideal itself is what we should be counting as success.
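To make the switch-versus-continuum distinction concrete, here is a minimal sketch in Python (the control names, weights, and coverage figures are purely hypothetical, not drawn from any real assessment): the “deliverable” framing asks for a single yes/no verdict, while the continuous framing tracks progress along a scale that never quite reaches “done.”

```python
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    weight: float    # relative importance of this control (hypothetical)
    coverage: float  # fraction implemented, 0.0 to 1.0 (hypothetical)

def posture_score(controls: list) -> float:
    """Weighted average of control coverage: a continuous measure, never simply 'done'."""
    total_weight = sum(c.weight for c in controls)
    return sum(c.weight * c.coverage for c in controls) / total_weight

controls = [
    Control("patching", weight=3.0, coverage=0.8),
    Control("mfa",      weight=2.0, coverage=0.6),
    Control("backups",  weight=1.0, coverage=0.9),
]

score = posture_score(controls)
print(f"posture score: {score:.2f}")        # 0.75 -- progress along a continuum
print(f"'secure' switch: {score >= 0.99}")  # the on/off framing collapses this to False
```

On the switch view, a score of 0.75 and a score of 0.10 are equally “insecure”; on the continuous view, the distance between them is exactly the progress we should be counting as success.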
Automating Computer Network Defense Can’t Save Us, and May Destroy Us
Even if we could agree to moderate our cultural insistence on measuring success or failure in terms of decisively “solving” policy problems, we’d be left with another set of problems caused mainly by the assertion that humans are too slow and emotional as compared to computers, which are imagined as fast (absolutely) and objective (absolutely not). We need to challenge these ideas, because together they make up a kind of binary weapon which leads us into very dangerous territory while at the same time doing little to advance us toward our ideal of “cybersecurity in our time.”
So, a first critical question is, under what conditions is speed a necessary advantage? That’s where computers come in. Few Americans will be aware, for example, that the first-ever presidential directive on cybersecurity—NSDD-145 (1984)—was issued by President Ronald Reagan in reaction to his viewing of John Badham’s WarGames (1983). After viewing the film, which imagines a nascent artificial intelligence called the WOPR hijacking U.S. nuclear launch systems and threatening to start a global thermonuclear war, Reagan asked his national security team whether the events in the film could happen in real life. When his question was later answered in the affirmative, the Reagan administration issued the NSDD. Here’s a key bit of dialogue from Badham’s film, which picks up after a simulated nuclear attack in which 22 percent of Air Force officers refused to launch their missiles when commanded to do so:
Mr. McKittrick: I think we ought to take the men out of the loop.
GEN Berringer: Mr. McKittrick, you’re out of line, sir!
McKittrick: Why am I out of line?
Cabot: Wait. Excuse me. What are you talking about? I’m sorry. What do you mean, take them ‘out of the loop’?
GEN Berringer: Gentlemen, we’ve had men in these silos since before any of you were watching Howdy Doody. For myself, I sleep pretty well at night knowing those boys are down there.
McKittrick: General, we all know they’re fine men. But in a nuclear war, we can’t afford to have our missiles lying dormant in those silos because those men refuse to turn the keys when the computers tell ‘em to!
Watson: You mean, when the president orders them to.
McKittrick: The president will probably follow the computer war plan. Now that’s a fact!
Watson: Well, I imagine the joint chiefs will have some input?
GEN Berringer: You’re damned tootin’!
Cabot: Well hell, if the Soviets launch a surprise attack, there’s no time…
Healy: Twenty-three minutes from warning to impact. Six minutes, if it’s sub-launched.
McKittrick: Six minutes! Six minutes. That's barely enough time for the president to make a decision. Now once he makes that decision, the computers should take over.
This exchange brackets two critical components of any discussion of contemporary cybersecurity. The first is the “humans are too slow” theme, and by slow, we mean slow as compared to computers. Second, humans have consciousness and morals and computers don’t. Computers have some version of whatever their programmers give them. Recent advances in deep learning and artificial neural networks—in particular foundation models—have created the impression that machine consciousness and independent creativity are here or very near, but they are not. Moreover—and this is key—whatever these machines come up with will always be tethered to their programmers, who, to be blunt, remain mostly young, upper-middle-class males from the northern hemisphere.
This impossibility of algorithmic objectivity is the second half of the “binary weapon” I referenced earlier: along with speed, algorithms, code, and the like promise to be objective—but they cannot be. So, as mathematician Cathy O’Neil and computer scientist (and activist) Joy Buolamwini remind us, when you get turned down for a loan or a job and an algorithm is involved, it is almost certain that, far from being “objective” in the sense we usually mean, some unintended but profitable bias has been introduced. Ask an AI to show you a photo of an “attractive person,” for example, and what the “objective” algorithm is likely to supply is the image of a thin blonde woman with large breasts. Imagine deploying biased algorithms in military or cybersecurity applications, but then also imagine that your reservations about bias cause you to hesitate. You’d be terrified that an economic or security rival less principled or cautious than you would deploy and gain a terminal advantage over you. It’s straight out of military theorist Carl von Clausewitz’s discussion of the limitations of restraint in war:
As the use of physical power to the utmost extent by no means excludes the cooperation of the intelligence, it follows that he who uses force unsparingly, without reference to the bloodshed involved, must obtain a superiority if his adversary uses less vigor in its application. The former then dictates the law to the latter, and both proceed to extremities to which the only limitations are those imposed by the amount of counteracting force on each side.
With cyberspace weaponized and corporate strategy having appropriated military metaphors, the message is clear: restraint makes you a sucker. In terms of policy, this leaves us with a classic dilemma: we get gored either way. If we don’t deploy AI and our competitors do, we may lose everything. But if we do deploy AI, not fully understanding how it arrives at its conclusions but having unreasonable faith that “it must be right, because it’s math; it’s objective,” we may lose everything as well.
So, the WarGames scene should also remind us that the particular domain in which speed is being claimed as a necessary virtue is armed conflict. Note that Watson’s objection to the idea that “computers are in charge” implies there should be checks and balances between decision and action: an analog of democracy itself, with its emphasis on deliberation and consensus. But Cabot counters by asserting, reasonably, that in war, checks and balances are a liability: “there’s no time” (contemporary hypersonic missile technology compresses time still further).
In the United States, the idea that we’re always at war—and its destructive impact on democracy worldwide—emerged from a shattering moment in U.S. history: 9/11. Since then, we’ve never had the feeling we can “go back” to making washing machines and babies as we did after World War II. We are permanently mobilized, always on alert, always at war. And in war, speed above all else seems to make sense (think Blitzkrieg). Being “always at war” also biases politics in favor of the world’s political Right, with its claim that checks and balances, deliberation, and popular sovereignty put citizens of democracies at too much risk. What’s needed, then, is an unfettered executive who can act fast.