Why Cyber Arms Control Is Not a Lost Cause
Previous struggles to control dangerous nuclear technology show how difficult arms control is to implement, and cyberspace will surely be difficult to control, too. But that history also shows that seemingly intractable verification problems can be solved.
Disruption and attack in cyberspace pose a serious challenge to international security. In the face of rapidly evolving cyber threats, it is natural to look to the past to discover how policymakers managed the emergence of previous disruptive technologies, especially through arms-control negotiations. Recently, some have called into question the usefulness of historical arms-control examples for emerging digital technologies, noting that these new technologies are fundamentally different from the nuclear weapons controlled by the existing arms-control regime. Specifically, arms-control skeptics have claimed that it is easier to verify limitations on nuclear-weapons technologies than on emerging cyber threats. Perhaps the skeptics are correct: only time will tell whether an arms-control framework for cyber can be constructed. History, however, provides some reason for hope. After all, today’s ability to control nuclear technology is the result of intentional efforts to develop new technologies, new organizations, and new norms for arms control.
It is worth remembering how dark the prospects for nuclear-arms control seemed in 1945. The United States had just triumphed in the world’s first nuclear war, demonstrating that nuclear weapons could prove decisive in future great-power competition. At the same time, nuclear technology was new and largely unknown, its limits untested. The United States faced an emerging nuclear-arms race against adversaries with vast resources and territorial expanses, including the Soviet Union, China, and the British and French Empires, that dwarfed existing verification capabilities. The future remained uncertain, with the possibility that smaller countries, or even private actors, might gain access to dangerous nuclear technology.
The initial proposals for nuclear-arms control reflected these dim prospects. The United States’ 1946 Baruch Plan argued that controlling nuclear weapons required the creation of a supranational authority, which would control or regulate the mining, processing, refining, and use of uranium and other radioactive products. The perceived problems of verification were so great that only this sort of supranational organization could ensure that no one on earth would be able to cheat undetected. Yet this sort of agreement proved impossible to negotiate: the Soviets and others rejected its verification provisions, ending the prospects for early nuclear-arms control. At that point, much like today’s cyber skeptics, some predicted that arms control would never succeed.
But in the ensuing years, proponents of arms control made tremendous strides in technology, organization, and norms, fundamentally transforming the nature of the nuclear-arms control problem. Two examples of technological transformation illustrate the advances made in monitoring the nuclear genie: the expansion of seismic sensing capability and the development of satellite reconnaissance. These new verification tools proved so successful that today we take their accomplishments for granted. But very little of the nuclear-arms control regime was inevitable: rather, proponents of arms control built the regime piece by piece.
Constructing that arms-control regime was a long and difficult process. Proponents of arms control in the 1940s faced a daunting verification problem: how to surveil the entire earth for signs of nuclear testing. Early arms-control proposals like the Baruch Plan relied on extensive and intrusive inspections by large international organizations to police nuclear activity. Inspectors would need the capability to go anywhere, at any time, for any reason. Others argued that even this sort of authority would be too limited, and instead insisted that only a world government could stem the nuclear-arms race. Even then, the prospects of policing the secret nuclear activities of countries as large as the United States and the Soviet Union remained daunting.
Meeting this challenge would require combining new technologies with new organizations and ideas. As scientists came to better understand the physical effects of nuclear weapons, improvements in sensor technologies provided the opportunity to detect nuclear testing, even at very long distances. American aircraft equipped with radiological sensors were able to detect the first Soviet nuclear test in 1949. But improved sensors only translated into arms-control successes when wedded to new ideas about how to structure arms-control agreements. As sensor technology improved, proponents of arms control came to realize that sensors could be used to alert inspectors to suspicious activity, making the problem of wide-ranging nuclear inspections far more manageable and potentially opening the way to a ban on nuclear explosions. At the 1958 Geneva Conference of Experts, American and Soviet scientists concluded that a network of seismic sensors would provide the most effective means of detecting nuclear explosions, cuing inspectors to suspicious activity. This network would combine the technical capability for wide-area monitoring with the reliability of in-person inspections. By pairing sensors with inspectors, the experts concluded, far more could be verified than by either alone.
At least initially, the sensor-inspector verification plan did not work. Some scientists in the United States worried that seismographs would not be sensitive enough to detect relatively small explosions, especially if they were muffled by detonation in large underground cavities, while Soviet leaders remained resistant to the level of inspections that the United States demanded. As a result, the 1963 Partial Test Ban Treaty banned only nuclear explosions in the atmosphere, the ocean, and outer space, which American leaders believed could be adequately verified using atmospheric radiation sensors. For the time being, superpower nuclear testing continued underground. Although this treaty only limited testing, the basic concept of verifying a prohibition on nuclear explosions through a combination of seismic sensors and on-site inspections continued to develop throughout the later Cold War. Improved seismographic techniques, especially those enabled by digital computing, made it much easier to detect a nuclear explosion.
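The cue-and-inspect logic that digital seismology enabled is simple enough to sketch in a few lines of code. The snippet below implements a short-term-average/long-term-average (STA/LTA) trigger, a standard event-detection technique in seismology, against a synthetic waveform. The window lengths, threshold, and data here are illustrative assumptions, not the parameters of any real monitoring network.

```python
import numpy as np

def sta_lta_trigger(signal, sta_len, lta_len, threshold):
    """Flag sample indices where short-term average (STA) signal energy
    exceeds the long-term average (LTA) by a given ratio. A classic
    seismological event detector; all parameters here are illustrative."""
    energy = signal ** 2
    # Running means over short and long windows via convolution.
    sta = np.convolve(energy, np.ones(sta_len) / sta_len, mode="same")
    lta = np.convolve(energy, np.ones(lta_len) / lta_len, mode="same")
    ratio = sta / np.maximum(lta, 1e-12)  # guard against division by zero
    return np.flatnonzero(ratio > threshold)

# Synthetic trace: background noise with one brief high-amplitude "event."
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 10_000)
trace[6_000:6_200] += rng.normal(0.0, 8.0, 200)  # impulsive disturbance

hits = sta_lta_trigger(trace, sta_len=50, lta_len=2_000, threshold=4.0)
if hits.size:
    print(f"Candidate event near sample {hits[0]}: cue an inspection.")
```

In an operational network, flagged events would be located by comparing arrival times across many stations and then, under the cue-and-inspect model, handed off to human inspectors.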
Significant progress was also made in strengthening the normative commitment to cooperate in monitoring nuclear explosions. The 1974 Threshold Test Ban Treaty, negotiated by the United States and the Soviet Union, called for the exchange of significant geological and testing data, making it much easier for the superpowers to detect and measure underground nuclear explosions. In 1976, the Soviets agreed to the Peaceful Nuclear Explosions Treaty (PNET), which provided for visits by American inspectors to Soviet nuclear facilities, albeit not military ones. Still, the PNET upheld the basic logic of using data exchange and wide-area sensors to cue human inspections to specific locations. Eventually, Russia agreed to on-site inspections as a component of the 1996 Comprehensive Test Ban Treaty (CTBT).
New seismographic technologies and cooperative norms were important to ending widespread nuclear testing, but these tools of verification became most effective when given an explicit organizational basis. With the signing of the CTBT, the task of verifying the nuclear test ban was handed to the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). To this day, the CTBTO operates and coordinates a worldwide network of seismographic sensors designed to monitor the earth for evidence of nuclear explosions. Furthermore, the CTBTO has conducted large-scale drills of the on-site inspections that might be needed to locate nuclear testing sites, developing practical procedures for the cue-and-inspect model first articulated at the 1958 Geneva Conference of Experts.
Together, the proliferation of advanced seismographs, the growing hold of norms promoting international cooperation in the nuclear realm, and the establishment of organizations like the CTBTO have generated previously unimaginable levels of transparency in the field of nuclear testing, transforming an existential threat into a more manageable, verifiable problem. Today, even North Korea’s nuclear-testing activities can be tracked using seismographs and other remote-sensing techniques, despite the infamous opacity of North Korean society. Such transparency would have seemed fantastical to the worried leaders and observers of the late 1940s.
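The arithmetic behind this kind of remote tracking can be made concrete. Seismologists commonly relate a test’s body-wave magnitude to its explosive yield through an empirical formula of the form m_b = a + b·log10(Y). The sketch below inverts that relation using one commonly cited parameterization for hard rock; the coefficients are assumptions that vary substantially with test-site geology and depth of burial, so it illustrates the method rather than any authoritative estimate.

```python
def yield_from_magnitude(mb: float, a: float = 4.45, b: float = 0.75) -> float:
    """Invert the empirical relation m_b = a + b * log10(Y), with Y in kilotons.
    The coefficients a and b are illustrative: real estimates depend heavily
    on test-site geology, so published yields carry wide error bars."""
    return 10 ** ((mb - a) / b)

# A magnitude-6.3 event, roughly what global networks recorded for North
# Korea's September 2017 test, implies a yield of a few hundred kilotons
# under these assumed coefficients.
print(f"Estimated yield: {yield_from_magnitude(6.3):.0f} kt")
```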
Alongside seismographs, reconnaissance satellites are widely recognized as a revolutionary technological development in arms control, allowing countries to monitor each other’s military deployments through national technical means. American satellite reconnaissance transformed the Soviet Union from an impenetrable black box into a much more transparent map, exploding previous worst-case myths of bomber and missile gaps. Combined with developments in electronic and radiological sensors, reconnaissance satellites allowed the United States to track the development and deployment of rival nuclear capabilities in significant detail.
Yet the development of satellite reconnaissance technology was a long and difficult process. While engineers at the RAND Corporation imagined “world-circling spaceships” contributing to nuclear disarmament as early as 1946, the practical challenges of building a reconnaissance satellite with 1950s technology remained daunting. Initial efforts at television-based satellite imaging were unsuccessful, producing blurry pictures that provided little useful information. In the late 1950s, the CIA pioneered orbital photo-reconnaissance using enormous panoramic camera lenses, revolutionary radiation-resistant film, and parachute-recovered film capsules to return high-quality photographs of the Soviet Union to earth. The search for improved timeliness in satellite reconnaissance later drove the development of digital pixel-based imaging, a technology that would have a profound effect on all walks of life.
Satellite technology itself was only the tip of the iceberg. What made satellite reconnaissance so effective a tool of arms control was the sweeping reorganization of the American government, beginning with the National Security Act of 1947 and associated reforms, which constructed a bureaucratic home for advanced technical intelligence capabilities like satellites, as well as a centralized intelligence apparatus capable of analyzing the rapidly increasing volume of data those capabilities generated. With the onset of overhead reconnaissance, the national-security system was further enhanced through the creation of the National Reconnaissance Office (NRO), which continues to coordinate Air Force and intelligence-agency activities in outer space, monitoring vast swathes of the earth’s surface for evidence of nuclear-weapons activity. Satellite reconnaissance as we understand it could not have existed without these organizations, which designed and launched the satellites and then rendered the resulting technical data intelligible to policymakers.