If it were not already evident that disinformation campaigns should be taken seriously as a security threat, the effort to overturn the results of the 2020 U.S. presidential election made that reality unmistakable when rioters stormed the U.S. Capitol on January 6, 2021.
John Scott-Railton is a senior researcher at Citizen Lab, where he researches malware, phishing, and disinformation. He joined Dr. Emma Belcher, President of Ploughshares Fund, on the Ploughshares Fund podcast Press the Button to discuss the effects of disinformation on conflict. Their conversation highlighted the way disinformation campaigns shape the environments in which policymakers must make decisions—whether or not those policymakers believe the disinformation.
Scott-Railton became interested in disinformation through his investigations of Russian hacking activity. He notes, “not only was cybersecurity and disinformation potentially more connected than I realized, but actually, disinformation, in so many cases, is about security, and about changing realities on the ground.”
Of course, disinformation campaigns are not new. Regarding nuclear weapons, for instance, Belcher emphasized the danger that disinformation has posed from the Cold War up to the present. In today’s “global information bloodstream” the spread of disinformation is particularly concerning to Scott-Railton. He says that now, “more than ever, there is sort of fertile ground for various entities and operations that want to further erode trust and authority or to plant false information.”
According to Scott-Railton, part of the problem is that while “disinformation moves at the speed of social media, fact-checking … is still stuck in an almost analog age.” Furthermore, the job of fact-checking is up against the challenge that “once certain beliefs are planted, they can be really hard to get rid of.”
In discussing recent conflicts, Scott-Railton highlights “just how effectively disinformation campaigns have been wielded to … shape the information environment that policymakers are making decisions in.” Even if a policymaker receives highly accurate intelligence through government channels, they may be operating in an environment where the public is consuming disinformation.
Scott-Railton notes that on top of the proliferation of disinformation, these campaigns are becoming increasingly sophisticated. Take, for example, an Iran-linked operation that used clones of real media sites to spread disinformation. This case showed that “disinformation operators… appear to be really experimenting and iterating and learning from efforts by platforms to control what they were doing, and ultimately honing their message.”
So how do we address the proliferation of such advanced disinformation campaigns? Scott-Railton says, “we're still just in the sort of earliest stages of states thinking together about norm building and about specific regulations that would limit the proliferation of this tech. Unfortunately, I think that means that we're going to see a lot more destabilizing harm before we see effective control measures.”
But he also sees space for “researchers and others who are interested in limiting the proliferation of these new infinite range cyber weapons to be in dialogue with the [nuclear] anti-proliferation community.” And halting the spread of disinformation might serve to lessen nuclear risk. Belcher notes that in 2016 there were “false news reports that the U.S. moved its tactical nuclear arsenal from an airbase in Turkey to one in Romania.” As she emphasizes, “you don't really need to use your imagination too much to see just how badly things could go wrong in this respect with nuclear weapons.”
Alexandra B. Hall is the policy associate and special assistant to the president at Ploughshares Fund, a global security foundation.