How to Prepare for the Coronavirus’s Impact on Terrorism

June 21, 2020 Topic: Security Region: Americas Tags: Coronavirus, Terrorism, Technology, Surveillance, Data


In America, a task force of data-mining start-ups and technology companies is currently working with the White House to develop a range of tracking and surveillance technologies to fight the coronavirus.


Following 9/11, President George W. Bush framed the challenge facing the United States thus: “Our nation has been put on notice: We are not immune from attack.” Over a decade and a half later, in 2017, then-UK foreign secretary Boris Johnson described the global effort against radical Islam as “a fight not against a military opponent but against a disease or psychosis.” As these two examples show, the use of medical language to describe the war against terrorism has been a common theme in the speeches of American and British leaders.

Yet recent statistics produced by New York City health officials reveal that the number of people who have died from the coronavirus in the city has already surpassed the number killed in the 9/11 terrorist attacks. Both terrorist attacks and pandemics are high-impact events with the power to disrupt lives. Perhaps one of the most revealing ways to examine this disruption is through the effect such events have on our data.


In a March 2020 Pew Research Center survey, the American public named the spread of infectious diseases as the greatest threat to the country. For the first time, this surpassed the threat of terrorism: 79 percent of Americans named outbreaks of disease as a major threat to the country, compared to 73 percent of Americans who saw terrorism as a major threat. Counterterrorism measures nonetheless provide an important context for examining the trade-offs between reduced civil liberties and increased security. Following high-impact events such as terrorist attacks, public concerns regarding government intrusions on privacy tend to decrease. After the terrorist attacks in Paris, France, and San Bernardino, California, in 2015, for example, a national survey by Pew Research Center found that the American public was less concerned that anti-terrorism policies restricted civil liberties: such concerns fell to their lowest level in five years (to 28 percent), with twice as many people (56 percent) stating that their greater concern was that policies had not gone far enough to adequately protect the country.  

Similarly, following the 7/7 bombings in London in 2005, a Guardian/ICM poll found that 73 percent of Britons would trade civil liberties for security, with only 17 percent rejecting such a trade-off outright. A more recent survey, by YouGov in May 2018, found that Britons were still willing to trade civil liberties for the purpose of countering terrorism: 67 percent were in favor of monitoring all public spaces in the UK with CCTV cameras, 63 percent were in favor of making it compulsory for every person in the UK to carry an ID card, 64 percent supported keeping a record of every British citizen’s fingerprints, and 59 percent supported a DNA database.

Where does our data go, and what is it used for? Data mining, the process of extracting trends from large amounts of data using techniques such as pattern recognition and machine learning, has long been used to understand and prevent terrorist activity and fraudulent behavior, often as part of a broader knowledge discovery process. A 2002 op-ed published by The New York Times detailed plans for a program within the Defense Advanced Research Projects Agency (DARPA) to create a centralized database of information on citizens that could be mined for various purposes, including security concerns. The article led to the creation of a blue-ribbon committee on privacy concerns, the Technology and Privacy Advisory Committee, and the eventual cancellation of the program.
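
As a concrete illustration, below is a minimal Python sketch of the kind of pattern-recognition step such data mining involves. The fabricated “transaction” records, the choice of an isolation-forest model, and every parameter here are hypothetical, not any agency’s actual method; the point is simply that an unsupervised algorithm can surface unusual records in a large dataset for human review.

```python
# Hypothetical illustration of unsupervised pattern recognition in data mining:
# flagging anomalous records (fabricated "transaction" data) for human review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic data: most records cluster around normal behavior...
normal = rng.normal(loc=[100.0, 2.0], scale=[20.0, 0.5], size=(990, 2))
# ...while a handful deviate sharply (stand-ins for "suspicious" activity).
outliers = rng.normal(loc=[500.0, 10.0], scale=[50.0, 1.0], size=(10, 2))
records = np.vstack([normal, outliers])  # columns: amount, frequency

# An isolation forest isolates points that separate easily from the rest;
# `contamination` is a prior guess at the share of anomalous records.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(records)  # -1 = flagged as anomalous, 1 = normal

print(f"Flagged {np.sum(labels == -1)} of {len(records)} records for review")
```

In practice, such flags are only leads: each one still requires human judgment, which is precisely where the privacy and accuracy questions discussed below arise.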

Similar concerns have been raised in the UK around data retention following the introduction of blanket emergency legislation. Part 11 of the UK Anti-Terrorism, Crime and Security Act 2001, for example, allows for the automated surveillance of the private lives of a proportion of the population by analyzing patterns within their communications. Powers introduced following national crises can, therefore, be deliberately broad, and oversight mechanisms are necessary to prevent their exercise from being extended beyond terrorist investigations to matters involving the wider population. In the UK, some of these concerns have been alleviated by data privacy rules under the European Union’s (EU’s) General Data Protection Regulation. However, exceptions exist for ‘vital interests,’ where processing is necessary to protect someone’s life.

As countries ease the lockdown restrictions imposed in response to the coronavirus, the trade-off for the liberty of free movement may be greater access to civilian data. In at least twenty-three countries, dozens of ‘digital contact tracing’ apps have been downloaded more than fifty million times. Authorities in the UK and other countries, meanwhile, have deployed drones with video equipment and temperature sensors to track those who have broken lockdown restrictions by leaving their homes. In the United States, a task force of data-mining start-ups and technology companies is currently working with the White House to develop a range of tracking and surveillance technologies to fight the coronavirus. Other ideas under consideration include geolocation tracking of people using data from their phones, and facial recognition systems to determine who has come into contact with individuals who later tested positive for the virus.
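
For readers wondering how a contact tracing app can work without a central map of everyone’s movements, here is a simplified, hypothetical sketch of a decentralized token-exchange design. It is loosely inspired by published privacy-preserving proposals rather than the protocol of any specific national app, and all names and values are invented.

```python
# Hypothetical sketch of decentralized digital contact tracing: phones
# broadcast rotating random identifiers, and exposure matching happens
# locally against identifiers published by people who later test positive.
import secrets

def new_identifier() -> str:
    """Generate a rotating random identifier a phone would broadcast."""
    return secrets.token_hex(16)

# Each phone keeps (a) the identifiers it broadcast and (b) those it heard nearby.
alice_broadcast = {new_identifier() for _ in range(5)}
bob_heard = set(alice_broadcast)                     # Bob's phone was near Alice's
carol_heard = {new_identifier() for _ in range(5)}   # Carol was never near Alice

# If Alice tests positive, her identifiers go on a shared list; each phone
# then checks locally whether it ever heard one of them.
published_positive = alice_broadcast

def was_exposed(heard: set[str], positive: set[str]) -> bool:
    return bool(heard & positive)

print("Bob exposed:  ", was_exposed(bob_heard, published_positive))    # True
print("Carol exposed:", was_exposed(carol_heard, published_positive))  # False
```

The design choice matters for the “surveillance creep” concern discussed next: in a decentralized model of this kind, no central authority learns who met whom; only each phone does.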

Such methods have raised concerns about “surveillance creep,” in which intrusive powers are expanded or data collected for one purpose is used to prosecute a range of other crimes. Data used to build predictive or preventative computer models of the coronavirus outbreak therefore comes with various issues, the most important of which concern privacy and accuracy. Here, past experience with data collection for the prevention of terrorism can offer some lessons.

The first lesson is about privacy. An essential aspect of the UK Coronavirus Act 2020, for example, focuses on containing and slowing the virus by reducing unnecessary social contact. The measures it introduces to achieve this represent an erosion of the safeguards placed on important and potentially intrusive investigatory powers. One example of data being used to prevent terrorism that is relevant to privacy concerns around coronavirus data sharing is aviation security. The United States, for instance, uses the Automated Targeting System (ATS), which assesses the comparative risk of arriving passengers. Knowledge discovery techniques within this system have been employed to create risk assessments and to target investigative resources. Such data flagged one subject of interest, suicide bomber Raed al Banna, who was denied entry to the United States but whose fingerprints were later used to identify him as the perpetrator of a bomb attack in Iraq. When it comes to terrorism, however, unlike many other forms of data collection, the collection often occurs without the knowledge or consent of the data subject.
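
The mechanics of such risk assessment can be pictured as a weighted scoring of passenger attributes. The sketch below is purely hypothetical: the real factors, weights, and thresholds used by systems like ATS are not public, so everything here is invented for illustration.

```python
# Purely hypothetical sketch of rule-based passenger risk scoring.
# All factors, weights, and the threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class PassengerRecord:
    visa_mismatch: bool            # documents inconsistent with itinerary
    watchlist_partial_match: bool  # name resembles a watchlist entry
    cash_ticket_purchase: bool     # ticket bought in cash shortly before travel

WEIGHTS = {  # invented contributions to a cumulative score
    "visa_mismatch": 40,
    "watchlist_partial_match": 50,
    "cash_ticket_purchase": 10,
}
REVIEW_THRESHOLD = 50  # invented cutoff for secondary screening

def risk_score(p: PassengerRecord) -> int:
    return sum(w for factor, w in WEIGHTS.items() if getattr(p, factor))

passenger = PassengerRecord(visa_mismatch=True,
                            watchlist_partial_match=False,
                            cash_ticket_purchase=True)
score = risk_score(passenger)
verdict = "refer to secondary screening" if score >= REVIEW_THRESHOLD else "clear"
print(f"Score {score}: {verdict}")
```

Even this toy version shows why oversight matters: a partial name match alone crosses the invented threshold, sweeping in people who merely resemble a listed individual.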

The second lesson is about accuracy. Unlike arrests that happen in person, algorithms that draw on large-scale surveillance lack context for the data they collect, which can lead to inaccurate inferences. The potential for false positives and false negatives carries greater risks in disease control and terrorism prevention than in, say, identifying a shopper’s preferences. Consider, for example, a test that correctly detects a disease 99 percent of the time but also wrongly flags healthy people 1 percent of the time (a false positive). If 0.1 percent of a population of three hundred million has the disease, and the only way to confirm it is with a biopsy, then three hundred thousand people would actually have the disease, but nearly ten times that number (almost three million people) would falsely test positive and undergo an unnecessary biopsy. In his book The Naked Crowd, George Washington University professor Jeffrey Rosen discusses false-positive rates in a hypothetical system designed to identify the nineteen hijackers involved in the 9/11 attacks. Assuming a 99 percent accuracy rate, searching a population of nearly three hundred million (the U.S. population in 2001 was 285 million) would flag approximately three million people as potential terrorists.
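
The arithmetic behind these figures is worth making explicit. The short sketch below reproduces the article’s numbers (99 percent detection, a 1 percent false-positive rate, 0.1 percent prevalence, a population of three hundred million) and also derives the chance that a positive result is genuine; that final figure, roughly 9 percent, is computed here rather than taken from the text.

```python
# Base-rate arithmetic for the screening example above.
population = 300_000_000
prevalence = 0.001           # 0.1% of people actually have the disease
sensitivity = 0.99           # the test detects 99% of real cases
false_positive_rate = 0.01   # 1% of healthy people wrongly test positive

sick = population * prevalence                                # 300,000 cases
true_positives = sick * sensitivity                           # ~297,000 caught
false_positives = (population - sick) * false_positive_rate   # ~2,997,000

# Positive predictive value: of everyone flagged, how many are actually sick?
precision = true_positives / (true_positives + false_positives)

print(f"Actual cases:    {sick:,.0f}")
print(f"False positives: {false_positives:,.0f}")
print(f"Chance a positive result is real: {precision:.1%}")   # ~9.0%
```

The same logic explains Rosen’s hijacker example: when the base rate is nineteen people out of 285 million, even a 99 percent accurate screen is overwhelmingly wrong about the individuals it flags.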

The final lesson is about collaboration. In the future, similar data collection techniques may be employed to share information between countries on individuals who may be carrying the disease, or who may be at risk because of their travel. Unlike in the context of terrorism, where countries share information against a foreign entity or actor (under United Nations Security Council Resolution 2396, for example), containing the spread of disease will require countries to collaborate with one another. Concerns around the accuracy of data shared by China and other countries in the early stages of the pandemic, however, raise issues for such an initiative, and a new international body may be needed to ensure that countries avoid the temptation to coast in the hope that others will pick up the slack. It would also be useful for countries that have employed surveillance techniques to sign a code of practice ensuring that data analysis is subject to sufficient oversight.


Nikita Malik is the Director of the Centre on Radicalisation and Terrorism at The Henry Jackson Society, a think tank based in Westminster, London.

Image: Reuters