As Facial Recognition Systems Proliferate in the UK, Their Efficacy Has Come Into Question
Here's what you need to know.
The UK is currently witnessing a tug of war over facial recognition. On the streets of London and in South Wales, live systems have been deployed by the police, supported by the UK government. But in the Scottish parliament, the Justice Sub-Committee on Policing is trying to halt use of the technology.
I recently gave evidence to the Scottish sub-committee’s inquiry, highlighting the cost of this technology in terms of its damage to freedom, trust and inclusivity in society. This damage comes not just from the use of facial recognition but also from the ways it is designed and tested. And yet the benefits are often exaggerated - or have yet to be proven.
Facial recognition systems have already been tested and deployed across the UK. Investigative journalist Geoff White has created a map showing where systems are being, or have been, used, identifying dozens of sites across the country. Another map for the US shows a similar situation. If you spot facial recognition technology in use somewhere, you can notify these sites so they can add the location and details. The results can be surprising.
Airports are one of the most common places you may encounter facial recognition, typically in automatic border control gates. Airlines have also been testing the systems at the boarding gate, extending data collection beyond government to private companies. Meanwhile, advertising screens in Piccadilly Circus in London, as well as in Manchester, Nottingham and Birmingham, reportedly use the technology to target ads according to the age, gender and mood of people in the crowd.
Shopping centres and public spaces such as museums in cities across the UK have used the technology for security purposes. Football matches, airshows, concerts, Notting Hill Carnival and even the Remembrance Sunday service now fall under the invasive eye of facial recognition.
It is not always clear whether facial recognition is used to catch known criminals from a watchlist or simply to add an extra layer of security to public spaces and events. But the South Wales and Metropolitan police forces have admitted they are using it to try to catch elusive criminals. They claim to only use specific watchlists of dangerous individuals, but leaked documents show that they also include “persons where intelligence is required” - which could be just about anyone.
Research shows the UK public largely supports facial recognition, provided it benefits society and has appropriate limits. Yet there is little proof that facial recognition actually provides significant social benefit given the costs to privacy.
On a practical level, facial recognition technology doesn’t yet work very well. A 2019 independent review by the University of Essex found that only one in five matches made by the Metropolitan Police’s system could confidently be considered accurate. South Wales Police has claimed its use of the technology has enabled 450 arrests, but only 50 of those were actually made using live facial recognition. The rest were down to face-matching from conventional CCTV or to officers on the street.
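To see why such a low match-accuracy rate is almost inevitable at crowd scale, here is a minimal arithmetic sketch in Python. Every number in it is an assumption chosen for illustration, not a figure from the Essex review or any police trial: the point is that even a seemingly tiny false positive rate produces more wrong alerts than right ones when thousands of faces are scanned and genuine watchlist targets are rare.

    # Illustrative arithmetic only: every number below is an assumption
    # for this sketch, not a figure from the Essex review or any deployment.

    faces_scanned = 10_000       # faces passing a live camera (assumed)
    watchlist_present = 5        # how many are genuinely on the watchlist (assumed)
    true_positive_rate = 0.80    # chance a watchlisted face is flagged (assumed)
    false_positive_rate = 0.001  # chance an innocent face is flagged (assumed)

    true_alerts = watchlist_present * true_positive_rate
    false_alerts = (faces_scanned - watchlist_present) * false_positive_rate
    precision = true_alerts / (true_alerts + false_alerts)

    print(f"expected true alerts:  {true_alerts:.1f}")           # 4.0
    print(f"expected false alerts: {false_alerts:.1f}")          # 10.0
    print(f"share of alerts that are correct: {precision:.0%}")  # 29%

Under these assumed rates, fewer than a third of alerts point at a real watchlist match - the same territory as the one-in-five figure the Essex review reported. And the rarer genuine targets are in the crowd, the worse the ratio becomes.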
Facial recognition systems are often marketed with outrageous claims. The company Clearview AI, which is facing legal action for building a database of 3 billion photos of faces taken from social media and other websites, says its technology “helps to identify child molesters, murderers, suspected terrorists, and other dangerous people quickly, accurately, and reliably”. Yet it has also faced criticism that its technology simply isn’t anywhere near as useful to the police as the firm claims. (Clearview AI did not respond to The Conversation’s request for comment.)
With this in mind, the huge sums of money spent on these systems could probably be better spent on other measures to tackle crime and improve public safety. But there are also deep problems with the way facial recognition technology works. For example, research has shown that its accuracy can depend on a subject’s race and gender. This means that if you are black and/or a woman, the technology is more likely to falsely match you with someone on a watchlist.
Slippery slope
Another issue is that the technology has the potential to go beyond spotting known criminals and become a tool of mass surveillance. In one Metropolitan Police trial that led to four arrests, those detained weren’t dangerous criminals but passersby who simply tried to cover their faces to avoid the non-consensual facial recognition test.
Police harassment and fines are a slippery slope towards further discrimination and abuse of power. We may accept a human officer combing through CCTV footage looking for a specific suspect. But the sheer scale of live facial recognition is more like turning the entire country into one huge police line-up.
On a more fundamental level, biometric data (such as our facial measurements, fingerprints or DNA) are part of our identity. Facial recognition violates not only our right to go about in public without being monitored, but also our bodily rights and our very sense of self.
Facial recognition is creeping across the UK, but its distribution and its effects are likely to be very uneven. So, regardless of the efficacy or value of facial recognition, we need thoroughly considered national regulation to mitigate the significant risks. Otherwise we risk ending up with an inadequate patchwork of guidelines that is full of gaps and loopholes.
Garfield Benjamin, Postdoctoral Researcher, School of Media Arts and Technology, Solent University
This article is republished from The Conversation under a Creative Commons license. Read the original article.