As live facial recognition (LFR) becomes more common in public spaces across the United Kingdom, critical questions are being raised about its legality, accuracy, and impact on human rights. A recent policy report sheds light on how law enforcement’s use of LFR operates in a legal grey area—one that leaves individuals vulnerable to privacy violations, racial profiling, and unwarranted surveillance.
What is Live Facial Recognition (LFR)?
LFR systems use cameras in public areas, such as city centres, sports events, and transport hubs, to scan faces in real time and match them against police watchlists. While promoted as a tool for crime prevention and public safety, the increasingly routine use of this technology signals a concerning shift from exceptional policing to blanket biometric surveillance.
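At its core, an LFR system reduces each detected face to a numeric embedding and compares it against the stored embeddings of people on a watchlist, raising an alert whenever the similarity clears a tuned threshold. The sketch below is a minimal illustration of that matching step only, not any police force's actual system; the cosine-similarity measure, the function names, and the 0.6 threshold are all assumptions for the example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(face: np.ndarray,
                            watchlist: dict[str, np.ndarray],
                            threshold: float = 0.6) -> str | None:
    """Return the best-matching watchlist identity, or None if no stored
    embedding is similar enough to the live face to raise an alert."""
    best_id, best_score = None, threshold
    for identity, stored in watchlist.items():
        score = cosine_similarity(face, stored)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id
```

Everything in such a pipeline hinges on the threshold: set it lower and more passers-by trigger alerts, including false ones; set it higher and genuine matches are missed. That trade-off is central to the accuracy concerns discussed below.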
Legal Loopholes and Weak Oversight
There is currently no UK law specifically authorising or regulating police use of facial recognition technology. Instead, police rely on general data protection and human rights principles and on internal policies, many of which lack enforceability. In 2020, the Court of Appeal ruled in Bridges v South Wales Police that the force's use of LFR breached privacy rights, because the rules left too much discretion to individual officers over who could be placed on a watchlist and where the technology could be deployed. Despite this ruling, deployments continue under fragmented and ambiguous guidelines.
Human Rights at Risk
The use of facial recognition in public spaces raises significant human rights concerns. It allows for indiscriminate scanning of every passer-by, treating all individuals as potential suspects, and intrudes on the right to privacy protected under Article 8 of the European Convention on Human Rights. Such surveillance also has a chilling effect on public life, deterring the free exercise of assembly, expression, and movement.
Systemic Bias and Error
LFR has also been shown to misidentify people from Black and minority ethnic backgrounds at disproportionately high rates. Technical studies, including evaluations by the U.S. National Institute of Standards and Technology (NIST), found that some facial recognition algorithms produce false positives for non-white faces at rates up to 100 times higher than for white faces. In UK deployments by both the Metropolitan Police and South Wales Police, more than 90% of the matches the systems flagged were reported to be incorrect. These errors can lead to wrongful stops, questioning, and the retention of biometric data without consent.
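A short back-of-the-envelope calculation helps explain how headline figures above 90% can arise even from an individually accurate algorithm. The numbers below are hypothetical, chosen only to illustrate the base-rate effect: because almost no one in a scanned crowd is on a watchlist, even a small per-face false match rate means false alerts swamp true ones.

```python
# Hypothetical deployment figures, for illustration only.
faces_scanned = 50_000       # passers-by scanned during one deployment
watchlist_present = 10       # watchlisted people actually in the crowd
true_match_rate = 0.80       # chance a watchlisted face triggers an alert
false_match_rate = 0.001     # chance any other face triggers an alert

true_alerts = watchlist_present * true_match_rate                      # 8
false_alerts = (faces_scanned - watchlist_present) * false_match_rate  # ~50

share_false = false_alerts / (true_alerts + false_alerts)
print(f"Share of alerts that are false: {share_false:.0%}")  # ~86%
```

On these illustrative numbers, roughly 86% of all alerts would be false even though the algorithm errs on only one face in a thousand, which is one reason the published UK percentages can look so stark.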
Toward Responsible Use of Technology
While facial recognition has acknowledged potential as a crime-solving tool, the risks it poses (unchecked surveillance, erosion of trust, racial discrimination) far outweigh its benefits in the current regulatory vacuum. Comparative approaches from the European Union show a more cautious stance, with stricter controls and prohibitions on biometric surveillance in public spaces.
Until the UK develops clear, enforceable laws around this powerful technology, the public remains exposed to unjustified monitoring and systemic bias in the name of security.
📄 Read the full report here.