UK facial recognition deployment raises monitoring and error concerns
9 articles · Updated · The Guardian · May 5
In Croydon, live facial recognition cameras alerted police officers to watchlist matches, while the Metropolitan Police has scanned more than 1.7 million faces this year, up 87% on 2025.
Retailers are also using systems such as Facewatch to identify suspected shoplifters, but people have reported being wrongly flagged and removed from stores.
Campaigners warn that fragmented oversight leaves risks around protest surveillance, children and racial bias unaddressed, while the Home Office says it is considering a new legal framework.
As facial recognition expands from streets to police phones, are we all becoming permanent suspects?
When an AI wrongly flags you as a criminal, who is held accountable for the error?
Facial Recognition in UK Policing: 50 Live Scan Vans, Racial Bias, and the Fight for Oversight
Overview
In early 2026, the wrongful arrest of Alvi Choudhury due to biased facial recognition technology (FRT) sparked widespread concern and demands for regulation. Essex Police paused live FRT deployments after an academic study revealed significant racial bias, later resuming under stricter policies. Despite public support for FRT, civil society groups criticized the government's plans to expand its fleet of live facial recognition vans and to grant access to large government photo databases without clear legal safeguards. The fragmented oversight and ongoing legal challenges highlight the urgent need for a comprehensive legal framework, improved algorithms, and transparent, independent monitoring to balance crime-fighting benefits against the protection of civil rights and public trust.