In London, the Met has scanned more than 1.7 million faces this year, an 87% increase, while watchdogs say regulation could still be at least three years away.
Those watchdogs want new laws and a dedicated regulator, citing a patchwork legal framework, police self-scrutiny, postponed independent audits, and claims of misidentification and malicious additions to retail watchlists.
The Home Office is considering a national framework as police and retailers expand their use of the technology, but polling shows strong public concern about surveillance, and people who were wrongly flagged say complaints processes are ineffective.
As facial recognition spreads across Britain, why might the law remain three years behind the technology?
With AI misidentifying innocent people, is Britain creating a justice system of 'guilty until proven innocent'?
UK Police Deploy 1.7 Million Facial Scans in 2026 Amid £115M AI Push and Regulatory Gaps
Overview
In 2026, police forces across England and Wales rapidly expanded their use of live facial recognition (LFR), driven by a major Home Office investment and legal backing from a High Court ruling. The Metropolitan Police alone scanned more than 1.7 million faces, leading to arrests, while the number of mobile LFR vans increased fivefold. However, concerns about algorithmic bias and privacy risks prompted pauses in deployment and scrutiny by the Information Commissioner’s Office. The UK lacks a unified legal framework for facial recognition, prompting calls for comprehensive regulation. The government’s proposed principles-based law aims to balance public safety with transparency and oversight, but civil liberties groups continue to warn against mass surveillance and systemic bias, especially as a national facial matching service is developed amid global privacy debates.