Essex Police halts live facial recognition over bias and accuracy risks
Essex Police has paused its use of live facial recognition (LFR) technology after identifying potential accuracy and bias risks.
The force’s suspension of its LFR system – supplied by Israeli biometrics firm Corsight – was revealed in an audit document published by the Information Commissioner’s Office (ICO), which said Essex Police must work to “reduce the risks” identified before proceeding with future deployments.
A list of LFR deployments from Essex Police shows the last time the force used the technology was on 26 August 2025, meaning its deployments had already been paused by the time the ICO carried out its audit that November.
While it is currently unclear what specifically prompted the force to suspend its LFR use, Computer Weekly exclusively reported in May 2025 that Essex Police had failed to properly consider its potentially discriminatory impacts, after a “clearly inadequate” equality impact assessment (EIA) was obtained via Freedom of Information rules by privacy campaign group Big Brother Watch.
Experts criticised the document at the time for being “incoherent”, failing to look at the systemic equalities impacts of the technology, and relying solely on testing of entirely different software algorithms used by other police forces and trained on different populations.
The force was also criticised for “parroting misleading claims” from the supplier about the LFR system’s lack of bias, with the National Institute of Standards and Technology – a body widely recognised as the gold standard for LFR testing, where all of the testing data is publicly shared – holding no records to support the accuracy figures cited by Corsight, or its claim to essentially have the least-biased algorithm available.
Big Brother Watch alleged at the time that these issues taken together meant the force had likely failed to fulfil its public sector equality duty to consider how its policies and practices could be discriminatory.
Independent testing
Responding to the criticisms, the force said at the time that it was continuing to carry out evaluations, noting that both the National Physical Laboratory (NPL) and Cambridge University had been commissioned to conduct further independent testing of its system.
According to the results of that Cambridge study – published 12 March 2026 – the system was more likely to correctly identify men than women, and was “statistically significantly more likely to correctly identify black participants than participants from other ethnic groups”.
Matt Bland, a criminologist involved in the study, said: “If you’re an offender passing facial recognition cameras which are set up as they have been in Essex, the chances of being identified as being on a police watchlist are greater if you’re black. To me, that warrants further investigation.”
In contrast, the further NPL testing – also published in March 2026 – found black men were most likely to be correctly matched by the system and white men least likely, but noted that the disparity was not statistically significant.
Computer Weekly contacted the force to ask what specifically prompted the LFR suspension decision, including whether it was the study results or the earlier criticisms of the EIA.
“In line with our commitment to our Public Sector Equality Duty, Essex Police commissioned two independent studies which were completed by academia,” a spokesperson said. “The first of these indicated there was a potential bias in the positive identification rate, while the second suggested there was no statistically relevant bias in the results.
“Based on the fact there was potential bias, the force decided to pause deployments while we worked with the algorithm software provider to assess the results and seek to update the software,” they added. “We then sought further academic analysis.
“As a result of this work, we have revised our policies and procedures and are now confident that we can begin deploying this important technology as part of policing operations to trace and arrest wanted criminals. We will continue to monitor all outcomes to ensure there is no risk of bias against any one section of the community.”
Responding to news of the suspension, Jake Hurfurt, the head of research and investigations at Big Brother Watch, said: “Police across the country must take note of this fiasco. AI [artificial intelligence] surveillance that is experimental, untested, inaccurate or potentially biased has no place on our streets.”
Ramping up deployments without debate
While the use of LFR by police – beginning with the Met’s deployment at Notting Hill Carnival in August 2016 – has already ramped up massively in recent years, there has so far been minimal public debate or consultation, with the Home Office claiming for years that there is already a “comprehensive” legal framework in place.
However, in December 2025, the Home Office launched a 10-week consultation on the use of LFR by UK police, allowing interested parties and members of the public to share their views on how the controversial technology should be regulated.
The department has said that although a “patchwork” legal framework for police facial recognition exists (including for the growing use of the retrospective and “operator-initiated” versions of the technology), it does not give police themselves the confidence to “use it at significantly greater scale … nor does it consistently give the public the confidence that it will be used responsibly”.
It added that the current rules governing police LFR use are “complicated and unclear”, and that an ordinary member of the public would need to read four pieces of legislation, police national guidance documents and a range of detailed legal or data protection documents from individual forces to fully understand the basis for LFR use on their high streets.
Before the consultation had even closed, however, the Home Office announced plans for the mass roll-out of AI and facial recognition technologies as part of sweeping reforms to the UK’s “broken” policing system.
Under the proposals – announced in late January 2026, nearly three weeks before the consultation closed – the Home Office will increase the number of LFR vans available to police from 10 to 50; set up a new National Centre for AI in Policing – to be known as Police.AI – to build, test and assure AI models for policing contexts; and invest £115m over three years to help identify, test and scale new AI technologies in policing.
‘Panopticon’ vision
In a recent interview with former prime minister Tony Blair, UK home secretary Shabana Mahmood described her ambition to use technologies such as AI and LFR to achieve Jeremy Bentham’s vision of a “panopticon”, referring to his proposed prison design that would allow a single, unseen guard to silently observe every prisoner at once.
Often used today as a metaphor for authoritarian control, the underpinning idea of the panopticon is that, by instilling a perpetual sense of being watched among the inmates, they would behave as the authorities wanted.
“When I was in justice, my ultimate vision for that part of the criminal justice system was to achieve, through AI and technology, what Jeremy Bentham tried to do with his panopticon,” Mahmood told Blair. “That is that the eyes of the state could be on you at all times.”