How police live facial recognition subtly reconfigures suspicion
Police use of live facial recognition (LFR) technology reconfigures suspicion in subtle but important ways, undermining so-called human-in-the-loop safeguards.
Despite the long-standing controversies surrounding police use of LFR, the technology is now used in the UK to scan millions of people’s faces each year. While initial deployments were sparse, taking place only every few months, they are now routine, with facial recognition-linked cameras regularly deployed to events and busy areas in places like London and Cardiff.
Given the potential for erroneous alerts, police forces deploying the technology claim that a human will always make the final decision over whether to engage someone flagged by an LFR system. This measure is intended to ensure accuracy and reduce the potential for unnecessary police interactions.
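To make the claimed safeguard concrete, the sketch below shows where that human checkpoint sits in a generic LFR alert flow. It is a minimal illustration under stated assumptions only: the threshold value, names and data structures are hypothetical, not any force’s or vendor’s actual implementation.

```python
# Minimal sketch of a generic LFR alert flow with a human-in-the-loop
# checkpoint. Threshold, names and structures are hypothetical.
from dataclasses import dataclass

MATCH_THRESHOLD = 0.64  # hypothetical similarity cut-off


@dataclass
class Alert:
    watchlist_id: str
    similarity: float


def generate_alert(similarity: float, watchlist_id: str) -> Alert | None:
    """The system only flags a possible match; it decides nothing further."""
    if similarity >= MATCH_THRESHOLD:
        return Alert(watchlist_id, similarity)
    return None


def engage(alert: Alert, officer_confirms: bool) -> bool:
    """The claimed safeguard: an officer makes the final call to engage.

    The research discussed below suggests this checkpoint is weaker in
    practice, because officers tend to defer to the alert.
    """
    return officer_confirms


# Hypothetical usage: a borderline score still raises an alert, and the
# safeguard rests entirely on the officer's independent judgement.
alert = generate_alert(0.71, "WL-0042")
if alert is not None:
    stopped = engage(alert, officer_confirms=True)
```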
However, a growing body of research highlighting the socio-technical nature of LFR systems suggests the technology is undermining these human-in-the-loop safeguards, by fundamentally reshaping (and reinforcing) police perceptions of who is deemed suspicious, and how police interact with them on the street as a result.
A growing body of research
According to one paper from March 2021 – written by sociologists Pete Fussey, Bethan Davies and Martin Innes – the use of LFR “constitutes a socio-technical assemblage that both shapes police practices but is also profoundly shaped by forms of police suspicion and discretion”.
The authors argue that while, under existing police powers, an officer recognising someone may constitute grounds for a stop and search, this changes when LFR is inserted into the process, because the “initial recognition” does not result from an officer exercising discretion.
“Instead, officers act more akin to intermediaries, interpreting and then acting upon a (computer-instigated) suggestion originating outside of, and prior to, their own intuition,” the sociologists wrote. “The technology thus plays a framing and priming role in how suspicion is generated.”
More recently, academics Karen Yeung and Wenlong Li argued in a September 2025 research paper that, given the potential for erroneous matches, the mere generation of an LFR match alert is not in itself enough to constitute “reasonable suspicion”, which UK police are required to demonstrate to legally stop and detain people.
“Although police officers in England and Wales are entitled to stop individuals and ask them questions about who they are and what they are doing, individuals are not obliged to answer these questions in the absence of reasonable suspicion that they have been involved in the commission of a crime,” they wrote.
“Accordingly, any initial attempt by police officers to stop and question an individual whose face is matched to the watchlist must be undertaken on the basis that the individual is not legally obliged to cooperate for that reason alone.”
Despite officers being legally required to have reasonable suspicion, a July 2019 paper from the Human Rights, Big Data & Technology Project, based at the University of Essex Human Rights Centre, which marked the first independent review into trials of LFR technology by the Metropolitan Police, observed a discernible “presumption to intervene” among police officers using the technology.
According to authors Fussey and Daragh Murray, who is a reader in international law and human rights at Queen Mary’s School of Law, this means the officers involved tended to act on the results of the system and engage individuals it said matched the watchlist in use, even when they did not.
As a form of automation bias, the “presumption to intervene” matters in a socio-technical sense because, in practice, it risks opening up random members of the public to unwarranted or unnecessary police interactions.
Priming suspicion
Although Yeung and Li noted that individuals are not legally obliged to cooperate with police in the absence of reasonable suspicion, there have been instances where failing to comply with officers after an LFR alert has affected people negatively.
In February 2025, for example, anti-knife crime campaigner Shaun Thompson, who was returning home from a volunteer shift in Croydon with the Street Fathers youth outreach group, was stopped by officers after being wrongly identified as a suspect by the Met’s LFR system.
Thompson was then held for almost half an hour by officers, who repeatedly demanded scans of his fingerprints and threatened him with arrest, despite being shown multiple identity documents proving he was not the man on the database.
Thompson has publicly described the system as “stop and search on steroids” and said it felt like he was being treated as “guilty until proven innocent”. Following the incident, Thompson launched a judicial review into the Met’s use of LFR to stop others ending up in similar situations, which is due to be heard in January 2026.
Even when no alert has been generated, there are instances where the use of LFR has prompted negative interactions between citizens and the police.
During the Met’s February 2019 deployment in Romford, for example, Computer Weekly was present when two members of the public were stopped for covering their faces near the LFR van because they did not want their biometric information to be processed.
Writing to the Lords Justice and Home Affairs Committee (JHAC) in September 2021 as part of its investigation into policing algorithms, Fussey, Murray and criminologist Amy Stevens noted that while most surveillance in the UK is designed to target individuals once a certain threshold of suspicion has been reached, LFR inverts this by treating everyone who passes through the camera’s gaze as suspicious in the first instance.
This means that although people can subsequently be eliminated from police inquiries, the technology itself affects how officers perceive suspicion, by essentially “priming” them to engage with people flagged by the system.
“Any potential tendency to defer or over-rely on automated outputs over other available information has the ability to transform what is still considered to be a human-led decision into a de facto automated one,” they wrote.
“Robust monitoring should therefore be in place to provide an understanding of the extent of deference to tools intended as advisory, and how often and in which circumstances human users make an alternative decision to the one advised by the tool.”
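What such monitoring might look like in practice is sketched below: a hypothetical log pairing each tool suggestion with the officer’s actual decision, from which a “deference rate” can be computed. The field names and figures are invented for illustration, not drawn from any force’s records.

```python
# Hypothetical sketch of monitoring deference to an advisory tool:
# log each suggestion next to the officer's decision, then report how
# often the human simply followed the machine.
from collections import Counter


def deference_rate(decisions: list[dict]) -> float:
    """Share of alerts where the officer's action matched the tool's suggestion."""
    followed = Counter(d["officer_action"] == d["tool_suggestion"] for d in decisions)
    total = followed[True] + followed[False]
    return followed[True] / total if total else 0.0


# Invented example log: two deferrals, one override.
log = [
    {"tool_suggestion": "engage", "officer_action": "engage"},
    {"tool_suggestion": "engage", "officer_action": "no_action"},
    {"tool_suggestion": "engage", "officer_action": "engage"},
]
print(f"Deference rate: {deference_rate(log):.0%}")  # prints: Deference rate: 67%
```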
Watchlist creation and bureaucratic suspicion
A key aspect mediating the relationship between LFR and the concept of “reasonable suspicion” is the creation of watchlists.
Socio-technically, researchers investigating police use of LFR have expressed various concerns around watchlist creation, including how it “structures the police gaze” to focus on particular people and social groups.
In their 2021 paper, for example, Fussey, Davies and Innes noted that creating watchlists from police-held custody images naturally means police attention will be targeted towards “the usual suspects”, inducing “a technologically framed bureaucratic suspicion in digital policing”.
As a result, rather than linking specific evidence from a crime to a particular individual (known as “incidental suspicion”), LFR instead relies on general, standardised criteria (such as a person’s prior police record or location) to identify potential suspects, which is known in sociology as “bureaucratic suspicion”.
“Individuals listed on watchlists and databases are cast as warranting suspicion, and the AFR [automated facial recognition] surveillant gaze is specifically oriented towards them,” they wrote.
“However, in so doing, the social biases of police activity that disproportionately focuses on young people and members of African Caribbean and other minority ethnic groups (inter alia The Lammy Review 2017) are further inflected by alleged technological biases deriving from how technical accuracy recedes for subjects who are older, female and for some people of colour.”
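As a rough illustration of the distinction, the sketch below selects watchlist candidates purely on generic category tags attached to custody records – the “bureaucratic” mode the researchers describe – rather than on any case-specific evidence. The categories and records are entirely invented.

```python
# Hypothetical illustration of 'bureaucratic suspicion': people are pulled
# onto a watchlist by generic category labels on their custody records,
# with no question asked about case-specific evidence.
WATCHLIST_CATEGORIES = {"violence", "theft", "drugs", "wanted_missing"}

custody_records = [
    {"name": "A", "categories": {"theft"}},
    {"name": "B", "categories": {"public_order"}},
    {"name": "C", "categories": {"drugs", "violence"}},
]

# Selection is just a set intersection on labels; nothing here considers
# what, if anything, the person is suspected of now.
watchlist = [r for r in custody_records if r["categories"] & WATCHLIST_CATEGORIES]
print([r["name"] for r in watchlist])  # prints: ['A', 'C']
```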
Others have also raised separate concerns about the vague criteria governing watchlist creation and the need for “quality” data to feed into the system.
Yeung and Li, for example, have highlighted “unresolved questions” about the legality of watchlist composition, including the “significance and seriousness” of the underlying offence used to justify a person’s inclusion, and the “legitimacy of the reasons why that person is ‘wanted’ by the police” in the first place.
For instance, while police regularly claim that LFR is only being used against the most serious or violent offenders, watchlists frequently contain images of people wanted for drug, shoplifting or traffic offences, which legally do not meet this definition.
Writing in their September 2025 paper, Yeung and Li also noted that while the Met’s watchlists were populated by individuals wanted on outstanding arrest warrants, they also included “images of a much wider, amorphous class of people” who did not meet the definition of serious offenders.
This included “individuals not allowed to attend the Notting Hill Carnival”, “individuals whose attendance would pose a risk to the security and safety of the event”, “wanted missing” individuals and children, and even individuals who “present a risk of harm to themselves and to others” and those who “may be at risk or vulnerable”.
In December 2023, senior officers from the Met and South Wales Police confirmed that LFR operates on a “bureaucratic suspicion” model, telling a Lords committee that facial recognition watchlist image selection is based on generic crime categories attached to people’s photos, rather than a context-specific assessment of the threat presented by a given individual.
The Met Police’s then-director of intelligence, Lindsey Chiswick, further told the Lords that whether or not something is “serious” depends on the context, and that, for example, for shops affected by prolific shoplifting, the offence would be “serious for them”.
While the vague and amorphous nature of police LFR watchlist creation has been noted by other academics – including Fussey et al, who argued that “broad categories offer significant latitude for interpretation, creating a space for officer discretion when it comes to who was enrolled and excluded from such databases” – the issue has also been highlighted by the courts.
In August 2020, for example, the Court of Appeal ruled that South Wales Police’s use of LFR was unlawful, in part because the vagueness of the watchlist criteria – which used “other persons where intelligence is required” as an inclusion category – left excessive discretion in the hands of the police.
“It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR can be deployed,” said the judgment, adding that, “in effect, it would cover anyone who is of interest to the police”.
During the December 2023 Lords session, watchlist size was also highlighted by Yeung – who was called to give evidence on account of her expertise – as an important socio-technical factor.
“There is a divergence between the claims that they only put images of those wanted for serious crimes on the watchlist, and the fact that in the Oxford Circus deployment alone, there were over 9,700 images,” she said.
Unlawful custody image retention
Further underpinning concerns about the socio-technical impacts of watchlist creation, there are ongoing issues with the unlawful retention of custody images in the Police National Database (PND), which is the primary source of images used to populate police watchlists.
In 2012, a High Court ruling found the retention of custody images in the PND to be unlawful, on the basis that information about unconvicted people was being treated in the same way as information about people who were ultimately convicted, and that the six-year retention period was disproportionate.
Despite the 2012 ruling, millions of custody images are still being unlawfully retained.
Writing to other chief constables in February 2022 to outline some of the issues around custody image retention, the National Police Chiefs’ Council (NPCC) lead for records management, Lee Freeman, said the potentially unlawful retention of an estimated 19 million images “poses a significant risk in terms of potential litigation, police legitimacy, and wider support and challenge in our use of these images for technologies such as facial recognition”.
In November 2023, the NPCC confirmed to Computer Weekly that it had launched a programme to establish a management regime for custody images, alongside a review of all records currently held by UK police forces.
The issue was again flagged by the biometrics commissioner of England and Wales, Tony Eastaugh, in December 2024, when he noted in his annual report that “forces continue to retain and use images of people who, while having been arrested, have never subsequently been charged or summonsed”.
Eastaugh added that while work was already “underway” to ensure the retention of images is proportionate and lawful, concerns remain over the use of custody images of unconvicted individuals, including for facial recognition purposes.

