MPs propose ban on predictive policing
Predictive policing technologies infringe human rights “at their heart” and should be prohibited in the UK, argues Green MP Siân Berry, after tabling an amendment to the government’s forthcoming Crime and Policing Bill.
Speaking in the House of Commons during the report stage of the bill, Berry highlighted the dangers of using predictive policing technologies to assess the likelihood of individuals or groups committing criminal offences in the future.
“Such technologies, however cleverly presented, will always have to be built on existing, flawed police data, or data from other flawed and biased private and public sources,” she said. “That means that communities that have historically been over-policed will be more likely to be identified as being ‘at risk’ of future criminal behaviour.”
Berry’s amendment (NC30 in the amendment paper) – which has been sponsored by eight other MPs, including Zarah Sultana, Ellie Chowns, Richard Burgon and Clive Lewis – would specifically prohibit the use of automated decision-making (ADM), profiling and artificial intelligence (AI) for the purpose of making risk assessments about the likelihood of groups or individuals committing criminal offences.
It would also prohibit the use of certain information by UK police to “predict” people’s behaviour: “Police forces in England and Wales shall be prohibited from… Predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of natural persons.”
Speaking in the Commons, Berry further argued: “As I have always said in the context of facial recognition, questions of accuracy and bias are not the only reason to oppose these technologies. At their heart, they infringe human rights, including the right to privacy and the right to be presumed innocent.”
While authorities deploying predictive policing tools say they can be used to direct resources more efficiently, critics have long argued that, in practice, these systems are used to repeatedly target poor and racialised communities, as these groups have historically been “over-policed” and are therefore over-represented in police datasets.
This then creates a harmful feedback loop, where these so-called “predictions” lead to further over-policing of certain groups and areas, thereby reinforcing and exacerbating the pre-existing discrimination as increasing amounts of data are collected.
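To make that dynamic concrete, here is a deliberately simplified simulation – an illustration only, with invented figures, not something drawn from the bill or the article’s sources. Two districts share the same underlying offence rate, but one starts with more recorded incidents; a hotspot-style system that sends patrols wherever past records are highest will then widen that gap year on year, because patrols generate new records wherever they are deployed.

```python
# Toy model of the feedback loop described above. All figures are
# invented for illustration: both districts have the same true offence
# rate, but district A starts with more *recorded* incidents because it
# was historically over-policed.

TRUE_OFFENCE_RATE = 0.1          # identical underlying rate in A and B
records = {"A": 60, "B": 40}     # historical police records, skewed to A
PATROLS_PER_YEAR = 100

for year in range(1, 6):
    # "Predictive" hotspot allocation: send patrols where records are highest
    hotspot = max(records, key=records.get)
    # Officers only observe offences where they are deployed, so new
    # records accumulate only in the predicted hotspot
    records[hotspot] += PATROLS_PER_YEAR * TRUE_OFFENCE_RATE
    print(f"Year {year}: records = {records} (patrolled: {hotspot})")
```

In this toy run, district A’s recorded “crime” grows every year while district B’s stays flat, even though behaviour in the two districts never differed – the disparity is produced entirely by where the system sends officers to look.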
Tracing the historical proliferation of predictive policing systems in their 2018 book Police: A field guide, authors David Correia and Tyler Wall argue that such tools provide “seemingly objective data” for law enforcement authorities to continue engaging in discriminatory policing practices, “but in a manner that appears free from racial profiling”.
They added it is therefore “not a surprise that predictive policing locates the violence of the future in the poor of the present”.
As a result of such concerns, there have been numerous calls from civil society in recent months for the UK government to ban the use of predictive policing tools.
In February 2025, for example, Amnesty International published a 120-page report on how predictive policing systems are “supercharging racism” in the UK by using historically biased data to further target poor and racialised communities.
It found that across the UK, at least 33 police forces have deployed predictive policing tools, with 32 of these using geographic crime prediction systems, compared with 11 that are using people-focused crime prediction tools.
Amnesty added that these tools are “in flagrant breach” of the UK’s national and international human rights obligations, because they are being used to racially profile people, undermine the presumption of innocence by targeting people before they have even been involved in a crime, and fuel indiscriminate mass surveillance of entire areas and communities.
More than 30 civil society organisations – including Big Brother Watch, Amnesty, Open Rights Group, Inquest, Public Law Project and Statewatch – also signed an open letter in March 2025 raising concerns about how the Data Use and Access Bill, which is now an Act, will remove safeguards against the use of automated decision-making by police.
“Currently, sections 49 and 50 of the Data Protection Act 2018 prohibit solely automated decisions from being made in the law enforcement context unless the decision is required or authorised by law,” they wrote in the letter, adding that the new Clause 80 would reverse this safeguard by permitting solely automated decision-making in all scenarios where special category data is not being used.
“In practice, this means that automated decisions about people could be made in the law enforcement context on the basis of their socioeconomic status, regional or postcode data, inferred emotions, or even regional accents. This greatly expands the possibilities for bias, discrimination and lack of transparency.”
The groups added that non-special category data can be used as a “proxy” for protected characteristics, giving the example of how postcodes can potentially be used as a proxy to infer someone’s race.
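A minimal sketch of that proxy effect, using entirely synthetic data – the postcodes, rates and group sizes below are invented for illustration, not taken from the letter. Even a “blind” rule that never sees the protected attribute can reproduce group-level disparities, because residential segregation makes postcode a stand-in for group membership, and the historical labels the rule learns from are themselves biased.

```python
import random

random.seed(0)

# Synthetic population: group membership is the protected attribute,
# which the downstream "risk" rule never sees directly.
rows = []
for _ in range(10_000):
    in_group = random.random() < 0.5
    # Residential segregation: group membership largely determines postcode
    postcode = "N1" if in_group ^ (random.random() < 0.1) else "S1"
    # Historically biased labels: group members were flagged more often
    flagged = random.random() < (0.30 if in_group else 0.10)
    rows.append((postcode, flagged))

# A "group-blind" rule learned from history: flag rate per postcode
for pc in ("N1", "S1"):
    flags = [flag for p, flag in rows if p == pc]
    print(f"Postcode {pc}: historical flag rate = {sum(flags) / len(flags):.1%}")
```

The postcode-only rule ends up assigning roughly 28% risk to one postcode and 12% to the other – effectively scoring the two groups differently without ever being shown the protected attribute.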
They also highlighted how, according to the government’s own impact assessment for the law, “those with protected characteristics such as race, gender and age, are more likely to face discrimination from ADM due to historical biases in datasets”.
The letter was also signed by a number of academics, including Brent Mittelstadt and Sandra Wachter from the Oxford Internet Institute, and social anthropologist Toyin Agbetu from University College London.
A separate amendment (NC22) introduced by Berry attempts to alleviate these data issues by introducing new safeguards for automated decisions in a law enforcement context, which would include providing meaningful redress, greater transparency around police use of algorithms, and ensuring that people can request human involvement in any police decisions made about them.
In April 2025, Statewatch also separately called for the Ministry of Justice (MoJ) to halt its development of crime prediction tools, after obtaining documents via a Freedom of Information (FoI) campaign that revealed the department is already using one flawed algorithm to “predict” people’s risk of reoffending, and is actively developing another system to “predict” who will commit murder.
“The Ministry of Justice’s attempt to build this murder prediction system is the latest chilling and dystopian example of the government’s intent to develop so-called crime ‘prediction’ systems,” said Statewatch researcher Sofia Lyall.
“Like other systems of its kind, it will code in bias towards racialised and low-income communities. Building an automated tool to profile people as violent criminals is deeply wrong, and using such sensitive data on mental health, addiction and disability is highly intrusive and alarming.”
She added: “Instead of throwing money towards developing dodgy and racist AI and algorithms, the government must invest in genuinely supportive welfare services. Making welfare cuts while investing in techno-solutionist ‘quick fixes’ will only further undermine people’s safety and wellbeing.”
Prior to this, a coalition of civil society groups called on the then-incoming Labour government in July 2024 to place an outright ban on both predictive policing and biometric surveillance in the UK, on the basis that they are disproportionately used to target racialised, working-class and migrant communities.
A March 2022 House of Lords inquiry into the use of advanced algorithmic technologies by UK police also previously identified major concerns around the use of crime prediction systems, highlighting their tendency to produce a “vicious circle” and “entrench pre-existing patterns of discrimination” because they direct police patrols to low-income, already over-policed areas based on historic arrest data.
The Lords found that, in general, UK police are deploying algorithmic technologies – including AI and facial recognition – without a thorough examination of their efficacy or outcomes, and are essentially “making it up as they go along”.