
UK ICO publishes AI and biometrics strategy

The UK Information Commissioner’s Office (ICO) has launched an artificial intelligence (AI) and biometrics strategy, which the regulator says will support innovation while protecting people’s data rights.

Published on 5 June 2025, the strategy highlights how the ICO will focus its efforts on technology use cases where most of the risks are concentrated, but where there is also “significant potential” for public benefit.

This includes the use of automated decision-making (ADM) systems in recruitment and public services, the use of facial recognition by police forces, and the development of AI foundation models.

Specific actions outlined for these areas include conducting audits and producing guidance on the “lawful, fair and proportionate” use of facial recognition by police, setting “clear expectations” for how people’s personal data can be used to train generative AI models, and developing a statutory code of practice for organisations’ use of AI.

The regulator added it will also consult on updating guidance for ADM and profiling, working particularly with early adopters such as the Department for Work and Pensions (DWP), and produce a horizon-scanning report on the implications of agentic AI that is increasingly capable of acting autonomously.

“The same data protection principles apply now as they always have – trust matters, and it can only be built by organisations using people’s personal information responsibly,” said information commissioner John Edwards at the launch of the strategy. “Public trust is not threatened by new technologies themselves, but by reckless applications of these technologies outside of the necessary guardrails.”

The strategy also outlined how – because “we consistently see public concern” around transparency and explainability, bias and discrimination, and rights and redress – these are areas where the regulator will focus its efforts.

On AI models, for example, the ICO said it will “secure assurances” from developers around how they are using people’s personal information so people are aware, while for police facial recognition, it said it will publish guidance clarifying how it can be deployed lawfully.

Police facial recognition systems will also be audited, with the findings published to assure people that the systems are being well governed and their rights are being protected.

Artificial intelligence is more than just a technology change – it’s a change in society. But AI must work for everyone … and that involves putting fairness, openness and inclusion into the underpinnings
Dawn Butler, AI All-Party Parliamentary Group

“Artificial intelligence is more than just a technology change – it’s a change in society. It will increasingly change how we get healthcare, attend school, travel and even experience democracy,” said Dawn Butler, vice-chair of the AI All-Party Parliamentary Group (APPG), at the strategy’s launch. “But AI must work for everyone, not just some people, to change things. And that involves putting fairness, openness and inclusion into the underpinnings.”

Lord Clement-Jones, co-chair of the AI APPG, added: “The AI revolution must be built on trust. Privacy, transparency and accountability are not impediments to innovation – they are its foundation. AI is advancing rapidly, transitioning from generative models to autonomous systems. However, increased speed introduces complexity. Complexity entails risk. We must ensure that innovation does not compromise public trust, individual rights, or democratic principles.”

According to the ICO, negative perceptions of how AI and biometrics are deployed risk hampering their uptake, with the regulator noting that trust in particular is needed for people to support or engage with these technologies, or the services powered by them.

It noted that public concerns are particularly high when it comes to police biometrics, the use of automated algorithms by recruiters, and the use of AI to determine people’s eligibility for welfare benefits.

“In 2024, just 8% of UK organisations reported using AI decision-making tools when processing personal information, and 7% reported using facial or biometric recognition. Both were up only marginally from the previous year,” said the regulator.

“Our aim is to empower organisations to use these complex and evolving AI and biometric technologies in line with data protection law. This means people are protected and have increased trust and confidence in how organisations are using these technologies.

“However, we will not hesitate to use our formal powers to safeguard people’s rights if organisations are using personal information recklessly or seeking to avoid their responsibilities. By intervening proportionately, we will create a fairer playing field for compliant organisations and ensure robust protections for people.”

In late May 2025, an analysis by the Ada Lovelace Institute found that “significant gaps and fragmentation” in the existing “patchwork” of governance frameworks for biometric surveillance technologies mean people’s rights are not being adequately protected.

While the Ada Lovelace Institute’s analysis focused primarily on deficiencies in UK policing’s use of live facial recognition (LFR) technology – which it identified as the most prominent and most highly governed biometric surveillance use case – it noted there is a need for legal clarity and effective governance for “biometric mass surveillance technologies” across the board.

This includes other forms of biometrics, such as fingerprints for cashless payments in schools, or systems that claim to remotely infer people’s emotions or truthfulness, as well as other deployment scenarios, such as when supermarkets use LFR to identify shoplifters, or verification systems to confirm people’s ages for alcohol purchases.

Both Parliament and civil society have made repeated calls for new legal frameworks to govern UK law enforcement’s use of biometrics.

These include three separate inquiries by the Lords Justice and Home Affairs Committee (JHAC) into shoplifting, police algorithms and police facial recognition; two of the UK’s former biometrics commissioners, Paul Wiles and Fraser Sampson; an independent legal review by Matthew Ryder QC; the UK’s Equality and Human Rights Commission; and the House of Commons Science and Technology Committee, which called for a moratorium on LFR as far back as July 2019.

However, while most of these focused purely on police biometrics, the Ryder review in particular also took into account private sector uses of biometric data and technologies, such as in public-private partnerships and for workplace monitoring.