Technology

Lack of resources biggest hurdle for regulating AI, MPs told


Closer cooperation between regulators and increased funding are needed for the UK to deal effectively with the human rights harms associated with the proliferation of artificial intelligence (AI) systems.

On 4 February 2026, the Joint Committee on Human Rights met to discuss whether the UK’s regulators have the resources, expertise and powers to ensure that human rights are protected from new and emerging harms caused by AI.

While there are at least 13 regulators in the UK with remits relating to AI, there is no single regulator dedicated to regulating AI.

The government has said that AI should be regulated through the UK’s existing framework, but witnesses from the Equality and Human Rights Commission (EHRC), the Information Commissioner’s Office (ICO) and Ofcom warned MPs and Lords that the current disconnected approach risks falling behind fast-moving AI without stronger coordination and resourcing.

Mary-Ann Stephenson, chair of the EHRC, stressed that resources were the biggest hurdle in regulating the technology. “There is a great deal more that we would want to do in this area if we had more resources,” she said.

Highlighting how the EHRC’s budget has remained frozen at £17.1m since 2012, which was then the minimum amount required for the commission to perform its statutory functions, Stephenson told MPs and Lords that this amounts to a 35% cut in real terms.

Regulators told the committee that the legal framework is largely in place to deal with AI-related discrimination and rights harms through the Equality Act.

The constraint is therefore one of capacity and resources, not a lack of statutory powers. As a result, much of the enforcement is reactive rather than proactive.

Stephenson said: “The first thing the government should do is ensure that existing regulators are sufficiently funded, and funded to be able to work together so that we can respond swiftly when gaps are identified.”

Andrew Breeze, director for online safety technology policy at Ofcom, stressed that regulation could not keep pace with rapid AI development.

However, regulators also stressed that they are technology-neutral; their powers with regard to AI are limited to the use case and deployment stage. Ofcom, the ICO and the EHRC have no power to refuse or give prior approval to new AI products.

The committee itself expressed a strong interest in having a dedicated AI regulator. Labour peer Baroness Chakrabarti compared AI regulation to the pharmaceutical industry.

“Big business, lots of jobs, capable of doing huge good for so many people, but equally capable of doing a lot of damage,” she said. “We would not dream of not having a specialist medicines regulator in this country or any developed country, even though there may be privacy issues and fundamental human rights issues.”

Regulators were in favour of a coordinating body to bring stronger cross-regulator mechanisms, rather than a single super-regulator. They stressed that because AI is a general-purpose technology, regulation works best when handled by sector regulators that cover specific domains.

Forms of coordination are already in place, such as the Digital Regulation Cooperation Forum (DRCF), formed in July 2020 to strengthen the working relationship between four regulators.

It has created cross-regulatory teams to share information and develop collective views on digital issues, including algorithmic processing, design frameworks, digital advertising technologies and end-to-end encryption.

The then-outgoing information commissioner, Elizabeth Denham, told MPs and peers that information-sharing gateways between regulators and the ability to perform mandatory audits “would ensure that technology companies, some the size of nation-states, are not forum shopping or running one regulator against another”.

Spread of misinformation

Breeze made the case for greater international regulatory cooperation with regard to disinformation produced by AI.

Ofcom clarified that, under the UK’s Online Safety Act, it does not have the power to regulate the spread of misinformation on social media.

“Parliament explicitly decided at the time the Online Safety Bill was passed not to cover content that was harmful but legal, except to the extent that it harms children,” said Breeze.

While misinformation and disinformation regulation is largely absent from UK law, it is present in the European Union’s counterpart to the Online Safety Act, the Digital Services Act.

Because of the cross-border nature of large tech companies, Breeze noted that legal action on discrimination can sometimes be taken using European legislation.

Age regulation and the Online Safety Act

Regulators also addressed scepticism about age assurance safeguards in the context of the proposed social media ban for under-16s and restricting access to online pornography.

Breeze said age assurance represented a trade-off for regulators between child protection and ensuring a high degree of online privacy.

Responding to criticism that the Online Safety Act has been ineffective due to the widespread use of virtual private networks (VPNs), Breeze said: “Checks are about ensuring as many young people as possible are protected from seeing material deemed harmful to them … and there’s no impregnable defence that you can create on the internet against a determined individual, adult or child.”

He said that, according to the evidence, the majority of children who report seeing harmful content were usually not looking for it.

The same committee heard in November 2025 that the UK government’s deregulatory approach to artificial intelligence would fail to deal with the technology’s highly scalable human rights harms, and could lead to further public disenfranchisement.

Big Brother Watch director Silkie Carlo highlighted that the government’s “very optimistic and commercial-focused outlook on AI” and the Data Use and Access Act (DUAA) have “decimated people’s protections against automated decision-making”.

Carlo added that there is real potential for AI-enabled mass surveillance to “spiral out of control”, and that a system built for one purpose could easily be deployed for another “in the blink of an eye”.