Technology

The UK’s Online Safety Act explained: what you need to know


The UK’s Online Safety Act became law in October 2023 with the aim of improving online safety for all internet users, particularly children, by placing obligations on service providers that either host user-generated content or provide search engine functionality.

Under their new obligations, more than 100,000 companies – including social media platforms, online forums, messaging services and video-sharing sites – are required to proactively prevent their users from seeing illegal or harmful content. This includes assessing the risks of such content appearing on their platforms, implementing “robust” age restrictions on access to certain content, and quickly removing offending content when it does appear.

Failure to comply with the OSA’s measures can result in significant penalties for service providers. Online harms regulator Ofcom, for example, has the power to impose substantial fines (10% of a company’s global turnover or £18m, whichever is higher), and can require payment providers or advertisers to stop working with an offending platform.

Senior managers at online platforms can also face criminal liability for failing to comply with Ofcom’s information requests, or for not ensuring their company adheres to its child safety duties, while the regulator itself can also conduct audits and direct companies to take specific steps to improve their online safety measures.

How Ofcom would regulate the act was set out in its December 2024 Illegal Harms Codes and Guidance, which came into effect and became enforceable on 17 March 2025. Under the codes, Ofcom expects any internet services that children can access (including social media networks and search engines) to carry out robust age checks, to configure their algorithms to filter the most harmful content out of children’s feeds, and to implement content moderation processes that ensure swift action is taken against such content.
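The codes describe outcomes rather than an implementation, but the idea of configuring a recommendation algorithm around a child’s feed can be sketched. Everything below – the Post type, the harm_score field and the thresholds – is hypothetical, not anything the codes prescribe:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    harm_score: float  # hypothetical 0-1 score from a moderation classifier

def build_feed(posts: list[Post], user_is_child: bool) -> list[Post]:
    # Sketch of the kind of configuration the codes describe: the same
    # ranking pipeline applies a much stricter filter when the account
    # belongs to (or may belong to) a child. Thresholds are illustrative.
    limit = 0.3 if user_is_child else 0.9
    return [p for p in posts if p.harm_score < limit]

if __name__ == "__main__":
    posts = [Post("cat video", 0.01), Post("graphic violence", 0.95), Post("diet content", 0.4)]
    print([p.text for p in build_feed(posts, user_is_child=True)])   # ['cat video']
    print([p.text for p in build_feed(posts, user_is_child=False)])  # ['cat video', 'diet content']
```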

However, since its inception, the OSA has faced various criticisms, including over its vague and overly broad definitions of what constitutes “harmful content”, and the threat it poses to encrypted communications.

There has also been extensive debate about whether the OSA is effective in practice, particularly since age verification measures went live in late July 2025 that require platforms to verify users’ ages before they can access certain content or sites, and in the wake of the 2024 Southport riots, where online misinformation played a key role in the spread of violence.

Age verification measures

Since 25 July 2025, online service providers have been required to put age checks in place to ensure children are unable to access pornography, or content related to self-harm, suicide or eating disorders that could be harmful to them.

The plans for “robust age checks” were outlined in Ofcom’s May 2024 draft online child safety rules, which contained more than 40 other measures tech firms would need to implement by 25 July to comply with their new legal obligations under the act.

While much of the media focus since the deadline has been on the age-gating of porn sites, the change has also affected social media firms, dating apps, live streamers and some gaming companies.

The methods these services can use to assure people’s ages vary, and can include facial age estimation technologies, open banking, photo-ID matching, digital identity services or credit card checks. However, since the age gate deadline on 25 July, online searches for virtual private networks (VPNs) – which encrypt a user’s connection to the internet, allowing them to bypass the OSA’s measures – have skyrocketed, with Proton alone reporting a 1,800% spike in daily sign-ups for its VPN service in the UK, and VPN apps topping the Apple App Store’s download charts.
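For illustration, a minimal sketch of how a platform might dispatch users to one of these methods. The act names methods, not vendors or APIs, so every function and return value here is a hypothetical stand-in:

```python
from typing import Callable

# Placeholder checks standing in for third-party age assurance providers.
def facial_age_estimation(user_id: str) -> bool:
    return True  # stand-in: camera-based model estimates the user is 18+

def photo_id_match(user_id: str) -> bool:
    return True  # stand-in: selfie matched against a passport or driving licence

def credit_card_check(user_id: str) -> bool:
    return True  # stand-in: UK credit cards are only issued to adults

AGE_CHECKS: dict[str, Callable[[str], bool]] = {
    "facial": facial_age_estimation,
    "photo_id": photo_id_match,
    "card": credit_card_check,
}

def verify_age(user_id: str, method: str) -> bool:
    # Under the OSA the platform, not the user, picks the method - the
    # source of the complaint that users must hand sensitive data to
    # whichever third party the platform has chosen.
    return AGE_CHECKS[method](user_id)

if __name__ == "__main__":
    print(verify_age("user-123", "photo_id"))  # True
```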

The Age Verification Providers Association (AVPA), however, said that despite the surge in VPN use, there has also been a sharp increase in the number of age checks carried out in the UK since age gating was introduced, with an additional five million checks being performed every day.

As it stands, the OSA places no limits on age verification providers distributing, profiling or monetising the personal data of UK residents going through verification, although Ofcom notes on its website that it can refer providers to the data regulator if it believes an age verification company has not complied with data protection law.

Some internet users have expressed frustration that the choice of which age assurance technology to use lies solely with the platform, meaning that to access its services they must hand over their sensitive personal data to a third party. While these firms are subject to UK data protection law, it is unclear how the OSA’s age verification measures will interact with the Data (Use and Access) Act’s (DUAA) new “purpose limitation” rules, which make it easier to process data outside of its originally intended use.

The DUAA will also remove existing protections against automated decision-making (ADM) so that they only apply to decisions that either significantly affect individuals or involve special category data, and will introduce a list of “recognised legitimate interests” that organisations can use to process data without needing to conduct legitimacy assessments, covering purposes such as national security, crime prevention and safeguarding.

There are also concerns with the OSA that political content is being censored in the name of protecting children, with reports of Palestine-related content being placed behind age verification walls on X and Reddit. Other reported examples of legitimate speech being removed as a result of age-gating at scale include users being unable to access content related to Alcoholics Anonymous and other addiction support, medical cannabis, the war in Ukraine, and even images of historic art, such as Francisco de Goya’s 19th-century painting Saturn Devouring His Son.

Some civil society groups and academics have also expressed concern that Ofcom’s guidance on the OSA so far incentivises platforms to adopt a “bypass strategy”, whereby they are encouraged to moderate content in ways that are more restrictive than necessary in order to avoid potential fines. This approach could lead to the over-removal of legitimate speech while restricting users’ freedom of expression.

Breaking encryption

Apart from age verification, the most controversial aspect of the act is the power it gives Ofcom to require tech firms to install “accredited technology” to monitor encrypted communications for illegal content. In essence, this would mean tech companies using software to bulk-scan messages on encrypted services (such as WhatsApp, Signal and Element) before they are encrypted, otherwise known as client-side scanning (CSS).
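The mechanics can be sketched in a few lines. This is an illustration of the flow only, with a hypothetical watch list and a placeholder in place of real encryption – actual proposals involve perceptual hashing and machine-learning classifiers rather than exact digests:

```python
import hashlib

# Hypothetical watch list of SHA-256 digests of known illegal files.
KNOWN_HASHES = {hashlib.sha256(b"example-prohibited-file").hexdigest()}

def client_side_scan(payload: bytes) -> bool:
    """Inspect the plaintext on the user's device, before encryption."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_HASHES

def send_message(payload: bytes) -> str:
    # The scan happens before encryption - which is why the 2021 paper
    # cited below calls CSS "bulk intercept": every message is checked
    # in the clear, even on an end-to-end encrypted service.
    if client_side_scan(payload):
        return "flagged: message withheld and reported to the provider"
    ciphertext = payload[::-1]  # placeholder only - NOT real encryption
    return f"sent {len(ciphertext)}-byte encrypted message"

if __name__ == "__main__":
    print(send_message(b"hello"))
    print(send_message(b"example-prohibited-file"))
```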

Implementing such measures would undermine the security and privacy of encrypted services by scanning the content of every message and email to check whether they contain illegal content. This has been repeatedly justified by the government as necessary for stopping the creation and spread of child sexual abuse material (CSAM), as well as violent crime and terrorism. Cryptographic experts, however, have repeatedly argued that measures mandating tech firms to proactively detect harmful content via client-side scanning should be abandoned.

A policy paper written in October 2022 by Ross Anderson, a Cambridge University professor of security engineering, and researcher Sam Gilbert, for example, argued that using artificial intelligence (AI)-based scanning to examine the content of messages would raise an unmanageable number of false alarms and prove “unworkable”. They further claimed the technology is “technically ineffective and impractical as a means of mitigating violent online extremism and child sexual abuse material”.

An earlier paper, from October 2021, by Anderson and 13 other cryptographic experts, including Bruce Schneier, argued that while client-side scanning “technically” allows for end-to-end encryption, “this is moot if the message has already been scanned for targeted content. In reality, CSS is bulk intercept, albeit automated and distributed.”

In September 2023, BCS, The Chartered Institute for IT, said the government’s proposals on end-to-end encryption were not possible without creating systemic security risks and, in effect, bugging millions of phone users.

It argued that the government was seeking to impose a technical solution to a problem that can only be solved by broader interventions from police, social workers and educators, noting that some 70% of BCS’s 70,000 members said they were not confident it is possible to have both truly secure encryption and the ability to check encrypted messages for criminal material.

The proposals also led to a backlash from encrypted messaging providers, including WhatsApp, Signal and Element, which threatened to withdraw their services from the UK if the bill became law.

As it stands, while Ofcom does have the power to compel companies to scan for child sexual abuse material in encrypted environments, it is still working on guidance for tech firms around how “accredited technologies” such as client-side scanning and hash-matching can be implemented to protect child safety online.
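Hash-matching is the more established of the two techniques. A minimal sketch of how a perceptual-hash comparison works, with illustrative 64-bit values and an arbitrary threshold:

```python
# Hypothetical perceptual hashes of known material (illustrative values).
HASH_LIST = {0x9F3B62A10C4DE875}

def hamming_distance(a: int, b: int) -> int:
    """Count the differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def matches_known_content(image_hash: int, threshold: int = 8) -> bool:
    # Perceptual hashes tolerate small edits (resizing, re-compression),
    # so a "match" means within a small bit distance of a listed hash,
    # rather than exact equality as with a cryptographic digest.
    return any(hamming_distance(image_hash, h) <= threshold for h in HASH_LIST)

if __name__ == "__main__":
    print(matches_known_content(0x9F3B62A10C4DE874))  # True: 1 bit differs
    print(matches_known_content(0x0123456789ABCDEF))  # False: far from the list
```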

There are currently no “accredited technologies” that Ofcom requires companies to use, with final guidance on the matter planned for publication in spring 2026.

Online disinformation persists

Although the bill eventually received royal assent in October 2023 – four-and-a-half years after the online harms whitepaper was published in April 2019 – its ability to deal with real-world disinformation is still an open question. In May 2025, for example, the government and Ofcom were still in disagreement over whether the act even covers misinformation.

As part of its inquiry into online misinformation and harmful algorithms, the Commons Science, Innovation and Technology Committee (SITC) published a report of its findings in July 2025, outlining how the OSA fails to deal with the algorithmic amplification of “legal but harmful” misinformation.

Highlighting the July 2024 Southport riots as an example of how “online activity can contribute to real-world violence”, the SITC warned that while many parts of the OSA were not fully in force at the time of the unrest, “we found little evidence that they would have made a difference if they were”.

It said this was the result of a combination of factors, including weak misinformation-related measures in the act itself, as well as the business models and opaque recommendation algorithms of social media firms.

“It’s clear that the Online Safety Act just isn’t up to scratch,” said SITC chair Chi Onwurah. “The government must go further to tackle the pervasive spread of misinformation that causes harm but does not cross the line into illegality.

“Social media companies are not just neutral platforms but actively curate what you see online, and they must be held accountable. To create a stronger online safety regime, we urge the government to adopt five principles as the foundation of future regulation.”

These principles include public safety, free and safe expression, responsibility (both for end users and the platforms themselves), control of personal data, and transparency.

Development hell

While controversies around certain aspects of the act are still ongoing, its journey to becoming legislation was also fraught with tension, running through many iterations after the UK government first published its Online Harms Whitepaper in April 2019.

Announcing the new measures, then-prime minister Theresa May argued these companies “have not done enough for too long” to protect their users, particularly young people, from “legal but harmful” content.

Although this was the world’s first framework designed to hold internet companies accountable for the safety of those using their services, and outlined proposals to place a statutory “duty of care” on internet companies to make them responsible for the safety of their users, it did not receive royal assent to become an act until October 2023.

While the government published an initial response to its whitepaper in February 2020, and a full response in December 2020 that provided more detail on the proposals, an initial draft of the bill did not materialise until May 2021.

At this stage, the draft bill contained a range of new measures, such as specific duties for “Category 1” companies – those with the largest online presence and high-risk features, likely to include Facebook, TikTok, Instagram and Twitter – to protect “democratically important” content, publish up-to-date assessments of their impact on freedom of expression, and new criminal liability for senior managers.

Further additions to the bill came in February 2022, when the government expanded the list of “priority offences” that tech companies must proactively prevent people from being exposed to. While terrorism and child sexual abuse were already included in the priority list, the government redrafted it to include revenge porn, hate crime, fraud, the sale of illegal drugs or weapons, the promotion or facilitation of suicide, people smuggling and sexual exploitation. As it stands, there are currently more than 130 priority offences defined in the act.

In November 2022, the “legal but harmful” aspect of the bill – which attracted strong criticism from Parliamentary committees, campaign groups and tech professionals alike – was dropped, meaning companies would no longer be obliged to remove or restrict legal content, or otherwise suspend users for posting or sharing that content. Instead, the measures around “legal but harmful” content were reduced to only apply to children.

However, controversy continued – in January 2023, the then-Conservative government attempted to amend the bill so that existing immigration offences would be incorporated into the list of “priority offences”, meaning tech companies could be forced to remove videos of people crossing the English Channel “which show that activity in a positive light”. “Illegal immigration” content is still included in the act’s list of priority offences.

Throughout this entire process, the bill attracted strong criticism. The Open Rights Group and other civil society organisations, for example, called for its complete overhaul in September 2022, on the basis that its measures threaten privacy and freedom of speech.

They specifically highlighted concerns around the act’s provisions to compel online companies to scan the content of users’ private messages, and the extensive executive powers granted to the secretary of state to define what constitutes lawful speech.