Technology

GDPR’s seventh anniversary: in the AI age, privacy law is still relevant


It’s been a little more than seven years since GDPR came into force – a key moment that reshaped how organisations approach data protection. We’ve seen UK businesses of all sizes refine their data strategies, audit their processes, and think more carefully about the impact of every bit and byte they collect.

Just as we were getting comfortable with the post-GDPR landscape, AI has entered the frame. Much like GDPR did in 2018, AI is pushing businesses to rethink how they manage data. The stakes are arguably higher now – not just because of the greater volume of data businesses typically handle, but because the risks and opportunities tied to AI are moving at breakneck speed.

GDPR in practice

In the years since GDPR’s implementation, the shift from reactive compliance to proactive data governance has been noticeable. Data protection has evolved from a legal formality into a strategic imperative – a topic discussed not just in legal departments but in boardrooms. High-profile fines against tech giants have reinforced the idea that data privacy isn’t optional, and compliance isn’t just a checkbox.

That progress should be acknowledged – and even celebrated – but we also need to be honest about where gaps remain. Too often GDPR is still treated as a one-off exercise or a hurdle to clear, rather than a continuous, embedded business process. This short-sighted view not only exposes organisations to compliance risks, it causes them to miss the real opportunity: regulation as an enabler.

When understood and applied correctly, GDPR offers more than a legal framework – it provides a clear, structured way to manage data responsibly, improve operational hygiene, and build trust with customers and partners. In other words, strong data governance is not a drag on innovation – it’s what makes innovation sustainable.

AI: innovation, but with new questions around risk

Enter AI. Businesses clearly recognise its huge benefits and potential. According to Cisco’s 2024 AI Readiness Index, 95% of businesses said they have a highly defined AI strategy in place or are in the process of developing one, with 50% allocating as much as 10-30% of their IT budget towards AI.

However, 65% of IT teams say they don’t fully understand the privacy implications of using AI, and only 11% trust it enough to handle mission-critical workloads (according to Splunk’s 2025 State of Security report). That tells us something important: AI may be moving forward apace, but governance around it is still catching up.

We’re seeing new risks emerge. The transparency issue, for example, feels particularly urgent: think of black-box models used in areas like fraud detection or credit scoring – if you can’t audit how a decision was made, you can’t explain it to a regulator or justify it to a customer.

The compliance questions you should be asking

As organisations embed AI deeper into their operations, it’s time to ask the tough questions: what kind of data are we feeding into AI, who has access to AI outputs, and, if there’s a breach, what processes do we have in place to respond quickly and meet GDPR’s reporting timelines?

Despite the urgency, there’s still a glaring gap: many organisations don’t have a formal AI policy in place, which exposes them to privacy and compliance risks that could have serious consequences – especially when data loss prevention is a top priority for businesses.

The good news? We don’t have to start from scratch. GDPR already gives us a framework for assessing AI tools – think data minimisation, purpose limitation, and privacy-by-design principles. That means collecting only the data you need, using it for specific, legitimate purposes, and embedding privacy into every stage of AI development. Apply these principles to AI and you’ve got a strong foundation to build on. These aren’t just legal safeguards – they’re the building blocks of ethical AI.
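To make the idea concrete, here is a minimal sketch of how data minimisation and purpose limitation might be enforced in code before records reach an AI tool. The field names, purposes, and allowlist below are hypothetical examples, not taken from any regulation or product:

```python
# Hypothetical registry mapping each declared processing purpose to the
# only fields permitted for it (purpose limitation).
ALLOWED_FIELDS = {
    "fraud_detection": {"transaction_id", "amount", "merchant_category"},
    "support_chat": {"ticket_id", "product"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Drop every field not registered for the declared purpose,
    so only the minimum necessary data is passed to the model."""
    if purpose not in ALLOWED_FIELDS:
        raise ValueError(f"No registered purpose: {purpose!r}")
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "transaction_id": "t-123",
    "amount": 42.50,
    "merchant_category": "grocery",
    "customer_name": "Jane Doe",  # personal data not needed for this purpose
}
print(minimise(record, "fraud_detection"))
# customer_name never reaches the fraud model
```

Putting the filter at the boundary, rather than trusting each downstream system to ignore extra fields, is one way of building privacy in by design rather than bolting it on afterwards.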

GDPR has laid the groundwork

As we reflect on GDPR’s seven-year journey, one thing is clear: its relevance hasn’t diminished. In fact, the arrival of AI has made its principles more essential than ever. The regulatory frameworks of yesterday are still fit for purpose – but only if they’re applied with the same agility and ambition that new technologies demand.

The challenge ahead isn’t merely about following rules – it’s about demonstrating leadership. The future of data governance won’t be dictated by regulators alone. It will be defined by businesses bold enough to align innovation with integrity, and fast enough to turn compliance into competitive advantage.

GDPR laid the groundwork. Now, in the age of AI, the opportunity is to build something even stronger on top of it.

James Hodge is GVP & Chief Strategy Advisor, EMEA, at Splunk