Technology

Hungry for data: Inside Europol's secretive AI programme


It is described by critics as a data grab and a surveillance creep strategy. Europol calls it Strategic Goal 1: to become the European Union's (EU) "criminal information hub" by means of mass data acquisitions.

Europol officials have rarely hidden their appetite for gaining access to as much data as possible. At the core of Europol's hunger for personal data lies a growing artificial intelligence (AI) ambition.

The ambition is stated openly in the agency's 2024 to 2026 strategy document: absorb data on EU citizens and migrants from European law enforcement databases, and analyse the mass of data using AI and machine learning (ML).

Since 2021, the Hague-based EU law enforcement agency has embarked on an increasingly ambitious, yet largely secretive, mission to develop automated models that will affect how policing is carried out across Europe.

Based on internal documents obtained from Europol and analysed by data protection and AI experts, this investigation raises serious questions about the implications of the agency's AI programme for people's privacy. It also raises questions about the impact of integrating automated technologies into everyday policing across Europe without adequate oversight.

Responding to questions about the story, Europol said it maintains "good contacts" with a range of actors, and that it maintains "an impartial position" towards each of them to fulfil its mandate of supporting national authorities in combating serious crime and terrorism: "Europol's strategy 'delivering security in partnership' sets out that Europol will be at the forefront of law enforcement innovation and research."

Europol added that its "approach of cooperation is guided by the principle of transparency".

Since 2021, Europol has embarked on an increasingly ambitious, yet largely secretive, mission to develop automated models that will affect how policing is done across Europe

Mass data leads to new opportunities

Europol's critical role in three mega-hack operations that dismantled the encrypted communication systems EncroChat, SkyECC and Anom in 2020 and 2021 landed the agency with enormous volumes of data.

During these cross-border operations, Europol mainly served as a data transit hub, playing intermediary between the policing authorities that had obtained the data and those who needed the data to pursue criminals in their jurisdictions.

But Europol went further. Instead of limiting itself to a mediating role, it discreetly copied the datasets from the three operations into its own repositories and tasked its analysts with investigating the material.

The impossibility of the task – EncroChat involving more than 60 million message exchanges and Anom more than 27 million – accentuated the agency's interest in training AI tools to help expedite the analysts' work. Their motive was clear: to prevent criminals escaping and lives being lost.

In September 2021, 10 inspectors from the European Data Protection Supervisor (EDPS) parachuted into Europol's Hague headquarters to inspect what was set to be Europol's first attempt to train its own algorithms, which the agency planned to do with data harvested from EncroChat.

According to an EDPS document, Europol's goal was to "develop seven machine learning models … that can be run once over the whole Encrochat dataset" to help analysts reduce the volume of messages they had to check.

However, the development of these models was paused after the EDPS initiated a consultation procedure to review the agency's proposed data processing operations.

While Europol initially resisted the consultation process – arguing that its use of machine learning did "not amount to a new type of processing operation presenting specific risks for the fundamental rights and freedoms" – the EDPS unilaterally started the consultation procedure.

This compelled the agency to supply a set of internal documents, including copies of its data protection impact assessments (DPIAs). The subsequent inspection brought to light a serious disregard for safeguards and highlighted the shortcuts Europol was willing to take in developing its own AI models.

For example, according to an EDPS document, a "flash inspection" in September 2021 found that almost no documentation for monitoring the training had been "drafted during the period in which the models were developed".

EDPS inspectors also noticed that "all the documents the EDPS received concerning the development of models were drafted after the development stopped (due to the prior consultation sent to the EDPS) and only reflect partially the data and AI unit's developing practices". The EDPS also mentioned that "risks related to the bias in the training and use of ML models or to statistical accuracy were not considered".

For Europol's analysts, though, there seemed to be no reason to worry, as they considered the risk of the machines wrongly implicating an individual in a criminal investigation to be minimal.

Europol's empire-building masterminds saw a new opportunity in the potential use of AI for scanning every EU citizen's digital communication device

While models developed during this time were never deployed for operational purposes, as the agency's legal framework did not provide a mandate for developing and deploying AI for criminal investigations, this changed when Europol's mandate was expanded in June 2022.

By then, the agenda on AI and the importance of access to data had shifted to the issue of online child sexual abuse material (CSAM). Thrust to the forefront of the political agenda by the European Commission's proposal a month earlier to introduce so-called client-side scanning algorithms for the detection of abusive material, it had sparked a polarising debate about the threat of breaking end-to-end encryption and the dangers of mass surveillance.

Europol's empire-building masterminds saw a new opportunity in the potential use of AI for scanning every EU citizen's digital communication device.

Doubling down during a meeting with a commission home affairs director, Europol proposed that the technology could be repurposed to look for non-CSAM content.

The agency's message was clear. "All data is useful and should be passed to law enforcement," according to minutes of a 2022 meeting between Europol and a senior official from the European Commission's directorate-general for home affairs. "Quality data was needed to train algorithms," the Europol official said.

Europol officials also urged the commission to ensure that all European law enforcement bodies "can use AI tools for investigations" and avoid limitations placed in the AI Act, the EU's then-forthcoming regulation to limit intrusive and harmful algorithm use.

In response to this investigation, Europol claimed that its operational use of personal data is subject to tight supervision and control, and that under the Europol regulation, documents shared with the EDPS for oversight purposes should not be shared publicly.

"Accordingly, Europol considers that the DPIA documentation is not subject to general public disclosure, including given that the knowledge of the specifics set out in a DPIA can place crime actors in an advantageous situation over public security interest."

In bed with private agendas

Europol's concerns about the restrictive regime set by the AI Act echoed those of major AI players. The shared interests of Europol and private actors in developing AI models are hardly a secret. On the contrary, it is often mentioned in the agency's documents that maintaining close contact with AI developers is considered of strategic importance.

One important point of contact has been Thorn, a US non-profit developer of an AI-powered CSAM classifier that can be deployed by law enforcement agencies to detect new and unknown abuse images and videos.

Since 2022, Thorn has been at the forefront of an advocacy campaign in favour of the CSAM proposal in Brussels, pushing to mandate the compulsory use of AI classifiers by all companies offering digital communication services in the EU.

What is less known, however, is the close contact and coordination between Thorn and Europol. A cache of email exchanges between the organisation and Europol between September 2022 and May 2025, obtained via a series of Freedom of Information (FOI) requests, lays bare how Europol's plan to develop a classifier has been closely tied to the organisation's advice.

In April 2022, in anticipation of Europol's expanded mandate entering into force, which would allow the agency to exchange operational data directly with private entities, a Europol official emailed Thorn "to explore the possibility for the Europol staff working on CSE area … to get access" for a purpose that remains redacted.

Thorn responded by sharing a document and advised Europol that further information was needed to proceed. However, that document has not been disclosed, while details of the information needed are redacted. "I have to stress out this document is confidential and not for redistribution," Thorn's email concluded.

Five months later, Europol contacted Thorn for help in accessing classifiers developed in a project the non-profit had taken part in, so the agency could evaluate them.

According to machine learning expert Nuno Moniz, the exchanges raise serious questions about the relationship between the two actors. "They're discussing best practices, anticipating exchange of data and resources, essentially treating Thorn as a law enforcement partner with privileged access," said Moniz, who is also an associate research professor at the Lucy Family Institute for Data & Society at the University of Notre Dame.

Udhav Tiwari, vice-president for strategy and global affairs at Signal, said: "These interactions point towards a potentially dangerous nexus of conflicted interests that could circumvent important democratic safeguards designed to protect civil liberties."

The intimate collaboration between Europol and Thorn has continued ever since, with a planned "catchup over lunch" in one instance, and another of Thorn presenting its CSAM classifier at Europol's headquarters.

In the most recent exchanges obtained by this investigation, an email exchange from May 2025 shows Thorn discussing its rebranded CSAM classifier with the agency.

Although much of Europol's correspondence with Thorn remains heavily redacted, some emails have been withheld in full, in disregard of the European Ombudsman's call to provide wider access to the exchanges, prompted by complaints filed in the course of this investigation.

Europol claims that some of the undisclosed documents "contain strategic information of operational relevance regarding Europol's working methods in relation to the use of image classifiers, whereby specific such classifiers are mentioned concretely and have been the subject of internal deliberations but also external discussions with Thorn".

In response to this investigation, a Thorn spokesperson said: "Given the nature and sensitivity of our work to protect children from sexual abuse and exploitation, Thorn does not comment on interactions with specific law enforcement agencies. As is true for all of our collaborations, we operate in full compliance with applicable laws and uphold the highest standards of data protection and ethical accountability."

Europol told this investigation that "so far, not a single AI model from Thorn has been considered for use by Europol", and hence, "there is no collaboration with developers of Thorn for AI models in use, or intended to be made use of by Europol".

It added that the consultation process of the EDPS "requires significant time and resources before deployment", and that "any output generated by AI tools is subject to human expert control before being used in analysis or other support activities".

Patchy scrutiny 

It is not only Europol's deliberations with Thorn that remain opaque. The agency has doggedly refused to disclose a range of critical documents relating to its AI programme, from data protection impact assessments and model cards to minutes from board meetings.

Disclosed documents often remain heavily redacted on questionable legal grounds. In many instances, Europol has flouted statutory deadlines for responding to requests by weeks.

Europol has doggedly refused to disclose a range of critical documents relating to its AI programme: from data protection impact assessments and model cards, to minutes from meetings of its management board

Typically, the agency has cited "public security" and "internal decision-making" exemptions to justify withholding information. The European Ombudsman, however, has repeatedly questioned the vagueness of those claims in preliminary findings, noting that Europol has failed to explain how disclosure would concretely endanger its operations.

Five transparency complaints filed by this investigation are currently pending before the European Ombudsman.

But Europol's apparent aversion to transparency is but one facet of a failing accountability architecture that is, on paper, meant to ensure that all of Europol's activities, including the roll-out of AI tools, comply with fundamental rights obligations.

Inside Europol, that task falls primarily on the shoulders of the agency's fundamental rights officer (FRO), an internal watchdog position introduced with Europol's 2022 mandate to appease concerns that its massive expansion of powers did not have strong enough guardrails.

Put in place in 2023, the position has not addressed concerns about a lack of robust oversight.

"Europol's fundamental rights officer does not function as an effective safeguard against the risks posed by the agency's growing use of digital technologies. The role is institutionally weak, lacking internal enforcement powers to ensure that its recommendations are followed," said Bárbara Simão, an accountability expert at Article 19, a London-based international human rights organisation that tracks the impact of surveillance and AI technologies on freedom of expression. Simão reviewed several FRO "non-binding" assessments of Europol's AI tools obtained by this investigation.

"To fulfil its role as an internal oversight mechanism, it must move beyond a symbolic function, properly scrutinise the technologies being deployed and be given real authority to uphold fundamental rights," she added.

Many of the non-binding reports issued by the FRO contain a copy-and-pasted admission that this capacity to robustly review Europol's AI tools is not in place.

"At this moment, no tools exist for the fundamental rights assessment of tools using artificial intelligence. The assessment methodology the FRO uses is inspired by a document edited by the Strategic Group on Ethics and Technology, and on a method to deal with dilemmas," the reports noted.

External oversight does not appear much stronger. The principal mechanism – the so-called Joint Parliamentary Scrutiny Group (JPSG), which brings together national and European parliamentarians to monitor Europol's activities – is a body that can ask questions and request documents, but has no enforcement powers.

Ironically, Europol, responding to the European Ombudsman's inquiries about the agency's questionable transparency practices, claims that its "legitimacy and accountability" is "already largely and essentially being fulfilled by the statutory democratic scrutiny carried out by the European Parliament together with national parliaments through the Joint Parliamentary Scrutiny Group (JPSG)".

Europol's fundamental rights officer does not function as an effective safeguard against the risks posed by the agency's growing use of digital technologies. The role is institutionally weak, lacking internal enforcement powers to ensure that its recommendations are followed
Bárbara Simão, Article 19

It is left to the EDPS to scrutinise the agency's hasty expansion with limited resources and an inadequate data protection-focused mandate, which does not fully capture the range of human rights harms presented by Europol's AI efforts.

‘Severe consequences’

By summer 2023, developing its own CSAM classifier was a top priority for Europol's AI programme. A two-page advisory document issued by the agency's FRO indicates the goal was to develop "a tool that uses artificial intelligence (AI) to classify automatically alleged child sexual abuse (CSE) [child sexual exploitation] pictures and video".

In just four lines, Europol FRO Dirk Allaerts addressed the issue of bias, indicating that a balanced data mix in age, gender and race was critical "to limit the risk the tool will recognise CSE only for specific races or genders".

The development phase would take place in a controlled environment to further limit any risks of data protection violations. To train the tool, the project would use both CSE and non-CSE material. While it is unclear how Europol would obtain the non-CSE material necessary for training the algorithm, the CSE material would largely be provided by the National Center for Missing and Exploited Children (NCMEC), a US-based non-profit closely aligned with the federal government and its law enforcement agencies.

Although Europol had already put plans to train a classifier on the backburner by late 2023, data delivered from NCMEC was ingested into the agency's first in-house AI model, deployed in October 2023.

Named EU Cares, the model is tasked with automatically downloading CSE material from NCMEC, cross-checking it against Europol's repositories, and then disseminating the data in near real time to member state law enforcement authorities.

The volume of material ingested – mainly from US-based digital giants like Meta, which are obliged to report any potential CSAM to NCMEC – became so large that the manual processing and dissemination Europol carried out before deploying AI was no longer possible.

Europol's own assessment of the system had identified risks of "incorrect data reported by NCMEC" and "incorrect cross-match reports" that would wrongfully identify people as "a distributor or owner" of CSAM.

Nonetheless, according to the EDPS, the agency "failed to fully assess the risks" linked to its automation of these processes.

In an EDPS opinion obtained via FOI requests by this investigation, the data protection watchdog underlined the "severe consequences" that data inaccuracies could cause.

It asked that the agency implement additional mitigation measures to address errors that can occur by automating the process. In response, Europol committed to marking suspect data as "unconfirmed", adding "enhanced" trigger alerts for anomalies, and improving its system for removing retracted referrals. Among other measures, the agency said these steps would address the EDPS's concerns about accuracy and cross-match errors.

In February 2025, the agency's executive director, Catherine De Bolle, said EU Cares "had delivered 780 thousand referrals in total with enrichment packages until January 2025". The question remains: how many of those are false-positive or redundant leads? The German federal agency, which receives reports directly from NCMEC without using Europol's system, told this investigation that out of 205,728 reports received in 2024, 99,375 (48.3%) were not "relevant under criminal law".

The next frontier: facial recognition

Even as the EU's privacy regulators pressed for safeguards on EU Cares, Europol was expanding automation into another sensitive domain: facial recognition.

Since 2016, the agency has tested and purchased several commercial tools. Its latest acquisition, NeoFace Watch (NFW) from Japanese software firm NEC, was meant to eventually replace or complement an earlier in-house system known as Face, which could already access about one million facial images by mid-2020.

Heavily redacted correspondence shows that by May 2023, Europol was discussing the use of NeoFace Watch. When it later submitted the new programme for review, the EDPS warned of the "risk of lower accuracy processing for the faces of minors (as a form of bias)" and "of incoherent processing" if old and new systems (such as the existing Face and the NeoFace Watch) run in parallel.

After the consultation, Europol decided to exclude the data of minors under the age of 12 from being processed, as a precaution.

The watchdog asked Europol to run a six-month pilot to determine an acceptable accuracy threshold and minimise false positives.

Europol's submission to the EDPS included references to two studies by the National Institute of Standards and Technology (NIST), a US government body. While the studies were meant to support Europol's choice of NeoFace Watch as its new go-to system, NIST specified in one report that it did not use "wild images" sourced "from the internet nor from video surveillance", which are the kinds of sources Europol would use.

In a related report, NIST evaluations of NEC's algorithm documented that using photographs taken in poor light conditions resulted in an identification error rate of up to 38%.

Nevertheless, Europol signed a contract with NEC in October 2024. The agency confirmed to this investigation that the software is used within a specific CSE unit by trained expert staff.

Similar deployments of NeoFace Watch in the UK have faced legal challenges over bias and privacy.

In a non-binding advisory opinion in November 2024, Europol's FRO described the system as one that "raises risks of false positives that can harm the right of defence or of fair trial". The system is considered high risk under the new EU AI Act. Nonetheless, the FRO cleared it for use, merely urging the agency to acknowledge when the tool is used in cross-border investigations to "increase transparency and accountability, key to keep the trust of the public".

The EDPS told this investigation that it is currently preparing an inspection report and cannot communicate further at this stage about Europol's use of NEC software.

NEC also told this investigation that NeoFace Watch was ranked as "the world's most accurate solution" in NIST's most recent testing round. It added that its product "has undergone extensive independent testing by the National Physical Laboratory (NPL) and was found to have zero false-positive identifications when used live in typical operational conditions".

High accuracy figures alone do not make facial recognition safe or address the legal and rights concerns documented in this case. Experts including Luc Rocher, an associate professor at the Oxford Internet Institute, have demonstrated that facial recognition evaluation methodologies still fail to fully capture real-world performance, where factors like image quality, population scale and demographic diversity cause accuracy to degrade significantly, particularly for racial minorities and young people.

Simão, the Article 19 expert, noted that emphasising technical performance "tends to downplay risks associated with facial recognition technologies", including the bias towards minors flagged by the EDPS and threats to fair trial rights identified by Europol's own watchdog.

The bigger picture

A binding internal roadmap drawn up by Europol in 2023 outlines the true scale of Europol's ambition: 25 potential AI models, ranging from object detection and image geolocation to deep-fake identification and biometric personal feature extraction. The vision would place the agency at the centre of automated policing in the EU, as tools deployed by Europol could be used by virtually all law enforcement bodies across the bloc.

In February 2025, Europol's De Bolle told European lawmakers that the agency had submitted 10 DPIAs to the EDPS – seven were updates for models already being developed and three were for new ones.

Members of the JPSG asked Europol to provide a detailed report on its AI programme. When the agency delivered, it sent lawmakers a four-page paper with generic descriptions of its internal vetting processes, without any substantive information on the AI systems themselves.

Green MEP Saskia Bricmont, part of the JPSG, told this investigation that because the AI being developed by Europol "can entail very strong risks and consequences for fundamental rights", strong and effective supervision must be ensured.

"In spite of the information provided, it remains very complex for MEPs to fulfil their monitoring task and fully assess the risks associated with the use of AI-based systems by the agency," said Bricmont.

At the same time, the European Commission is preparing to present a new, comprehensive reform to turn Europol into "a truly operational agency".

The precise meaning of this transformation remains unclear. However, the European Commission has proposed doubling Europol's budget for the next financial period to €3bn in taxpayers' money.


This investigation was supported by IJ4EU and Lighthouse Reports.