Swedish welfare authorities drop ‘discriminatory’ AI model
A “discriminatory” artificial intelligence (AI) model used by Sweden’s social security agency to flag people for benefit fraud investigations has been suspended, following an intervention by the country’s Data Protection Authority (IMY).
Started in June 2025, IMY’s involvement was prompted after a joint investigation by Lighthouse Reports and Svenska Dagbladet (SvB) revealed in November 2024 that a machine learning (ML) system used by Försäkringskassan, Sweden’s Social Insurance Agency, was disproportionately and wrongly flagging certain groups for further investigation over social benefits fraud.
This included women, individuals with “foreign” backgrounds, low-income earners and people without university degrees. The media outlets also found the same system was largely ineffective at identifying men and rich people who had actually committed some form of social security fraud.
These findings prompted Amnesty International to publicly call for the system’s immediate discontinuation in November 2024, describing it at the time as “dehumanising” and “akin to a witch hunt”.
Launched by Försäkringskassan in 2013, the ML-based system assigns risk scores to social security applicants, which then automatically trigger an investigation if the risk score is high enough.
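In outline, the flow the agency described amounts to a simple threshold check on a model score. The sketch below is purely illustrative, with hypothetical names, inputs and cut-off value, and does not reflect Försäkringskassan’s actual model or code.

    # Purely illustrative sketch of a risk-score threshold flow.
    # All names, features and the cut-off value are hypothetical; this is not
    # Försäkringskassan's actual model or code.
    from dataclasses import dataclass, field

    @dataclass
    class Application:
        applicant_id: str
        features: dict = field(default_factory=dict)  # hypothetical model inputs

    FLAG_THRESHOLD = 0.8  # hypothetical cut-off

    def risk_score(application: Application) -> float:
        """Stand-in for a trained ML model returning a score in [0, 1]."""
        return 0.0  # stub; a real system would run model inference here

    def should_investigate(application: Application) -> bool:
        # Applications scoring at or above the threshold are routed to investigators
        return risk_score(application) >= FLAG_THRESHOLD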
According to a blog published by IMY on 18 November 2025, Försäkringskassan was specifically using the system to conduct targeted checks on recipients of temporary child support benefits – which are designed to compensate parents for taking time off work when they need to care for their sick children – but took it out of use over the course of the authority’s investigation.
“While the inspection was ongoing, the Swedish Social Insurance Agency took the AI system out of use,” said IMY lawyer Måns Lysén. “Since the system is no longer in use and any risks with the system have ceased, we have assessed that we can close the case. Personal data is increasingly being processed with AI, so it is welcome that this use is being recognised and discussed. Both authorities and others need to ensure that AI use complies with the General Data Protection Regulation [GDPR] and now also the AI regulation, which is gradually coming into force.”
IMY added that Försäkringskassan “does not currently plan to resume the current risk profile”.
Under the European Union’s AI Act, which came into force on 1 August 2024, the use of AI systems by public authorities to determine access to essential public services and benefits must meet strict technical, transparency and governance rules, including an obligation on deployers to carry out an assessment of human rights risks and guarantee there are mitigation measures in place before using them. Specific systems that are considered tools for social scoring are prohibited.
Computer Weekly contacted Försäkringskassan about the suspension of the system, and why it elected to discontinue it before IMY’s inspection had concluded.
“We discontinued use of the risk assessment profile in order to assess whether it complies with the new European AI regulation,” said a spokesperson. “We currently have no plans to put it back into use, since we now receive absence data from employers, among other data, which is expected to provide relatively good accuracy.”
Försäkringskassan previously told Computer Weekly in November 2024 that “the system operates in full compliance with Swedish law”, and that applicants entitled to benefits “will receive them regardless of whether their application was flagged”.
In response to Lighthouse and SvB’s claims that the agency had not been fully transparent about the inner workings of the system, Försäkringskassan added that “revealing the specifics of how the system operates could enable individuals to bypass detection”.
Similar systems
AI-based systems used by other countries to distribute benefits or investigate fraud have faced similar concerns.
In November 2024, for example, Amnesty International exposed how AI tools used by Denmark’s welfare agency are creating pernicious mass surveillance, risking discrimination against people with disabilities, racialised groups, migrants and refugees.
In the UK, an internal assessment by the Department for Work and Pensions (DWP) – released under Freedom of Information (FoI) rules to the Public Law Project – found that an ML system used to vet thousands of Universal Credit benefit payments was displaying “statistically significant” disparities when selecting who to investigate for potential fraud.
Carried out in February 2024, the assessment showed there is a “statistically significant referral … and outcome disparity for all the protected characteristics analysed”, which included people’s age, disability, marital status and nationality.
Civil rights groups later criticised the DWP in July 2025 for a “worrying lack of transparency” over how it is embedding AI throughout the UK’s social security system, where it is being used to determine people’s eligibility for social security schemes such as Universal Credit or Personal Independence Payment.
In separate reports published around the same time, both Amnesty International and Big Brother Watch highlighted the clear risks of bias associated with the use of AI in this context, and how the technology can exacerbate pre-existing discriminatory outcomes in the UK’s benefits system.