Automated welfare fraud detection system contravenes international law, Dutch court rules


The court ruled that the Dutch government’s ‘risk indication system’ legislation fails a balancing test in Article 8 of the European Convention on Human Rights. (Image by S. Hermann & F. Richter, Pixabay).

A Dutch court has ruled that an automated surveillance system using artificial intelligence (AI) to detect welfare fraud violates the European Convention on Human Rights, and has ordered the government to cease using it immediately. The judgement comes as governments around the world are ramping up use of AI in administering welfare benefits and other core services, and its implications are likely to be felt far beyond the Netherlands.

The Dutch government’s risk indication system (SyRI) is a risk calculation model used by the social affairs and employment ministry. It gathers government data previously held in separate silos – such as housing, employment, personal debt and benefit records – and analyses it using an algorithm to identify which individuals might be at a higher risk of committing benefit or tax fraud. It is deployed primarily in neighbourhoods with a high proportion of low-income and minority residents.
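In concrete terms, a system of this kind can be thought of as two steps: linking records across previously separate data silos by a common identifier, then scoring each linked record against a set of risk indicators. The short Python sketch below is purely illustrative; as the court case itself highlighted, the government never disclosed SyRI’s actual risk models or risk factors, so every field name, weight and threshold here is invented.

# Illustrative sketch of a linkage-and-scoring pipeline of the kind described
# above. All fields, weights and the threshold are hypothetical: SyRI's real
# risk models and risk factors were never disclosed.
from dataclasses import dataclass

@dataclass
class CitizenRecord:
    citizen_id: str
    months_on_benefits: int   # hypothetical field from benefit records
    debt_flags: int           # hypothetical field from personal-debt records
    housing_mismatch: bool    # hypothetical: registered vs. observed address

def risk_score(record: CitizenRecord) -> float:
    """Weighted sum of indicators; the weights are invented for illustration."""
    score = 0.02 * record.months_on_benefits
    score += 0.15 * record.debt_flags
    if record.housing_mismatch:
        score += 0.40
    return score

def flag_for_review(records: list[CitizenRecord], threshold: float = 0.5) -> list[str]:
    """Return the IDs of individuals whose score exceeds the threshold."""
    return [r.citizen_id for r in records if risk_score(r) > threshold]

# Example: two records joined from the previously separate silos.
linked = [
    CitizenRecord("A-001", months_on_benefits=4, debt_flags=0, housing_mismatch=False),
    CitizenRecord("A-002", months_on_benefits=30, debt_flags=2, housing_mismatch=True),
]
print(flag_for_review(linked))  # -> ['A-002']

The point the sketch makes is that such a model generates suspicion from data correlations alone, without any concrete indication of individual wrongdoing – which is precisely what the complainants objected to.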

The case against the government was brought by a number of civil society organisations, including the Netherlands Law Committee on Human Rights, and two citizens. They argued that poor neighbourhoods and their inhabitants were being spied on digitally without concrete suspicion of individual wrongdoing.  

The court found that the SyRI legislation is unlawful because it fails the balancing test in Article 8 of the European Convention on Human Rights (ECHR), which requires that any societal interest – in this case, preventing and combating fraud in the interest of economic wellbeing – be weighed against the violation it causes to individuals’ privacy.

Lack of transparency

The court also found the legislation to be “insufficiently clear, verifiable… and controllable” and criticised a lack of transparency about the way the system functions.

According to the New York-based non-governmental organisation Human Rights Watch, the Dutch government refused to disclose during the hearing “meaningful information” about how SyRI uses personal data to draw inferences about possible fraud, particularly the risk models and risk factors applied.

The state does not agree that the system violates human rights, and says the SyRI legislation contains sufficient guarantees to protect individuals’ privacy, according to a press release. It is not clear whether the government will appeal the decision.

The UN special rapporteur on extreme poverty and human rights, Philip Alston, said in a statement that the verdict was “a clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose for human rights”.

The decision “sets a strong legal precedent for other courts to follow,” he added. “This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds.”

The judgement – which comes at a time when EU policymakers are working on a framework to regulate AI and ensure that it is applied ethically and in a human-centric way – does not bar governments from using automated profiling systems. However, it makes clear that human rights law in Europe must be central to the design and implementation of such tools.

The effect of the ruling is not expected to be limited to signatories of the ECHR. Christiaan van Veen, director of the digital welfare state and human rights project at New York University School of Law, said, as reported by The Guardian, that it was “important to underline that SyRI is not a unique system; many other governments are experimenting with automated decision-making in the welfare state”.

“This strong ruling will set a strong precedent globally that will encourage activists in other countries to challenge their governments.”
