This article was published on February 6, 2020

Court orders moratorium on black box AI that detects welfare fraud amid human rights concerns

Hopefully this precedent puts an end to the "human in the loop" loophole


The Hague District Court in the Netherlands yesterday ordered the Dutch government to halt its use of a black box AI system designed to predict welfare fraud. The ruling was issued over privacy concerns and is being heralded as a civil rights victory by activists and privacy advocacy groups.

The program, System Risk Indication (SyRI), uses government data to predict whether welfare recipients are “likely” to commit welfare fraud. The system has been in use for years, and the Dutch government has asserted that it poses no undue threat to citizens’ privacy.

However, after numerous privacy advocacy groups – as well as the United Nations – expressed concerns over what amounts to an automated criminal investigation of welfare recipients in specific low-income neighborhoods, the court declined to see things the state’s way.

Per a press release from the Hague District Court (translated from the Dutch):

The court concludes that the SyRI legislation in its current form does not pass the test of Article 8, paragraph 2 of the ECHR. The court weighed the objectives of the SyRI legislation, namely preventing and combating fraud in the interest of economic well-being, against the violation of private life that the legislation entails.

According to the court, the legislation does not strike the ‘fair balance’ that the ECHR requires for a violation of private life to be sufficiently justified. The legislation is insufficiently clear and verifiable with regard to the deployment of SyRI. It is unlawful because it violates higher law and is therefore non-binding.

What’s important here is that the court didn’t rely on GDPR regulations to reach its conclusions. Instead, it drew on the European Convention on Human Rights, which requires any technology that could be used to profile people to strike a fair balance between privacy and necessity.

The Dutch government submits that citizens’ trust in the social security system is crucial to the state’s well-being, and it has defended its use of profiling software the same way governments in the US, UK, and countless other nations have: with the “human in the loop” loophole.

By claiming there’s a “human in the loop,” the government avoids being accused of using completely autonomous systems to investigate and prosecute people. As TechCrunch’s Natasha Lomas reports:

GDPR’s Article 22 includes the right for individuals not to be subject to solely automated individual decision-making where they can produce significant legal effects. But there can be some fuzziness around whether this applies if there’s a human somewhere in the loop, such as to review a decision on objection.

By dismissing this notion and declaring such an invasion of privacy a human rights violation, the Hague District Court has set a precedent by which this loophole can finally be closed.

The Dutch government refused to reveal to the court how SyRI worked, where it got its data, or how it determined risk, citing concerns that criminals would learn to exploit the system were it to divulge such information.

Privacy advocates say this amounts to the government viewing every citizen as ‘not innocent’ and preemptively conducting criminal investigations without disclosing probable cause or evidence. Of even greater concern was the fact that the Dutch government used SyRI only in a handful of low-income neighborhoods, many of which are considered havens for poor immigrants.

Human Rights Watch declared the ruling a global victory in a statement released today:

By stopping SyRI, the Court has set an important precedent for protecting the rights of the poor in the age of automation. Governments that have relied on data analytics to police access to social security – such as those in the US, the U.K., and Australia – should heed the Court’s warning about the human rights risks involved in treating social security beneficiaries as perpetual suspects.
