
This article was published on July 13, 2023

EU rules on AI must do more to protect human rights, NGOs warn

The group fears lobbyists might succeed in their efforts to water down the proposed AI Act



A group of 150 NGOs, including Human Rights Watch, Amnesty International, Transparency International, and Algorithm Watch, has signed a statement addressed to the European Union. In it, they entreat the bloc not only to maintain but to enhance human rights protections when adopting the AI Act.

Between the apocalypse-by-algorithm and the cancer-free utopia that different camps say the technology could bring lies a whole spectrum of pitfalls to avoid if AI is to be deployed responsibly.

As Altman, Musk, Zuckerberg, et al. dive headfirst into the black box, legislation aiming to at least curb their enthusiasm is on the way. The European Union’s proposed law on artificial intelligence — the AI Act — is the first of its kind by any major regulatory body. Two camps claim that it is either a) crippling Europe’s tech sovereignty or b) not going far enough in curtailing the dangerous deployment of AI.

Transparency and redress

The signatories of Wednesday’s collective statement warn that, “Without strong regulation, companies and governments will continue to use AI systems that exacerbate mass surveillance, structural discrimination, centralised power of large technology companies, unaccountable public decision-making, and environmental damage.” 


This is no “AI poses risk of extinction” one-liner statement. It includes specific segments of the Act the writers feel must be kept or enhanced. For instance, a “framework of accountability, transparency, accessibility, and redress” must include the obligation of AI deployers to publish fundamental rights impact assessments, register use in a publicly accessible database, and ensure people affected by AI-made decisions have the right to be informed.

The NGOs are also taking a strong stance against AI-based public surveillance (such as the system deployed during the coronation of King Charles). They are calling for a full ban on “real-time and post remote biometric identification in publicly accessible spaces, by all actors, without exception.” They also ask that the EU prohibit AI-based predictive and profiling systems in law enforcement and migration contexts, as well as emotion recognition systems.

In addition, the letter writers urge lawmakers not to “give into lobbying efforts of big tech companies to circumvent regulation for financial interest,” and to uphold an objective process for determining which systems will be classified as high-risk.

The proposed act will divide AI systems into four tiers, depending on the level of risk they pose to health and safety or fundamental rights. The tiers are: unacceptable, high, limited, and minimal. 

High risk vs. general purpose AI

Applications deemed unacceptable include social scoring systems used by governments, whereas systems used for things like spam filters or video games would be considered minimal risk.

Under the proposed legislation, the EU will allow high-risk systems (for instance, those used in medical equipment or autonomous vehicles), but deployers must adhere to strict rules on testing, documentation of data collection, and accountability frameworks.

The original proposal did not contain any reference to general-purpose or generative AI. However, following the meteoric rise of ChatGPT last year, the EU approved last-minute amendments to include an additional section.

Business leaders have been hard at work over the past few months trying to persuade the EU to water down the proposed text. They have been particularly keen to narrow what gets classified as high-risk AI, a designation that entails significantly higher compliance costs. Some, such as OpenAI’s Sam Altman, went on a personal charm offensive (throwing a threat or two into the mix).

Others, specifically more than 160 executives from major companies around the world (including Meta, Renault, and Heineken), have also sent a letter to the Commission. In it, they warned that the draft legislation would “jeopardise Europe’s competitiveness and technological sovereignty.”

The European Parliament adopted its negotiating position on the AI Act on June 14, and trilogue negotiations have now begun. These entail discussions between the Parliament, the Commission, and the Council before the final text is adopted.

With the law set to establish a global precedent (albeit hopefully one capable of evolving as the technology does), Brussels is, in all likelihood, currently abuzz with solicitous advocates — on behalf of all interested parties. 
