This article was published on August 19, 2022

A critical review of the EU’s ‘Ethics Guidelines for Trustworthy AI’

Every silver lining has a raincloud


Europe has some of the most progressive, human-centric artificial intelligence governance policies in the world. Compared to the heavy-handed government oversight in China or the Wild West-style, anything-goes approach in the US, the EU’s strategy is designed to stoke academic and corporate innovation while also protecting private citizens from harm and overreach. But that doesn’t mean it’s perfect.

The 2018 initiative

In 2018, the European Commission began its European AI Alliance initiative. The alliance exists so that various stakeholders can weigh in and be heard as the EU considers its ongoing policies governing the development and deployment of AI technologies.

Since 2018, more than 6,000 stakeholders have participated in the dialogue through various venues, including online forums and in-person events.

The commentary, concerns, and advice provided by those stakeholders have been considered by the EU’s “High-level expert group on artificial intelligence,” which ultimately created four key documents that serve as the basis for the EU’s policy discussions on AI:

1. Ethics Guidelines for Trustworthy AI

2. Policy and Investment Recommendations for Trustworthy AI

3. Assessment List for Trustworthy AI

4. Sectoral Considerations on the Policy and Investment Recommendations

This article focuses on item number one: the EU’s “Ethics Guidelines for Trustworthy AI.”

Published in 2019, this document lays out the EU’s bare-bones ethical concerns and best practices. While I wouldn’t exactly call it a ‘living document,’ it is supported by a continuously updated reporting system via the European AI Alliance initiative.

The Ethics Guidelines for Trustworthy AI provides a “set of 7 key requirements that AI systems should meet in order to be deemed trustworthy.”

Human agency and oversight

Per the document:

AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.

Neural’s rating: poor.

Human-in-the-loop, human-on-the-loop, and human-in-command are all wildly subjective approaches to AI governance that almost always rely on marketing strategies, corporate jargon, and disingenuous descriptions of how AI models work in order to appear efficacious.

Essentially, the “human in the loop” myth involves the idea that an AI system is safe as long as a human is ultimately responsible for “pushing the button” or authorizing the execution of a machine learning function that could potentially have an adverse effect on humans.

The problem: Human-in-the-loop relies on competent humans at every level of the decision-making process to ensure fairness. Unfortunately, studies show that humans are easily manipulated by machines.

We’re also prone to ignore warnings whenever they become routine.

Think about it: when’s the last time you read all the fine print on a website before agreeing to the terms presented? How often do you ignore the “check engine” light on your car or the “time for an update” alert on software that’s still functioning properly?

Automating programs or services that affect human outcomes under the pretense that having a “human in the loop” is enough to prevent misalignment or misuse is, in this author’s opinion, a feckless approach to regulation that gives businesses carte blanche to develop harmful models as long as they tack on a “human-in-the-loop” requirement for usage.

As an example of what could go wrong, ProPublica’s award-winning “Machine Bias” article laid bare the propensity for the human-in-the-loop paradigm to cause additional bias by demonstrating how AI used to recommend criminal sentences can perpetuate and amplify racism.

A solution: the EU should do away with the idea of creating “proper oversight mechanisms” and instead focus on policies that regulate the use and deployment of black box AI systems, preventing their deployment in situations where human outcomes might be affected unless there’s a human authority who can ultimately be held responsible.

Technical robustness and safety

Per the document:

AI systems need to be resilient and secure. They need to be safe, ensuring a fall back plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that also unintentional harm can be minimized and prevented.

Neural’s rating: needs work.

Without a definition of “safe,” the whole statement is fluff. Furthermore, “accuracy” is a malleable term in the AI world that almost always refers to arbitrary benchmarks that do not translate beyond laboratories.

A solution: the EU should set a bare minimum requirement that AI models deployed in Europe with the potential to affect human outcomes must demonstrate equal performance across demographic groups. An AI model that achieves lower reliability or “accuracy” on tasks involving minorities should be considered neither safe nor reliable.
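To make that concrete, here’s a minimal sketch of the kind of per-group audit such a requirement implies. The column names and the five-point threshold are my own illustrative assumptions, not anything the guidelines prescribe.

```python
# Hypothetical audit: compare a model's accuracy across demographic groups.
import pandas as pd

# Illustrative evaluation results (group, true label, model prediction).
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B"],
    "label":      [1,   0,   1,   1,   0,   1,   0],
    "prediction": [1,   0,   1,   0,   0,   0,   0],
})

per_group_accuracy = (
    results.assign(correct=results["label"] == results["prediction"])
           .groupby("group")["correct"]
           .mean()
)
print(per_group_accuracy)  # group A: 1.00, group B: 0.50

# Flag the model if any group's accuracy falls more than 5 points below the best group's.
gap = per_group_accuracy.max() - per_group_accuracy.min()
if gap > 0.05:
    print(f"Accuracy gap of {gap:.0%} between groups: neither safe nor reliable by this standard.")
```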

Privacy and data governance

Per the document:

Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.

Neural’s rating: good, but could be better.

Luckily, the General Data Protection Regulation (GDPR) does most of the heavy lifting here. However, the terms “quality and integrity” are highly subjective, as is the term “legitimised access.”

A solution: the EU should define a standard requiring that data be obtained with consent and verified by humans, so that the databases used to train models contain only data that is properly labeled and used with the permission of the person or group who generated it.
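As a rough sketch of what such a standard might look like inside a training pipeline (all field names here are hypothetical, not drawn from the guidelines), records without documented consent or a human-verified label would simply be excluded:

```python
# Hypothetical consent/verification gate for training data.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    text: str
    label: str
    consent_given: bool               # did the data subject permit this use?
    label_verified_by: Optional[str]  # ID of the human who checked the label, if any

raw_records = [
    Record("example one", "spam", consent_given=True,  label_verified_by="annotator-17"),
    Record("example two", "ham",  consent_given=False, label_verified_by="annotator-04"),
    Record("example three", "spam", consent_given=True, label_verified_by=None),
]

# Keep only records that are both consented and human-verified.
training_set = [r for r in raw_records if r.consent_given and r.label_verified_by is not None]
print(f"{len(training_set)} of {len(raw_records)} records are eligible for training.")
```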

Transparency

Per the document:

The data, system and AI business models should be transparent. Traceability mechanisms can help achieving this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.

Neural’s rating: this is hot garbage.

Only a small percentage of AI models lend themselves to transparency. The majority of AI models in production today are “black box” systems that, by the very nature of their architecture, produce outputs using far too many steps of abstraction, deduction, or conflation for a human to parse.

In other words, a given AI system might use billions of different parameters to produce an output. In order to understand why it produced that particular outcome instead of a different one, we’d have to review each of those parameters step-by-step so that we could come to the exact same conclusion as the machine.
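For a sense of scale, here’s a toy sketch (the architecture is arbitrary and purely illustrative) of how quickly parameter counts outrun anything a human reviewer could trace by hand, and production models are often thousands of times larger still:

```python
# Toy fully connected network: even this modest, made-up architecture has
# ~14.6 million parameters, every one of which touches a single prediction.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [1000, 2048, 2048, 2048, 2048, 1]
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

total_params = sum(w.size for w in weights)
print(f"Parameters in this toy model: {total_params:,}")  # 14,632,960

# A single forward pass uses every parameter once; "explaining" the output
# step-by-step means accounting for each of those multiply-adds.
x = rng.standard_normal(1000)
for w in weights:
    x = np.maximum(x @ w, 0)  # ReLU at every layer, for simplicity
print("Output:", float(x[0]))
```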

A solution: the EU should adopt a strict policy preventing the deployment of opaque or black box artificial intelligence systems whose outputs could affect human outcomes, unless a designated human authority can be held fully accountable for unintended negative results.

Diversity, non-discrimination and fairness

Per the document:

Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life circle.

Neural’s rating: poor.

In order for AI models to involve “relevant stakeholders throughout their entire life circle,” they’d need to be trained on data from diverse sources and developed by diverse teams. The reality is that STEM is dominated by white, straight, cis men, and there are myriad peer-reviewed studies demonstrating how that simple, demonstrable fact makes it almost impossible to produce many types of AI models without bias.

A solution: unless the EU has a method by which to solve the lack of minorities in STEM, it should instead focus on creating policies that prevent businesses and individuals from deploying AI models that produce different outcomes for minorities.

Societal and environmental well-being

Per the document:

AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.

Neural’s rating: great. No notes!

Accountability

Per the document:

Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.

Neural’s rating: good, but could be better.

There’s currently no political consensus as to who’s responsible when AI goes wrong. If the EU’s airport facial recognition systems, for example, mistakenly identify a passenger and the resulting inquiry causes them financial harm (they miss their flight and any opportunities stemming from their travel) or unnecessary mental anguish, there’s nobody who can be held responsible for the mistake.

The employees following procedure based on the AI’s flagging of a potential threat are just doing their jobs. And the developers who trained the systems are typically beyond reproach once their models go into production.

A solution: the EU should create a policy that specifically dictates that humans must always be held accountable when an AI system causes an unintended or erroneous outcome for another human. The EU’s current policy and strategy encourages a “blame the algorithm” approach that benefits corporate interests more than citizen rights.

Making a solid foundation stronger

While the above commentary may be harsh, I believe the EU’s AI strategy is a guiding light. However, it’s obvious that the EU’s desire to compete with Silicon Valley in the AI sector has pushed the bar for human-centric technology a little further towards corporate interests than the union’s other technology policy initiatives have.

The EU wouldn’t sign off on an aircraft that was mathematically proven to crash more often when Black persons, women, or queer persons were passengers than when white men were onboard. It shouldn’t allow AI developers to get away with deploying models that function that way either.
