
This article was published on February 26, 2020

Scientists propose new regulatory framework to make AI safer

Imperial College London researchers believe the HIAT framework would support human-centric tech

Scientists from Imperial College London have proposed a new regulatory framework for assessing the impact of AI, called the Human Impact Assessment for Technology (HIAT).

The researchers believe the HIAT could identify the ethical, psychological and social risks of technological progress, which are already being exposed in a growing range of applications, from voter manipulation to algorithmic sentencing.

They based their idea on the Environmental Impact Assessment (EIA), which has been used to evaluate the environmental effects of proposed developments for 50 years.

Like environmental impact, the human impact of AI is difficult to model and often produces unforeseen results. Unlike physical developments, however, software is easy to modify and is updated regularly, so the HIAT would need to be part of an ongoing evaluation rather than a one-off assessment.

How the HIAT would work


The researchers recommend using an existing technology framework, such as the EU’s new AI guidelines, as the basis for the HIAT assessment and reporting.

Human impact would then be assessed using social science methods, such as those used in psychology to evaluate wellbeing.


Every technology would also have to comply with current technical standards.

Next steps

Relevant impact assessments already in place, such as the data protection impact assessments required by GDPR and algorithmic impact assessments (AIAs), could also be incorporated into the HIAT.

“Impact assessments are an important tool for embedding certain values and have been successfully used in many industries including mining, agriculture, civil engineering, and industrial engineering,” Imperial’s Professor Rafael Calvo, who led the research team, said in a statement.

“Other sectors too, such as pharmaceuticals, are accustomed to innovating within strong regulatory environments, and there would be little trust in their products without this framework. As AI matures, we need frameworks like HIAT to give citizens confidence that this powerful new technology will be broadly beneficial to all.”

