
This article was published on March 4, 2024

Does AI have a place on ethics committees? How to use it the right way

Productivity saver or moral dilemma?


The role of an ethics committee is to give advice on what should be done in often contentious situations. Such committees are used in medicine, research, business, law and a variety of other areas.

The word “ethics” relates to the moral principles governing human behaviour. The task for ethics committees can be quite tricky given the wide range of moral, political, philosophical, cultural and religious views. Even so, good ethical arguments make up the foundation of society, as they are the basis of the laws and agreements that we use to get on with each other.

Given the importance of ethics, any tool that can help us reach better ethical decisions should be explored. Over the last couple of years, there has been increasing recognition that artificial intelligence (AI) is a tool that can be used to analyse complex data. So it makes sense to ask whether AI can be used to help make better ethics decisions.

As AI is a class of computer algorithm, it relies on data. Ethics committees also rely on data, so one important question is whether AI is able to load, and then meaningfully analyse, the types of data that ethics committees regularly consider.


Here, context becomes very important. For instance, a hospital ethics committee might make decisions based upon experience with patients, input from lawyers, and a general understanding of common cultural or societal norms and opinions. It is currently difficult to see how such data could be captured and fed into an AI algorithm.

However, I chair a very specific type of ethics committee, called a research ethics committee (REC), whose role is to review scientific research protocols. The aim is to promote high-quality research while protecting the rights, safety, dignity and wellbeing of the people who take part in the research.

The majority of our activity involves reading complex documents to determine what the relevant ethics issues may be, and then making suggestions to researchers on how they can improve their proposed protocols, or procedures. It is in this area that AI could be very helpful.

Research protocols, especially those of clinical trials, often run to hundreds if not thousands of pages. The information is dense and complex. Although protocols are accompanied by ethics application forms that seek to present information on key ethics issues in a way that REC members can easily find, the task can still take a very long time.

After studying the documents, REC members weigh up what they have read, compare it with guidance on good ethics practice, consider input from patient and participant involvement groups, and then come to a decision as to whether the research can proceed as planned. The most common outcome is that more information and a few modifications are needed before the research can go ahead.

A role for machines?

While attempts have been made to standardise REC membership and experience, researchers often complain that the process can take a long time and is inconsistent between different committees.

AI seems ideally placed to speed up the process and assist in ironing out some of the inconsistencies. Not only could the AI read such long documents very quickly, but it could also be trained on a large number of previous protocols and decisions.

It could very rapidly spot any ethics issues and suggest solutions for the research teams to implement. This would vastly speed up the ethics review process and probably make it far more consistent. But is it ethically acceptable to use AI in this way?
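To make the idea concrete, here is a deliberately simple sketch (not any committee's actual tooling, and far cruder than a trained model) of the kind of automated screening described above: it checks whether a protocol text ever mentions a handful of topics a reviewer would expect to see. The topic list, the example protocol and the checks are all hypothetical illustrations.

```python
# Purely illustrative sketch: a toy screener that flags topics an ethics
# reviewer might expect a protocol to address but that never appear in the
# text. All names and topics are hypothetical examples, not a real system.

EXPECTED_TOPICS = [
    "informed consent",
    "data protection",
    "withdrawal",
    "risk assessment",
]

def screen_protocol(text: str) -> list[str]:
    """Return the expected topics that the protocol never mentions."""
    lowered = text.lower()
    return [topic for topic in EXPECTED_TOPICS if topic not in lowered]

# Hypothetical protocol excerpt, for demonstration only.
protocol = """
Participants will give informed consent before enrolment.
Personal data will be handled under the sponsor's data protection policy.
"""

for gap in screen_protocol(protocol):
    print(f"Possible gap: no mention of '{gap}' - flag for committee review")
```

A real system would of course need to understand meaning rather than match keywords, but the output is the same in spirit: a list of possible issues for humans to consider, not a verdict.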

While AI could clearly conduct many of the REC tasks, it could also be argued that these reviewing tasks are not actually the same as making an ethics decision. At the end of the review process, RECs are asked to decide whether a protocol, with the updates, should receive a favourable or unfavourable opinion.

As a consequence, while AI's advantage in speeding up the process is clear, speeding up the review isn't quite the same as making the final decision.

A human in the loop

It may be possible for AI to be extremely effective in assessing a situation and recommending a course of action that is consistent with previous “ethical” behaviour. However, the decision to actually adopt a course of action, and then go on to behave in that way, is fundamentally human.

In the example of research ethics, the AI might well recommend a course of action, but actually deciding on the action is a human decision. The system could be designed to instruct ethics committees or researchers to unquestionably do what the AI suggests, but such a decision is about how the AI is used, not the AI itself.
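One rough way to picture that separation (all names here are hypothetical, not a description of any real system) is to keep the AI's recommendation and the committee's recorded opinion as distinct pieces of information, so the human decision is never simply overwritten by the machine's suggestion.

```python
# Sketch of keeping the human decision separate from the AI suggestion.
# The class, field names and opinion labels are hypothetical illustrations.

from dataclasses import dataclass
from typing import Optional

@dataclass
class EthicsReview:
    ai_recommendation: str                   # what the model suggests
    committee_opinion: Optional[str] = None  # recorded only by the human committee

def record_opinion(review: EthicsReview, opinion: str) -> EthicsReview:
    # The committee is free to agree with, amend, or overrule the suggestion.
    review.committee_opinion = opinion
    return review

review = EthicsReview(ai_recommendation="favourable with modifications")
record_opinion(review, "unfavourable")  # the human decision stands, whatever the AI said
print(review)
```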

While AI is perhaps most immediately useful to research ethics committees, given the largely text-based data we review, it is very likely that ways of encoding non-text data (such as people's experiences) will improve.

This means that over time AI may also be able to assist in other areas of ethics decision-making. However, the key point is not to confuse the tool used to analyse data, the AI, with the final “ethics” decision on how to act. The danger is not the AI, but how people choose to integrate AI into ethics decision-making processes.

Simon Kolstoe, Associate Professor of Bioethics, University of Portsmouth

This article is republished from The Conversation under a Creative Commons license. Read the original article.
