This article was published on March 26, 2020

Dubious claims that AI outperforms doctors pose risk to ‘millions of patients,’ study finds

Researchers found many of the claims were based on bad research and "arguably exaggerated"

Image by: Tom Page

Story by Thomas Macaulay, Writer at Neural by TNW

AI’s ability to analyze X-rays, MRIs, and other scans has led it to be hyped up as the future of medical imaging. But patients remain reluctant to use it, as they believe only humans can understand their unique needs.

Turns out they might be right.

Many of the studies claiming AI outperforms doctors when interpreting medical images are poor quality and “arguably exaggerated,” according to new research.

The researchers warn that overhyping the power of these systems could lead to “inappropriate care” that poses a risk to “millions of people.”

Led by intensive care doctor Myura Nagendran, the team reviewed 10 years of research comparing deep learning algorithms with expert clinicians. The results were published in the BMJ, a British medical journal.

They found 83 eligible studies, but only two used randomized clinical trials — studies that randomly assign people to one group that receives the intervention and another that does not.

Of the 81 non-randomized studies, just six were tested in a real clinical setting, and only nine followed participants over time.

Can AI outperform doctors?

Headlines claiming AI is better at diagnosis than doctors have become common in recent years, but there has been little investigation of the studies behind these stories.

The researchers wanted to check whether the systems deserved the hype.

They found that two-thirds of the studies carried a high risk of bias, and that reporting standards were often poor.

The researchers only examined deep learning algorithms, so other forms of AI might be more worthy of their hype.

Nonetheless, they warned that the abundance of exaggerated claims they discovered could lead to patients receiving risky treatments.

“Maximising patient safety will be best served by ensuring that we develop a high quality and transparently reported evidence base moving forward,” they said.

The findings are good news for doctors worrying AI will take their jobs — and for any of their patients who still want a human touch.
