AI's ability to analyze X-rays, MRIs, and other scans has led it to be hyped up as the future of medical imaging. But patients remain reluctant to use it, as they believe only humans can understand their unique needs.
Turns out they might be right.
The researchers warn that overhyping the power of these systems could lead to “inappropriate care” that poses a risk to “millions of people.”
Led by intensive care doctor Myura Nagendran, the team reviewed 10 years of research comparing deep learning algorithms with expert clinicians. The results were published in the BMJ, a British medical journal.
They found 83 eligible studies, but only two that used randomized clinical trials — studies that randomly assign participants to one group that receives the treatment and another that does not.
Of the 81 non-randomized studies, only six were tested in a real clinical setting, and just nine monitored participants over time.
Can AI outperform doctors?
Headlines claiming AI is better at diagnosis than doctors have become common in recent years, but there has been little investigation of the studies behind these stories.
The researchers wanted to check whether the systems deserved the hype.
They found that two-thirds of the studies had a high risk of bias, and that the standards of reporting were often poor.
The researchers only examined deep learning algorithms, so other forms of AI might be more worthy of their hype.
Nonetheless, they warned that the abundance of exaggerated claims they discovered could lead to patients receiving risky treatments.
“Maximising patient safety will be best served by ensuring that we develop a high quality and transparently reported evidence base moving forward,” they said.
The findings are good news for doctors worrying AI will take their jobs — and for any of their patients who still want a human touch.
Published March 26, 2020 — 15:18 UTC