This article was published on May 5, 2020

This AI tool identifies the most promising COVID-19 research

The system pinpoints papers expected to produce replicable results

Image by: Polina Tankilevitch from Pexels
Story by Thomas Macaulay, Writer at Neural by TNW

An AI tool that scans reams of scientific literature for promising COVID-19 research could speed up the search for a coronavirus vaccine.

The system provides an automated alternative to peer review, which many COVID-19 researchers are bypassing so that their colleagues can immediately provide feedback on their work.

[Read: Drug discovery might be the best use of AI to tackle the pandemic]


This approach sacrifices safety for speed, which can lead to low-quality work reaching a wide audience. The sheer volume of COVID-19 papers also means there are far too many for human reviewers to keep up with.

These issues led researchers from Northwestern University to create a tool that predicts which studies are most worthy of further investment — as well as the research that’s unlikely to work.

How it works

The Northwestern system uses an algorithm to predict which studies will produce results that are replicable — meaning they have the same effect when tested again on a new group of people.

Existing methods of assessing replicability rely on review by scientific experts, a thorough but time-consuming process.

For example, the Systematizing Confidence in Open Research and Evidence (SCORE) process created by military research agency DARPA takes around 314 days on average. That’s a long time to wait when you’re trying to tackle a global pandemic.

Professor Brian Uzzi, who led the Northwestern study, believes this process has two major problems.

First, it takes too long to move on to the second phase of testing. Second, when experts spend their time reviewing other people’s work, they are not in the lab conducting their own research.

Uzzi’s team trained their model on statistics and text from over 2 million study abstracts, and then gave it a new set of studies to evaluate.

Their idea was to analyze not only data, but also the narrative that study authors use to explain their results. It does this by recognizing patterns in words that reveal a researcher’s confidence in their findings — patterns that human reviewers don’t always detect.
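The Northwestern model itself is not reproduced in the article, but the core idea — that an author's word choice signals how confident they are in their results — can be illustrated with a toy scorer. The word lists and scoring formula below are purely hypothetical assumptions for demonstration, not the team's actual features:

```python
# Toy illustration (NOT the Northwestern model): count confidence-signaling
# vs. hedging words in an abstract to estimate how certain the authors sound.
# Both word lists are assumptions chosen for this sketch.
import re

CONFIDENT = {"demonstrate", "confirm", "robust", "significant", "consistently"}
HEDGING = {"may", "might", "suggest", "possibly", "preliminary", "appear"}

def confidence_score(abstract: str) -> float:
    """Return a score in [-1, 1]: positive means confident language dominates,
    negative means hedging language dominates, 0 means neither appears."""
    words = re.findall(r"[a-z]+", abstract.lower())
    confident = sum(w in CONFIDENT for w in words)
    hedged = sum(w in HEDGING for w in words)
    total = confident + hedged
    return 0.0 if total == 0 else (confident - hedged) / total

print(confidence_score("Results demonstrate a robust, significant effect."))  # 1.0
print(confidence_score("These preliminary data suggest the drug may help."))  # -1.0
```

A real replicability predictor would feed signals like these, alongside the paper's reported statistics, into a trained model rather than a hand-written formula.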

They then compared the predictions to DARPA’s SCORE evaluation. The researchers say their system produced equally accurate results — but in a matter of minutes, rather than months.

Ultimately, the team aims to pair the system with expert reviewers — which they say will be more accurate than either a human or machine working alone.
