This article was published on July 24, 2020

Here’s why AI didn’t save us from COVID-19

When the COVID-19 pandemic began, we were all so full of hope. We assumed our technology would save us from a disease that could be stymied by such modest steps as washing our hands and wearing face masks. We were so sure that artificial intelligence would become our champion in a trial by combat with the coronavirus that we abandoned any pretense of fear the moment the curve appeared to flatten in April and May. We let our guard down.

Back in January and February, pundits and experts very carefully explained how AI solutions such as contact tracing, predictive modeling, and chemical discovery would lead to a truncated pandemic. Didn’t most of us figure we’d be back to business as usual by mid-to-late June?

But June turned to July, and now we’re seeing record case numbers on a daily basis. August looks to be brutal. Despite being home to nearly all of the world’s largest technology companies, the US has become the epicenter of the outbreak. Other nations with advanced AI programs aren’t necessarily faring much better.

Among the countries experts would consider competitive with the US in the field of AI, nearly all have lost their grip on the outbreak: China, Russia, the UK, South Korea, etc. It’s bad news all the way down.

Figuring out why requires a combination of hindsight and patience. We’re not far enough through the pandemic to understand exactly what’s gone wrong – this thing’s far too alive and kicking for a post-mortem. But we can certainly see where AI hype is currently leading us astray.

Contact tracing

Among the many early promises made by the tech community and the governments depending on it was the idea that contact tracing would make targeted reopenings possible. The big idea was that AI could sort out who else a person who contracted COVID-19 might have infected. More magical AI would then figure out how to keep the healthies away from the sicks, and we’d be able to quarantine and open businesses at the same time.

This is an example of the disconnect between AI devs and general reality. A system in which people allow the government to track their every movement can only work with complete participation from a population with absolute faith in its government. Worse, the more infections you have, the less reliable contact tracing becomes.

That’s why only a handful of small countries even went so far as to try it – and, as far as we know, there isn’t any current data showing this approach actually mitigates the spread of COVID-19.
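To make the participation problem concrete, here’s a minimal back-of-the-envelope sketch in Python – our own toy illustration, not any real exposure-notification protocol. An app can only register an encounter when both people have it installed, so the share of an infected person’s contacts it flags falls off roughly with the square of adoption.

```python
import random

# Toy sketch (not a real exposure-notification system): app-based tracing can
# only "see" an encounter when BOTH people run the app, so the share of
# contacts it finds falls off roughly with the square of the adoption rate.

def simulate_detection(num_contacts: int, adoption_rate: float, trials: int = 10_000) -> float:
    """Estimate the fraction of an infected person's contacts that app-based
    tracing would flag, given how many people have installed the app."""
    detected = 0
    total = 0
    for _ in range(trials):
        index_case_has_app = random.random() < adoption_rate
        for _ in range(num_contacts):
            total += 1
            contact_has_app = random.random() < adoption_rate
            if index_case_has_app and contact_has_app:
                detected += 1
    return detected / total

for rate in (0.2, 0.4, 0.6, 0.8):
    print(f"adoption {rate:.0%}: ~{simulate_detection(20, rate):.0%} of contacts flagged")
```

Even at 40 percent adoption, that works out to roughly 16 percent of contacts – and every missed contact is a transmission chain the system never sees.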

Modeling

The next big area where AI was supposed to help was in modeling. For a time, the entire technology news cycle was dominated by headlines declaring that AI had been the first to discover the COVID-19 threat and that machine learning would determine exactly how the virus would spread.

Unfortunately, modeling a pandemic isn’t an exact science. You can’t train a neural network on data from past COVID-19 pandemics because there aren’t any; this coronavirus is novel. That means our models started with guesses and were subsequently trained on up-to-date data from the unfolding pandemic.

To put this in perspective: using on-the-fly data to model a novel pandemic is the equivalent of knowing you have at least a million dollars in pennies but only being able to talk about the amount you’ve physically counted at any given moment.

In other words, our AI models haven’t proven much better than our best guesses. And they can only show us a tiny part of the overall picture because we’re only working with the data we can actually see. Up to 80 percent of COVID-19 carriers are asymptomatic, and a mere fraction of all possible carriers have been tested.
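To see how that limited visibility skews a model, here’s a toy sketch with entirely made-up numbers – not an epidemiological model. Fit simple exponential growth to only the cases testing catches while test capacity is ramping up, and both the scale and the trend come out wrong.

```python
import numpy as np

# Toy illustration (hypothetical numbers): fit simple exponential growth to the
# cases we can "see" when only a small, shifting fraction of true infections
# ever gets tested.

rng = np.random.default_rng(0)
days = np.arange(30)
true_cases = 100 * np.exp(0.15 * days)              # hypothetical true infections
ascertainment = np.linspace(0.05, 0.25, len(days))  # testing ramps up over time
observed = rng.poisson(true_cases * ascertainment)  # what the data actually shows

# Fit log-linear growth to the observed counts only.
slope, intercept = np.polyfit(days, np.log(observed + 1), 1)
print("true growth rate:      0.15 per day")
print(f"estimated growth rate: {slope:.2f} per day (inflated by ramping test capacity)")
print(f"true cases on day 29:    {true_cases[-1]:,.0f}")
print(f"implied cases on day 29: {np.exp(intercept + slope * 29):,.0f} (only what testing sees)")
```

Real models are far more sophisticated than this, but they face the same underlying problem: they can only learn from the slice of the pandemic that testing makes visible.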

Testing

What about testing? Didn’t AI make testing easier? Kind of, but not really. AI has made a lot of things easier for the medical community, though perhaps not in the way you think. There isn’t a test bot that you can pour a vial of blood into to get an instant green or red “infected” indicator. The best we’ve got, for the most part, is background AI that generally helps the medical world run.

Sure, there are some targeted solutions from the ML community helping frontline professionals deal with the pandemic. We’re not taking anything away from the thousands of developers working hard to solve problems. But, realistically, AI isn’t providing game-changing solutions to the pandemic’s biggest problems.

It’s making sure truck drivers know which supplies to deliver first. It’s helping nurses autocorrect their emails. It’s managing traffic lights in some cities, which helps ambulances and emergency responders get around.

And it’s even making pandemic life easier for regular folks too. The fact that you’re still getting packages (even if they’re delayed) is a testament to the power of AI. Without algorithms, Amazon and its delivery pipeline would not be able to maintain the infrastructure necessary to ship you a set of fuzzy bunny slippers in the middle of a pandemic.

The cure

AI is useful during the pandemic, but it’s not out there finding the vaccine. We’ve spent the last few years here at TNW talking about how AI will one day make chemical compound discovery a trivial matter. Surely finding the proper sequence of proteins or figuring out exactly how to mutate a COVID-killing virus is all in a day’s work for today’s AI systems, right? Not so much.

Despite the fact that Google and NASA told us we’d reached quantum supremacy last year, we haven’t seen useful “quantum algorithms” running on cloud-accessible quantum computers the way we were told we would. Scientists and researchers almost always tout “chemical discovery” as one of the hard problems that quantum computers will solve. But nobody knows when. What we do know is that today, in 2020, humans are still painstakingly building a vaccine. When it’s finished, it’ll be squishy meatbags who get the credit, not quantum robots.

In times of peace, every new weapon looks like the be-all and end-all solution until you test it. We haven’t had many giant global emergencies to test our modern AI on. It’s done well with relatively small-scale catastrophes like hurricanes and wildfires, but it’s been relegated to the rear echelon of the pandemic fight because AI simply isn’t yet mature enough to think outside the boxes we build it in.

At the end of the day, most of our pandemic problems are human problems. The science is extremely clear: wear a mask, stay more than six feet away from each other, and wash your hands. This isn’t something AI can directly help us with.

But that doesn’t mean AI isn’t important. The lessons learned by the field this year will go a long way towards building more effective solutions in the years to come. Here’s hoping this pandemic doesn’t last long enough for these yet-undeveloped systems to become important in the fight against COVID-19.
