This article was published on May 13, 2020

Using ‘personalized AI’ to end coronavirus lockdown is a stupid, cruel idea

But insurance companies and Big Brother are going to love it



A trio of AI and business experts from INSEAD, a prestigious business school, recently penned a lengthy article in the Harvard Business Review extolling the virtues of “personalized AI” prediction models to end the coronavirus lockdown.

It’s the kind of piece that politicians, academics, and pundits refer to when discussing how technology can aid in the world’s pandemic response. It’s also a gift to politicians who think we should ignore medical professionals and prioritize the economy over human life.

In short, the article is a travesty of ambiguous information. It uses the generic idea of artificial intelligence as a motivator to convince people that only those we can label as “high risk” should be forced to endure quarantine or shelter-in-place orders.

What the group is talking about is a generic “prediction” AI, such as the one Netflix uses to determine what you want to watch next. The idea is sound: we take as much data about COVID-19 diagnosis and prognosis as possible and use it to create personalized “clinical risk predictions” on whether an individual is likely to experience the worst symptoms.
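To make that concrete, here is a minimal, purely illustrative sketch of what such a "clinical risk prediction" model might look like: a logistic regression trained on fabricated age and comorbidity data that outputs a probability of a severe outcome. Every feature, number, and threshold below is invented for illustration; it is not the model the INSEAD authors describe.

```python
# Purely illustrative sketch of a "clinical risk prediction" model.
# The features, data, and labels are fabricated for illustration only;
# this is not the INSEAD authors' actual approach.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: columns are [age, number of comorbidities]
X = rng.uniform([20, 0], [90, 5], size=(1000, 2))

# Fabricated labels: older people with more comorbidities are marked as
# having had a "severe outcome", plus some noise
y = (0.03 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1000) > 4).astype(int)

model = LogisticRegression().fit(X, y)

# A "personalized" risk score for one individual: a 35-year-old with no comorbidities
risk = model.predict_proba([[35, 0]])[0, 1]
print(f"Predicted clinical risk: {risk:.2f}")

# The HBR proposal would then sort people into "high" and "low" risk buckets
# based on a score like this, which is exactly what this article argues cannot
# be done reliably for a disease medicine does not yet fully understand.
```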

Per the HBR article:

These clinical risk predictions could then be used to customize policies and resource allocation at the individual/household level, appropriately accounting for standard medical liabilities and risks. It could, for instance, enable us to target social distancing and protection for those with high clinical risk scores, while allowing those with low scores to live more or less normally.

As you can see, the idea quickly runs afoul of anything approaching common sense. We’re not all sheltering in place because every single one of us thinks we’re going to die; we’re doing it so that, by the authors’ estimations, ten percent of the global population doesn’t end up in a preventable life-threatening situation while we develop a vaccine or determine whether herd immunity is a viable solution for COVID-19 (it might not be; more on that here).

The authors’ point is made crystal clear in this paragraph:

In theory, with a reliable prediction of who these 90% are we could de-confine all these individuals. Even if they were to infect each other, they would not have severe symptoms and would not overwhelm the medical system or die. These 90% low clinical risk de-confined people would also help the rapid build up of high herd immunity, at which point the remaining 10% could be also de-confined.

The numbers appear to be plucked out of thin air. Most of what we’ve seen puts the “low risk” category at about 80%, and that figure applies only to people who already have the disease. There’s no clear medical consensus on what underlying conditions, if any, have caused deaths in otherwise healthy young people, including several children. Exposing everyone except those who already know they’re at risk is a stupid, cruel idea. More children and “perfectly healthy” people will die needlessly; that is a statistical certainty.

There’s no AI smart enough to determine whether someone’s at risk for a disease that medical professionals don’t fully understand yet. These predictions would be anecdotal at best, based on self-reported data. They’d be no better than the anonymized systems that Google and Facebook are working on, but they come with a special cherry on top for big businesses and governments: the obliteration of personal medical privacy.

Still in the same HBR article, the team writes:

Implementing the technological innovations, however, will require policy changes. Existing policies covering data privacy and cybersecurity, and their respective and differing interpretations across countries, will largely prohibit the kind of personalized pandemic management approach we are advocating.

They expand on this in a follow-up interview with Sifted’s Maija Palmer:

Such provisions would only be enacted in the state of “war” as declared by specified bodies (such as UN, WHO, etc). Reverting back to “normal” when the situation resolves itself. However, privacy should be still maintained — just that a tolerance/threshold (i.e. flexibility) be introduced.

This sounds frighteningly like the Patriot Act for healthcare. The authors advocate for privacy by saying there’s AI that can anonymize data for these purposes, but the “flexibility” they’re talking about is ambiguous enough to mean anything.

It’s clear that the authors aren’t approaching things from a medical safety perspective. They’re using self-interpreted data from myriad scientific sources to come up with broad, sweeping generalizations that only seem to serve one purpose: convincing people to allow their governments to do away with medical privacy — something that would be incredibly profitable for health insurers and the like.

The researchers aren’t even clear on exactly how the predictions would work. On the one hand, they advocate a blanket loosening of government privacy policies so that AI models can be trained on healthcare information pertinent to the individual; on the other, they claim, in the same HBR article, that one set of anonymous data is as good as another:

Once a model is up and running it can be shared to help other cities and even countries in the early stages of the spread, because the basic underlying biological and physiological data in people’s medical records do not vary much (everyone grows old, and diabetes in Wuhan is the same as diabetes in Baltimore.) If a virus strikes two countries whose populations resemble each other, the outcomes are likely to be similar.

What?

If certain populations, such as Baltimore, where a little over 600,000 people reside, and Wuhan, where more than 11 million live, are interchangeable for prediction purposes, then why would we let governments suspend our medical privacy to make this thing work? The researchers go on to say that different populations will have different epidemiological factors, but the contradiction between the two claims is startling.

At the end of the day, there’s no scientific or medical merit to a system that requires the government to suspend its privacy policies so that a predictive AI can make guesses about our health for politicians. Simply put: AI cannot tell if you’re going to become seriously ill or die if you contract COVID-19.

Based on what the INSEAD team is advocating, this is just a fancy way to obfuscate medical truths. A system like this wouldn’t ease social-distancing restrictions in step with medical professionals’ guidance and established scientific facts; it would just make a guess based on whatever data you feed it. Its sole purpose would be political.
