This article was published on June 2, 2020

Can a robot decide my medical treatment?

It may already be happening. The high-stakes world of health care algorithms


In an attempt to manage soaring health care costs, some government officials and health care companies are turning to algorithms to determine how to allocate limited benefits, who should receive care first, or whether a person should receive care at all.

The intent behind these tools may be reasonable, but critics of automated decision-making systems say they can amplify the damage of errors and entrench bias, sometimes with severe consequences.

What could go wrong? 

Here’s one example: In 2016, Arkansas turned to an algorithmic tool to manage health benefits for people on a state disability program that provided assistance to some of its neediest residents. Under the program, people could apply to have the state pay for visits from a professional caregiver, allowing them to stay in their own homes rather than move to a full-time care facility. Before 2016, the state had relied on human assessments: A nurse would visit an applicant and then determine how many hours a caregiver should visit every week.

The new tool relied on a survey that asked beneficiaries questions about their abilities and health problems—whether they needed help eating or using the bathroom, for example. After collecting the information, the algorithm would calculate how many hours of care the person would receive. Many people on the program immediately had their care hours cut drastically, and in several cases, it turned out, the cuts were made in error.

After a lawsuit was filed by Legal Aid of Arkansas, advocates discovered during litigation that the vendor implementing the tool had inadvertently failed to properly account for diabetes and cerebral palsy in the algorithm, lowering the allocation of hours for hundreds of people. Without the lawsuit, the beneficiaries had little opportunity to challenge the algorithm’s decisions, even if they were flawed. “The algorithm becomes the thing,” said Kevin De Liban, the Legal Aid of Arkansas attorney who led the ultimately successful lawsuit against the state’s use of the algorithm. “And that becomes one of the biggest problems.”

The state moved to a new system in 2019, but De Liban said a new algorithmic assessment used to determine eligibility cut about 30 percent of previously eligible beneficiaries from the program.

Due process and secrecy

Arkansas isn’t alone. States around the country have used similar decision-making tools to determine who receives benefits. A handful have faced litigation as a result. But sometimes it can be hard to determine how an algorithm is working at all, effectively making due process impossible.

Around 2011, Idaho started using a tool to determine costs for home care, similar to the one Arkansas would use. But after the tool was put in place, funds for some beneficiaries dropped by as much as 42 percent.

The state declined to reveal the formula it used, saying it was a “trade secret,” which meant there was no way for the average person to challenge the tool’s decision. If a beneficiary did appeal, an official “would look at the system and say, ‘It’s beyond my authority and my expertise to question the quality of this result,’ ” ACLU of Idaho legal director Richard Eppink told me in 2018.

After a lawsuit, the ACLU of Idaho discovered during litigation that the state had built the tool on data so flawed that the state itself immediately discarded most of it, yet still moved forward with the program. The result, according to the ACLU, was a program relying on data that led to essentially arbitrary decisions about care. The state eventually agreed to change the system.

New tools, old problems

But even when an algorithmic tool is working as intended, there’s the chance for bias to show up, sometimes in unexpected ways. Last year, for example, a team of researchers announced the findings of a study examining an algorithm widely used by health care professionals. The tool, which affected millions of patients, contained implicit bias that meant black patients were consistently underserved.

The tool, developed by a company called Optum, was meant to determine which patients had the most complex medical needs and who would benefit the most from increased medical intervention. To make that determination, the tool looked at the costs of caring for patients, excluding race as a factor. But for several possible reasons—including having historically less access to care, facing poverty disproportionately, or dealing with discrimination from doctors—chronically sick black patients have less money spent on them than white patients with similarly severe ailments. Under the system, about 17.7 percent of black patients received additional care. If the tool were adjusted to remove the bias, the researchers found, that figure would skyrocket: 46.5 percent of black patients would then receive the extra care.
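To see how a cost-based proxy can skew results this way, here is a minimal sketch in Python. The group labels, the 20 percent cost gap, and every number in it are invented for illustration; this is not the Optum model, only a toy demonstration of what happens when patients are ranked by past spending rather than by how sick they actually are.

```python
# Illustrative sketch only: hypothetical numbers, not the Optum model.
# Shows how ranking patients by past spending (a proxy for need) can
# under-select a group whose care costs are lower at the same illness level.
import random

random.seed(0)

patients = []
for _ in range(10_000):
    group = random.choice(["A", "B"])   # "B" stands in for the under-served group
    illness = random.gauss(50, 15)      # true medical need (higher = sicker)
    # Assumption: group B accrues ~20% lower costs for the same level of illness,
    # e.g. because of reduced access to care.
    cost = illness * (0.8 if group == "B" else 1.0) + random.gauss(0, 5)
    patients.append((group, illness, cost))

def share_of_b(selected):
    """Fraction of the selected patients who belong to group B."""
    return sum(1 for g, _, _ in selected if g == "B") / len(selected)

top_n = 1_000  # slots in the extra-care program

by_cost = sorted(patients, key=lambda p: p[2], reverse=True)[:top_n]  # proxy ranking
by_need = sorted(patients, key=lambda p: p[1], reverse=True)[:top_n]  # "unbiased" ranking

print(f"Group B share when ranked by cost: {share_of_b(by_cost):.1%}")
print(f"Group B share when ranked by need: {share_of_b(by_need):.1%}")
```

Running the sketch shows group B taking far fewer of the program slots under the cost ranking than under the need ranking, even though both groups are, by construction, equally sick on average—the same general pattern the researchers reported.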

What’s next?

The health care industry seems primed to start using tools that could raise the stakes even higher. Right now, researchers are studying how to use artificial intelligence to detect skin cancer, heart attacks, eye disease, and more.

Researchers have even been studying how AI could be used to predict the likelihood of a person’s death, a potentially useful tool for guiding caregivers but one where the consequences of error or bias could literally be life or death.

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license. Do you have a question for Ask The Markup? Email us at ask@themarkup.org
