This article was published on February 5, 2022

We need to decouple AI from human brains and biases

We might need to let go of the limitations of human thinking


Image by: Shutterstock

In the summer of 1956, a group of about 10 scientists met at Dartmouth College and founded the field of artificial intelligence. Researchers from mathematics, engineering, psychology, economics, and political science got together to find out whether they could describe learning and human thinking so precisely that a machine could replicate them. Barely a decade later, these same scientists had contributed to dramatic breakthroughs in robotics, natural language processing, and computer vision.

Although a lot of time has passed since then, robotics, natural language processing, and computer vision remain some of the hottest research areas to this day. One could say that we’re focused on teaching AI to move like a human, speak like a human and see like a human.

The case for doing this is clear: With AI, we want machines to automate tasks like driving, reading legal contracts or shopping for groceries. And we want these tasks to be done faster, safer and more thoroughly than humans ever could. This way, humans will have more time for fun activities while machines take on the boring tasks in our lives.

However, researchers are increasingly recognizing that AI, when modeled after human thinking, can inherit human biases. The problem is manifest in Amazon’s recruiting algorithm, which famously discriminated against women, and in the COMPAS algorithm used by U.S. courts, which disproportionately flagged Black defendants as high risk. Myriad other examples further speak to the problem of bias in AI.

In both cases, the problem began with a flawed data set. Most of Amazon’s past hires were men, and Black defendants were overrepresented in the criminal-justice records behind COMPAS. Although those patterns are the result of pervasive cultural biases, the algorithms had no way to know that. Instead, they simply learned to replicate the data they were fed, exacerbating the biases embedded in it.

Manual fixes can get rid of these biases, but they come with risks. If implemented poorly, well-meaning fixes can make some biases worse or even introduce new ones. Recent developments in AI, however, are making it easier to keep these biases out of models, and engineers should embrace them. Newer methods limit the risk of bias polluting the results, whether it comes from the data set or from the engineers themselves. They also require engineers to intervene in the AI less often, which eliminates more of the boring and repetitive work.

When human knowledge is king

Imagine the following scenario: You have a big data set of people from different walks of life, tracking whether they have had COVID or not. The labels COVID / no-COVID have been entered by humans, whether doctors, nurses or pharmacists. Healthcare providers might be interested in predicting whether or not a new entry is likely to have had COVID already.

Supervised machine learning comes in handy for tackling this kind of problem. An algorithm can take in all the data and start to understand how different variables, such as a person’s occupation, gross income, family status, race or ZIP code, influence whether they’ve caught the disease or not. The algorithm can estimate how likely it is, for example, for a Latina nurse with three children from New York to have had COVID already. As a consequence, the date of her vaccination or her insurance premiums may get adjusted in order to save more lives through efficient allocation of limited resources.
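For readers who want to see what this looks like in practice, here is a minimal sketch of such a supervised model in Python. The file name people.csv, the column names, the example person, and the scikit-learn setup are all assumptions chosen for illustration, not a description of any real system.

```python
# Minimal supervised-learning sketch (illustrative only).
# "people.csv" and its column names are hypothetical; "had_covid" is the
# human-entered COVID / no-COVID label described above.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("people.csv")

features = ["occupation", "gross_income", "family_status", "race", "zip_code"]
X, y = df[features], df["had_covid"]

# One-hot encode the categorical columns; pass gross_income through as-is.
preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"),
      ["occupation", "family_status", "race", "zip_code"])],
    remainder="passthrough",
)
model = Pipeline([("prep", preprocess), ("clf", LogisticRegression(max_iter=1000))])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_train, y_train)

# Estimated probability that a new person has already had COVID.
new_person = pd.DataFrame([{
    "occupation": "nurse", "gross_income": 52000,
    "family_status": "3 children", "race": "Latino", "zip_code": "10001",
}])
print(model.predict_proba(new_person)[0, 1])
```

The trouble, as the next paragraphs explain, is that everything here hinges on the quality of the human-entered labels.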

This process sounds extremely useful at first glance, but there are traps. For example, an overworked healthcare provider might have mislabeled data points, leading to errors in the data set and, ultimately, to unreliable conclusions. This type of mistake is especially damaging in high-stakes settings like the hiring and criminal-justice examples above.

Supervised machine learning seems like an ideal solution for many problems. But humans are way too involved in the process of making data to make this a panacea. In a world that still suffers from racial and gender inequalities, human biases are pervasive and damaging. AI that relies on this much human involvement is always at risk of incorporating these biases.

Incorporating human biases into supervised AI isn’t the way to go forward. Image by author.

When data is king

Luckily, there is another solution that can leave the human-made labels behind and only work with data that is, at least in some way, objective. In the COVID-predictor example, it might make sense to eliminate the human-made COVID / no-COVID labels. For one thing, the data might be wrong due to human error. Another major problem is that the data may be incomplete. People of lower socioeconomic status tend to have less access to diagnostic resources, which means that they might have had COVID already but never tested positive. This absence may skew the data set.

To make the results more reliable for insurers or vaccine providers, it might be useful, therefore, to eliminate the label. An unsupervised machine learning model would now go ahead and cluster the data, for example by ZIP code or by a person’s occupation. This way, one obtains several different groups. The model can then easily assign a new entry to one of these groups.
After that, one can match this grouped data with other, more reliable data like the excess mortality in a geographical area or within a profession. This way, one obtains a probability about whether someone has had COVID or not, regardless of the fact that some people may have more access to tests than others.
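A rough sketch of that two-step idea follows, assuming the same hypothetical people.csv (with the human-made label removed) and a made-up excess_mortality.csv keyed by ZIP code. All file and column names, and the number of clusters, are assumptions.

```python
# Unsupervised sketch: cluster people without the COVID label, then attach
# more objective data (excess mortality per ZIP code) to each cluster.
# "people.csv" and "excess_mortality.csv" are hypothetical files.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder

people = pd.read_csv("people.csv").drop(columns=["had_covid"])  # blind the human label
mortality = pd.read_csv("excess_mortality.csv")  # columns: zip_code, excess_mortality

# Encode the grouping features so the clustering algorithm can use them.
enc = OneHotEncoder(handle_unknown="ignore")
X = enc.fit_transform(people[["occupation", "zip_code"]])

# Group people, for example by occupation and ZIP code.
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0)
people["cluster"] = kmeans.fit_predict(X)

# Use each cluster's average excess mortality as a label-free proxy
# for how hard COVID hit that group.
people = people.merge(mortality, on="zip_code", how="left")
cluster_risk = people.groupby("cluster")["excess_mortality"].mean()

# A new entry is assigned to a cluster and inherits that cluster's risk estimate.
new_entry = pd.DataFrame([{"occupation": "nurse", "zip_code": "10001"}])
print(cluster_risk[kmeans.predict(enc.transform(new_entry))[0]])
```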

Of course, this still requires some manual work because a data scientist needs to match the grouped data with the data about excess mortality. Nevertheless, the results might be a lot more reliable for insurers or vaccine providers.

Sending machines on a bounty hunt

Again, this is all well and good, but you’re still leaving decisions about vaccine schedules or insurance policies to the person at the other end of the process. In the case of vaccines, the person in charge might decide to vaccinate people of color later because they tend to use the healthcare system less frequently, making it less likely that hospitals overflow if they get sick. Needless to say, this would be an unfair policy based on racist assumptions.

Leaving decisions up to the machine can help to circumvent bias ingrained in decision-makers. This is the concept behind reinforcement learning. You provide the same data set as before, without the human-made labels since they could skew results. You also feed it some information about insurance policies or how vaccines work. Finally, you choose a few key objectives, like no overuse of hospital resources, social fairness and so on.

In reinforcement learning, the machine gets rewarded if it finds an insurance policy or a vaccine date that fulfills the key objectives. By training on the data set, it finds policies or vaccine dates that optimize these objectives.
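To make the reward idea concrete, here is a toy sketch in Python. It is not full reinforcement learning (a real system would model the problem as an environment and train an agent against it); it simply scores hypothetical vaccination policies with a reward that encodes the objectives and keeps the best one. The simulator, the group names, and every number are invented for illustration.

```python
# Toy sketch of the reward that drives the reinforcement-learning approach.
# The "simulator" and its numbers are made up; a real system would replace it
# with an epidemiological or actuarial model and train an agent against it.
import random

def simulate(policy):
    """Hypothetical stand-in for a simulator. Takes a policy mapping population
    groups to vaccination weeks and returns (peak_hospital_load, fairness_gap)."""
    peak_load = sum(1.0 / (week + 1) for week in policy.values())
    fairness_gap = max(policy.values()) - min(policy.values())
    return peak_load, fairness_gap

def reward(policy):
    peak_load, fairness_gap = simulate(policy)
    # Reward policies that keep hospital load low AND treat groups similarly.
    return -2.0 * peak_load - 1.0 * fairness_gap

groups = ["nurses", "teachers", "retail workers", "retired people", "students"]

best_policy, best_reward = None, float("-inf")
for _ in range(10_000):
    # Candidate policy: which week (0-12) each group gets vaccinated.
    candidate = {g: random.randint(0, 12) for g in groups}
    r = reward(candidate)
    if r > best_reward:
        best_policy, best_reward = candidate, r

print(best_policy, best_reward)
```

The essential point is only that the stated objectives, not a decision-maker’s gut feeling, determine which policy wins.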

This process further reduces the need for human data entry and decision-making. Although it’s still far from perfect, this kind of model might make important decisions not only faster and easier but also fairer and freer from human bigotry.

There’s still a lot to fix. Image by author.

Further reducing human bias

Any data scientist will tell you that not every machine learning model — be it supervised, unsupervised or reinforcement learning — is well-suited to every problem. For example, an insurance provider might want to obtain the probabilities that a person has had COVID or not but wish to figure out the policies themselves. This changes the problem and makes reinforcement learning unsuitable.

Fortunately, there are a few common practices that go a long way toward unbiased results, even when the choice of model is limited. Most of them come down to how the data set is handled.

First of all, blinding unreliable data is wise when you have reason to suspect that a particular variable is unduly influenced by existing inequalities. For example, since we know that the COVID / no-COVID label might be inaccurate for a variety of reasons, leaving it out might lead to more accurate results.
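In code, blinding a column is simply a matter of dropping it before training. A two-line sketch, again assuming the hypothetical people.csv from above:

```python
import pandas as pd

df = pd.read_csv("people.csv")  # hypothetical file, as above

# Blind the unreliable, human-entered label instead of training on it.
df_blinded = df.drop(columns=["had_covid"])
```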

This tactic shouldn’t be confused with blinding sensitive data, however. For example, one could choose to blind race data in order to avoid discrimination. This might do more harm than good, though, because the machine might learn something about ZIP codes and insurance policies instead, and ZIP codes are, in many cases, strongly correlated with race. The result is that a Latina nurse from New York and a white nurse from Ohio with otherwise identical profiles might end up with different insurance policies simply because of where they live, which could end up being unfair.

To make sure that this doesn’t happen, one can add weights to the race data. A machine learning model might quickly conclude that Latino people get COVID more often. As a result, it might request higher insurance contributions from this segment of the population to compensate for this risk. By giving Latino people slightly more favorable weights than white people, one can compensate such that a Latina and a white nurse indeed end up getting the same insurance policy.
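One concrete way to read “weights” here is as per-sample weights during training. The sketch below assumes a scikit-learn-style model and the hypothetical people.csv from above; the weight values are purely illustrative and would need careful testing before anyone relied on them.

```python
# Sketch of compensating weights (values are illustrative only).
# Instead of blinding the race column, keep it and give historically
# disadvantaged groups slightly more favorable sample weights, so the model
# cannot penalize them (or proxies like ZIP code) as heavily.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("people.csv")  # hypothetical file, as above

# Hand-tuned, hypothetical weights; anything not listed defaults to 1.0.
group_weights = {"Latino": 1.15, "Native American": 1.05, "White": 1.0}
sample_weight = df["race"].map(group_weights).fillna(1.0)

X = pd.get_dummies(df[["occupation", "race", "zip_code"]])
y = df["had_covid"]

model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=sample_weight)
```

As the next paragraph explains, such weights are easy to overdo for small groups, so they have to be validated group by group rather than set once and forgotten.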

One should use the method of weighting carefully, though, because it can easily skew the results for small groups. Imagine, for example, that in our COVID data set, there are only a few Native Americans. By chance, all these Native Americans happen to be taxi drivers. The model might have drawn some conclusions about taxi drivers and their optimal healthcare insurance elsewhere in the data set. If the weight for Native Americans is overblown, then a new Native American may end up getting the policy for taxi drivers, although they might have a different occupation.

Manually removing bias from an imperfect model is extremely tricky and requires a lot of testing, common sense and human decency. Also, it’s only a temporary solution. In the longer term, we should let go of human meddling and the bias that comes with it. Instead, we should embrace the fact that machines aren’t as awful and unfair as humans if they get left alone with the right objectives to work toward.

Human-centered AI is awesome, but we shouldn’t forget that humans are flawed

Making AI move, speak, and think like a human is an honorable goal. But humans also say and think awful things, especially toward underprivileged groups. Letting one team of human data scientists filter out all sources of human bias and ignorance is too big of a task, especially if the team isn’t diverse enough itself.

Machines, on the other hand, haven’t grown up in a society of racial and economic disparities. They just take whichever data is available and do whatever they’re supposed to do with it. Of course, they can produce bad output if the data set is bad or if flawed humans intervene too much. But many of these flaws in data sets can be compensated with better models.

AI, at this point in time, is powerful but still carries human bias a bit too often. Human-centered AI won’t go away because there are so many mundane tasks that AI could take off the hands of humans. But we shouldn’t forget that we can often achieve better results if we leave machines to do their thing.

This article was originally published on Built In.
