
The biggest actual threat AI poses to humans has nothing to do with robots. It's biased algorithms. And, like almost everything bad, they disproportionately affect the poor and marginalized.
Machine learning algorithms, whether in the form of "AI" or simple shortcuts for sifting through data, are incapable of making rational decisions because they don't reason; they find patterns. That government agencies across the US put them in charge of decisions that profoundly impact human lives seems incomprehensibly unethical.
When an algorithm manages inventory for a grocery store, for example, machine learning helps humans do things that would otherwise be harder. The manager probably can't keep track of millions of items in their head; the algorithm can. But when it's used to take away someone's freedom or children, we've given it too much power.
Two years ago, the bias debate broke wide open when ProPublica published a damning report on the COMPAS algorithm, a risk-assessment tool whose recidivism scores inform bail and sentencing decisions. COMPAS doesn't take race as an explicit input, but ProPublica's analysis found that black defendants who never went on to reoffend were labeled high risk at nearly twice the rate of white defendants who didn't. In effect, the big fancy algorithm's predictions track skin tone.
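The core of ProPublica's audit was simple: compare error rates across racial groups rather than overall accuracy. A minimal sketch of that check, using made-up data and hypothetical group labels (not ProPublica's actual dataset):

```python
# Sketch of a ProPublica-style error-rate audit on made-up data.
# Each record: (group, flagged_high_risk, actually_reoffended).
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, False),
    ("B", True,  True),  ("B", False, False),
    ("B", False, False), ("B", False, True),
]

def false_positive_rate(group):
    """Share of people in `group` who did NOT reoffend
    but were still flagged high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in negatives if r[1]]
    return len(wrongly_flagged) / len(negatives)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))
```

On this toy data, group A's non-reoffenders are wrongly flagged two times out of three while group B's never are, even though the tool never saw a "race" column: the disparity shows up in the error rates, which is exactly where ProPublica looked.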
In an age where algorithms are "helping" government employees do their jobs, if you're not straight, not white, or not living above the poverty line, you're at greater risk of unfair bias.
That's not to say straight, white, rich people can't suffer at the hands of bias, but they're far less likely to lose their freedom, children, or livelihood. The point here is that we're being told the algorithms are helping. They're actually making things worse.
Writer Elizabeth Rico believes unfair predictive analysis software may have influenced a social services investigator to take away her children. She wrote about her experience in an article where she describes how social services, whether intentionally or not, preys upon those who can't afford to avoid the algorithm's gaze. Her research revealed a system that equates being poor with being bad.
In the article, published on UNDARK, she says:
… the 131 indicators that feed into the algorithm include records for enrollment in Medicaid and other federal assistance programs, as well as public health records regarding mental-health and substance-use treatments. Putnam-Hornstein stresses that engaging with these services is not an automatic recipe for a high score. But more information exists on those who use the services than on those who don't. Families who don't have enough information in the system are excluded from being scored.
If you're accused of being an abusive or neglectful parent, and you've had the means to treat any addictions or mental health problems you've had in a private facility, the algorithm may just skip you. But if you use government assistance or have a state or county-issued medical card, you're in the crosshairs.
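The skew described above is structural: a scorer can only score the families it has records on, and public-service use is what generates records. A hypothetical sketch of that dynamic (the family names, indicators, and scoring rule are all invented for illustration, not the actual screening model):

```python
# Hypothetical illustration of data-availability bias:
# only families who used public services left records behind,
# so only they can be scored -- and scrutinized.
public_records = {
    "family_1": {"medicaid": 1, "mh_treatment": 2},  # used public services
    "family_2": {"medicaid": 1},
    # "family_3" paid for private care: no records at all
}

def risk_score(family_id):
    """Toy score: sums the indicator counts on file.
    Returns None when the system has nothing on the family."""
    records = public_records.get(family_id)
    if not records:
        return None  # invisible to the algorithm, so never flagged
    return sum(records.values())

for fam in ("family_1", "family_2", "family_3"):
    print(fam, risk_score(fam))
```

family_3, who could afford private treatment, simply falls outside the algorithm's gaze; the two families with public-assistance records absorb all the scrutiny, regardless of how the three actually parent.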
And that's the problem in a nutshell. The best intentions of researchers and scientists are no match for capitalism and partisan politics. Take, for example, that Stanford researcher's algorithm purported to predict gayness: it doesn't, but that won't stop people from thinking it does.
It isn't dangerous in the Stanford machine learning lab, but the GOP-helmed federal government is increasingly anti-LGBTQ+. What happens when it decides that applicants have to pass a "gaydar" test before entering military service?
Matters of sexuality and race may not be intrinsically related to poverty or disenfranchisement, but the marginalization of minorities is. LGBTQ+ individuals and black men, for example, already face unfair legislation and systemic injustice. Using algorithms to perpetuate that is nothing more than automating cruelty.
We cannot fix social problems by reinforcing them with black-box AI or biased algorithms: it's like trying to fight fire with fire. Until we develop 100 percent bias-proof AI, using it to take away a person's freedom, children, or future is just wrong.