A new study from the University of Melbourne has demonstrated how hiring algorithms can amplify human gender biases against women.
The researchers gave 40 recruiters real-life resumés for jobs at UniBank, which funded the study. The resumés were for roles as a data analyst, finance officer, and recruitment officer, which Australian Bureau of Statistics data shows are respectively male-dominated, gender-balanced, and female-dominated positions.
Half of the recruitment panel was given resumés with the candidate's stated gender. The other half was given the exact same resumés, but with traditionally female and male names interchanged. For instance, "Mark" might become "Sarah" and "Rachel" might become "John."
The panelists were then instructed to rank each candidate and collectively pick the top and bottom three resumés for each role. The researchers then reviewed their decisions.
They found that the recruiters consistently preferred resumés from the apparently male candidates, even though the qualifications and experience were identical to the women's. Both male and female panelists were more likely to rank men's resumés higher.

The researchers then used the data to build a hiring algorithm that ranked each candidate in line with the panel's preferences, and found that it reproduced their biases.
"Even when the names of the candidates were removed, AI assessed resumés based on historic hiring patterns where preferences leaned towards male candidates," said study co-author Dr Marc Cheong in a statement.
"For example, giving advantage to candidates with years of continuous service would automatically disadvantage women who've taken time off work for caring responsibilities."
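The proxy effect Dr Cheong describes can be sketched with a toy example. Everything below is hypothetical and far simpler than any real hiring model: made-up candidate data, a naive correlation-based "learning" rule, and an invented feature set. It only illustrates the mechanism: a scorer trained on biased decisions learns to penalize a career gap even though gender never appears in the input.

```python
# Hypothetical training rows: (years_experience, career_gap_years, panel_pick).
# The panel's labels are biased: candidates with career gaps were rejected.
train = [
    (8, 0, 1), (7, 0, 1), (6, 0, 1),
    (7, 2, 0), (6, 1, 0), (5, 3, 0),
]

def weight(feature_index):
    """Naive stand-in for training: weight a feature by the difference in
    its average value between picked and rejected candidates."""
    picked = [row[feature_index] for row in train if row[2] == 1]
    passed = [row[feature_index] for row in train if row[2] == 0]
    return sum(picked) / len(picked) - sum(passed) / len(passed)

w_exp, w_gap = weight(0), weight(1)  # w_gap comes out negative

def score(years_experience, career_gap_years):
    # No name or gender field exists anywhere in this model.
    return w_exp * years_experience + w_gap * career_gap_years

# Two candidates with identical experience; only the career gap differs.
no_gap = score(7, 0)
with_gap = score(7, 2)
# The candidate with the gap scores lower despite identical qualifications.
```

The point of the sketch is that bias survives anonymization: the model never sees gender, but because the training labels correlated career gaps with rejection, any feature that correlates with gender in the real world becomes a proxy for it.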
The study relied on a small sample of data, but these types of gender biases have also been documented at large companies. Amazon, for example, had to shut down a hiring algorithm after discovering it discriminated against female applicants, because its models had been trained predominantly on resumés submitted by men.
"Also, in the case of more advanced AIs that operate within a 'black box' without transparency or human oversight, there is a danger that any amount of initial bias will be amplified," added Dr Cheong.
The researchers believe the risks can be reduced by making hiring algorithms more transparent. But we also need to address our inherent human biases before they're baked into the machines.