It’s often repeated that artificial intelligence poses a danger to our jobs. But in a not-so-surprising twist, AI is also increasingly being used by companies to hire candidates.
According to a report by The Telegraph, AI-based video interviewing software — such as that developed by HireVue — is being used by UK companies for the first time to shortlist the best job applicants.
“Unilever, the consumer goods giant, is among companies using AI technology to analyse the language, tone and facial expressions of candidates when they are asked a set of identical job questions which they film on their mobile phone or laptop,” the report said.
HireVue, a Utah-based pre-employment assessment platform founded in 2004, uses machine learning to evaluate candidates’ video responses, with an AI system trained on some 25,000 data points. The company’s software is used by over 700 companies worldwide, including Intel, Honeywell, Singapore Airlines, and Oracle.
“There are lots of subtle cues we subconsciously make sense of — think facial expressions or intonation — but these are missed when we zone out,” the company notes on its website.
The videos record an applicant’s responses to preset interview questions, which are then analyzed by the software for intonation, body language, and other parameters, looking for matches against traits of previous “successful” candidates.
It’s worth noting that Unilever experimented with HireVue in its recruitment efforts as early as 2017 in the US.
Intelligent recruiting on the rise
From recommending what to binge-watch over the weekend to booking the cheapest flight for your next vacation, AI and machine learning have quickly emerged as two of the most disruptive forces to hit the economy.
The technology is now doing more than ever — for both good and bad. It’s being deployed in health care; it’s helping artists synthesize death metal music. On the other hand, it’s predicting crime and identifying criminal activity, enabling high-tech surveillance, being used to develop evasive malware, and even judging your creditworthiness.
What’s more, the ability to weaponize AI to unleash a tidal wave of propaganda amid the current flow of fake news has left digital platforms struggling to weed it all out, turning cyberspace into a new battlefield for information warfare.
AI systems are also scrutinizing your resume, transforming both job seeking and the workplace, and revamping the very way companies look for candidates, get the most out of employees, and retain top talent.
But just as algorithms steadily infiltrate different aspects of our day-to-day lives and make decisions on our behalf, they have come under progressively greater scrutiny for being as biased as the humans they sometimes replace.
What is fairness?
By letting a computer program make hiring decisions for a company, the prevailing notion is that the process can be made more efficient — both by selecting the most qualified people from a deluge of applications and side-stepping human bias to identify top talent from a diverse pool of candidates.
Yet, as is widely established, AIs are only as good as the data they’re trained on. Data containing implicit racial, gender, or ideological biases can creep into these systems, resulting in a phenomenon called disparate impact, in which some candidates are unfairly rejected or excluded altogether because they don’t fit a certain definition of “fairness.”
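One common way to quantify disparate impact is the “four-fifths rule” used in US employment guidelines: if one group’s selection rate is less than 80 percent of another group’s, the process may be flagged for disparate impact. Here is a minimal sketch in Python, using hypothetical applicant numbers for illustration:

```python
def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of selection rates between two groups of applicants.

    Under the 'four-fifths rule', a ratio below 0.8 is often treated
    as evidence of potential disparate impact.
    """
    rate_a = selected_a / total_a  # selection rate for group A
    rate_b = selected_b / total_b  # selection rate for group B
    return min(rate_a, rate_b) / max(rate_a, rate_b)


# Hypothetical numbers: 50 of 100 group-A applicants were shortlisted,
# but only 30 of 100 group-B applicants were.
ratio = disparate_impact_ratio(50, 100, 30, 100)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.60 — below the 0.8 threshold
```

The ratio is a blunt instrument — it says nothing about *why* the rates differ — but it shows how a simple bias test can be run on the output of an automated hiring pipeline.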
Regulating the use of AI-based hiring tools, then, calls for algorithmic transparency, bias testing, and assessment of the risks of automated discrimination.
But most importantly, it calls for collaboration between engineers, domain experts, and social scientists. This is the key to understanding the trade-offs between different notions of fairness and to defining which biases are acceptable and which are not.