This article was published on January 17, 2020

Why using AI to screen job applicants is almost always a bunch of crap

Millions of potential employees are subjected to artificial intelligence screenings during the hiring process every month. While some systems make it easier to weed out candidates who lack necessary educational or work qualifications, many AI hiring solutions are nothing more than snake oil.

Thousands of companies around the world rely on outside businesses to provide so-called intelligent hiring solutions. These AI-powered packages are advertised as a way to narrow job applicants down to a ‘cream of the crop’ for humans to consider. On the surface, this seems like a good idea.

Anyone who’s ever been responsible for the hiring at a decent-sized operation wishes they had a magic button that would save them from wasting their time interviewing the worst candidates.

Unfortunately, the companies creating these AI solutions are often offering something that’s simply too good to be true.

CNN’s Rachel Metz wrote the following in a recent report concerning AI-powered hiring solutions:

With HireVue, businesses can pose pre-determined questions — often recorded by a hiring manager — that candidates answer on camera through a laptop or smartphone. Increasingly, those videos are then pored over by algorithms analyzing details such as words and grammar, facial expressions and the tonality of the job applicant’s voice, trying to determine what kinds of attributes a person may have. Based on this analysis, the algorithms will conclude whether the candidate is tenacious, resilient, or good at working on a team, for instance.
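To make concrete what that kind of pipeline actually does, here’s a minimal, purely illustrative sketch in Python. Every function name, feature, and weight below is hypothetical — HireVue has never published its models — and this shows only the general shape of the approach, not anyone’s real code:

```python
# Hypothetical sketch of a video-interview "trait scoring" pipeline.
# None of these features, weights, or thresholds reflect any real
# vendor's system; they illustrate the general shape of the approach.

def extract_features(transcript: str, smile_ratio: float, pitch_variance: float) -> dict:
    """Reduce an interview answer to a handful of crude numbers."""
    words = transcript.lower().split()
    return {
        "word_count": len(words),
        "confident_words": sum(w in {"definitely", "absolutely", "will"} for w in words),
        "smile_ratio": smile_ratio,        # fraction of frames classified as "smiling"
        "pitch_variance": pitch_variance,  # variability in vocal tone
    }

def score_traits(features: dict) -> dict:
    """Map crude features to 'traits' via arbitrary, developer-chosen weights."""
    return {
        "tenacious": 0.6 * features["confident_words"] + 0.4 * features["pitch_variance"],
        "team_player": 0.8 * features["smile_ratio"] + 0.001 * features["word_count"],
    }

answer = extract_features("I will absolutely deliver results", smile_ratio=0.7, pitch_variance=0.3)
print(score_traits(answer))  # the weights, not the candidate, decide the outcome
```

Notice that the “traits” fall out of weights somebody picked, not out of anything the candidate actually did.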

Here’s the problem: AI cannot determine whether a job candidate is tenacious, resilient, or good at working on a team. Humans can’t even do this. It’s impossible to quantify someone’s tenacity or resilience by monitoring the tone of their voice or their facial expressions over a few minutes of video or audio.

But, for the sake of argument, let’s concede we live in a parallel universe where humans magically have the ability to determine whether someone works well with others by observing their facial expressions while they answer questions about, presumably, whether they work well with others. An AI, even in this wacky universe where everyone is neurotypical and thus entirely predictable, still couldn’t make the same judgments, because AI is stupid.

AI doesn’t know what a smile means, or a frown, or any human emotion. Developers train it to recognize a smile, and then the developers decide what a smile means and hard-code that meaning to the “smile” output. Maybe the company developing the AI has a psychiatrist or an MD standing around saying “in response to question 8, a smile indicates the candidate is sincere,” but that doesn’t make the statement true. Many experts consider this type of emotional simplification reductive and borderline physiognomy.
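In other words, the “smile means sincere” step is just a hand-authored lookup table sitting on top of the classifier. Here’s a hypothetical sketch of what that mapping amounts to (the question IDs and labels are invented for illustration):

```python
# Hypothetical: the model only emits a label like "smile"; a human-authored
# table decides what that label is claimed to mean for each question.
TRAIT_MAP = {
    ("question_8", "smile"): "sincere",
    ("question_8", "frown"): "evasive",  # arbitrary, not evidence-based
}

def interpret(question_id: str, detected_expression: str) -> str:
    """Translate a detected expression into a claimed personality trait."""
    return TRAIT_MAP.get((question_id, detected_expression), "no inference")

print(interpret("question_8", "smile"))  # -> "sincere", because a developer said so
```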

The bottom line is that the company using the software has no clue what the algorithms are doing, and the PhDs or experts backing up the claims have no clue what kind of bias the algorithms are coded with. All AI that judges human personality traits is inherently biased, and the developers coding these systems cannot protect end users from that bias.

Simply put, there is no scientific basis by which an AI can determine desirable human traits by applying computer vision and natural language processing techniques to short video or audio clips. The analog version of this would be hiring based on what your gut tells you.

You may as well decide that you’ll only hire people wearing charcoal suits, or women wearing red lipstick, for all the measurable good these systems do. After all, the most advanced facial recognition systems on the planet still struggle to reliably tell one black person apart from another.

Anyone who believes that an AI startup has built an algorithm that can tell whether a person of color, for example, is “tenacious” or “a good team worker” based on a 3-5 minute video interview should email me right away. There’s a bridge in Brooklyn I’d like to sell them.
