US Tech Company Develops AI, Facial Recognition for Recruitment 

An applicant being interviewed on her phone

Rabat – UK companies, such as international consumer goods company Unilever, are using Artificial Intelligence (AI) and facial recognition in recruitment. The companies believe that the new technology can help them to identify the best candidates. 

The recruiters trying out this new technology, developed by US tech company Hirevue, are hoping to use the facial recognition function to analyze facial expressions, language, and tone of voice. The candidates can film their responses to specifically designed questions on a mobile phone or laptop prior to the interview stage.

The algorithms developed by Hirevue compare the candidate against 25,000 pieces of facial and linguistic information. The information comes from previous interviews and data on previously successful job applicants.

Hirevue’s chief psychologist Nathan Mondragon explained: “We get about 25,000 data points from 15 minutes of video per candidate. The text, the audio, and the video come together to give us a very clear analysis and rich data set of how someone is responding, the emotions and cognitions they go through.”

Loren Larsen, Hirevue’s chief technology officer, explained in detail to The Daily Telegraph how the technology works. Larsen said that only 10-20% of the assessment is based on facial movements and expressions. The rest, according to Larsen, is based on linguistic elements.

“There are 350-ish features that we look at in language: do you use passive or active words? Do you talk about ‘I’ or ‘we’? What is the word choice or sentence length? In doctors, you might expect a good one to use more technical language,” he said.
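To make the breakdown Larsen describes concrete, here is a minimal sketch of how such an assessment might be structured: a few toy linguistic features (pronoun use, sentence length, a crude passive-voice heuristic) combined with a facial score at the roughly 10-20% weighting he cites. The feature names, heuristics, and weights are illustrative assumptions, not Hirevue’s actual model.

```python
import re

def linguistic_features(transcript: str) -> dict:
    """Extract a few toy linguistic features of the kind the article mentions."""
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    words = re.findall(r"[A-Za-z']+", transcript.lower())
    first_singular = sum(w == "i" for w in words)
    first_plural = sum(w in ("we", "us", "our") for w in words)
    # Crude passive-voice heuristic: "was/were/been/being" + an "-ed" word.
    passive_hits = len(re.findall(r"\b(?:was|were|been|being)\s+\w+ed\b",
                                  transcript.lower()))
    return {
        "avg_sentence_length": len(words) / len(sentences),
        "i_vs_we_ratio": first_singular / max(first_plural, 1),
        "passive_constructions": passive_hits,
    }

def combined_score(linguistic_score: float, facial_score: float,
                   facial_weight: float = 0.15) -> float:
    """Blend facial and linguistic scores; 0.15 is an assumed midpoint
    of the 10-20% facial weighting reported in the article."""
    return facial_weight * facial_score + (1 - facial_weight) * linguistic_score

feats = linguistic_features(
    "I led the project. We shipped it on time. The budget was approved."
)
print(feats)
print(combined_score(linguistic_score=0.8, facial_score=0.6))
```

A real system would of course derive its scores from trained models over thousands of features rather than hand-written heuristics; the sketch only shows how a small facial weight can sit alongside a dominant linguistic component.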

According to Hirevue, the technology is most effective for sales roles: one company that hired using the AI analysis saw a 15% increase in sales afterward.

How fair is the algorithm?

Hirevue’s chief technology officer Larsen claims that the AI is much fairer than a human recruiter. “I would much prefer having my first screening with an algorithm that treats me fairly rather than one that depends on how tired the recruiter is that day.”

However, watchdogs have expressed fears that, while the AI’s analysis does not depend on mood, it carries other biases.

Griff Ferris, Legal and Policy Officer for Big Brother Watch, called the new technology “chilling,” claiming that it will negatively impact job seekers.

Ferris’ concerns are linked to equal opportunities. The technology, according to Ferris, “will inevitably have a detrimental effect on unconventional applicants.”

The watchdog representative added: “As with many of these systems, unless the algorithm has been trained on an extremely diverse dataset there’s a very high likelihood that it may be biased in some way, resulting in candidates from certain backgrounds being unfairly excluded and discriminated against.”