
AI and I&D: Do they mix in recruitment?

02/12/2021


Technology is revolutionising the way organisations hire workers, but concerns about in-built biases in artificial intelligence-based recruitment processes suggest more needs to be done to ensure the technology is always fair and inclusive.
 
Employers are increasingly turning to artificial intelligence (AI) to assist with all aspects of the recruitment process.

While there is enormous potential for such tools, employers must carefully consider the risks to ensure no candidates are unfairly discriminated against when assessed by AI.

In simple terms, AI can be described as the use of technology to mimic the problem-solving abilities of humans for various purposes, including algorithm-based decision making by employers in the recruitment process.

The computerised algorithms provide instructions for using data to make decisions or perform tasks. The underlying rules making up an algorithm can either be written by computer programmers or developed via 'machine learning'. Machine learning allows computers to operate with minimal human supervision, creating and adapting their own rules by learning from patterns they identify in the data they are given (known as the 'training set').
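
The distinction matters in practice. The following minimal sketch (in Python, with invented data and thresholds) contrasts a hand-coded screening rule, whose logic a human wrote and can audit, with a machine-learned rule inferred from past hiring decisions:

```python
# Illustrative contrast between a hand-coded rule and a learned rule.
# All data and thresholds here are invented for illustration.

from sklearn.linear_model import LogisticRegression

# A programmer-written rule: the logic is explicit and auditable.
def screen_by_rule(years_experience: float) -> bool:
    return years_experience >= 3  # threshold chosen by a human

# A machine-learned rule: the logic is inferred from past decisions.
X_train = [[1], [2], [4], [6], [8]]   # years of experience
y_train = [0, 0, 1, 1, 1]             # 1 = historically hired
model = LogisticRegression().fit(X_train, y_train)

print(screen_by_rule(5))      # True: we can point to the exact rule
print(model.predict([[5]]))   # learned from patterns in the training set
```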

AI in the recruitment process

Employers deploy recruitment AI tools in various ways, including targeted online job advertisements, automated screeners to detect characteristics in candidates' CVs and facial recognition technology (FRT) to analyse video interviews.

Algorithms used by employers to help recruit staff are, in most cases, developed by specialist third-party providers.

The development process tends to include the following steps (sketched in code after the list):

  • Developing the 'training set' on which the algorithm will base future decisions – in recruitment algorithms, for example, the dataset may include previous application forms and interview records;

  • Determining the result the algorithm should achieve – for example, by considering the characteristics of a successful candidate;

  • Using the technology to determine the best predictors of this result; and

  • Testing the algorithm to ensure it is generating good results.
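
As a rough illustration of those four steps, the sketch below trains and tests a simple screening model with scikit-learn. The dataset, features and model choice are all invented for illustration and are not drawn from any real recruitment system:

```python
# A simplified sketch of the four development steps above, using
# scikit-learn and an entirely invented dataset of past applications.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Step 1: the 'training set' - features from past application forms
# (e.g. test score, years of experience, interview rating).
X = rng.normal(size=(500, 3))
# Step 2: the target result - whether each past candidate was judged
# successful (here generated synthetically).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Step 3: let the technology find the best predictors of that result.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("learned predictor weights:", model.coef_)

# Step 4: test the algorithm on data it has not seen before.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```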

What are the dangers of using AI in recruitment?

Under the Equality Act 2010, employers must not use AI in a way that constitutes discrimination. This includes using algorithms that replicate historical bias against certain demographics to make decisions.

AI algorithms are only as unbiased as the information they are fed. Bias and unlawful discrimination can arise from the objectives set for the algorithm, the data used to train it, and the predictors of good candidates it identifies.

This means AI-powered solutions can perpetuate bias in recruitment rather than remove it.

The most common discrimination challenges when relying on AI in recruitment tend to be:

  • The training set

The use of training data that has in-built bias can result in the algorithm automating and amplifying that bias.

The objective for systems used in recruitment processes is to distil a large amount of information about each applicant down to a few predictive features so that decisions are easily comparable.

Part of this process involves employers giving the algorithm examples of what a solution looks like (the training data). If these examples have in-built bias, or if the training data is incomplete or inaccurate, the algorithm will process data that may not accurately reflect reality.

For example, the use of CV screening requires technology to process applications and detect candidate characteristics to evaluate employability against criteria for the position.  

If an organisation is in an historically male-dominated industry, however, the training data supplied for the purposes of the recruitment algorithm may be inherently gender-biased.

Such risks may also be present in the use of FRT systems to analyse videos of candidates to examine speech patterns, tone of voice, facial movements and other indicators. Based on these indicators, the system makes recommendations as to who should progress to the next stage.

The danger, however, is that these technologies have the potential to discriminate against people with disabilities that significantly affect facial expression and voice. There are also examples of training sets for AI facial recognition systems consisting largely of Caucasian faces, meaning the resulting systems fail to recognise darker skin tones reliably.
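
To make the mechanism concrete, the hypothetical sketch below (all data invented) shows how a model trained on historically skewed hiring decisions can discriminate even when gender is excluded from its inputs, by rewarding a proxy feature that merely correlates with gender:

```python
# Sketch of how training-set bias propagates: if past hires in a
# male-dominated industry skewed male, a feature that merely correlates
# with gender (a proxy) is rewarded by the model. Data are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
gender = rng.integers(0, 2, size=n)       # 0 = female, 1 = male (hypothetical)
skill = rng.normal(size=n)                # genuinely job-relevant
proxy = gender + rng.normal(scale=0.3, size=n)  # e.g. CV wording that tracks gender

# Historical decisions favoured male candidates independently of skill.
hired = (skill + 1.5 * gender + rng.normal(scale=0.5, size=n) > 1).astype(int)

# Gender itself is excluded from the inputs, but the proxy remains.
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print("weight on skill:       ", model.coef_[0][0])
print("weight on gender proxy:", model.coef_[0][1])  # large despite gender being 'removed'
```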

  • Data processing

Discrimination can also occur when the data is processed.

Data processing by AI systems typically occurs in a metaphorical 'black box' that accepts inputs and generates outputs without revealing to the employer, the candidate or even the developer responsible how the data was processed.

This problem is exacerbated where AI tools are bought 'off the shelf' from third-party suppliers, because vendors of such systems tend to be less willing to reveal information that might compromise their trade secrets or intellectual property rights.

It is important to understand how a tool actually works – a so-called 'glass box' solution – so employers can ensure systems include effective feedback loops that promote correction and ongoing training.

If this can be determined in the development phase of such systems, the risks of algorithms that entrench biases and discrimination in decision making processes may be avoided.
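
What a 'glass box' audit might look like depends on the model, but for a simple linear screening model the learned weights can be read directly. The sketch below (feature names, data and model all assumed for illustration) shows how an employer could inspect which features drive decisions and feed corrections back into retraining:

```python
# A minimal 'glass box' audit for a linear screening model: the learned
# weights are directly readable, so suspect features can be spotted,
# removed or re-engineered, and the model retrained. All invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["test_score", "years_experience", "gap_in_cv"]
X = np.random.default_rng(2).normal(size=(300, 3))
y = (X[:, 0] - 0.8 * X[:, 2] > 0).astype(int)   # invented past decisions

model = LogisticRegression().fit(X, y)
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
# If a suspect feature (e.g. 'gap_in_cv', which may disadvantage carers
# or disabled candidates) carries a large weight, remove or rework it
# and retrain - the feedback loop the article describes.
```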

Conclusion

The use of AI in the recruitment process has tangible benefits for employers: it can speed up hiring, ensure vacancies are filled more quickly and enhance the candidate experience.

However, acknowledging and monitoring uncertainty in AI systems is critical to making fair decisions.

When using AI in the recruitment process, employers should ensure robust testing of any anti-discrimination measures and monitor the algorithm's performance on an ongoing basis.

By continuously checking algorithm performance against indicators that reflect non-discrimination, employers will be able to identify bias early and take steps to correct it.
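
One simple monitoring indicator is the selection rate for each group in each review period. The sketch below compares those rates and flags large divergence; the 0.8 trigger is the US 'four-fifths' heuristic, used here purely as an illustrative threshold rather than a UK legal test:

```python
# Ongoing monitoring sketch: compare selection rates across groups each
# review period and flag divergence. The 0.8 threshold is the US
# 'four-fifths' heuristic, shown only as an illustrative trigger.

from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs from one review period."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

period = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", False), ("B", True), ("B", False), ("B", False)]
rates = selection_rates(period)
worst, best = min(rates.values()), max(rates.values())
if best > 0 and worst / best < 0.8:
    print(f"review needed: selection rates {rates} diverge beyond threshold")
```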

This article was authored by David Lorimer, employment director; and Alexandra Kalu, trainee, at Fieldfisher.