Introduction

The past few weeks have not been kind to the reputation of algorithms. The controversy over their use in determining school examination results has included claims by Shadow Attorney General Lord Falconer that they discriminated unlawfully. These developments are highly relevant to employers.

Recent years have seen rapid growth in the use of algorithms in employment, particularly in recruitment. Algorithms are now being used in interviews – for example, to assess candidates on their facial and vocal expressions. Chatbots are replacing people as interviewers and textbots are communicating with candidates by text or email. The use of algorithms and AI is also moving further down the recruitment funnel, from initial screening to selection decisions, and into other HR decisions such as redundancies, performance dismissals, promotions and rewards. Algorithms are also being used for increasingly senior roles.

This article explains why claims regarding algorithms and discrimination are likely to become more common in the years ahead, something which UK employment law and enforcement mechanisms are ill-equipped to deal with.

Do algorithms reduce or embed bias?

Academics, especially in the United States, have extensively debated the pros and cons of algorithms and whether they increase or diminish bias and unlawful discrimination in employment decisions. The proponents point out that while some bias is inevitable, algorithms reduce the subjective and subconscious bias involved in decisions made by humans.

There is evidence that algorithms are capable of making better, quicker and cheaper decisions than humans. On the face of it, algorithms bring objectivity and consistency to decision making. However, the Office of Qualifications and Examinations Regulation (Ofqual) debacle highlights the potential for automated decisions to go wrong: just because algorithms are capable of making better decisions does not mean that they always will.

More than 30 years ago, St George's Hospital Medical School in London developed an algorithm designed to make admission decisions more consistent and efficient. It was found to discriminate against non-European applicants. Even so, the school had a higher proportion of non-European students than most other London medical schools, suggesting that the traditional recruitment methods used by the other medical schools discriminated even more.

Amazon attracted a lot of attention in 2018 when it abandoned an AI-developed recruitment tool that reportedly favoured male candidates. The tool had been developed over the previous four years and trained on 10 years of hiring data, and had reportedly taught itself to favour terms used by male candidates. Even though the algorithm was not given a candidate's gender, it reportedly penalised explicitly gender-specific words such as 'women's' (eg, 'women's sports') and, when these were excluded, moved on to implicitly gender-based words such as 'executed' and 'captured', which men apparently use more often than women.
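The proxy problem can be shown with a deliberately simplified sketch, which is in no way Amazon's actual system: the Python code below uses invented CV snippets and invented hiring outcomes purely to illustrate how a model with no gender field can still key on gender-correlated words.

    # Toy illustration of 'proxy' features; all data here is invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical historical data: past CVs and a biased hiring outcome.
    cvs = [
        "captain of women's chess club, managed community outreach",
        "executed product launch and captured new market share",
        "women's football team, coordinated charity fundraising",
        "executed migration plan and captured key accounts",
    ]
    hired = [0, 1, 0, 1]  # reflects past (biased) decisions, not merit

    # No gender column exists, yet the word counts still encode it.
    vectoriser = CountVectorizer()
    X = vectoriser.fit_transform(cvs)
    model = LogisticRegression().fit(X, hired)

    # Inspect which words drive the prediction: 'women' receives a negative
    # weight, while 'executed' and 'captured' receive positive weights.
    for word, weight in zip(vectoriser.get_feature_names_out(), model.coef_[0]):
        print(word, round(weight, 2))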

How do algorithms work?

At its simplest, an algorithm is merely computer code used to navigate, and often to build, a complex decision tree quickly. Algorithms used in recruitment can be bought 'off the shelf', which is appropriate when recruiting for jobs where the characteristics of successful candidates are clear and do not vary from employer to employer. Alternatively, a recruitment algorithm can be created specifically for a client, based on a data set taken from that client and customised to take account of the client's own experiences and priorities.

The algorithms which employers use to make employment decisions are usually developed by third-party specialist technology businesses. To do this, the developer goes through various stages.

First, it collects a data set from which to develop its model (the so-called 'training set'). With bespoke recruitment algorithms, this data set is usually based on previous applicants for a particular post.

Second, it must agree with the client the outcome that the algorithm is intended to achieve. The developer could treat everyone recruited as a successful candidate, use only the subset of recruits who are perceived to have been successful hires, or define the attributes of a successful candidate directly.

Third, it will use the computer's power to identify the best predictors of that outcome from the information contained in the data set.

Finally, the algorithm will be tested and verified against a different data set from the training set, to check that it generates good results on data it has not seen before.
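By way of illustration, the following is a minimal sketch of those four stages, assuming a bespoke recruitment model built in Python with scikit-learn; the file name, column names and choice of model are invented for illustration only.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Stage 1: the training set - hypothetical data on previous applicants
    # (assumed here to contain numeric features only).
    applicants = pd.read_csv("past_applicants.csv")

    # Stage 2: the agreed outcome - here, whether the hire was later rated
    # a success in post.
    y = applicants["successful_hire"]
    X = applicants.drop(columns=["successful_hire"])

    # Hold back data that the model will never see during training.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

    # Stage 3: let the computer find the best predictors of the outcome.
    model = RandomForestClassifier().fit(X_train, y_train)

    # Stage 4: test and verify against the held-out data, not the training set.
    print("hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))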

Algorithms vary from basic decision trees (eg, the National Health Service 111 'pathways' or the IR35 Check Employment Status for Tax tool) to complex, opaque programmes incorporating AI machine learning, where the algorithm teaches itself to adjust its own decision rules in order to better achieve its objectives.
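The difference is easy to see in code. A basic decision tree is explicit, inspectable branching logic, as in the sketch below (the triage questions are invented and are not the actual NHS 111 pathways); a machine-learning model replaces such hand-written rules with rules inferred from data, which may not be readable by a human at all.

    # A hand-written decision tree: every rule is visible and auditable.
    # The questions are invented for illustration.
    def triage(chest_pain: bool, short_of_breath: bool) -> str:
        if chest_pain:
            return "call 999"
        if short_of_breath:
            return "speak to a nurse"
        return "self-care advice"

    print(triage(chest_pain=False, short_of_breath=True))  # speak to a nurse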

Bias and unlawful discrimination can occur by reason of:

  • the objectives set for the algorithm;
  • the data inputted to create the algorithm;
  • the causal links identified by the algorithm; and
  • the data used when running the algorithm.

For example, something clearly went awry in one reported case where a CV screening tool identified being called Jared and playing lacrosse at high school as the two factors most strongly correlated with high performance in the job.

The use of algorithms to make employment-related decisions also raises difficult data privacy issues. The Information Commissioner's Office recently published fresh guidance on AI and data protection, highlighting the importance of processing personal data fairly, transparently and lawfully and, hence, in a non-discriminatory manner. The guidance illustrates how discrimination can occur if the data used to train a machine-learning algorithm is imbalanced or reflects past discrimination.
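The kind of check the guidance points towards can be simple in principle. Below is a minimal sketch comparing selection rates across groups; the file name, column names and the four-fifths threshold (a convention borrowed from US practice, not a UK legal test) are assumptions for illustration.

    import pandas as pd

    # Hypothetical audit extract: one row per candidate screened by the tool.
    results = pd.read_csv("screening_results.csv")

    # Selection rate per group (eg, sex recorded for monitoring purposes).
    rates = results.groupby("sex")["shortlisted"].mean()
    print(rates)

    # Flag if any group's rate falls below 80% of the highest group's rate.
    if (rates / rates.max()).min() < 0.8:
        print("possible adverse impact - investigate before relying on the tool")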

What is the likelihood of legal claims over the use of algorithms?

Legal cases in the United Kingdom, or even in the United States, challenging algorithm-based employment decisions have been rare to date. However, that promises to change in the years ahead.

Cases are likely to become more common for numerous reasons and not just because of the increased use of and attention paid to algorithm-based decisions:

  • To date, algorithms have mostly been used in recruitment decisions, which are less commonly challenged than decisions relating to pay, promotions or dismissals. As the use of algorithms expands beyond recruitment, litigation will become more common.
  • The true basis on which a decision has been made can normally be determined, albeit not always easily, where it is data based. Unpicking the true motivations behind human-based decisions is often impossible.
  • At least to date, there is evidence that people are more likely to mistrust a computer-based recruitment decision than a human-made one, a phenomenon known as 'algorithm aversion', and people are more likely to challenge decisions which they do not understand. Yet human decisions are not as transparent as they might initially seem: whatever explanation might be given, there is plenty of evidence that employment decisions made by humans are influenced by subconscious factors and rationalised after the event.

Algorithm-based decisions are particularly vulnerable to discrimination claims, and UK discrimination and employment laws, which were not designed to meet this challenge, are ill-equipped to do so.

Disadvantaged candidates or employees might argue that an algorithm-based decision directly or indirectly discriminated against them unlawfully, and the obligation to make reasonable adjustments under disability discrimination law poses further challenges. Employers may need to prove that they did not discriminate or that the indirectly discriminatory impact of the algorithm was objectively justified. In many cases, however, employers will not understand how an algorithm works (or even have access to the source code).

How will employers satisfy these tests? Many suppliers of algorithms reassure clients that their code has been stress tested to ensure that it does not discriminate, but an employment tribunal is unlikely to accept a supplier's word for this. Would independent verification be enough? US verification is unlikely to suffice in the United Kingdom, as UK and European discrimination laws differ from US ones.

Would a tribunal order disclosure of test and verification data or even the code itself? Algorithm suppliers would no doubt regard these as important trade secrets to be withheld at all costs. Will experts be needed to interpret this information? Can the algorithm supplier be sued for causing or inducing a breach of equality laws or helping the employer to do so? The supplier will often be based in the United States, introducing practical and legal complications.

Like it or not, the use of AI and algorithms in employment will inevitably increase and the conflict with existing laws and enforcement mechanisms will only become more evident.