Employment: AI – programmed for inequality?
Artificial intelligence (“AI”) is everywhere, and is now used so routinely that we may not even notice it. Algorithms, automated decision-making and machine-learning technologies are increasingly used by organisations to help make decisions about individuals. Alongside the many benefits, there are legal issues to be wary of. We focus on where AI may breach the Equality Act 2010 in the workplace and beyond. Data protection considerations are outside the scope of this article.
The recently released report of the Commission on Race and Ethnic Disparities (the “CRED report”) acknowledged that AI can contribute to racial inequality. The Government has announced that the Office for AI will publish a white paper later this year on regulating AI, covering issues such as how to address potential racial bias in algorithmic decision-making, and the Equality and Human Rights Commission (“EHRC”) will provide guidance on how to apply the Equality Act to AI decision-making. We expect this will go wider than race and consider all nine protected characteristics (age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation).
Impact on employers
Bias and discrimination can unfortunately be found in AI tools, often stemming unintentionally from the humans who program them or from inherent bias in the datasets used to “train” the technology. For example, in the work context, an automated recruitment system could discriminate if it:
- favours one gender over another, for example by scoring language typically used by male candidates more highly than language more commonly used by female candidates (see the sketch after this list);
- values length of service in past roles disproportionately over experience and skills (creating age discrimination risk); or
- does not recognise overseas qualifications on a par with those from the UK (potentially exposing an employer to race discrimination claims).
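To make the first of those risks concrete, the following is a deliberately simplified, hypothetical sketch (in Python) of the mechanism, not a description of any real recruitment product: a screening tool whose word weights are learned from historical hiring decisions. Because the skew sits in the training data, the code itself can look entirely neutral.

```python
# Hypothetical, simplified sketch: a CV-screening model trained on historical
# hiring decisions. If past hires skewed towards one group, words correlated
# with that group pick up positive weight, and the model reproduces the bias.

from collections import Counter

# Toy historical data: (CV text, hired?) - the skew is in the labels, not the code.
history = [
    ("captained the rugby team and executed an aggressive sales strategy", True),
    ("led a competitive trading desk and executed client mandates", True),
    ("collaborated with colleagues and supported a community volunteering group", False),
    ("organised a collaborative mentoring scheme and supported new joiners", False),
]

def train_word_weights(examples):
    """Weight each word by how much more often it appears in hired CVs."""
    hired, rejected = Counter(), Counter()
    for text, was_hired in examples:
        (hired if was_hired else rejected).update(text.split())
    vocab = set(hired) | set(rejected)
    # +1 smoothing so unseen words score neutrally rather than crashing.
    return {w: (hired[w] + 1) / (rejected[w] + 1) for w in vocab}

def score(cv_text, weights):
    """Average learned weight of the words in a new CV."""
    words = cv_text.split()
    return sum(weights.get(w, 1.0) for w in words) / len(words)

weights = train_word_weights(history)

# Two equally qualified candidates, described in differently gendered language:
print(score("executed an aggressive strategy for the sales team", weights))
print(score("organised a collaborative strategy for the sales team", weights))
# The first scores higher purely because its vocabulary matches past hires.
```

The point for employers is that a tool can discriminate without any discriminatory rule ever being written down: the bias arrives with the training data.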
There are examples of individuals whose disability affects their ability to complete multiple-choice tests satisfactorily, even though they are able to answer the same questions using free text. Any automated decision-making process that does not build in triggers for human checks and reasonable adjustments for disabled individuals could breach the Equality Act.
We know that diverse teams perform well, but that does not always play out in recruitment decisions. AI may recommend a candidate whose selection surprises an organisation. This could leave the recruiters wondering whether the AI tool has got it wrong, or whether it has instead shone a spotlight on potential bias in the human decision-making process that had gone unchecked until now.
In an employment setting, AI often has significant consequences for individuals and it is not uncommon to see claims arising from its use.
Impact on society
The Equality Act also applies beyond employment, covering education, the public sector and the provision of services to clients, customers and the public. AI’s influence can be more opaque in these wider settings: the impact on any one individual is often small, but it is far-reaching when we zoom out across the population.
For example:
- AI can be used to filter applications for a range of services, from education courses to financial products.
- US research highlighted the disparate impact of facial recognition technologies on women and certain racial groups, with some systems having an error rate of more than 30% for darker-skinned women compared to less than 1% for lighter-skinned men.
- Concerns have been raised about targeted online advertisements for products, jobs and services, for example AI tools resulting in mechanic jobs being advertised predominantly to male users and nursery nurse jobs predominantly to female users.
Taken in isolation, an individual may not realise that a face recognition error has occurred, or that seeing (or not seeing) a particular advert is an example of discrimination. Across the population, however, we can see the scope for far-reaching consequences as access to opportunities and support is influenced in part by protected characteristics.
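Disparities of this kind only become visible when outcomes are measured group by group. The sketch below is one illustrative way such an audit might be run; the groups, numbers and 80% threshold (borrowed from the US “four-fifths” rule of thumb for selection rates) are all hypothetical, and the Equality Act itself prescribes no numeric test.

```python
# Illustrative sketch only: auditing a deployed tool for group-level disparity.
# All figures below are hypothetical, loosely echoing the facial recognition
# research mentioned above; the 80% threshold is an assumed rule of thumb.

def error_rate(errors, total):
    return errors / total

# Hypothetical audit results for a face recognition system, per group:
audit = {
    "darker-skinned women":  error_rate(31, 100),
    "darker-skinned men":    error_rate(12, 100),
    "lighter-skinned women": error_rate(7, 100),
    "lighter-skinned men":   error_rate(1, 100),
}

best = min(audit.values())  # the most accurately served group
for group, rate in audit.items():
    # Flag any group whose accuracy falls well below the best-served group.
    accuracy_ratio = (1 - rate) / (1 - best)
    flag = "REVIEW" if accuracy_ratio < 0.8 else "ok"
    print(f"{group}: error rate {rate:.0%}, accuracy ratio {accuracy_ratio:.2f} ({flag})")
```

Whatever threshold an organisation chooses, the design point is the same: without group-level measurement, each individual error looks like noise.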
What next?
AI can be a mechanism for achieving more equitable decision-making and reducing bias. To ensure AI is a positive tool, we need big-picture safeguards in place. The European Network of Equality Bodies produced a report highlighting the role of equality bodies, such as the EHRC, in regulating AI tools and ensuring that they produce non-discriminatory outcomes. The UK Government’s response to the CRED report stated that the EHRC will advise on safeguards and produce guidance to explain how to apply the Equality Act to algorithmic decision-making. The further guidance to be issued by the EHRC and the Office for AI will hopefully assist employers in their use of AI, and also provide a safety net for society.