Organisations increasingly use algorithms and automated decision-making to help them make decisions about individuals, but to what extent is this a step in the right direction?
Many employers now include algorithms and automated decision-making in hiring and other personnel processes. The London School of Economics and Political Science recently reported that more than 60% of firms had adopted new digital technologies and management practices as a result of COVID-19. Whilst the use of these AI tools provides benefits to an organisation, such as speed and cost savings, employers should be mindful of the legal implications of placing too much reliance on AI.
Consideration of data protection law
The UK implementation of the GDPR provides that personal data must be processed “lawfully, fairly and in a transparent manner”. When organisations use algorithms to process special category data (e.g. health, race and religion), they must ensure this does not have an unjustified, adverse effect on the individual.
UK GDPR specifically prohibits “solely automated decision-making that has a legal or similarly significant effect” unless:
- you have explicit consent from the individual;
- the decision is necessary to enter into or perform a contract; or
- it is authorised by domestic (UK) law.
These exemptions set a high bar for employers to satisfy. Consent might appear to be the most relevant in an employment context, but there is a risk that the power imbalance between a job candidate and prospective employer could result in consent not being considered freely given (and, as such, invalid). Where consent is relied upon as a basis for processing, organisations also need to keep in mind that individuals are entitled to refuse or withdraw consent at any time, without suffering any detriment (in practice, that means they could have a right to switch to a process that does not involve automation). What is “necessary” to enter into a contract can also be difficult to establish. The Information Commissioner’s Office guidance states that the processing must be a targeted and proportionate step that is integral to delivering the contractual service or taking the requested action, and this exemption will not apply if another decision-making process with human intervention was available. In short, relying solely on automated decision-making is likely to run into GDPR hurdles.
That said, most organisations using automation do so alongside traditional methods (interviews, applications, assessments, appraisals etc.). Before introducing algorithms and automated decision-making as part of any process, organisations must prepare a Data Protection Impact Assessment (DPIA) to identify, analyse and minimise the data protection risks and ensure compliance with UK GDPR. Failure to do so risks a fine of up to £8.7 million or 2% of global annual turnover, whichever is higher.
Consideration of the Equality Act 2010
Algorithms are human-made and, as such, inherently at risk of embedding some bias. A significant concern could arise if the algorithm inadvertently leads to discrimination in breach of the Equality Act.
For example, an automated recruitment system could discriminate if it:
- favours one gender over another (including scoring language more typically used by male candidates more highly than language more commonly used by female candidates); a simple statistical screen for this kind of skewed outcome is sketched after this list;
- values length of service in past roles disproportionately over experience/skills, which could lead to age discrimination risks; or
- does not recognise overseas qualifications on a par with those from the UK (potentially exposing an employer to race discrimination claims).
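To make the first of these risks concrete, one common way to screen a tool’s output is to compare selection rates between groups. The sketch below is illustrative only: it borrows the US “four-fifths rule” heuristic as an assumed benchmark (it is not a test prescribed by the Equality Act 2010), and the group labels, threshold and numbers are all hypothetical.

```python
# Illustrative only: a simple adverse-impact screen comparing selection
# rates between two groups of candidates after an automated sift.
# The 0.8 threshold is the US "four-fifths rule" heuristic, used here
# purely as an assumed benchmark, not a UK legal test.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of applicants from a group who were selected."""
    return selected / applicants if applicants else 0.0

def adverse_impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    high, low = max(rate_a, rate_b), min(rate_a, rate_b)
    return low / high if high else 1.0

# Hypothetical screening results from an automated sift.
male_rate = selection_rate(selected=60, applicants=100)    # 0.60
female_rate = selection_rate(selected=35, applicants=100)  # 0.35

ratio = adverse_impact_ratio(male_rate, female_rate)       # ~0.58
if ratio < 0.8:
    print(f"Ratio {ratio:.2f} is below 0.8 - flag the tool for human review")
```

A check of this kind does not establish (or exclude) indirect discrimination, which turns on whether a provision, criterion or practice puts a protected group at a particular disadvantage and whether it can be objectively justified; it simply flags outcomes that merit investigation.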
Any automated decision-making process that does not build in disability discrimination safeguards and reasonable adjustments could also place the employer at risk. There are examples of individuals whose disability affects their ability to complete multiple choice tests satisfactorily, despite being able to answer the same questions using free text. An automated process that does not build in flexibility (including appropriate triggers for human checks) could lead to equality concerns.
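A minimal sketch of such a human-check trigger follows, assuming a hypothetical screening tool: all of the names, fields and the cut-off score are invented for illustration, and a real system would also need to record the reasons for routing a candidate to a reviewer.

```python
from dataclasses import dataclass

# Illustrative sketch of a human-review trigger for an automated sift.
# All names, fields and thresholds are hypothetical assumptions.

@dataclass
class Candidate:
    name: str
    automated_score: float      # output of the (assumed) screening tool
    adjustment_requested: bool  # e.g. free text instead of multiple choice

REJECT_THRESHOLD = 0.5  # assumed cut-off below which the tool would reject

def requires_human_review(c: Candidate) -> bool:
    """Route to a human if an adjustment is requested, or if the
    automated outcome alone would be decisive (a rejection)."""
    return c.adjustment_requested or c.automated_score < REJECT_THRESHOLD

for candidate in [Candidate("A", 0.72, False),
                  Candidate("B", 0.41, False),
                  Candidate("C", 0.66, True)]:
    route = "human review" if requires_human_review(candidate) else "automated pipeline"
    print(f"{candidate.name}: {route}")
```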
A robust AI tool may recommend candidates for recruitment that surprise an organisation. We know that diverse teams work well, but that does not always play out in recruitment decisions. Diversity and a range of personality types can challenge existing (often unconscious) preferences related to team cohesion. This could leave recruiters wondering whether the AI tool has got it wrong and needs to be changed or overruled, or whether it has instead shone a spotlight on bias in the human decision-making process that had gone unchecked until now.
Takeaway considerations for employers
Bias and discrimination can unfortunately be found in AI tools, often stemming unintentionally from the humans who program them. Notwithstanding this, AI may also be the solution (or at least a helpful part of it) to achieving more equitable decisions. As technology continues to develop, algorithms can be programmed to detect and hopefully reduce discrimination and bias in decision-making. And, perhaps, we should be prepared to embrace some surprise outcomes from AI that in fact redress unidentified bias in the human decision-making process (robot 1:0 human).