The use of artificial intelligence (AI) in the employment sphere remains largely unregulated and, until recently, had attracted few proposals for structured governance. That changed in April 2024, when the Trades Union Congress (TUC) unveiled its draft Artificial Intelligence (Employment and Regulation) Bill (the draft Bill), which aims to regulate employer use of AI and to protect the rights of workers and jobseekers. This blog examines the key aspects of the draft Bill and considers its potential implications for employers.
Scope of application
The draft Bill seeks to target the use of AI in “high-risk” decision-making processes. This is broadly defined to encompass decisions that could have a substantial impact on the legal rights or employment conditions of workers, employees or jobseekers. In practice, this may include decisions relating to recruitment, the termination of employment contracts or the evaluation of an employee’s performance.
What is expected of employers?
In Part 3, the draft Bill proposes several requirements for employers when using AI to make “high-risk” decisions:
- Workplace AI risk assessment (WAIRA): A WAIRA would have to be completed prior to any high-risk decision being taken. This would assess an AI system in relation to health and safety, equality, data protection and human rights.
- Direct consultation: Employers would be required to directly consult with employees and workers to take account of their concerns and interests.
- Register of AI systems: Employers would have to establish and maintain a register of information about the AI systems they use in decision-making (one possible shape for such a register is sketched in code after this list).
- Right to personalised explanations: On request, employees would be entitled to a personalised statement explaining how they may be impacted by any high-risk decision.
- Right to human reconsideration: On request, employees would be entitled to human reconsideration of any high-risk decision made by AI.
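To make the register obligation concrete, the sketch below shows one way an employer might record a register entry in code. The draft Bill does not prescribe any format, so every field name here is an illustrative assumption rather than a statutory requirement.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRegisterEntry:
    """One entry in a hypothetical register of AI systems used in
    high-risk decision-making. Field names are assumptions, not
    requirements drawn from the draft Bill."""
    system_name: str            # e.g. "CV screening tool"
    supplier: str               # vendor name, or "in-house"
    purpose: str                # the high-risk decision it supports
    decision_types: list[str]   # e.g. recruitment, performance, termination
    waira_completed: date       # date of the workplace AI risk assessment
    human_review_contact: str   # who handles reconsideration requests

register = [
    AISystemRegisterEntry(
        system_name="CV screening tool",
        supplier="ExampleVendor Ltd",  # hypothetical supplier
        purpose="Shortlisting applicants for interview",
        decision_types=["recruitment"],
        waira_completed=date(2024, 5, 1),
        human_review_contact="hr-ai-review@example.com",
    ),
]

for entry in register:
    print(f"{entry.system_name} ({entry.supplier}): "
          f"{', '.join(entry.decision_types)}; WAIRA {entry.waira_completed}")
```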
Non-compliance with any of the above rights and obligations would, under the draft Bill, entitle an employee to present a complaint to an employment tribunal. Should the tribunal find in the employee’s favour, it could: (i) make a declaration to that effect; (ii) make an award of compensation; and (iii) make a recommendation to the employer as to the steps necessary to remedy the breach.
Shifting the burden of proof
The draft Bill also seeks to amend the Equality Act 2010 to combat discrimination facilitated by AI in the workplace. It would shift the burden onto employers to demonstrate the absence of discrimination in AI-led or human-led decisions. Where an employer cannot prove non-discrimination, it may still be able to fall back on a statutory defence where the following conditions apply:
- the employer did not create or modify the AI system;
- the employer audited the AI system for discrimination at each stage before using it to make high-risk decisions; and
- post-audit, procedural safeguards were put in place, including monitored steps to ensure the AI system was not used discriminatorily by employees or workers.
This shift in the burden of proof would undoubtedly lead to increased scrutiny of AI systems and greater expectations of transparency. If these provisions became law, employers would need to take proactive measures to demonstrate compliance, such as conducting AI impact assessments, implementing bias checks and developing clear AI policies.
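By way of illustration, one long-established bias check is the “four-fifths rule”, which compares selection rates between groups and flags potential adverse impact where one group’s rate falls below 80% of the highest group’s. The draft Bill does not prescribe any particular methodology (and the four-fifths rule is a convention from US employment guidance rather than a UK statutory test), so the Python sketch below is purely illustrative; the group labels and outcome data are hypothetical.

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate (share of positive decisions) per group.

    `outcomes` is a list of (group_label, was_selected) pairs, e.g. the
    results of an AI-assisted shortlisting round.
    """
    totals: Counter[str] = Counter()
    selected: Counter[str] = Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def passes_four_fifths(rates: dict[str, float]) -> bool:
    """Return False if any group's selection rate is below 80% of the
    highest group's rate (potential adverse impact)."""
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

# Hypothetical shortlisting outcomes for two applicant groups.
outcomes = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60
    + [("group_b", True)] * 25 + [("group_b", False)] * 75
)

rates = selection_rates(outcomes)
print(rates)                      # {'group_a': 0.4, 'group_b': 0.25}
print(passes_four_fifths(rates))  # False: 0.25 < 0.8 * 0.4 = 0.32
```

A real audit would of course go further (statistical significance testing, intersectional groups, proxies for protected characteristics), but even a simple check of this kind helps evidence the pre-deployment auditing that the statutory defence contemplates.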
The right to disconnect
Under Part 6 of the draft Bill, employees would be granted a statutory right to disconnect, which the TUC believes would guard against AI-driven work intensification. Part 7 builds on this by creating a new category of automatically unfair dismissal: a dismissal would be automatically unfair if the employee was dismissed as a result of “unfair reliance on high-risk decision-making” using AI, or for asserting their right to disconnect.
ICO’s strategic approach to regulating AI
Separately, the Information Commissioner’s Office (ICO) has responded to the UK government’s AI White Paper with a strategic approach to regulating AI. This strategy outlines AI’s potential to drive positive societal change while cautioning against risks such as bias, lack of transparency and gaps in accountability. Emphasising a flexible, principle-driven and risk-based regulatory framework, the ICO calls on organisations to proactively manage AI-related risks and safeguard individual rights. To help organisations navigate this complex area, the ICO has developed the AI and Data Protection Risk Toolkit, which supports structured risk assessments, and a Harms Taxonomy, which categorises potential harms; together these facilitate comprehensive risk evaluations and the development of context-specific mitigation strategies.
Looking ahead
Whilst the TUC’s draft Bill suggests one way in which the use of AI in the workplace could be regulated, it has not been introduced in Parliament as an official Bill. For now, its significance lies more in stimulating debate than in changing the law on the use of AI, and the UK government does not appear to have any imminent plans to legislate in this area. Nevertheless, these developments underscore the changing environment for AI and should encourage employers to deploy AI responsibly and transparently in the workplace.
For further information or guidance on the implications of AI in employment, please contact our employment law specialists.