Indeed, some might say I've something of a bee in my bonnet about it! But at least I am in good company, as the UK Equality and Human Rights Commission ("EHRC") has demonstrated a desire to push good AI governance higher up the agenda. Last September, it published new guidance for public bodies and those carrying out public functions in England, Scotland and Wales to help them avoid breaching equality law when using artificial intelligence, and in particular to help those using AI to implement an effective governance framework to support compliance with existing equality obligations.
The EHRC is acutely aware that AI and new digital technologies are transforming the delivery of public services, and that such technologies can lead to discrimination if implemented incorrectly. Accordingly, organisations that don't take steps to prevent such a risk may face reputational damage and legal action for breaches of the Equality Act 2010, including the public sector equality duty ("PSED").
Public bodies need to consider PSED from the outset when assessing whether to use AI and associated technologies, or coordinating with others who may be developing or using them on their behalf. PSED requires organisations carrying out public functions to have due regard to the need to eliminate any conduct that is prohibited by the Equality Act 2010, advance equality of opportunity, and foster good relations between people. Organisations subject to PSED must therefore consider and review how they are building equality into decision-making, internal and external policies, and the procurement of goods and services. In this respect, the EHRC published a checklist aimed at helping public bodies to think about what they need to do to comply with PSED.
While the EHRC advises that organisations should develop their own checklist based on their particular operations, it provides the following general steps that organisations should take:
- identify if and how your organisation, or others on your behalf, use AI and consider how PSED applies. Organisations should build equality into their existing services, and equality considerations for new policies or services should be addressed at the start of decision-making on whether to use a new AI system.
- gather evidence from staff, users, and equality and community groups, and assess how the AI could affect people with different protected characteristics either positively or negatively. Organisations should consider involving people from different groups before and after implementation to assess the potential benefits of using AI, the risks it may pose to equality and how the organisation can minimise such risks. Having limited or no data on certain protected characteristics is not an excuse. Where organisations have data gaps, they should take proportionate steps to fill these. For example, this might include undertaking targeted engagement or reviewing existing research.
- keep records of decisions relating to PSED and publish the results of the equality assessment for transparency. Case law on PSED illustrates that it is good practice to keep a record of how an organisation has considered equality in its decision-making process, and doing so can show how that organisation has considered PSED and assist in responding to complaints, audits and freedom of information requests.
- monitor the actual impact of the AI-related policy or service, reviewing and amending it as necessary. PSED applies even if an organisation is commissioning a third party to develop the AI, buying an existing product or commissioning external parties to use the AI on their behalf. In this case, to monitor compliance, the EHRC recommends that organisations impose a contractual requirement on third parties to provide the information needed to monitor the implementation of the AI in relation to PSED.
These general steps should not come as a surprise to those familiar with the anticipated requirements of the proposed EU Artificial Intelligence Act, in particular the requirements for quality systems, record keeping, and post-market monitoring.
In its guidance the EHRC reiterates how PSED is an ongoing duty and, where AI is used, it is important that public bodies (and organisations carrying out public functions) ensure that AI is working as intended and is able to guard against any unlawful discrimination or unintended negative effects. As a minimum, this is likely to require regular monitoring and evaluation to see whether people with one or more protected characteristics are being treated less favourably than others.
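As an illustration only (this is not drawn from the EHRC guidance), the kind of regular monitoring described above might start with a simple disparity check on decision outcomes, broken down by protected characteristic. The group labels, hypothetical data and the 80% ("four-fifths") threshold below are assumptions for the sketch, not legal standards:

```python
# Illustrative sketch of a periodic disparity check on AI-assisted decisions.
# Group labels, data and the 0.8 threshold are assumptions for the example.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 decisions (1 = favourable)."""
    return {group: sum(vals) / len(vals) for group, vals in outcomes.items()}

def disparity_flags(rates, threshold=0.8):
    """Flag any group whose favourable-outcome rate falls below `threshold`
    times the highest group's rate (the 'four-fifths' rule of thumb used in
    some fairness audits)."""
    best = max(rates.values())
    return {group: rate < threshold * best for group, rate in rates.items()}

if __name__ == "__main__":
    outcomes = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # hypothetical decision log
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
    }
    rates = selection_rates(outcomes)
    flags = disparity_flags(rates)
    for group, rate in rates.items():
        print(group, round(rate, 2), "review" if flags[group] else "ok")
```

A flagged group is a prompt for human review and further evidence-gathering, not a legal conclusion; real monitoring would also need to consider sample sizes, intersectional effects and the context of each decision.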
AI at Fieldfisher
As an independently ranked Tier 1 practice, our clients greatly benefit from our deep technology knowledge and market-leading expertise. Our international team are experts when it comes to legal issues relating to artificial intelligence, and our clients trust us to deliver pragmatic commercial solutions to help meet their strategic goals, and help them navigate this complex and fast-moving area of law.
The team's position at the top of the market, and the high-profile mandates we have learned from, give clients the reassurance they need when instructing lawyers.
For further information, please contact Chris Eastham at firstname.lastname@example.org.
Thanks to Andrea Carrera for his support in preparing this article.