
The machines are coming … and so are the regulators!

05/03/2020


As Isaac Asimov stated in his Zeroth Law of Robotics, "A robot may not injure humanity, or, by inaction, allow humanity to come to harm." That was in 1986, but the principle is as relevant as ever in 2020. As Artificial Intelligence leaps from the lab into our social media networks and hospitals, regulators are now playing catch-up. Their task is to ensure that AI serves as an overall societal good, and that we avoid potential doomsday scenarios.

So how does the GDPR fit into the regulation of AI, what else is on the horizon, and how can data protection principles be applied to other sources of risk? This post focusses on three recent publications: the European Commission's White Paper on Artificial Intelligence, the ICO's draft Guidance on the AI Auditing Framework, and a joint project between the ICO and the Alan Turing Institute called ExplAIn – all published since the new year. The blog also identifies the practical steps you can be taking now so that when the Terminator (aka the regulator) comes knocking, you don't just say "I'm unable to comply".
 

The White Paper – it's not just about privacy

The White Paper was written by the Commission's Unit on Robotics and Artificial Intelligence. Its purpose is to set out policy options in relation to AI and to invite comments to inform the Commission's future decision-making. The consultation is open until 31 May 2020.

The Commission acknowledges that AI has the potential to bring huge economic and social benefits to society. For example, it has already been used to improve the speed and accuracy of cancer diagnosis – a recent study showed that an AI model was as accurate as two doctors at interpreting mammograms, and more accurate than a single doctor. Another example is the development of autonomous vehicles, which have the potential to reduce congestion and emissions.

However, where there are rewards, there are risks. The White Paper lays out the risks posed by AI if not guided by sufficient regulation. Along with data protection risks, other risk areas identified include health and safety, protection against discrimination, and freedom of expression. The White Paper does not go into detail about these risks, but examples could include:
 
  • The moderation of social media content – AI is already being used by social media networks to sift through content being posted online to remove child pornography and violent content, but doing so can risk removing legal and perfectly acceptable content. Even computers make mistakes.
  • Reviewing job applications – Since AI is trained using real CVs of "good" applicants identified by humans, how do we ensure that the AI does not unwittingly select candidates based on our historical gender or racial biases?
  • Autonomous vehicle safety – AI algorithms in automated vehicles could be trained to detect people by the side of the road and to exercise caution around them. However, if the algorithm is trained only on adults (for example), it may be less likely to spot dangerous situations involving children. Road safety awareness also differs across countries and cultures. An algorithm is only as effective as the data it ingests, as the sketch below illustrates.
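To make that last point concrete, here is a minimal, hypothetical Python sketch of the kind of check an auditor might run on such a detector: measuring recall separately for each subgroup of the test data. The group labels and numbers are invented for illustration and are not drawn from the White Paper.

  # Hypothetical illustration: group labels and results are invented.
  from collections import defaultdict

  def recall_by_group(examples):
      """examples: iterable of (group, was_detected) pairs from a test set."""
      hits, totals = defaultdict(int), defaultdict(int)
      for group, was_detected in examples:
          totals[group] += 1
          hits[group] += int(was_detected)
      return {group: hits[group] / totals[group] for group in totals}

  # Toy evaluation results for a pedestrian detector.
  test_results = [
      ("adult", True), ("adult", True), ("adult", True), ("adult", False),
      ("child", True), ("child", False), ("child", False), ("child", False),
  ]
  print(recall_by_group(test_results))
  # {'adult': 0.75, 'child': 0.25} -- a gap this wide suggests the training
  # data under-represents one group and needs rebalancing.

A gap like this is precisely the kind of risk the White Paper's training data requirements are aimed at.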
The White Paper proposes six requirements for high-risk AI applications. As you're reading, take a moment to reflect back on the GDPR - do any of these sound familiar?
 
  • Training data. Train your AI on data broad enough to cover all the scenarios it is likely to encounter, so that dangerous situations are avoided, and take reasonable measures to ensure the use of AI does not lead to prohibited discrimination (e.g. the gender or racial biases mentioned above when sifting CVs).
  • Keeping of records and data. To encourage compliance at an early stage and to facilitate cooperation with authorities and enforcement, the Commission urges a record-keeping requirement covering the programming of the algorithm, the data used to train high-risk AI systems and, in certain cases, the data themselves (a minimal sketch of what such a record might capture follows this list).
  • Information provision. Concise and easy to understand information should be provided to inform individuals about the use of AI, the purpose for which the system is intended and its level of accuracy. This specific issue is addressed in the ExplAIn papers, described below.
  • Robustness and accuracy. AI systems need to be accurate and behave reliably. All reasonable measures should be taken and the risk to individuals minimised.
  • Human oversight. In addition to situations where decisions produce legal or similarly significant effects using personal data, human review may be required to oversee the system's output. Operational constraints on the AI should also be built in during the design phase.
  • Specific requirements for remote biometric identification. The Commission specifically calls out the risks of biometric identification and the need to comply with data protection law.
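The record-keeping requirement lends itself to a simple illustration. The hypothetical Python sketch below logs, for each training run of a high-risk system, the facts a regulator might later ask for: which code and which data produced the model. The schema and file names are our own invention – the White Paper prescribes no particular format.

  # Hypothetical sketch: the record schema and file names are invented.
  import hashlib
  import json
  from datetime import datetime, timezone

  def training_record(model_name, code_version, dataset_path, notes):
      """Build an audit record for one training run of a high-risk AI system."""
      with open(dataset_path, "rb") as f:
          dataset_sha256 = hashlib.sha256(f.read()).hexdigest()
      return {
          "model": model_name,
          "code_version": code_version,      # e.g. a git commit hash
          "dataset": dataset_path,
          "dataset_sha256": dataset_sha256,  # pins down exactly which data was used
          "trained_at": datetime.now(timezone.utc).isoformat(),
          "notes": notes,                    # e.g. known coverage gaps, rebalancing
      }

  # Toy usage with a stand-in dataset file.
  with open("applicants_2019.csv", "w") as f:
      f.write("applicant_id,score\n1,0.9\n2,0.7\n")
  record = training_record(
      "cv-screening-v2", "4f2a9c1", "applicants_2019.csv",
      notes="Rebalanced for gender after bias review.",
  )
  with open("training_log.jsonl", "a") as log:
      log.write(json.dumps(record) + "\n")

An append-only log like this is cheap to maintain and maps neatly onto the GDPR's records of processing – a parallel picked up below.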
So regulation of artificial intelligence has some very obvious parallels with data protection regulation. Consider your accountability obligations under the GDPR. To satisfy these, you need to maintain records of processing, implement technical and organisational data security measures, and take appropriate data protection measures during your product design phase. It's precisely these kinds of topics that are mirrored in the Commission's AI proposals.
 

AI Auditing Framework – assessing your risks

In the same week as the White Paper, the UK's data protection regulator published its draft Guidance on the AI Auditing Framework. This 105-page whopper of a document is aimed at legal and compliance professionals as well as AI technology specialists. It provides legal guidance on the data protection aspects of AI, along with tools and procedures to audit and investigate uses of AI. In other words, this is the framework the ICO will use when reviewing potential data protection infringements involving AI systems. Like the White Paper, it is open for consultation – until 1 April 2020.

Some of the practical points coming out of the Guidance are:
 
  • Identifying who the data controller is. Identifying the controller is a crucial first step in your compliance journey. The controller will have the responsibility for conducting the DPIA, and will be the party ultimately responsible for data protection compliance. However, answering this is not always going to be straightforward. Ask yourself: who trained the AI model? Who runs the algorithm? Who evaluates and tests the model? Who sets the balance between false positives and false negatives (a sketch of this trade-off follows this list)? They may not be the same entity.
  • Undertaking an assessment. The Commission likes to call this a "conformity assessment" – essentially what we data aficionados like to call a data protection impact assessment (DPIA). Your assessment will build upon the same process as your DPIA and will include some additional considerations to take into account the wider risks of AI. The ICO's framework suggests that you may need to undertake two of these – a technical version to truly capture how the AI operates, along with a high-level version to help you when drafting notices to individuals to explain the processing. The conformity assessment should also explain whether there are any trade-offs, for example, when trying to comply with the principle of data minimisation while maintaining statistical accuracy. Crucially, the aim of your conformity assessment is to identify the risks and the measures to mitigate them. For example, this could include rebalancing the training data that the model uses, potentially using special category data (with a valid legal basis!) to test how the system performs against, say, different gender or racial groups, and retraining the model if necessary.
  • Drafting a notice. Not only will this include information required under data protection law, but also information to help the individual understand why AI is being used, how decisions are reached, and how they can be effectively challenged through human review. It's worth noting here the complexity of describing your lawful basis for processing (as required under the GDPR), which may differ from purpose to purpose. For example, a facial recognition system is likely to rely on a different legal basis depending on whether it is used for crime prevention, tagging friends on a social network or security authorisation.
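To illustrate the false positive/false negative balance flagged in the first bullet, here is a minimal, hypothetical Python sketch that sweeps a decision threshold and reports both error rates, so that whoever sets the balance can see – and document in the DPIA – the trade-off being made. The scores are invented for illustration.

  # Hypothetical sketch: model scores and thresholds are invented.
  def error_rates(scored_examples, threshold):
      """scored_examples: (model_score, true_label) pairs; label 1 = positive."""
      fp = sum(1 for s, y in scored_examples if s >= threshold and y == 0)
      fn = sum(1 for s, y in scored_examples if s < threshold and y == 1)
      negatives = sum(1 for _, y in scored_examples if y == 0)
      positives = sum(1 for _, y in scored_examples if y == 1)
      return fp / negatives, fn / positives

  # Toy scores from a screening model.
  data = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1), (0.4, 0), (0.2, 0)]
  for threshold in (0.3, 0.5, 0.75):
      fp_rate, fn_rate = error_rates(data, threshold)
      print(f"threshold={threshold}: FP rate={fp_rate:.2f}, FN rate={fn_rate:.2f}")
  # Lowering the threshold trades false negatives for false positives --
  # whoever makes that call is exercising judgement worth recording.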

ExplAIn Guidance – explaining AI

The ICO has joined forces with the Alan Turing Institute, the UK's national institute for data science and artificial intelligence. Amongst other things, the ExplAIn Guidance discusses the process organisations should go through to prepare their AI explanations. Helpfully, it identifies (at least) six different kinds of explanation that may be given to users of an AI system.

The six types of explanations described are:
 
  • Rationale explanation: the reasons that led to a decision, delivered in an accessible and non-technical way.
  • Responsibility explanation: who is involved in the development, management and implementation of an AI system, and who to contact for a human review of a decision.
  • Data explanation: what data has been used in a particular decision and how; what data has been used to train and test the AI model and how.
  • Fairness explanation: steps taken across the design and implementation of an AI system to ensure that the decisions it supports are generally unbiased and fair, and whether or not an individual has been treated equitably.
  • Safety and performance explanation: steps taken across the design and implementation of an AI system to maximise the accuracy, reliability, security and robustness of its decisions and behaviours.
  • Impact explanation: the impact that the use of an AI system and its decisions has or may have on an individual, and on wider society.
The guidance stresses the importance of prioritising these explanations depending on the user of the system. For example, AI applications in medicine should focus on the safety and performance explanation, in line with the established standards and expectations of the medical sector. Meanwhile, AI use in the criminal justice system, where biased decision-making is a significant concern, should focus on the fairness explanation. These are obviously important considerations for controllers to take into account when drafting their transparency statements and when ensuring the "fairness" of their AI algorithms.
 

Final thoughts – "I'll be back"

We hope this foray into recent AI guidance has given you a starting point when reviewing the AI systems you are developing. It is clear that the UK Government and the European Commission are supporting the development of AI given its importance to society and the economy, but they are clear that its development must be carried out in a responsible manner. Other countries are also driving forward the AI agenda.

Practically, the ICO Auditing Framework and the ExplAIn papers give a lot more legal and technical detail, and further updated guidance will be produced in due course. They are worth a detailed read as you embark on your AI journey.
 
Robert Fett is an Associate in Fieldfisher's Privacy, Security and Information Law Group in London.
 

