AI & privacy compliance: getting data protection impact assessments right

In recent years, we have seen significant growth in the use of AI technologies, and ChatGPT has dominated the headlines in recent months. As the EU ramps up its regulation of AI, the legal and governance risks posed by the use of AI have caught the attention of us all, in particular the C-suite.

AI governance has become a top priority and data protection compliance will be front and centre of these concerns to the extent that personal data is processed in order to develop, test or use AI technology.

Against this backdrop, privacy teams will have to ensure that their privacy compliance and accountability tools are 'up to scratch' to deal with the challenges posed by the use of AI technologies.

This blog focusses on your data protection impact assessment ("DPIA") tool and provides information, based on recently updated ICO guidance (here), on the elements that a DPIA should contain where it is used to assess a product containing AI technology. These considerations build on the list of minimum elements for a DPIA set out under Article 35(7) GDPR.

Preliminary questions

Before considering their approach to AI DPIAs, organisations will have to ask themselves two preliminary questions:

Preliminary question 1: will we handle the personal data as a controller or a processor?

It may not be easy to overlay the traditional split of controllers and processors onto an AI scenario, and the lines often blur. For instance, while vendors would typically be characterised as processors, an AI vendor may often act as a controller. There is regulatory guidance on this point: an organisation that makes decisions about the source and nature of the data used to train the model, the purpose of the model, the algorithms used, how the model is tested, and so on, is likely to be a controller. However, a vendor that develops a customised model for a customer may act as a processor or a joint controller. Generally, the user / customer will be a controller in relation to the output data. See Figure 1 below for the sorts of decisions a processor can potentially take without necessarily straying into the realm of acting as a controller.

Figure 1.

Decisions a controller can take:

  • Target output (i.e. what is being predicted or classified);
  • Feature selection;
  • Source and nature of the training data;
  • The kinds of ML algorithms that will be used to create models;
  • Key model parameters (such as how complex a decision tree can be, or how many models will be included in an ensemble);
  • Key evaluation metrics and loss functions (such as the trade-off between false positives and false negatives); and
  • The process for testing and updating models (how often, what kinds of data, and how ongoing performance will be assessed).

Decisions a processor can take:

  • The specific implementation of generic ML algorithms, such as the programming language and code libraries;
  • How data and models are stored, such as the formats they are serialised and stored in and local caching;
  • Security measures;
  • How data is retrieved, transferred, deleted or disposed of (BUT not how long data is retained for);
  • Measures to optimise learning algorithms and models to minimise their consumption of computing resources, such as by implementing them as parallel processes;
  • Architectural details of how models will be deployed, such as the choice of virtual machines, microservices and APIs; and
  • IT systems and methods.

The data protection roles of the parties matter because, under the GDPR, only controllers are required to carry out DPIAs, although processors will be required by contract to help controllers carry out such assessments. AI developers (regardless of whether they act as controllers or processors) should be ready to assist their customers in carrying out DPIAs or, perhaps, in order to standardise the process, have their own 'ready-made' DPIAs for their customers to use / source information from.

Preliminary question 2: is a DPIA required?

Under the GDPR / UK GDPR, controllers are required to carry out DPIAs for 'high risk processing'.

It is generally accepted that the use of AI technologies is likely to trigger the requirement to carry out a DPIA in many contexts.

High risk processing scenarios that trigger a DPIA requirement are set out in the GDPR (i.e. systematic and extensive evaluation of personal aspects based on automated processing, including profiling, of the kind regulated by Article 22 GDPR; large-scale processing of special category data; or systematic monitoring of publicly accessible areas on a large scale).

The Article 29 Working Party of EU data protection authorities (WP29) published guidelines which define nine criteria of processing that are likely to indicate high risk ("European Guidelines"). Furthermore, European regulators have issued Opinions identifying other processing activities that trigger the need to carry out a DPIA. Data matching, large scale profiling, and targeting of children, amongst others, have been recognised by the ICO as processing activities that require a DPIA.  In the UK, for example, use of innovative technologies such as AI will require a DPIA, if such use is combined with one of the other factors from the European Guidelines, e.g. processing personal data on a large scale or processing sensitive personal data. 

AI DPIAs: what should be covered?

The processing activities

The DPIA should consider the nature, scope, context and purposes of any processing of personal data and whether individuals are likely to expect such processing activities. In particular, you will need to address a number of questions:

  • What data you will process and its source;
  • How you will collect, store and use the data;
  • How much data you will process: the volume, variety and sensitivity of the data;
  • Who the individuals are and your relationship with them; and
  • What the intended outcomes are for the organisation, the individuals or wider society.

Furthermore, your description of the data processing activities should include data flows and indicate the stages at which the AI processing and automated decisions may produce effects on individuals.

The impact on individuals, necessity and proportionality

When assessing the impact of your processing on individuals, you should consider allocative harms (where the harm results from the decision to allocate goods and opportunities among a group – e.g. loss of a financial opportunity, loss of freedom or loss of life, etc.) and representational harms (where the harm results from the subordination of groups along identity lines – e.g. stereotyping, denigration, etc). 

The availability of the technology should not be considered a good enough reason to use it. You should consider whether AI is necessary to achieve the relevant objective and whether there are any less intrusive ways of achieving the same objective. In general, the DPIA should consider the relevant legal basis for processing, as well as the overall lawfulness of any processing of personal data (which will potentially necessitate looking at the legal landscape outside of data protection). It is worth noting that both the UK Data Reform Bill and the draft AI Act envisage a new legal basis for processing of special category data for bias mitigation.

You should consider whether the AI tool is proportionate by weighing your interest against the risks to the rights and freedoms of individuals, and/or whether it is possible to put guardrails around the use of the technology to ensure that the processing of personal data does not go beyond what is needed to achieve your particular objective. Any bias or inaccuracy in the algorithms may result in detriment to individuals, and this possibility should be taken into account as part of your proportionality assessment.

If the use of AI replaces human intervention, you will have to compare the human and algorithmic accuracy in order to justify the use of the AI tool.

You will need to document any trade-offs, for instance, where you have chosen not to reduce the amount of data processed (as the data minimisation principle might otherwise require) in order to maintain better statistical accuracy.

Fairness and transparency

The DPIA should address the issue of transparency and explainability and how these objectives are achieved.  The ICO has issued guidance on transparency and explainability in AI in conjunction with the Alan Turing Institute.  See here: Part 2: Explaining AI in practice | ICO

The DPIA will also have to consider the relevant variation or margins of error in the performance of the system, which may affect the fairness of the processing (including statistical accuracy) and describe if and when there is human involvement in the decision-making process.
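By way of illustration only, the short sketch below (a hypothetical example, not drawn from the ICO guidance; the function names and data fields are assumptions) shows one way a team might document margins of error, by comparing false positive and false negative rates across demographic groups as an input into the fairness assessment.

```python
# Hypothetical sketch: document margins of error by comparing error rates
# across demographic groups, as one input into a fairness assessment.
import numpy as np

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return fp / max(np.sum(y_true == 0), 1), fn / max(np.sum(y_true == 1), 1)

def per_group_error_rates(y_true, y_pred, groups):
    """Error rates broken down by group (e.g. an assumed protected characteristic)."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: error_rates(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)}
```

Marked differences in error rates between groups would be one of the findings to record, and to mitigate, in the DPIA.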

The parties involved

The DPIA should describe/consider the roles and obligations of the controller(s) and include any processors or joint controllers involved (as well as what contractual arrangements are in place to meet the requirements of the GDPR).

Some of the information in the DPIA may be provided by incorporating material obtained from vendors.

Security threats

The DPIA will also need to consider the potential impact of any security threats.  As with any data processing, the relevant controller(s)/processor(s) will need to ensure that there are appropriate technical and organisational measures in place to safeguard personal data.  AI potentially exacerbates existing security risks and introduces potential new risks.  The ICO guidance gives examples of two kinds of privacy attacks in an AI context – "model inversion" and "membership inference".  
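Purely as an illustration of the second of these attacks (and not taken from the ICO guidance), the minimal sketch below assumes a scikit-learn-style classifier exposing predict_proba and shows how an over-confident model can leak whether a particular record was part of its training set.

```python
# Hypothetical membership inference sketch: models that are markedly more
# confident on records they were trained on can reveal training-set membership.
def membership_inference_guess(model, records, threshold=0.95):
    """Guess that a record was a training-set member when the model's top
    predicted probability exceeds `threshold` (an assumed cut-off)."""
    probabilities = model.predict_proba(records)   # shape: (n_records, n_classes)
    top_confidence = probabilities.max(axis=1)     # confidence in the predicted class
    return top_confidence > threshold              # True = "probably seen in training"

# Mitigations a DPIA might record include limiting the granularity of confidence
# scores returned to users, stronger regularisation to reduce overfitting, or
# differentially private training.
```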

Consultations

Consultations with stakeholders are recommended unless there is a good reason not to undertake them. You may justify not carrying out consultations where they would compromise commercial confidentiality or security, or where they would be disproportionate or impracticable. It may be appropriate to consult the individuals whose data you process, independent experts, relevant internal stakeholders, your processor (if relevant) or legal advisors.

Risks to individuals

The level of risk should be documented by identifying the likelihood and severity of the impact on individuals, and each risk should be given a score.
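One illustrative (and entirely hypothetical) way of recording such scores is sketched below; the scales, labels and thresholds are assumptions rather than anything prescribed by the ICO or the GDPR.

```python
# Hypothetical DPIA risk-scoring helper: score = likelihood x severity on 1-3 scales.
RATING = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, severity: str) -> int:
    """Return a 1-9 score from qualitative likelihood and severity ratings."""
    return RATING[likelihood] * RATING[severity]

def risk_band(score: int) -> str:
    """Map the numeric score onto an overall risk band (assumed thresholds)."""
    if score >= 6:
        return "high"
    return "medium" if score >= 3 else "low"

# Example register entry: risk_band(risk_score("medium", "high")) -> "high"
```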

Your assessment of risks should be broad. In addition to considering individuals' information rights, it should consider other material and/or non-material damage to individuals.

Further, the DPIA will need to consider the rights and freedoms of individuals generally, not just in a data protection context. Fairness is a very broad concept and provides a "wormhole" into broader AI compliance (and the need to avoid bias/discrimination and ensure compliance with human rights). Ultimately, it makes sense to take into consideration the principles around which AI governance is coalescing, such as the OECD Principles on AI.

Measures to mitigate the risks

Your DPIA will have to document the measures in place to mitigate the risks and record whether each risk is reduced or eliminated (and document the residual risks once the measures have been implemented). Such measures may include technical and/or organisational measures. The DPIA should be regarded as a "living" document and should be reviewed on a regular basis, in particular to address "concept drift", e.g. where the demographics of the target population alter or people change their behaviour over time in response to the processing (resulting in a change to the nature, scope, context or purposes of processing, or the risks posed to individuals).
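As an illustration of how concept drift might be caught in practice, the hedged sketch below uses a two-sample Kolmogorov-Smirnov test to flag when a feature's current distribution has moved away from the distribution recorded at the last DPIA review; the threshold and framing are assumptions, not part of the ICO guidance.

```python
# Hypothetical drift check: flag a DPIA review when an input feature's current
# distribution differs markedly from the one observed at the last review.
from scipy.stats import ks_2samp

def drift_detected(baseline_values, current_values, p_threshold=0.01):
    """Two-sample Kolmogorov-Smirnov test; `p_threshold` is an assumed cut-off."""
    _statistic, p_value = ks_2samp(baseline_values, current_values)
    return p_value < p_threshold

# A True result is a prompt to revisit the DPIA (and possibly re-test the model),
# not evidence of non-compliance in itself.
```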

Conclusion

Organisations are likely to benefit from revising (and updating) their DPIA templates so that they are fit for purpose for DPIAs involving the use of AI.

While the main areas of review align with the parameters set out in the GDPR itself (Article 35(7)), the ICO provides valuable guidance as to the criteria you should consider in order to assess, categorise and mitigate the risks in relation to the use of AI technology.

Compliance teams and project teams should be in contact from the early stages of projects involving AI so that risks can be identified and addressed early. The ICO has issued an AI risk toolkit which may be used as another accountability tool (not to replace DPIAs) along the AI lifecycle.
 
