Get ready to address data subject rights requests affecting your AI systems | Fieldfisher

At a time when data subjects are increasingly aware of their rights under the GDPR, and with personal data being collected, stored and processed at scale in Artificial Intelligence (AI) and machine learning (ML) systems, it is a fitting time for your organisation to revisit its policies and processes for honouring data subject rights requests.

This blog provides practical insights, based on the ICO's updated AI guidance, on the challenges you may encounter when dealing with data subject rights in an AI context and how to address them.

Do all data subject rights apply?

Yes. However, management of such requests may trigger different issues at each stage of the AI system lifecycle.

Do data subject rights apply to all stages of AI development or deployment?

  • Training Data

Data will always be used to train AI models. If training data sets contain personal data, data subject rights will apply to the information in your training data. Because data is often pre-processed for the purposes of training a model (which, to the extent the data is personal data, is still 'processing' under the GDPR), training data can be harder to link back to a particular individual. The fact that it is difficult to identify an individual in an AI system does not mean you can ignore a data subject rights request in respect of that data. However, where you can demonstrate that individuals cannot be (re)identified in a training set (directly or indirectly), you are not required to honour the request, as the data is no longer considered personal data.

Nevertheless, it is important to bear in mind that the concept of personal data in the EU and the UK is very broad, so you will not be able to rely on this point where individuals can be singled out in your databases or in instances where the individual provides additional information that makes them identifiable.

If you receive a request from an individual to erase their data from your training data set, you do not have to erase ML models trained on that data, unless the models themselves contain the personal data. Such a request is therefore unlikely to affect your system, as the remaining training data will usually still contain sufficient information.
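As a purely illustrative sketch (the `subject_id` column and list-of-dicts format are assumptions; real pipelines will differ), an erasure request against a training set can be honoured by deleting the individual's records while leaving previously trained models in place:

```python
# Hypothetical sketch: honouring an erasure request against a training set.
# The "subject_id" field name and in-memory representation are illustrative
# assumptions, not a prescribed design.

def erase_subject(training_rows, subject_id):
    """Return the training set with all rows for one data subject removed.

    Models already trained on the old data set need not be deleted,
    unless the models themselves contain the personal data.
    """
    return [row for row in training_rows if row.get("subject_id") != subject_id]

training_rows = [
    {"subject_id": "ds-001", "age": 34, "outcome": 1},
    {"subject_id": "ds-002", "age": 51, "outcome": 0},
]

remaining = erase_subject(training_rows, "ds-001")
print(len(remaining))  # 1
```

In practice, erasure also needs to reach backups, derived data sets and any copies held by processors, which a simple in-memory filter does not capture.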

If you are engaging a third party to develop or provide your AI system, it is important to understand how the model is trained and whether any personal data is included in the training data set, and to verify that the system has been designed in a way that facilitates data subject requests (your obligations as a controller to comply with such requests apply regardless). Where you engage a processor, your data protection agreement should oblige it to assist with such requests. Where you act as a joint controller, each controller's obligations with respect to responding to data subject requests must be clearly established and stipulated in the arrangement between the controllers.

  • AI Systems Outputs

It is much more likely that individuals will exercise their rights in relation to output data from an AI model (or the personal data inputs on which the output is based), with the right to rectification likely to be the most common request. An inaccurate output can directly affect an individual: for example, where AI is used to determine a credit score, an individual who receives a score that does not reflect their financial standing is likely to notice and submit a request for rectification.

If an AI system generates outputs based on inaccurate input data, individuals are entitled to exercise their right to rectification. However, if AI outputs are predictions rather than statements of fact, and the personal data on which they are based is accurate, the right to rectification does not apply.

All other rights apply to AI output data as you would expect. Note, however, that the right to portability does not apply to predictions or classifications in AI output data, as such data is not 'provided' by the individual.

  • Data in the model itself

In some cases, a small amount of personal data may be used in the model itself, either by design or by accident.

If you are using an AI system that contains personal data in the model itself, having a well-organised model management system will make it easier and more cost-effective to accommodate data subject rights requests.
Requests for the erasure or rectification of personal data within the model will have to be honoured, and may result in organisations having to re-train their model.
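One way to picture a "well-organised model management system" is a registry recording which data subjects' personal data went into which model version, so that an erasure or rectification request immediately identifies the models affected. This is an illustrative sketch only; the class and field names are assumptions:

```python
# Hypothetical sketch of a model registry mapping model versions to the
# data subjects whose personal data they embed, so rights requests can be
# traced to the models that may need re-training. Names are illustrative.

from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    # model version -> set of data-subject IDs whose data the model embeds
    subjects_by_model: dict = field(default_factory=dict)

    def register(self, model_version, subject_ids):
        self.subjects_by_model[model_version] = set(subject_ids)

    def models_to_retrain(self, subject_id):
        """Model versions that may need re-training (or rebuilding) to
        honour an erasure or rectification request from this subject."""
        return sorted(
            version
            for version, subjects in self.subjects_by_model.items()
            if subject_id in subjects
        )

registry = ModelRegistry()
registry.register("credit-model-v1", ["ds-001", "ds-002"])
registry.register("credit-model-v2", ["ds-002", "ds-003"])
print(registry.models_to_retrain("ds-002"))  # ['credit-model-v1', 'credit-model-v2']
```

Keeping this lineage from the design phase onwards is far cheaper than reconstructing it after a request arrives.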

  • Automated decision-making (ADM)

When looking at AI systems and data subject rights, it is important to consider whether Article 22 GDPR applies, i.e. the right not to be subject to solely automated decision-making. Article 22 applies when you carry out decision-making without human involvement that produces legal effects or similarly significantly affects an individual. Remember, to fall outside the scope of Article 22, there must be (meaningful) human intervention in the decisions made.

If Article 22 applies, the automated decision-making must fall within an exception to the general prohibition (i.e. it must be necessary for entering into, or performance of, a contract between the data subject and the controller; authorised by law; or based on explicit consent), and individuals have the following rights:

  • right to obtain human intervention;

  • right to express their point of view;

  • right to contest the decision made about them; and

  • right to obtain an explanation about the logic of the decision.

Meaningful human review will be difficult in complex machine learning systems, which may reach the wrong decision in an individual case even if they are highly statistically accurate overall.

Organisations will need to consider, from the design phase, the system requirements necessary to support meaningful human review, train human reviewers, and implement processes that allow human reviewers to override the AI system's decisions.
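The override process described above can be pictured in a minimal sketch, assuming a hypothetical scoring model and threshold (both invented for illustration). The point is structural: the reviewer's decision, not the model output, becomes the final decision:

```python
# Hypothetical sketch of routing automated decisions through human review
# with an override capability (cf. Article 22 GDPR). The scoring function,
# threshold and labels are illustrative assumptions.

def automated_decision(score):
    # Illustrative model output: approve at or above an assumed 0.5 threshold.
    return "approve" if score >= 0.5 else "decline"

def decide(score, human_override=None):
    """Return the final decision and who made it.

    A human reviewer can contest and override the automated outcome;
    where an override is given, it replaces the model's output.
    """
    machine = automated_decision(score)
    if human_override is not None:
        return human_override, "human reviewer"
    return machine, "automated system"

print(decide(0.3))                            # ('decline', 'automated system')
print(decide(0.3, human_override="approve"))  # ('approve', 'human reviewer')
```

For the review to count as "meaningful", reviewers also need access to the inputs and the authority (and time) to disagree; a rubber-stamp step does not take a system outside Article 22.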

Conclusion

Proactively considering and integrating robust mechanisms for handling data subject rights requests within AI systems not only aligns with legal requirements but also helps you deal with requests smoothly when they are received.

Remember:

  • You should consider whether you are processing personal data.

  • Data subject rights apply to all stages of AI development and deployment (to the extent that personal data is processed).

  • Where more than one organisation is involved, establish who is required to address data subject rights requests in your contractual arrangements.

  • Think about the consequences. Might honouring a request require you to re-train your model?

  • When automated decision making takes place, ensure meaningful human intervention is available. 

If you think you need to revisit your internal policies or processes for dealing with data subject requests, please contact the authors of this blog.
