Anonymisation is the process of converting personal data into a form that does not relate to an identified or identifiable individual. Because anonymised data is no longer personal data, the GDPR does not apply.
We covered the benefits of anonymisation in a previous blog with reference to the newly released ICO draft guidance on anonymisation. The ICO has also provided guidance on pseudonymisation (see our blog here). The difference is that pseudonymised data is still considered personal data, because the risks of re-identification have not been completely eliminated.
How anonymous does data have to be?
How should organisations go about anonymising data? Do organisations need to ensure with 100% certainty that anonymised data cannot be re-identified?
Recital 26 of the GDPR requires that account be taken of "all the means reasonably likely to be used", based on "objective factors", and, as the ICO guidance makes clear, there is always an inherent risk of re-identification. Carrying out an objective assessment is therefore the way forward.
The ICO and the DPC each provide a detailed list of factors to consider when assessing anonymity. Producing a standardised template that references those factors and concludes whether or not the standard of anonymity has been achieved will help to demonstrate to regulators that the issues and risks have been considered and mitigated.
Let's look at some of the factors to consider when documenting an anonymisation assessment:
Aggregating data. Any linking of identifiers in a data set will make it more likely that an individual is identifiable. For example, taken individually, a first name, "Alicia", or a surname, "Keys", might not be capable of distinguishing one person in a city, but if the two pieces of information are linked, it is far more likely that "Alicia Keys" will refer to a unique, identifiable individual.
Linking. It may be possible to infer a link between two pieces of information. For example, a dataset containing statistics on the seniority and pay of employees may not point directly to the salaries of individuals in the dataset, but an inference might be drawn between the two pieces of information, allowing some individuals to be identified.
Information security. Having technological and organisational measures in place to protect anonymous data (as well as personal data) can help you to reduce the risk of re-identification.
Context. What is the data? Who is it being shared with? Who else has access to related data? Answering these will help you to understand where the risks lie in re-identifying data.
Intruder. The motivated intruder test will help you to understand who might be motivated enough to try to re-identify the data and what resources they would have to do so. Would you need to worry about a state intelligence agency, with all its powerful computing resources, when anonymising HR records? Probably not. But would Alicia Keys' doctor need to consider the resources and motives of a prying tabloid journalist? Yes, according to the ICO's examples.
Agreements or professional obligations? Returning to the example of Alicia's medical records, special category health data poses a higher risk. But if only her doctor, who is bound by professional confidentiality obligations, would recognise Alicia's identity from the anonymised records, perhaps this still supports a conclusion that the data (following a suitable de-identification process) is indeed anonymised.
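The aggregation and linking risks above can be made concrete with a minimal sketch. The dataset, names and field labels below are entirely hypothetical; the point is simply that two releases which look harmless in isolation can single out an individual once joined on shared attributes:

```python
# Hypothetical example: a pay dataset with names removed, and a public
# staff directory. Neither alone identifies salaries, but linking them does.

payroll = [  # seniority and pay, names stripped out
    {"dept": "Legal", "grade": "Partner", "salary": 250_000},
    {"dept": "Legal", "grade": "Associate", "salary": 80_000},
]

directory = [  # a public-facing staff directory
    {"name": "A. Keys", "dept": "Legal", "grade": "Partner"},
    {"name": "B. Smith", "dept": "Legal", "grade": "Associate"},
]

# Join the two datasets on the shared (dept, grade) attributes.
linked = [
    {**d, "salary": p["salary"]}
    for d in directory
    for p in payroll
    if (d["dept"], d["grade"]) == (p["dept"], p["grade"])
]

print(linked[0])  # a named individual is now paired with a salary
```

Because each (dept, grade) combination is unique here, the "anonymised" payroll data is trivially re-identifiable by anyone holding the directory — exactly the kind of objective factor an anonymisation assessment should document.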
The ICO's guidance
The ICO’s latest draft guidance is clear that establishing an appropriate governance structure can reduce risks, particularly if you have made a serious effort to comply with data protection law and had a genuine reason to believe that the information was not personal data. Other measures could include:
K-anonymity and technology. Document the current technology that could lead to re-identification and keep this under regular review. Test and document the progress of your anonymisation efforts, considering techniques such as k-anonymity and differential privacy.
Education. Take some time to document the people and organisations who will be part of your organisation's efforts to anonymise data.
Yell from the rooftops? Many organisations have previously taken a risk-based view on whether to disclose anonymisation (which is itself an act of processing); however, the ICO recommends transparency as a means of increasing trust.
Senior Information Risk Owner? The ICO suggests potentially appointing a Senior Information Risk Owner within the organisation to take responsibility for key decisions, consult with the DPO and identify a corporate approach to anonymisation.
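The k-anonymity technique mentioned above can be tested mechanically: a dataset is k-anonymous if every combination of quasi-identifiers (the attributes an intruder might link on) appears at least k times. A minimal sketch, with a hypothetical HR-style dataset and function name of our own choosing:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears at least k times in the dataset."""
    combos = Counter(
        tuple(record[qi] for qi in quasi_identifiers) for record in records
    )
    return all(count >= k for count in combos.values())

# Hypothetical dataset: names removed, but age band and city remain.
records = [
    {"age_band": "30-39", "city": "London", "salary_band": "B"},
    {"age_band": "30-39", "city": "London", "salary_band": "C"},
    {"age_band": "30-39", "city": "London", "salary_band": "B"},
    {"age_band": "40-49", "city": "Leeds",  "salary_band": "A"},
]

# The lone Leeds record is unique on (age_band, city), so k=2 fails.
print(is_k_anonymous(records, ["age_band", "city"], k=2))
```

Documenting checks like this, and re-running them as the dataset and available technology change, is one way to evidence the "regular review" the draft guidance calls for.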
Something we have seen become increasingly popular is the use of data by processors for product improvement purposes.
The GDPR permits processors to use personal data only on behalf of the relevant controller and in accordance with its documented instructions. While the ICO has not released guidance on the use of anonymised data for product improvement purposes, it does see this as a beneficial use case. The French CNIL has recently released guidance on the topic.
Keeping everyone on side
To keep regulators, customers and partners happy, what are some practical steps that organisations can think about going forward?
First, consider carrying out anonymisation assessments and DPIAs, which are a clear way to assess the impact of the proposed anonymisation activities.
Second, begin the task of reviewing customer and vendor contracts to understand how to deal with anonymisation for product improvement purposes.
Third, consider updating privacy notices to refer to anonymised data. Finally, look at the broader data governance themes within the organisation.