Introduction

Unless you have been happily living under a rock, you will likely be aware that artificial intelligence (AI) has taken the world by storm in recent months. One of the consequences of this has been considerable uncertainty over how AI can lawfully be used, including under strict data protection legislation like the EU's GDPR.
Art. 22 GDPR provides a right for data subjects to not be subject to certain automated decisions, but it also provides a number of exceptions for when these decisions will still be lawfully possible. By their very nature, AI tools will often involve automated decisions being made on the basis of personal data. This all means that Art. 22 GDPR should be one of the GDPR provisions that is considered before any AI tools are used by organisations.
When does Art. 22 GDPR actually apply?

Art. 22(1) GDPR provides that:
The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
Note that European data protection authorities have indicated that this right amounts to a prohibition rather than something that needs to be actively invoked by a data subject.
Overall, this paragraph raises a number of questions that GDPR-covered organisations should consider before using AI tools. Namely:
1. Does the AI tool involve decisions being made about people?
This is the first question that should be considered; if it can be answered in the negative, any Art. 22 GDPR difficulties can be avoided.
2. Are any decisions about people based solely on automated processing?
If there is meaningful human involvement in the decisions that are being made then Art. 22 GDPR should not be in scope. For example, this may be the case if AI is just used to produce information that someone uses, along with other information, to make a decision about another person.
It should be noted, though, that according to a recent CJEU Advocate General opinion (which is subject to the final decision of the CJEU), if consistent practice means that a "preliminary" automated decision will be strongly drawn upon for a "final" decision (even by a third party), then the "preliminary" automated decision will itself be subject to Art. 22(1) GDPR. The example in that case was automated credit score decisions that are then used by third parties to make a further decision (e.g. about whether to grant a request for a loan).
Including meaningful human involvement in any decision may therefore offer a reasonable way of avoiding Art. 22 GDPR obligations in some AI use cases, but there will also likely be a lot of AI business models where this level of human intervention is simply not possible or not desirable from a cost / resource requirement perspective.
3. Will the decision produce "legal" effects on the person who is subject to it?
The Article 29 Working Party guidance (which has been endorsed by its successor organisation, the European Data Protection Board) on this topic suggested that a "decision producing legal effects" will include things like decisions that affect a person's legal rights or their rights in relation to a contract (e.g. decisions that affect freedom of association, freedom to vote or which cause a cancellation of a contract). Any AI tools whose decisions may fall into these areas will therefore need to be carefully reviewed under Art. 22 GDPR.
4. Will the decision produce "similarly significant" effects on the person who is subject to it?
Recital 71 of the GDPR provides "automatic refusal of an online credit application or e-recruiting practices without any human intervention" as examples of automatic processing decisions that will meet the "similarly significant" threshold. The Article 29 Working Party guidance mentioned above also provides the following further examples that they believe would meet this threshold:
- decisions that affect someone’s financial circumstances, such as their eligibility to credit;
- decisions that affect someone’s access to health services;
- decisions that deny someone an employment opportunity or put them at a serious disadvantage;
- decisions that affect someone’s access to education, for example university admissions.
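The four questions above can be sketched as a simple screening checklist. The function below is purely illustrative (the name `art22_applies` and its boolean parameters are assumptions for this sketch, not terms from the GDPR, and a real assessment requires case-by-case legal analysis): Art. 22(1) is only engaged if a decision is made about a person, it is based solely on automated processing, and it produces legal or similarly significant effects.

```python
# Illustrative sketch of the Art. 22(1) GDPR screening questions.
# Names are invented for this example; this is not legal advice.

def art22_applies(
    makes_decisions_about_people: bool,
    meaningful_human_involvement: bool,
    legal_effects: bool,
    similarly_significant_effects: bool,
) -> bool:
    # Q1: no decision about a person -> Art. 22(1) is not engaged
    if not makes_decisions_about_people:
        return False
    # Q2: meaningful human involvement -> not based "solely" on
    # automated processing
    if meaningful_human_involvement:
        return False
    # Q3/Q4: at least one of the two effect thresholds must be met
    return legal_effects or similarly_significant_effects

# Example: a fully automated online credit refusal (cf. Recital 71)
print(art22_applies(True, False, False, True))  # True
```

Note how question 2 acts as an exit point: building meaningful human review into the process takes the decision outside Art. 22(1) entirely, which is why it is discussed below as a possible mitigation.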
Exceptions under Art. 22(2) GDPR

Under Art. 22(2) GDPR, the prohibition on automated individual decisions from Art. 22(1) GDPR will not apply in the three circumstances outlined below:
Exception (a) – Contractual necessity
This exception applies where the automated decision is necessary for entering into, or the performance of, a contract between the data subject and a data controller. It therefore offers some hope to organisations wanting to make use of AI tools in order to make their contract approval or performance processes more efficient.
However, it should be noted that the use of the automated decision must be "necessary" and that this means (according to the Art. 29 Working Party guidance) that "the controller must be able to show that this type of processing is necessary, taking into account whether a less privacy-intrusive method could be adopted. If other effective and less intrusive means to achieve the same goal exist, then it would not be ‘necessary’."
An example provided by the Article 29 Working Party in the guidelines, of a hiring process where automated decisions were used because tens of thousands of applications were received, also gives some idea of how rarely regulators may consider this exception to apply in practice.
Exception (b) – Authorised by Union or member state law
This exception will obviously be useful insofar as it applies to AI processing undertaken by an organisation. However, use cases covered by this exception may be relatively limited, particularly in the commercial sector.
Exception (c) – Explicit consent
Explicit consent is also a possible exception to be used for AI products caught by Art. 22(1) GDPR. However, controllers looking to rely on this exception will need to be careful to try and ensure that the strict GDPR requirements related to (explicit) consent are properly complied with. This may well be particularly difficult if, for example, the size of the database and complexity of the tool means it is not easy to determine exactly what data types will be used, and how that data will be used, to make any automated decision.
Human intervention under Art. 22(3) GDPR

Art. 22(3) GDPR introduces a further compliance burden on organisations that want to use AI tools that are caught by Art. 22(1) GDPR (unless the automated decision-making is authorised by EU or member state law). That further burden is providing further privacy safeguards, including allowing affected data subjects "at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision." As can be imagined, these requirements can create a very large and ongoing resource burden for any organisation that is caught by Art. 22(1) GDPR.
No use of special category data under Art. 22(4) GDPR

Under Art. 22(4) GDPR, automated decisions that are caught by Art. 22(1) GDPR cannot be based on special category data (refer to Art. 9(1) GDPR) unless the data subject has explicitly consented to this, or (inter alia) the processing is necessary for reasons of substantial public interest. Additionally, suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests must be in place.
This prohibition creates particular issues for the use of AI tools that may in any way make use of special category data. This may have particularly strong effects for tools that explicitly use health data, or tools that use extremely large amounts of data, making it hard to rule out use of special category data. Regulators and courts have also interpreted the scope of special category data widely (e.g. suggesting that relationship status can be enough to infer data about a person's sexual orientation).
Ultimately, if an AI tool is caught by Art. 22(1) GDPR and will involve special category data processing then this will likely mean that explicit consent will need to be relied on under Art. 22(4) GDPR. As pointed out above, this comes with particular challenges in the context of AI tools due to their complex nature and their use of large quantities of data, when considered against the increased specificity generally required when seeking (explicit) consent.
Transparency (Art. 13(2)(f) GDPR / Art. 14(2)(g) GDPR)

It should also be noted that, under the GDPR's transparency obligations, any processing caught by Art. 22(1) GDPR must also be noted in relevant privacy notices, including "meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject." This again creates a potentially difficult compliance obligation for controllers using AI tools because of the inherent complexity they involve, which will then need to be broken down into clear and comprehensible language for affected data subjects.
Conclusion

As the points above show, the compliance burden that organisations face when they are caught by Art. 22(1) GDPR can be very large. However, as AI tools develop ever greater automated decision-making capabilities, this provision of the GDPR may well be triggered more and more often.
Therefore, when considering the roll-out of an AI tool, it will be important for GDPR-covered organisations first to consider closely whether Art. 22(1) GDPR will in fact be triggered, and whether changes could be made to avoid this.
If Art. 22(1) GDPR is triggered, then the organisation should also think clearly through the options for dealing with the various compliance requirements that follow (and possibly even whether the amount of work involved may outweigh the benefit brought by the tool itself).