
France: CNIL guidance on AI - A preview of the EU AI Act?

07/06/2022


This article was first published by DataGuidance in their Insights section.

On 5 April 2022, the French Data Protection Authority ('CNIL') released extensive publications concerning artificial intelligence (AI).[1] These publications are addressed to three main target groups: the broader public; experts and scientists; and organisations that process personal data through AI systems, whether as data controllers or data processors. For the latter, the CNIL guidance is twofold: it is built around general recommendations[2] and a self-assessment tool[3] to measure and ensure compliance with the General Data Protection Regulation (Regulation (EU) 2016/679) ('GDPR'). This is particularly valuable in the absence of comprehensive guidance at the EU level and is of relevance beyond France. Even though these publications are not binding as such, providers and users of AI systems subject to the GDPR should still assess their AI systems against the CNIL's requirements. Sixtine Crouzet, Associate at Fieldfisher (Belgium) LLP, discusses the CNIL guidance in light of the AI Act.

Limited guidance at the EU level

AI itself is barely covered by the guidelines and recommendations available at the EU level. So far, only the Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 touch upon AI, machine learning, and algorithms.

However, if the European Data Protection Board ('EDPB') strictly follows its Work Programme for 2021/2022, we could expect further guidance on 'legal implications relating to technological issues', including AI and machine learning, by the end of the year.

The CNIL's previous focus on AI systems: a mix of ethical and legal considerations

Under Article 8(I)(4)(e) of Act No. 78-17 of 6 January 1978 on Data Processing, Data Files and Individual Liberties ('the Act'), the CNIL is in charge of 'reflect[ing] on the ethical problems and social issues raised by the evolution of computer and digital technologies'.

The CNIL's previous work on AI mostly aimed at identifying and grasping the main challenges posed by AI systems, such as making algorithmic systems intelligible[4] and preventing automated discriminatory biases.[5] In contrast, the recent CNIL guidance is much more practical and extensive, with 'ready-to-use' checklists for data controllers and processors.

CNIL's general GDPR recommendations

The CNIL guidance recalls the general data protection principles that stem from the GDPR, applying them to the specific context of AI systems. Based on its previous experience in assessing AI systems, such as facial recognition use cases, the CNIL also provides good practices and concrete recommendations.

The guidance draws the attention of data controllers and processors to issues leading to non-compliance. The CNIL highlights that - in theory - AI is not incompatible with data protection rules. However, as demonstrated below, full compliance with these principles can - and will - prove difficult due to the inherent characteristics of AI systems.

Under the GDPR, compliance with these principles falls primarily on data controllers. The CNIL, however, does not clearly define the roles of the organisations involved. AI providers are the individuals or organisations that develop an AI system before placing it on the market or putting it into service. AI users (e.g. businesses) differ from end users (i.e. data subjects), whose personal data feeds into the AI system. Depending on their exact role when processing personal data, AI providers and users can act as either data controllers or data processors.

The specific recommendations are summarised below, with each general data protection principle followed by the CNIL's main recommendations.

Purpose limitation: Determine explicit and lawful purposes for the development, training, and implementation of an AI system.

Organisations have to identify a purpose for each phase of the AI system lifecycle. In practice, the purpose of training an AI system with training data differs from the purpose of deploying that system in real life. The CNIL considers that this holds true even for 'continual learning' systems, which use the data collected in real life for training purposes.

When re-using databases that will 'feed into' an AI system, organisations should ensure that the new processing purpose is compatible with the purpose for which the personal data was initially collected.

Lawfulness: Identify an appropriate lawful ground under the GDPR for the AI system to process personal data.

Given the variety of use cases for AI systems, the CNIL does not state that one specific ground (e.g. legitimate interest or consent) is more relevant than another. The choice of lawful ground will depend on a case-by-case analysis.

Data minimisation: Ensure that the personal data processed through the AI system is necessary and proportionate to the given purpose.

AI systems, and more specifically machine learning models, are usually data-driven. Datasets play a role at every step, such as during the training, evaluation, benchmarking, and validation phases. In practice, however, organisations should carefully determine the nature and volume of personal data that are actually necessary for the AI system.

The CNIL also recommends adopting practical processes and documentation (a minimal sketch follows the list below), such as:

  • checking how the system performs when fed with new personal data;

  • segregating training data from live data; and

  • documenting the setting up of datasets and their characteristics.
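
On the last two points, here is a minimal Python sketch of a dataset record that keeps training data logically segregated from live data and documents its characteristics. The class, its fields, and the churn example are hypothetical illustrations, not structures prescribed by the CNIL.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """Documents a dataset used by an AI system (fields are illustrative)."""
    name: str
    purpose: str                      # explicit purpose for this lifecycle phase
    environment: str                  # "training" or "production", kept segregated
    personal_data_fields: list[str] = field(default_factory=list)
    collected_on: date | None = None
    retention_days: int | None = None

    def minimisation_check(self, allowed_fields: set[str]) -> list[str]:
        """Return any personal data fields beyond what the purpose requires."""
        return [f for f in self.personal_data_fields if f not in allowed_fields]

# Training data is documented - and stored - separately from live data.
training = DatasetRecord(
    name="churn-train-v1",
    purpose="train a churn-prediction model",
    environment="training",
    personal_data_fields=["age", "postcode", "email"],
    collected_on=date(2022, 1, 10),
    retention_days=365,
)
print(training.minimisation_check({"age", "postcode"}))  # flags 'email'
```

A check like this can feed directly into the documentation of datasets that the CNIL recommends keeping.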

Storage limitation: Set specific time limits to retain personal data.

While AI systems may involve retaining data for long periods to train and assess their performance, organisations should set specific time limits tied to the purpose for which the data is processed, after which the data must be deleted or anonymised.
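
As a rough illustration of what such a retention rule could look like in code, here is a minimal Python sketch; the record structure, the 365-day period, and the choice of identifiers to strip are all assumptions made for the example.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)  # illustrative period; set per purpose

def enforce_retention(records: list[dict], now: datetime) -> list[dict]:
    """Anonymise records older than the retention period by stripping
    direct identifiers; fully deleting them would be equally valid."""
    result = []
    for record in records:
        if now - record["collected_at"] > RETENTION:
            record = {k: v for k, v in record.items()
                      if k not in ("name", "email")}  # assumed identifiers
            record["anonymised"] = True
        result.append(record)
    return result

records = [{"name": "A. Martin", "email": "a@example.com",
            "collected_at": datetime(2020, 6, 1), "score": 0.8}]
print(enforce_retention(records, now=datetime(2022, 6, 7)))
```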

Transparency: Properly inform data subjects that their data is being processed.

AI systems often rely on data that was not provided directly by the data subject to the provider or user. When re-using personal data obtained from a third party, organisations should properly inform data subjects about the processing in clear and understandable terms. Some (limited) exceptions to the transparency requirements exist.
Automated decision-making (including profiling) triggers enhanced transparency obligations: organisations must explain to data subjects the underlying logic and the criteria used to reach a particular decision.
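
What 'explaining the underlying logic' means in practice depends heavily on the model. For a simple linear model, one might rank the criteria by their contribution to a given decision, as in the minimal sketch below; the features, data, and scikit-learn model are purely illustrative, and real systems typically call for more robust explanation methods.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical credit-decision features (income in kEUR).
features = ["income", "tenure_months", "missed_payments"]
X = np.array([[30.0, 12, 2], [55.0, 48, 0], [22.0, 6, 4], [70.0, 60, 1]])
y = np.array([0, 1, 0, 1])  # 1 = application approved

model = LogisticRegression().fit(X, y)

def explain(sample: np.ndarray) -> list[tuple[str, float]]:
    """Rank features by their signed contribution (weight x value)."""
    contributions = model.coef_[0] * sample
    order = np.argsort(-np.abs(contributions))
    return [(features[i], round(float(contributions[i]), 3)) for i in order]

print(explain(X[0]))  # e.g. which criteria drove the refusal
```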

Besides these general principles, the guidance touches upon specific risks that are associated with AI systems, such as discrimination and vulnerabilities to data breaches.

To complement this guidance, the CNIL also published a self-assessment tool that incorporates the above general principles.

CNIL's self-assessment tool

The tool takes the form of a questionnaire composed of seven checklists that providers or users of AI systems can fill out. This analysis grid ultimately aims to assess how mature the AI system is in terms of GDPR compliance and ethics.

The checklists also constitute a useful source of criteria that controllers can factor into their due diligence questionnaires when engaging a new AI vendor. In addition, some questions overlap with the information required in a Data Protection Impact Assessment ('DPIA') and could therefore serve as inspiration when completing DPIAs.

The checklists are fairly detailed and technical. As a result, organisations should involve at least the following stakeholders to complete them: the data protection officer ('DPO'), the IT and security teams, and the operational team using the AI system (including data scientists). The main issues raised by each checklist are set out below.

Asking the right questions before using an AI system

Organisations must clearly identify their role (i.e. data controller vs. data processor) in the design, implementation, maintenance, and review of AI systems. They must also map the staff members in charge of these processes and allocate responsibilities. Finally, organisations must ensure that the data processing techniques are proportionate and necessary to achieve an explicit purpose.

Collecting and qualifying training data

The CNIL outlines the factors to consider when setting up a database to train AI systems. These include:

  • whether the data is being re-used from third-party sources;

  • whether it contains sensitive data; and

  • the assessment of the risk of bias or re-identification of the individuals.

Ensuring the quality and representativeness of the training data is crucial to limiting the risk of error in the output data produced by AI systems. More specifically, the checklist highlights several methods to detect and address biases.
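
The checklist itself does not prescribe code, but a basic representativeness check is easy to sketch. The example below compares subgroup shares in a training set against a hypothetical reference distribution using pandas; large gaps hint at sampling bias worth investigating, and this is only one of the many methods the checklist alludes to.

```python
import pandas as pd

def representativeness_gap(df: pd.DataFrame, column: str,
                           reference: dict[str, float]) -> pd.Series:
    """Difference between each subgroup's share in the training data
    and its share in a reference population."""
    observed = df[column].value_counts(normalize=True)
    return observed.sub(pd.Series(reference), fill_value=0.0).sort_values()

# Illustrative data; reference shares would come from census or domain data.
train = pd.DataFrame({"gender": ["F", "M", "M", "M", "F", "M"]})
print(representativeness_gap(train, "gender", {"F": 0.5, "M": 0.5}))
# F is under-represented (negative gap), M over-represented.
```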

Developing and training an algorithm

To help ensure trustworthy AI systems, this checklist asks organisations to document the methods used to select, design, and assess their AI systems. Testing and validation techniques are also covered.

Using an AI system in production

This checklist examines the type and extent of human intervention and oversight in place. It also outlines technical measures to enhance the transparency and intelligibility of AI systems, as well as to ensure the quality of output data.

Securing data processing

Organisations should implement appropriate security measures during the entire AI system lifecycle, starting from the training phase. To address the growing range of attacks targeting AI systems, this checklist covers the preventive security measures in place, including log analyses, risk assessments, and restricted data access.
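
Restricted data access is the most code-adjacent of these measures. The sketch below shows one conventional way to combine an access restriction with logging in Python; the roles, resources, and permission table are hypothetical, and a production system would rely on a proper identity and access management layer.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("data-access")

# Hypothetical role-to-resource mapping; in practice this would come
# from an identity and access management (IAM) system.
PERMISSIONS = {
    "data_scientist": {"training_data"},
    "dpo": {"training_data", "access_logs"},
}

def requires_access(resource: str):
    """Deny, and log, any call from a role not allowed to use the resource."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(user_role: str, *args, **kwargs):
            if resource not in PERMISSIONS.get(user_role, set()):
                log.warning("denied: role=%s resource=%s", user_role, resource)
                raise PermissionError(f"{user_role} may not access {resource}")
            log.info("granted: role=%s resource=%s", user_role, resource)
            return func(user_role, *args, **kwargs)
        return wrapper
    return decorator

@requires_access("training_data")
def load_training_data(user_role: str) -> str:
    return "...training dataset..."

load_training_data("data_scientist")  # granted and logged
```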

Ensuring compliance with data subjects' rights

Organisations have to detail how they intend to comply with the rights of data subjects (e.g. the rights to information, to object, and not to be subject to automated decision-making). Internal processes also need to facilitate the effective exercise of such rights.

Ensuring compliance and accountability

Certifications, adherence to codes of conduct, security standards (e.g. ISO standards), and best practices can prove useful to demonstrate compliance. While a DPIA may be required under the GDPR, this does not preclude drafting other accountability documents to assess, and better mitigate, all kinds of risks.

In the CNIL's words, the self-assessment tool is made available 'in view of the future European regulation'.

Interaction with the AI Regulation: Anticipation or contradiction?

The CNIL's publications come in the wake of the AI Strategy 'Artificial Intelligence for Europe' put forward by the European Commission.[6] In April 2018, the AI Strategy set out ambitious objectives, including boosting the EU's technological and industrial capacity and AI uptake, as well as ensuring an appropriate ethical and legal framework. Three years later, the Commission revealed a core piece of the future legal framework through its proposal for a regulation laying down harmonised rules on artificial intelligence[7] ('the AI Act'). Since then, the AI Act has made slow progress within the European Parliament. The rapporteurs released their draft report, with suggested amendments, on 20 April 2022.[8] The joint parliamentary committees and the Parliament still need to vote on the final text, expected in October or November 2022. In parallel, the AI Act remains under discussion among Member States within the Council.

Risk-based classification of AI use cases

The AI Act offers a classification of AI systems depending on their risk. While AI systems posing an 'unacceptable' risk are prohibited, high-, limited-, or minimal-risk AI systems are permitted, albeit subject to obligations of varying stringency.

In contrast, the CNIL guidance on AI does not draw clear red lines and does not blacklist certain use cases. Completing the self-assessment tool does not result in a risk or compliance score. In accordance with the principle of accountability under the GDPR, it is up to the data controllers to document the choices made.

High-risk AI systems: Conformity assessment vs. GDPR self-assessment

Under the current draft of the AI Act, 'high-risk' AI systems would have to undergo a 'conformity assessment' before being placed on the market or put into service in the EU. While some business associations advocated for a self-assessment of conformity, the Commission introduced strict assessment procedures, with competent third-party bodies in charge of verifying conformity assessments.

The conformity assessment aims at ensuring compliance with a number of requirements, including establishing a risk management system, ensuring data quality and data governance safeguards, traceability and the collection of logs, transparency, human oversight, and cybersecurity. By nature, these requirements are much broader than data protection requirements, given that they do not relate only to personal data.

However, the CNIL's approach is clearly inspired by these requirements. Indeed, the CNIL is attentive to the risks involved in AI systems, and many of its checklists include measures to identify, prevent, and mitigate those risks. Its guidance focuses on assessing and reviewing the design and implementation of AI systems throughout their lifecycle. As a result, the AI Act's requirements are covered to a certain extent in the CNIL's detailed and technical checklists.

Against this background, it is therefore very likely that the conformity assessment under the AI Act will partially overlap with CNIL's self-assessment tool.

CNIL's self-assessment tool: A way to make sure that the GDPR is not ignored by the AI Act

The AI Act barely mentions data protection legislation. Consequently, one of the major concerns raised by the EDPB and the European Data Protection Supervisor ('EDPS') in their Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) was to assert that the GDPR does apply to personal data processed through AI systems.[9]

Going even further, they consider that providers of high-risk AI systems should ensure compliance with the GDPR as a precondition to placing those systems on the EU market. This would be done by including GDPR compliance within the conformity assessment requirements.

Supervision of AI

The AI Act refers to national supervisory authorities that would be in charge of enforcing the obligations imposed on providers and users of AI systems. However, the AI Act leaves it to Member States to designate the competent authorities.

In this regard, the EDPB and the EDPS support the designation of data protection authorities, as they have gained a sharp understanding of AI and new technologies.[10] Through its recent guidance, the CNIL aims to position itself as the authority best placed to take on that role in France. According to the EDPB and the EDPS, this would also ensure a harmonised application and enforcement of data protection rules, which are very likely to apply whenever AI is used.

Conclusion

At this stage, the CNIL guidance constitutes a useful accountability tool for all providers and users of AI systems who wish to assess and demonstrate their compliance with the GDPR. In the medium term, the guidance clearly points in the same direction as the AI Act. With its particular focus on accountability, risk assessment, and transparency, the CNIL's self-assessment tool necessarily overlaps with the future 'conformity assessment' for high-risk AI systems. Providers and users of AI systems may already start familiarising themselves with the detailed CNIL checklists.


1. Available at: https://www.cnil.fr/fr/intelligence-artificielle/la-cnil-publie-ressources-grand-public-professionnels (only available in French)
2. Available at: https://www.cnil.fr/fr/intelligence-artificielle/ia-comment-etre-en-conformite-avec-le-rgpd (only available in French)
3. Available at: https://www.cnil.fr/fr/intelligence-artificielle/guide (only available in French)
4. Available at: https://www.cnil.fr/en/how-can-humans-keep-upper-hand-report-ethical-matters-raised-algorithms-and-artificial-intelligence
5. Available at: https://juridique.defenseurdesdroits.fr/doc_num.php?explnum_id=19795 (only available in French)
6. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN
7. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206
8. Available at: https://iapp.org/media/pdf/publications/CJ40_PR_731563_EN.pdf
9. Available at: https://edps.europa.eu/system/files/2021-06/2021-06-18-edpb-edps_joint_opinion_ai_regulation_en.pdf
10. Ibid.
