
Transparency requirements under the EU AI Act and the GDPR: how will they co-exist?


As the use of AI technologies proliferates and public and regulatory scrutiny increases, expectations of transparency are rising. Many organisations already issue public-facing statements about their ethical use of AI and the AI principles they adhere to, seeking to improve transparency and address concerns over the so-called AI 'black box' effect.

European regulators have already taken enforcement action against organisations for lack of transparency in this space. See the Garante's temporary ban of OpenAI's ChatGPT in March 2023 and, despite subsequent improvements aimed at increasing transparency, the Garante's recent notification that it is continuing to investigate OpenAI's alleged violations of the GDPR, including of its transparency principles.

Where do AI transparency requirements originate from?

AI transparency requirements build on, and extend beyond, GDPR transparency.

Transparency is a key principle in governmental and self-regulatory AI schemes (for instance, the OECD AI Principles) and in private-sector initiatives. The same applies to the UK's 2023 White Paper on AI (A pro-innovation approach to AI regulation), which recognises Transparency and Explainability as one of its five AI principles.

In addition to this, the EU AI Act sets out a comprehensive set of transparency requirements that will apply to providers and deployers of AI technologies and will co-exist with other transparency requirements already in place. This blog reflects the recent EU AI Act texts made publicly available; the final text had not been published at the time of writing.

See this link for an infographic comparing transparency requirements under the GDPR and the EU AI Act.

In more detail:  

GDPR and AI Act transparency requirements

Transparency under the GDPR

GDPR transparency requirements apply whenever personal data is processed in the use of AI technologies, at any stage of the AI lifecycle (e.g. when developing, testing or deploying them).

Transparency requirements apply to controllers (those who make decisions about why and how personal data is processed). Developers and providers of AI tools who do not act as controllers will be expected to provide the relevant information to controllers so that controllers can comply with their transparency obligations. These obligations are often fulfilled via a privacy notice. More recently, explainability statements have emerged as tools for providing transparency in compliance with both general AI transparency principles and the GDPR (in particular, to explain the rationale of the AI system, where its data comes from, how it was trained, and so on). The use of explainability statements is not yet widespread, but it is likely to become so as organisations look for more detailed tools to improve AI transparency levels.

In relation to automated decision-making, controllers are expected to explain the logic involved, and this explanation would normally be provided in an explainability statement.

In the UK, the ICO has provided extensive guidance on how to explain decisions made with AI, available here.

Transparency under the EU AI Act

(1) High-risk AI systems

Providers of high-risk AI systems must ensure those systems are designed and developed in such a way that their operation is sufficiently transparent. Enough information must be provided for the deployer to understand how the system works and what data it processes, and so that decisions of the AI system can be explained to the user.

(2) Notices regarding certain systems and general-purpose AI models

Distinct from the transparency requirements for high-risk AI systems, providers of certain systems, including general-purpose AI systems, must fulfil separate transparency obligations. These include an obligation (similar to a 'just-in-time' GDPR transparency notice) to ensure individuals are aware that they are interacting with an AI system, unless this is obvious from the perspective of a reasonably well-informed person. Providers must also ensure that synthetic audio, image, video or text output is marked in a machine-readable format and detectable as artificially generated.
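By way of illustration only, the minimal sketch below shows one way a provider might attach a machine-readable 'AI-generated' marker to an image output, here using PNG metadata via the Pillow library. The metadata key and values are our own assumptions; the Act does not prescribe a particular marking technique, and real-world approaches typically rely on provenance standards or watermarking.

```python
# Illustrative sketch only: attaching a machine-readable "AI-generated"
# marker to a PNG output using the Pillow library. The metadata key and
# values below are assumptions, not a scheme prescribed by the AI Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(image: Image.Image, path: str) -> None:
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # hypothetical key
    meta.add_text("generator", "example-model-v1")  # hypothetical value
    image.save(path, pnginfo=meta)

# Stand-in for a model-generated image.
img = Image.new("RGB", (64, 64), "white")
save_with_ai_marker(img, "output.png")
```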

Other examples of these transparency obligations include the requirement for deployers to label 'deep fake' images, video and audio as artificially generated or manipulated, and the requirement to disclose that artificially generated or manipulated text published for the purpose of informing the public on matters of public interest has been artificially generated or manipulated.

Similar to GDPR transparency requirements, the above information must be provided in a clear and distinguishable manner, at the latest at the time of the first interaction or exposure.
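Again purely by way of illustration, the short sketch below shows how a deployer's chat interface might surface an AI-interaction notice at the latest at the first interaction, before the first substantive response. The wording and structure are our own assumptions, not text prescribed by the Act.

```python
# Illustrative sketch only: a chat interface surfacing an AI-interaction
# notice at the latest at the first interaction. The wording is an
# assumption, not language prescribed by the AI Act.
AI_DISCLOSURE = (
    "You are interacting with an AI system, not a human. "
    "Responses are generated automatically."
)

class ChatSession:
    def __init__(self) -> None:
        self.disclosed = False  # whether the notice has been shown

    def respond(self, user_message: str) -> str:
        reply = ""
        if not self.disclosed:
            # Show the disclosure before (or alongside) the first
            # substantive response.
            reply += AI_DISCLOSURE + "\n\n"
            self.disclosed = True
        return reply + self._generate_answer(user_message)

    def _generate_answer(self, user_message: str) -> str:
        # Placeholder for the actual model call.
        return f"(AI-generated reply to: {user_message})"

session = ChatSession()
print(session.respond("Hello, can you help me?"))
```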

Providers of general-purpose AI models must also draw up and keep up to date the technical documentation of the model, which must, as a minimum, contain certain listed elements. This documentation may be requested by the AI Office and/or national competent authorities.

Providers of general-purpose AI models must also draw up, keep up to date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their own AI systems. This information and documentation must enable those providers to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their own obligations under the Act, and must contain, at a minimum, certain listed elements.

Conclusion

While there is some overlap between the transparency requirements of the GDPR and those of the EU AI Act, the latter are more technical in nature.

Some of the transparency requirements in the EU AI Act (i.e. those for certain systems and general-purpose AI models) apply to deployers, who will on occasion need to be provided with information by providers of AI systems in order to meet those requirements. This is analogous to the GDPR, where controllers/customers deploying AI technologies may need information from providers for transparency purposes. Contractual terms on these topics will need to be updated to cover the new information-provision requirements.

It is clear that the heaviest transparency burden under the EU AI Act falls on providers of high-risk AI systems. Providers caught by the Act will need to produce far more detailed technical information and instructions for use than has previously been the case.

The trend is clear: existing transparency requirements under the GDPR and AI principle-based schemes will be significantly strengthened under the EU AI Act.

