Insight

Artificial Intelligence: regulation and data protection

Oliver Süme
27/11/2023

Location: Germany

Since the launch of ChatGPT at the latest, the rapid advance of tools powered by artificial intelligence has reached the general public.

In particular, generative AI software tools have gained popularity among many companies and have quickly become a ground-breaking technology capable of creating realistic and innovative content such as images, music and even text. Attention often centres on ChatGPT, but generative AI is not limited to text creation: such tools can also be used to optimise and efficiently support internal company processes in areas such as recruitment, product development or sales.

However, in addition to its transformative and disruptive potential, generative AI also raises a number of data protection issues, particularly within the scope of the General Data Protection Regulation (GDPR). This is because generative AI algorithms by their nature rely on large data sets to learn patterns and generate content. These data sets may contain personal data, which means the requirements of the GDPR must be observed. Three scenarios are particularly relevant: (i) the provider trains the AI on data sets that also contain personal data (so-called training data), (ii) users enter personal data when using the AI, which the AI then processes, and (iii) the data entered by users is used to further train the AI.
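As a purely hypothetical illustration of scenario (ii), the minimal Python sketch below shows how a company might screen a user prompt for obvious personal data (only e-mail addresses and phone-like numbers are covered here) before it is sent to an external generative AI service. The patterns, function name and placeholder texts are assumptions for illustration only; pattern-based redaction of this kind is not, on its own, sufficient for GDPR compliance.

```python
import re

# Hypothetical sketch: redact obvious personal data from a prompt before it
# leaves the company and is processed by an external generative AI service.
# Covers only e-mail addresses and phone-like numbers as examples; real-world
# GDPR compliance requires much more than pattern-based redaction.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s/-]{7,}\d")

def redact_personal_data(prompt: str) -> str:
    """Replace e-mail addresses and phone-like numbers with placeholders."""
    prompt = EMAIL.sub("[EMAIL REDACTED]", prompt)
    prompt = PHONE.sub("[PHONE REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    text = "Please summarise the complaint from max.mustermann@example.com, tel. +49 170 1234567."
    print(redact_personal_data(text))
```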

The ever faster pace of development has not gone unnoticed by international data protection supervisory authorities. Regulators are paying close attention to the impact on data protection and privacy, as demonstrated, for example, by the temporary ban on ChatGPT in Italy. There are indications that particular attention is being paid to two concepts that play an important role in the context of artificial intelligence: foundation models (such as GPT-3) and general-purpose AI (such as ChatGPT).

Companies should therefore keep abreast of regulatory developments in data protection at an early stage in order to ensure compliance with data protection requirements, follow the evolving general legal framework for AI and, at the same time, fully exploit the transformative potential of generative AI.

In a series of articles, our Technology & Data team has therefore examined generative artificial intelligence ("AI") with regard to the key legal issues, in particular the data protection questions facing companies that use AI, not all of which, it should be emphasised, have yet been conclusively resolved.

You can access the collection of these articles here.

If you have any questions on this topic, please do not hesitate to contact us:

Oliver Süme, Dr. Felix Wittern, Katharina Weimer, LL.M. (UNSW), Dr. Philipp Plog, Stephan Zimprich, Thorsten Ihler
