Unveiling the Crucial 5 GDPR Obstacles of ChatGPT That Can’t Be Ignored | Fieldfisher
Will OpenAI address them?
 
As ChatGPT becomes increasingly popular as a tool for digital communication, it is crucial to understand how it interacts with the GDPR. This article explains five fundamental requirements of the GDPR that ChatGPT must adhere to, and why there is reason to doubt its compliance.

By the end of this piece, you will have a thorough understanding of what ChatGPT must do to comply with the GDPR and where the open questions lie. Let’s get going!

 

GDPR principles

Before delving into the GDPR-related issues of OpenAI’s ChatGPT, let’s review the fundamental principles of the EU-GDPR. Here is a brief overview:
 
  • The EU-GDPR sets regulations for the handling of personal data by controllers and processors in the European Union (EU).
  • The EU’s data protection legislation applies to any business, entity, or person who handles personal data in the European Union, regardless of their location.
  • People are granted the authority to have access to and manage their own personal information.
  • Organizations must have a lawful basis, such as the data subject’s explicit consent, before collecting or using their personal data.
  • Organizations must guarantee the protection of personal information with the appropriate technical and organizational precautions.
  • In case of a data breach, organizations are required to notify regulatory authorities and those affected.
 

ChatGPT — what is it, and how does it work?

In November 2022, OpenAI unveiled ChatGPT, a chatbot built on the deep learning technique known as the transformer architecture. The model is trained on terabytes of text containing billions of words and uses that training to generate answers to questions or prompts. It belongs to the GPT-3 family of large language models and has been fine-tuned with supervised and reinforcement learning approaches. Trained on data from the web, books, and Wikipedia, ChatGPT can be used free of charge to generate conversational replies.
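To make the “transformer architecture” mentioned above less abstract: its core operation is scaled dot-product self-attention, in which each token’s representation is recomputed as a weighted mix of all other tokens’ representations. The following is a minimal NumPy sketch of that single operation, not OpenAI’s actual implementation (which stacks many such layers with 175 billion learned parameters):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: each token's query is compared
    against every key, and the resulting softmax weights mix the
    value vectors into one context-aware vector per token."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarity, scaled
    # numerically stable softmax over each row of scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted mix of value vectors

# Three toy "tokens" with 4-dimensional embeddings (random stand-ins
# for learned word representations)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)  # self-attention: Q = K = V
print(out.shape)  # one contextualised 4-dim vector per token
```

The privacy relevance is that whatever text enters this training pipeline, personal data included, ends up encoded in the learned weights rather than in a database that could later be queried or purged.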
 
The development of AI such as GPT-3 can be revolutionary, but the risks and drawbacks must also be considered. A key issue is privacy: it is difficult to tell whether a given piece of data was used in training. Moreover, the legality of using personal data to train machine learning models like GPT-3 varies with the laws and regulations of a given country or region.
 
ChatGPT can generate text that resembles natural writing and can be used for a variety of tasks, including language translation, text generation for chatbots, and language modeling. With 175 billion parameters, it is one of the largest and most capable AI language models available.

 

Was ChatGPT trained on personal data?

ChatGPT is a massive language model trained on an extensive amount of web-based information, including personal websites and social media posts.
 
As an attorney, I would argue that personal websites and social media posts contain personal data. The General Data Protection Regulation (GDPR) defines personal data as any information relating to an identified or identifiable living person. This encompasses information such as name, date of birth, email address, telephone number, home address, physical characteristics or location data. Under Article 4(1) GDPR, a person can be identified directly or indirectly by reference to a name, an identification number, location data, an online identifier, or one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that person.
 
Just because information is publicly available on the web does not mean it is not personal data. If ChatGPT was trained on this type of data, then the system was trained on personal information. This could have serious repercussions for privacy, as well as potential legal ramifications.
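To illustrate why publicly posted text still counts as personal data, here is a deliberately simplistic sketch that flags two GDPR-relevant identifiers (email address and phone number) in a scraped snippet. The patterns and the sample post are illustrative assumptions; real PII detection requires far more than two regular expressions:

```python
import re

# Toy patterns for two identifier types named in Art. 4(1) GDPR.
# These are illustrative only and will both over- and under-match.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d \-()]{7,}\d"),
}

def find_pii(text):
    """Return which identifier types appear in a text snippet."""
    return {kind for kind, pat in PII_PATTERNS.items() if pat.search(text)}

# A hypothetical, publicly visible social-media style post:
post = "Contact Jane Doe at jane.doe@example.org or +49 30 1234567."
print(sorted(find_pii(post)))  # ['email', 'phone']
```

The point of the sketch: such snippets are freely scrapeable, yet each match relates to an identifiable living person, so ingesting them into a training corpus is processing of personal data under the GDPR.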

 

GDPR issue 1: Data Collection

There is a strong case to be made that ChatGPT does not comply with GDPR standards on data collection, particularly the GDPR’s principle of data minimization. Under Article 5(1)(a) GDPR, personal data must be processed lawfully, fairly and in a transparent manner in relation to the data subject. The data minimization principle (Article 5(1)(c)) adds that only data that is adequate, relevant and limited to what is necessary for the processing purposes may be collected and processed.
 
OpenAI’s Privacy Guidelines state that all data will remain confidential and be used only for the purposes specified in the contract. Nonetheless, it is unclear whether this covers data stored in AI models like ChatGPT. Alexander Hanff, a member of the EDPB’s support pool of experts, doubts that OpenAI’s data collection for ChatGPT is lawful. He argues that scraping billions or trillions of data points from sites whose terms prohibit third-party scraping is a breach of those terms. Hanff also contends that because ChatGPT is a commercial product, fair use does not apply.

 

GDPR issue 2: Data Security

Like other large AI models, ChatGPT must be safeguarded against attacks and data breaches. The GDPR mandates that appropriate technical and organizational measures are taken to guarantee the security and confidentiality of personal data.
 
ChatGPT presents potential security risks, such as data theft, spam and phishing emails, and malicious software. Additionally, malicious actors can misuse it to help craft cyberattacks.
 
ChatGPT’s Privacy Policy raises several doubts about its compliance with the GDPR. Article 3 of the Privacy Policy states that OpenAI may share a user’s personal information with third parties “in certain circumstances without further notice” unless the law requires otherwise. This provision may prove an obstacle to complying with GDPR requirements for data security and privacy.

 

GDPR Issue 3: Fairness & Transparency

The EU-GDPR requires that any decision made by an AI system be both explainable and justifiable: the system’s decisions must be fair, and its reasoning must be transparent. It is uncertain whether ChatGPT meets this criterion.
 
When ChatGPT first made its debut, it gave wrong answers of the kind researchers refer to as “hallucinations”. Fake medical advice was especially concerning. Bogus social media accounts are already a problem, and bots like ChatGPT can make them even harder to spot. Furthermore, misinformation can spread more easily because ChatGPT makes even inaccurate answers sound convincing.

 

GDPR Issue 4: Accuracy & Reliability

The GDPR stipulates that companies must be transparent in their handling of personal data and must ensure it is accurate and reliable. This extends to AI systems, which need to be screened, tested and monitored to guarantee accuracy and dependability.
 
It is not certain that ChatGPT meets the standards of Article 17 GDPR, which grants individuals the right to be forgotten: on request, their personal data must be erased from the model. Because that data is embedded in the model’s parameters rather than stored in a retrievable database, and because ChatGPT can produce incorrect answers about individuals, it is hard to erase all traces of a person’s details. I wrote a detailed article on this.

 

GDPR Issue 5: Accountability

Under the General Data Protection Regulation (GDPR), organizations must be able to prove that they have taken appropriate measures to protect people’s data and that their AI systems are held accountable for their outputs. On request, they must be able to demonstrate that these measures are effective.
 
The use of ChatGPT raises a number of accountability questions when it produces wrong answers about the personal data of real people.
 
Under OpenAI’s terms, the burden of data privacy rests with the users, not OpenAI. This is concerning, as ChatGPT can generate wrong, sometimes wildly wrong, responses, leading to the spread of false information and online abuse. Additionally, OpenAI’s researchers and developers select the data used to train ChatGPT, and bias in that data can negatively affect the model’s output.

 

Conclusion

OpenAI must accept greater accountability for the accuracy of ChatGPT’s replies and of the data used to train the model, or face the harsh legal consequences of GDPR non-compliance. Furthermore, to respect privacy rights, OpenAI must proactively and unequivocally explain to users their responsibilities when operating the ChatGPT system.