Italian Data Protection Authority Raises Concerns Over OpenAI’s ChatGPT, Citing Potential GDPR Violations
The Italian Data Protection Authority, known as the Garante, has raised concerns that OpenAI’s ChatGPT may violate the European Union’s General Data Protection Regulation (GDPR). Following a months-long investigation, the Garante issued a formal notice to OpenAI setting out the suspected breaches of EU privacy rules, and OpenAI has 30 days to respond and present its defense against the allegations.
Earlier, the Italian authority had temporarily banned ChatGPT’s local data processing in Italy, citing several concerns: the lack of a suitable legal basis for collecting and processing personal data to train ChatGPT’s algorithms, inadequate child-safety protections, and the tool’s tendency to produce inaccurate information. OpenAI made interim changes to address some of these points, but it now faces the preliminary conclusion that its operations may breach EU law. The central question is what legal basis OpenAI has for processing personal data to train its AI models, particularly since ChatGPT was built on data scraped from the public internet.
OpenAI initially claimed that the “performance of a contract” provided a legal basis for training the ChatGPT models, but the Italian authority rejected that argument. That leaves consent or legitimate interests as the only realistic legal bases. Obtaining consent from the vast number of individuals whose data has been processed appears impractical, so legitimate interests remains the most plausible option. That basis, however, requires OpenAI to allow data subjects to object to the processing, which poses challenges for the continuous operation of an AI chatbot.
In response to growing regulatory risk in the EU, OpenAI is seeking to establish a physical base in Ireland so that GDPR compliance oversight would be led by Ireland’s Data Protection Commission, part of a broader effort to address data protection concerns across the EU. Beyond the ongoing Italian investigation, OpenAI is also under scrutiny in Poland following a complaint about inaccurate information produced by ChatGPT and about OpenAI’s response to the complainant.
The outcome of this investigation is likely to have significant implications not only for ChatGPT but also for the broader landscape of AI applications and their adherence to EU data protection standards. The case also illustrates the challenges that innovative technologies such as AI chatbots face in navigating Europe’s stringent data protection regulations.