02.04.2023

Tutti Frutti guaranteed - Italy bans ChatGPT

Niko Härting

No German data protection official would have dared to do this: The Italian data protection authority (the "Garante") ordered a ChatGPT ban last Thursday. This was an urgent measure with immediate effect.

The ban notice can be read online (https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9870832); the press release is even available in English (https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9870847#english).

As far as can be seen, the ban is based on four very different reasons. An Italian-style "tutti frutti":

 

1. No legal basis (Art. 6 GDPR)

The Italian data protection authority finds no legal basis for OpenAI's processing of personal data. This issue is familiar from other AI applications as well: artificial intelligence is created through training data, and training data is almost always - at least partially - personal. No translation software, for example, can do without personal data.

It would be naïve or foolish to make the use of personal data for machine learning dependent in every case on the consent of all data subjects (Art. 6 para. 1 sentence 1 lit. a GDPR), as this would mean the end of all AI applications in Europe. That leaves the balancing of interests under Art. 6 para. 1 sentence 1 lit. f GDPR. Why the "Garante" believes that ChatGPT cannot rely on "legitimate interests" is not apparent from the prohibition notice.

 

2. Insufficient data protection information (Art. 13 GDPR)

There are also complaints about insufficient data protection information (Art. 13 GDPR). Apparently, the Italian authority considers OpenAI's privacy policy (https://openai.com/policies/privacy-policy) insufficient and possibly believes that OpenAI must inform each person separately whenever their personal data is used as training data. How this is supposed to be possible, the Italian data protection authority does not reveal. This, too, is a challenge for almost every other AI application: Rome is demanding the impossible of AI developers.

 

3. Violation of the principle of data accuracy (Art. 5 para. 1 lit. d GDPR)

In Rome, ChatGPT was asked questions that it did not always answer correctly. Since these were questions about individual persons, the "Garante" believes the principle of data accuracy (Art. 5 para. 1 lit. d GDPR) has been violated.

Incorrect results are, of course, not a peculiarity of ChatGPT but common to AI applications. Once again, the impossible is demanded: no one has yet argued that every inaccuracy in (training) data constitutes a data protection violation to be sanctioned with an immediate ban (cf. only Schantz in Wolff/Brink, BeckOK Datenschutzrecht, as of 1.11.2021, Art. 5 DSGVO para. 21 et seq.).

 

4. No age check (Art. 8 GDPR)

Last but not least, the Italian authority criticises the lack of any age check for users.

One rubs one's eyes - twice. Firstly, because ChatGPT is far less known for pornographic content than numerous other sites on the net. Secondly, because Art. 8 GDPR does set strict requirements for consent given by minors, but an obligation to verify the age of an online service's users cannot be inferred from Art. 8 GDPR by any stretch of the imagination.

 

And the moral?

Much ado about nothing? Or a courageous step in an entirely diffuse direction? It will be interesting to see how this Italian story develops - especially since, sooner or later, the data protection authority in Rome will have to get its colleagues from the other EU countries on board, and that will require not only "tutti frutti" but also persuasion.
