Concerns about GDPR compliance might extend to other AI solutions too.
On Friday, Italian regulators imposed an immediate ban on the generative AI tool ChatGPT, giving its creator, OpenAI, 20 days to address concerns about the way data is collected and processed, under penalty of a fine of $21.7 million or up to 4% of annual revenues, whichever is greater.
There have been indications that other European regulators may swiftly follow suit. Reports suggest that France is conducting its own inquiry; Ireland has asked Italy for more details about the basis for the ban; and the German data commissioner has said that the same action could “in principle” be taken in Germany.
Why we care. Given the immense excitement created by the availability of ChatGPT and similar tools, it was perhaps too easy to overlook warnings emerging from the legal profession over the last few months that it could run afoul of European data regulations — regulations which, in many ways, have become a de facto global standard.
If the questions that arise need to work their way through the European legal system for adjudication, that could take some time, of course. But it’s clear that regulators in European nations can take swift action in the meantime.
Lawful bases for processing data. One fundamental challenge for large language models like ChatGPT is that under European law, specifically the GDPR, there are only six lawful bases for processing personal data at all (data that can identify an individual directly, or indirectly when combined with other information). The bases are:
- Consent.
- Performance of a contract.
- A legitimate interest.
- A vital interest (a matter of life and death).
- A legal requirement.
- A public interest.
To the extent a large language model is being trained on data obtained without explicit consent, it’s by no means clear that any of these bases are applicable — unless, perhaps, one makes the bold assumption that the availability of AI solutions is in the public interest.
Data erasure. Another challenge is whether a solution like ChatGPT is capable of supporting the “right to be forgotten.” Under GDPR, in certain circumstances, an individual can request the erasure of their data. To be clear, ChatGPT is not scraping the web and heedlessly collecting large quantities of personal data. But it is trained on very large sets of texts, and the question OpenAI might have to address is whether it knows what those sets contain in terms of personally identifying information, including data it might one day be asked to erase.
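To see why that question is hard, consider a toy sketch (not OpenAI’s actual tooling) of scanning a text corpus for one narrow kind of identifier, email addresses. Even this simple pattern-based approach is approximate, and real personal data takes far messier forms, which hints at why auditing a training set for erasure requests is a serious engineering problem.

```python
import re

# Hypothetical illustration: a naive scan for email-like strings in a
# small text corpus. A pattern match is approximate at best; names,
# addresses, and indirect identifiers would not be caught this way.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def find_emails(corpus: list[str]) -> list[str]:
    """Return email-like strings found across all documents."""
    hits: list[str] = []
    for doc in corpus:
        hits.extend(EMAIL_RE.findall(doc))
    return hits

corpus = [
    "Contact jane.doe@example.com for details.",
    "No identifiers in this sentence.",
]
print(find_emails(corpus))  # ['jane.doe@example.com']
```

Scaled to billions of documents, and extended beyond easily patterned identifiers, this kind of inventory is exactly what a regulator asking about erasure would expect a data controller to be able to produce.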
The post ChatGPT under threat from European regulators appeared first on MarTech.