Italian privacy enforcers have opened an investigation into OpenAI's ChatGPT for allegedly violating EU and Italian privacy laws by collecting personal data of the country's citizens without "a suitable legal basis".
The announcement of the probe came alongside a decree in which the Guarantor for the Protection of Personal Data (GPDP) said it was imposing an immediate "temporary limitation of the processing of personal data" of Italian citizens due to violations of both the EU's General Data Protection Regulation and Italy's own data protection code.
"The Privacy Guarantor notes the lack of information to users and all interested parties whose data is collected by OpenAI," the GPDP said in a statement. It added that ChatGPT's processing of user data can provide an inaccurate picture, "as the information provided by ChatGPT does not always correspond to the real data."
The Guarantor also expressed concern that OpenAI hadn't vetted the age of ChatGPT's users, even though the Microsoft-backed firm says the service is designed for those aged 13 or older. There's no age verification process for ChatGPT users, which the GPDP said "exposes minors to absolutely unsuitable answers compared to the degree of development and self-awareness," presumably referring to the pre-teens using it, and not the AI itself.
In its statement, the GPDP also referenced the data exposure bug that last week caused ChatGPT to display other users' partial payment details and chat histories on people's accounts. While the breach wasn't mentioned in the limitation decree, its appearance in the GPDP's statement suggests the incident is a central focus of the investigation.
The Guarantor said the temporary limit extends to all personal data of interested parties being collected within Italy's borders, in essence blocking use of the service until OpenAI is able to show that it has resolved the issues identified by the GPDP.
OpenAI has 20 days to respond, the Guarantor said, or else it faces fines of up to €20 million ($21.7 million) or up to 4 percent of its annual global turnover, whichever is greater.
This isn't the first time the Guarantor has taken action against an AI that it thought was behaving badly. In February the GPDP announced a similar prohibition against Replika, an AI chatbot app that allows users to customize a virtual companion for anything from friendly chats to a virtual relationship.
The GPDP said last month it was concerned that Replika may increase risks for individuals "still in a developmental stage" (ie, minors), "or in a state of emotional fragility." As we've noted in previous coverage of Replika, CEO Eugenia Kuyda has said that otherwise stable individuals have been fooled by the app into thinking their Replikas are sentient and have built relationships with their personal chatbot.
Italian authorities also made claims that Replika lacked an age verification mechanism. As such, they alleged in February, Replika is breaching the GDPR and unlawfully processing personal data.
ChatGPT, Replika and tools like it are so new that it's easy to forget widespread use has only been happening "for a matter of weeks," said Edward Machin, a London-based privacy lawyer at international law firm Ropes & Gray.
Machin told us in a statement that most users probably haven't stopped to consider the privacy implications of their data being used to train OpenAI's software. "The allegation here is that users aren't being given the information to allow them to make an informed decision, and more problematically, that in any event there may not be a lawful basis to process their data."
The move to ban OpenAI's processing of Italians' data is one of the most powerful weapons in the GPDP's armory, Machin said. "I suspect that regulators across Europe will be quietly thanking the Garante for being the first to take this step and it wouldn't be surprising to see others now follow suit and issue similar processing bans," Machin predicted.
OpenAI hadn't responded to our questions by the time of publication. ®