Italy’s data protection watchdog has laid out what OpenAI must do to lift an order against ChatGPT issued at the end of last month, when it suspected the AI chatbot service of violating the EU’s General Data Protection Regulation (GDPR) and ordered the U.S.-based company to stop processing locals’ data.
There’s no doubt that large language models like OpenAI’s GPT were trained on massive amounts of data scraped from the public internet, including personal data, to enable their generative AI to respond to natural language prompts in a human-like way.
OpenAI geoblocked ChatGPT immediately after the Italian data protection authority ordered it. OpenAI CEO Sam Altman tweeted that the service had been discontinued in Italy, adding the usual Big Tech disclaimer that it “think[s] we are following all privacy laws.”
Italy’s Garante disagrees.
The short version of the regulator’s new compliance demand is this: OpenAI must get transparent and publish an information notice detailing its data processing; it must immediately adopt age gating to prevent minors from accessing the tech and move to more robust age verification measures; it must clarify the legal basis it’s claiming for processing people’s data for training its AI (and cannot rely on performance of a contract — meaning it must choose between consent or legitimate interests); it must provide ways for users and non-users to exercise rights over their personal data, including having errors corrected or data deleted; and it must run a local awareness campaign to inform Italians that their data is being processed to train its AI.
The DPA gave OpenAI until April 30 to finish most of that. The local radio, TV, and internet awareness campaign has a slightly longer deadline of May 15.
One additional requirement comes with more time: switching from the weak age gating child safety tech to a more secure age verification system. OpenAI has until May 31 to submit a plan for age verification tech that filters out users under 13, as well as those aged 13 to 18 who lack parental consent, with a September 30 deadline for implementing that more robust system.
The regulator set out these requirements in a press release detailing what OpenAI must do to lift the temporary suspension on ChatGPT, ordered two weeks ago when it announced a formal investigation of suspected GDPR breaches.
The DPA explains that the required information notice must describe “the arrangements and logic of the data processing required for the operation of ChatGPT along with the rights afforded to data subjects (users and non-users)” and “will have to be easily accessible and placed in such a way as to be read before signing up to the service.”
Users in Italy must be shown this notice and confirm their age before signing up. Those who registered before the DPA’s stop-data-processing order must be shown the notice when they access the reactivated service and pass through an age gate to filter out underage users.
The Garante has limited OpenAI’s legal basis to consent or legitimate interests, requiring it to immediately remove all references to contract performance “in line with the [GDPR’s] accountability principle.” (OpenAI’s privacy policy currently cites all three grounds but appears to prioritize contract performance for services like ChatGPT.)
“This will be without prejudice to the exercise of the SA’s investigation and enforcement powers in this respect,” it adds, confirming it is withholding judgment on whether OpenAI can use the two remaining grounds legally.
Data subjects also have access rights to correct or delete their personal data under the GDPR. The Italian regulator has also demanded that OpenAI implement tools so data subjects—both users and non-users—can exercise their rights and correct chatbot misinformation about them. If correcting AI-generated lies about named individuals is “technically unfeasible,” the DPA requires the company to delete their personal data.
OpenAI must provide easy-to-use tools for non-users to exercise their right to object to the processing of their personal data for algorithm operation. It adds, “The same right will have to be afforded to users if legitimate interest is chosen as the legal basis for processing their data,” referring to another GDPR right data subjects have when legitimate interest is used to process personal data.
The Garante’s measures are all precautionary. Its press release states that its formal inquiries “to establish possible infringements of the legislation” continue and may lead to “additional or different measures if this proves necessary upon completion of the fact-finding exercise under way.”
OpenAI had not responded to our request for comment by press time.