OpenAI fined €15 million over ChatGPT data practices
ROME (AP) — Italy’s privacy watchdog, Garante, has imposed a hefty fine of €15 million ($15.6 million) on OpenAI, the creator of the popular AI chatbot ChatGPT. The fine follows an in-depth investigation into how the U.S.-based artificial intelligence company collects and processes users’ personal data. This marks another significant development in the growing scrutiny surrounding AI technologies globally.
Violations Highlighted by Italy’s Privacy Watchdog
The investigation found that OpenAI processed users’ personal data to train ChatGPT without meeting the necessary legal requirements. Garante said the company did not establish an adequate legal basis for the data collection and violated the principle of transparency by failing to give users adequate information about how their data would be used.
OpenAI called the decision “disproportionate” and said it plans to appeal. “When the Garante told us to cease offering ChatGPT in Italy in 2023, we collaborated very closely with them to reinstate it a month later,” an OpenAI spokesperson said in a statement. “They have since recognized our industry-leading privacy practices, but this fine is nearly 20 times the revenue we took in Italy during the time in question.”
Despite the fine, OpenAI reaffirmed its commitment to collaborating with privacy authorities globally to ensure that artificial intelligence respects individual privacy rights.
The watchdog’s investigation also found that OpenAI had failed to put adequate age-verification measures in place, potentially exposing children under 13 to inappropriate AI-generated content. Garante further ordered OpenAI to run a six-month public awareness campaign in Italian media to inform users about how ChatGPT collects and uses personal data.
AI Technologies Under Escalating Regulatory Scrutiny
Generative AI systems such as ChatGPT have surged in popularity and drawn the attention of regulators worldwide. U.S. and European authorities are keeping a close eye on OpenAI and other companies driving the AI boom, and governments are racing to craft rules that mitigate the risks posed by advanced AI systems. The most prominent of these efforts is the European Union’s AI Act, a landmark piece of legislation that sets out comprehensive rules for the development and use of artificial intelligence.
Global Implications for AI and Privacy
The fine underscores the delicate interplay between AI innovation and privacy protection, and the need for robust regulatory frameworks. As OpenAI and other companies continue to push the capabilities of artificial intelligence, balancing technological progress against individual rights remains a critical challenge for policymakers and tech leaders alike.