The Irish Data Protection Commission is investigating whether Google's use of EU users' personal data to train AI complies with the General Data Protection Regulation.
According to TechCrunch on September 12, the Irish Data Protection Commission (DPC), the agency that oversees Google's compliance with the General Data Protection Regulation (GDPR) in the EU, is investigating whether Google conducted a Data Protection Impact Assessment (DPIA) before using personal data to train its AI models.
A DPIA is a mandatory process under the GDPR that assesses the risks to individuals' rights and freedoms when their data is used for high-risk processing, a category that includes AI training.
The investigation focuses on how Google developed several of its generative AI tools, including the large language model (LLM) PaLM 2, unveiled at last year's I/O developer conference. PaLM 2 underpins many of Google's AI products, such as the Gemini (formerly Bard) chatbot and AI-powered web search features.
If Google is found to have breached GDPR, the DPC could impose fines of up to 4% of the annual global turnover of Alphabet, Google's parent company.
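The 4% cap mentioned above is straightforward arithmetic, but the scale is easy to underestimate. A minimal sketch, using a purely hypothetical turnover figure (Alphabet's actual revenue is not given in the source):

```python
def max_gdpr_fine(annual_turnover: float, rate: float = 0.04) -> float:
    """Maximum fine under the GDPR's 4%-of-worldwide-annual-turnover cap."""
    return annual_turnover * rate

# Hypothetical turnover of $300 billion, for illustration only.
hypothetical_turnover = 300e9
fine_cap = max_gdpr_fine(hypothetical_turnover)
print(f"Maximum fine: ${fine_cap / 1e9:.0f} billion")
```

At that hypothetical turnover, the ceiling would run into the billions of dollars, which is why GDPR findings against large platforms draw so much attention.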
Image taken at the I/O developer conference. Source: TechCrunch.

Privacy concerns
Large language models like PaLM 2 typically require huge amounts of training data, which can include sensitive personal information. That data may be scraped from public sources on the Internet or collected directly from users, raising legal issues around privacy and copyright.
Many AI developers, including OpenAI, Meta, and Elon Musk's social media platform X (formerly Twitter), have faced GDPR-related complaints over their use of user data to train AI.
Google, like many other companies, is not transparent about the data sources used to train its AI tools. However, Google spokesperson Jay Stoll said: “We take our obligations under GDPR seriously and will work closely with the DPC to address any concerns.”
The DPC investigation is part of a broader effort by EU data protection authorities to ensure that tech companies comply with privacy rules when developing AI systems.
The agency stressed that DPIAs play an important role in ensuring that individuals’ rights and freedoms are protected when personal data is used in high-risk processing. If Google failed to carry out a DPIA before processing EU users’ data to develop PaLM 2, the company could face serious penalties.
So far, Google has not issued an official response to the investigation.