Meta has announced a pause in the launch and training of its AI assistant in Europe at the request of the Irish Data Protection Commission (DPC).

The request comes after the DPC, Meta's lead privacy regulator in the EU, asked the company to halt training of large language models (LLMs) on publicly available content shared by Facebook and Instagram users.

Meta said it was “disappointed” by the decision, calling it a “step backward” for AI competition and innovation in the EU. The company insists, however, that it complies with European regulations.

The DPC supported the suspension, saying it “welcomed the decision” and would continue to work with Meta following previous “extensive engagement.”

According to the European privacy non-profit NOYB, Meta's temporary suspension followed complaints filed by NOYB itself as well as by other organizations, including the Norwegian Consumer Council and several data protection authorities.

The dispute has been building for months. Meta said it had been notifying European data protection authorities about its plans since March.

However, the situation became more serious when Meta began notifying users about its AI training through more than two billion in-app notifications and emails on May 22. NOYB filed a complaint with 11 data protection authorities on June 6.

Meta acknowledged consulting with the DPC on June 10 and said it had incorporated the regulator's feedback to ensure its AI training complies with European law.

Meta notes that Google and OpenAI have also used user data to train AI. The company plans to rely on “legitimate interest,” a flexible legal basis under the EU General Data Protection Regulation (GDPR) that other AI companies have also invoked.

This pause highlights the complexity and challenges in applying AI technology, especially in the context of increasingly stringent data protection regulations.