LinkedIn is embroiled in controversy after being found to have used user data to train AI without explicit consent, drawing particular scrutiny over how it treats users in regions subject to the GDPR, such as the EU.

Professional social network LinkedIn is facing criticism after TechCrunch revealed that the platform used user data to train artificial intelligence (AI) models without adequately disclosing the practice in advance and, in some regions, without offering users an opt-out.

LinkedIn users in the US can reportedly opt out through their privacy settings, but users in the EU, EEA, and Switzerland do not have this option, likely due to strict data protection regulations there, especially the General Data Protection Regulation (GDPR).

For its part, LinkedIn said its AI models are trained to provide features such as post suggestions and content recommendations. The company also confirmed that some AI models may be trained by other providers, including its parent company, Microsoft.

In a Q&A section on its website, LinkedIn explains that it collects and uses data when users interact with the platform, including personal data, posts, articles, and feedback. All of this, the company says, is done in accordance with LinkedIn's privacy policy to improve and develop its services.

However, LinkedIn began using user data for AI training before updating its privacy policy, breaking with its stated practice of notifying users of material changes in advance so they can adjust their accounts or leave the platform if they disagree with the new terms.

Users can go to “Data Privacy” in the settings menu, select “Data to improve AI generation”, and turn off the “Use my data to train AI models to generate content” option so their data is no longer used for AI training going forward. Opting out reportedly does not undo any training that has already taken place.

The Open Rights Group (ORG) has called on the UK Information Commissioner's Office (ICO) to investigate LinkedIn and other social networks that train AI on user data without explicit consent.

On the GDPR front, LinkedIn has notified the Irish Data Protection Commission (DPC) – its lead supervisory authority in the EU – that it will be updating its global privacy policy.

The DPC said LinkedIn will offer an opt-out for the use of data to train AI models; however, this does not apply to EU/EEA users because, according to the company, LinkedIn does not use data from users in these regions for AI training.

The episode is the latest escalation in the trend of online platforms using user data to develop AI. A growing number of platforms, from Reddit to Stack Overflow, have licensed or used their data to train AI models. Not all of them offer users the right to opt out, however, which has fueled privacy and data protection controversies.

Meta, the parent company of Facebook and Instagram, recently revived its plans to use user data to train AI after working with the ICO to simplify its opt-out process. The case reflects a growing pattern of social platforms repurposing user data, even when users have not given full, explicit consent.

This raises concerns about the transparency and accountability of technology platforms in how they use user data. Especially as AI development accelerates, platforms need to be clearer about users' rights and about consent and data protection.