LinkedIn under fire for training AI models with user data behind the scenes

LinkedIn is facing scrutiny for using user data in AI model training without initial consent, later notifying users of the practice.

LinkedIn has admitted to using user data to train its generative AI models without first obtaining explicit consent from users. The company updated its user agreements and privacy policy to reflect these changes. Users can now opt out of this AI data usage. European users will be excluded from this practice for now.

LinkedIn, owned by Microsoft, has come under criticism for training its AI models on user data without first notifying users or obtaining their consent. The company later disclosed the practice in a blog post by SVP and General Counsel Blake Lawit, which also introduced updated user agreements and FAQs to inform users of the changes.

The new user agreement, which takes effect in November, outlines how LinkedIn uses user data for content recommendations, moderation, and its generative AI features. LinkedIn also rolled out an updated privacy policy explaining how user information such as posts, language preferences, and feedback is used to develop AI-generated content.

LinkedIn asserts that it employs privacy-enhancing technologies to reduce the amount of personal information in its training datasets. Users can opt out by adjusting their account settings, while European users have been exempted from automatic data collection for AI training until further notice.