LinkedIn has quietly opted its users into training generative AI models without explicitly asking for consent, raising concerns about data privacy on the platform. According to a report by 404 Media, LinkedIn changed its privacy policy to state that user data can be used to train AI models. The platform has since updated the policy and now allows users to opt out of the practice.
Updated LinkedIn policy reveals personal data is used for AI training
The updated policy states that LinkedIn may use personal data to “improve, develop, and provide products and Services,” as well as to train AI models. Generative AI powers features like writing assistants, and LinkedIn says it employs privacy-enhancing technologies to redact personal information from training data. Users who prefer not to participate can opt out by navigating to the “Data privacy” tab in their account settings and turning off the “Data for Generative AI Improvement” toggle.
However, opting out only stops LinkedIn from using your data for future model training; data that has already been used is unaffected. LinkedIn also clarifies that data from users in the EU, EEA, or Switzerland is not used for AI model training.
If you’re concerned about the other machine learning tools LinkedIn uses for personalization and moderation, the platform requires you to fill out a “Data Processing Objection Form” to opt out of those uses as well.
LinkedIn’s silent opt-in move echoes similar actions from Meta, which recently admitted to scraping non-private user data for AI training dating back to 2007.
LinkedIn’s move comes at a moment when other major tech players, like OpenAI, are also facing backlash for similar practices. This pattern of quietly enrolling users in AI training without clear, prominent notification creates a sense of unease.
OpenAI CTO Mira Murati says Sora was trained on publicly available and licensed data pic.twitter.com/rf7pZ0ZX00
— Tsarathustra (@tsarnick) March 13, 2024
It’s not just about data being used for AI; it’s about who gets to decide, and how informed that decision is. The tech industry has long faced criticism for operating in the shadows when it comes to data collection, and the push toward generative AI is only amplifying those concerns.
Can machines forget your personal data?
Another key issue is that opting out only affects future use of personal data. Anything that has already been fed into AI models remains in the system, and that lack of retroactive control may leave many users feeling powerless. The industry’s proposed answer is “machine unlearning”: techniques for removing the influence of specific data from models that have already been trained (a minimal sketch of the idea follows below).
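LinkedIn has said nothing about supporting unlearning, and nothing below reflects its actual systems. But as a rough intuition, the only guaranteed way to make a model “forget” someone is to retrain it without their data, which is exactly why retroactive removal is expensive at scale. Here is a minimal sketch of that naive “exact unlearning” baseline, assuming a toy scikit-learn model and hypothetical per-user data:

```python
# Minimal sketch of "exact unlearning" by full retraining: the naive
# baseline that machine-unlearning research tries to make cheaper.
# The dataset, model, and user IDs are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dataset: 100 samples, each tagged with the (hypothetical) user it came from.
X = rng.normal(size=(100, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(int)
user_ids = rng.integers(0, 10, size=100)  # 10 hypothetical users

# Model trained on everyone, including user 3.
model_full = LogisticRegression().fit(X, y)

# "Unlearn" user 3 the only provably complete way: drop their rows and retrain.
keep = user_ids != 3
model_unlearned = LogisticRegression().fit(X[keep], y[keep])

# The retrained weights no longer depend on user 3's data in any way --
# which is exactly the guarantee an opt-out *after* training cannot give.
print("weights with user 3:   ", model_full.coef_.round(3))
print("weights without user 3:", model_unlearned.coef_.round(3))
```

Research proposals such as SISA-style sharded training aim to make this retraining cheaper by limiting how much of the model any one user’s data touches, but no such guarantee applies to data already absorbed by today’s deployed models.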
The fact that LinkedIn uses “privacy-enhancing technologies” to anonymize data is somewhat reassuring, but it doesn’t address the deeper problem: the need for more proactive, user-centered privacy standards.
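LinkedIn has not published how its redaction works, so as a purely hypothetical illustration of what this class of technique involves, here is a toy sketch that strips obvious identifiers from text before it would reach a training pipeline (the patterns and placeholder tokens are assumptions, not LinkedIn’s actual method):

```python
# Toy illustration of one kind of "privacy-enhancing technology": redacting
# recognizable personal identifiers from text prior to training.
# The regexes below are simplified assumptions for demonstration only.
import re

# Hypothetical patterns for two common identifier types.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace recognizable identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
# -> "Reach me at [EMAIL] or [PHONE]."
```

Even a much more sophisticated version of this idea only masks what it can recognize; it cannot promise that no personal information survives in the training data, which is why redaction alone does not settle the consent question.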
Ultimately, this situation highlights the need for stronger, clearer regulations that put control back in the hands of users. The idea that tech companies can use our personal data without clear consent doesn’t sit well at a time when privacy is becoming increasingly valuable.
Featured image credit: Kerem Gülen/Ideogram