NEW YORK, USA – In a world increasingly driven by artificial intelligence, tech companies are turning to an unlikely source to train their AI models—your social media content. From resumes on LinkedIn to selfies on Snapchat, social media platforms are tapping into the wealth of publicly shared data to polish their AI systems.
But users may not realize just how much of their personal content is being used, and in some cases, how difficult it can be to opt out.
OpenAI, LinkedIn, Snapchat, and even Reddit have all made headlines recently for their use of user data to train AI, raising questions about privacy, consent, and corporate transparency.
“Right now, there is a lot of fear being created around AI, some of it well-founded and some based in science fiction, so it’s on these platforms to be very open about how they will and won’t use our data,” said David Ogiste, founder of marketing agency Nobody’s Cafe, in an interview with CNN.
Ogiste, a frequent poster on LinkedIn, shared concerns about how transparent platforms are in alerting users that their posts might be used to train AI models. “It doesn’t feel like that has been done yet.”
LinkedIn: Opt-Out, but Not Completely
LinkedIn, the popular networking platform, recently began offering users the option to opt out of having their data used to train AI.
The platform says it may use user-generated content, such as resumes and posts, to improve its own generative AI models and those of its affiliates, including Microsoft-backed OpenAI.
However, opting out only stops future use of a user's data. Training that has already occurred on past content cannot be undone.
To opt out, users can navigate to the “Settings & Privacy” section, then select the “Data Privacy” tab, and toggle off the option for “Data for Generative AI Improvement.”
But this opt-out feature is unavailable to users in the UK and Europe, where LinkedIn says it does not train AI on user data because of stringent privacy regulations.
X: No Alerts, Just AI
On Elon Musk’s X platform (formerly known as Twitter), user posts are also being used to train AI, most notably the controversial Grok chatbot. Grok has drawn attention for spreading false information and generating disturbing AI-created images.
The policy update allowing X to use user data for AI training wasn’t communicated directly to users but was instead spotted by vigilant observers.
Users can opt out by visiting the “Privacy and Safety” settings and unchecking a box under the “Data Sharing and Personalization” section.
According to the platform, content from users with private accounts is not used for AI training at all.
Snapchat: Selfies in Ads?
Snapchat’s “My Selfie” feature, which lets users create AI-generated images from their selfies, takes things a step further.
By using this feature, users agree to let Snap and its business partners use their images for a wide array of purposes, including advertisements, potentially without the users’ knowledge.
Tech news outlet 404 Media reported this week that Snapchat’s terms of service allow it to turn user-generated selfies into AI-generated ads, which Snap says are visible only to the users themselves.
However, Snap’s terms make it clear that these images can be used for commercial purposes globally and indefinitely.
While users must opt in to create a “My Selfie,” they can later opt out of seeing their images in ads by going to their account settings and turning off the “See My Selfie in Ads” option.
Reddit and Meta: No Escape From AI
Reddit users, too, are finding their public posts part of the AI machine. The platform has made deals with OpenAI and Google to share user content for AI training, though private messages and posts in private communities are not affected.
Redditors can’t opt out of having their public content used in this way, a situation that has frustrated some users.
Meanwhile, Meta has also acknowledged using public posts on Facebook and Instagram to train its AI.
According to Meta’s privacy policy, the company uses user content, including posts, profile pictures, and comments, to enhance its AI models.
However, private messages remain off-limits for AI training—though a friend tagging you in a post could still mean your image is fair game.
The New Reality of Social Media and AI
For users, this surge in AI training using social media content underscores a new reality: while platforms may provide some level of control over how data is used, posting publicly means there’s little guarantee your content won’t be used in ways you never intended.
As the debate over AI, privacy, and user consent continues, it’s clear that transparency from tech companies will be critical.
But as Ogiste pointed out, it may still be some time before the average person understands the full extent of how their data is used to train AI.
“It doesn’t feel like that has been done yet,” he said, encapsulating the frustration felt by many in an age of digital uncertainty.