
Bluesky has ‘no intention’ to train generative AI on user content


After adding its 16 millionth user on Friday morning, social media platform Bluesky addressed concerns from the bevy of artists and content creators migrating over from X.com. The company has pledged that it has “no intention” of using their posted content to train generative AI.

A number of artists and creators have made their home on Bluesky, and we hear their concerns with other platforms training on their data. We do not use any of your content to train generative AI, and have no intention of doing so.

— Bluesky (@bsky.app) 2024-11-15T17:17:39.921Z


“Bluesky uses AI internally to assist in content moderation, which helps us triage posts and shield human moderators from harmful content,” the Bluesky team explained in a subsequent post. “We also use AI in the Discover algorithmic feed to serve you posts that we think you’d like. None of these are Gen AI systems trained on user content.”

Granted, Bluesky’s wording does not rule out the possibility that the platform could change course at some point in the future, as user Casey Johnston points out. Still, it is a marked departure from the new rules rolled out at X on Friday. The social media platform, run by billionaire Elon Musk, has modified its privacy policy and will begin using its expansive archive of user posts to train the next generation of its Grok large language model.

This isn’t the first time that X.com has attempted to cannibalize its users’ content for private gain. The company quietly changed its privacy policy in July to grant itself access to user posts as training data. In mid-October, it went further, allowing third-party “collaborators” to train models on X data unless users opt out:

“Depending on your settings, or if you decide to share your data, we may share or disclose your information with third parties. If you do not opt out, in some instances the recipients of the information may use it for their own independent purposes in addition to those stated in X’s Privacy Policy, including, for example, to train their artificial intelligence models, whether generative or otherwise.”

Those policy changes take effect today, Friday, November 15. All public posts, including text, images, and interactions, can be harvested to train Grok (and any other models the company plans to pursue). If you’d prefer your content not be scraped, you may want to take a deep dive into how to clean up your social media accounts.

Andrew Tarantola