
Apple denies reports that its AI was trained on YouTube videos

MrBeast in a video announcing NFL Sunday Ticket contests.
Phil Nickinson / Digital Trends

Update: Apple has since confirmed to 9to5Mac that the OpenELM language model trained on YouTube Subtitles was not used to power any of its AI or machine learning features, including Apple Intelligence. Apple says OpenELM was created solely for research purposes and that there are no plans for future versions. The original story, published on July 16, 2024, follows below:

Apple is the latest in a long line of generative AI developers, a list nearly as old as the industry itself, to be caught scraping copyrighted content from social media in order to train its artificial intelligence systems.


According to a new report from Proof News, Apple has been using a dataset containing the subtitles of 173,536 YouTube videos to train its AI. Apple isn’t alone in the infraction, either: despite YouTube’s explicit rules against harvesting its data without permission, other AI heavyweights, including Anthropic, Nvidia, and Salesforce, have been caught using the dataset as well.

The dataset, known as YouTube Subtitles, contains video transcripts from more than 48,000 YouTube channels, from Khan Academy, MIT, and Harvard to The Wall Street Journal, NPR, and the BBC. Even transcripts from late-night shows like “The Late Show With Stephen Colbert,” “Last Week Tonight with John Oliver,” and “Jimmy Kimmel Live” are part of the YouTube Subtitles database. Transcripts of videos from YouTube creators like Marques Brownlee and MrBeast, as well as from a number of conspiracy theorists, were also lifted without permission.

The dataset itself does not contain any video files, though it does include translations of the subtitles into other languages, including Japanese, German, and Arabic. YouTube Subtitles is a subset of a much larger dataset dubbed the Pile, compiled by the nonprofit research group EleutherAI, which pulled its data not just from YouTube but also from sources like European Parliament records and Wikipedia.

Bloomberg, Anthropic, and Databricks also trained models on the Pile, the companies’ respective publications indicate. “The Pile includes a very small subset of YouTube subtitles,” Jennifer Martinez, a spokesperson for Anthropic, said in a statement to Proof News. “YouTube’s terms cover direct use of its platform, which is distinct from use of The Pile dataset. On the point about potential violations of YouTube’s terms of service, we’d have to refer you to The Pile authors.”

Technicalities aside, AI companies helping themselves to the contents of the open internet has been a point of contention since ChatGPT made its debut. Stability AI and Midjourney are currently facing a lawsuit from content creators who allege that the companies scraped their copyrighted works without permission. Google itself, which operates YouTube, was hit with one class-action lawsuit last July and another in September, suits the company argues would “take a sledgehammer not just to Google’s services but to the very idea of generative AI.”

Me: What data was used to train Sora? YouTube videos?
OpenAI CTO: I'm actually not sure about that…

(I really do encourage you to watch the full @WSJ interview where Murati did answer a lot of the biggest questions about Sora. Full interview, ironically, on YouTube:… pic.twitter.com/51O8Wyt53c

— Joanna Stern (@JoannaStern) March 14, 2024

What’s more, these same AI companies often have difficulty even citing where they obtained their training data. In a March 2024 interview with The Wall Street Journal’s Joanna Stern, OpenAI CTO Mira Murati stumbled repeatedly when asked whether her company used videos from YouTube, Facebook, and other social media platforms to train its models. “I’m just not going to go into the details of the data that was used,” Murati said.

And this past July, Microsoft AI CEO Mustafa Suleyman argued that a nebulous “social contract” makes anything found on the open web fair game.

“I think that with respect to content that’s already on the open web, the social contract of that content since the ’90s has been that it is fair use,” Suleyman told CNBC. “Anyone can copy it, re-create with it, reproduce with it. That has been freeware, if you like, that’s been the understanding.”

Everything you need to know about AI agents and what they can do

The agentic era of artificial intelligence has arrived. Billed as "the next big thing in AI research," AI agents can operate independently, without continuous, direct oversight, while collaborating with users to automate monotonous tasks. In this guide, you'll find everything you need to know about how AI agents are designed, what they're capable of, and whether they can be trusted to act on your behalf.
What is an agentic AI?
Agentic AI is a type of generative AI model that can act autonomously, making decisions and taking actions toward complex goals without direct human intervention. These systems interpret changing conditions in real time and react accordingly, rather than following predefined rules by rote. Built on the same large language models that drive popular chatbots like ChatGPT, Claude, and Gemini, agentic AIs differ in that they use those LLMs to take action on a user's behalf rather than simply generate content.

AutoGPT and BabyAGI are two of the earliest examples of AI agents, as they were able to solve reasonably complex queries with minimal oversight. AI agents are considered an early step toward achieving artificial general intelligence (AGI). In a recent blog post, OpenAI CEO Sam Altman argued, “We are now confident we know how to build AGI as we have traditionally understood it,” and predicted that “in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”
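
To make the idea concrete, here is a minimal sketch of the plan-act-observe loop that most agent frameworks are built around. Everything in it, the llm() stub, the get_weather() tool, and the ACTION/FINAL reply format, is an illustrative assumption rather than any vendor's actual API; a real agent would swap the stub for calls to a hosted model.

```python
# A toy plan-act-observe agent loop. llm() is a canned stand-in for a real
# model call, so the sketch runs end to end with no network access.

def llm(prompt: str) -> str:
    """Hypothetical model call; a real agent would query an LLM API here."""
    if "OBSERVATION" not in prompt:
        return "ACTION: get_weather(city='New York')"
    return "FINAL: Tomorrow morning in New York looks sunny."

def get_weather(city: str) -> str:
    """Toy tool; a real agent would hit a weather API."""
    return f"Forecast for {city}: sunny, 72F"

TOOLS = {"get_weather": get_weather}

def run_agent(goal: str, max_steps: int = 5) -> str:
    prompt = f"GOAL: {goal}"
    for _ in range(max_steps):
        reply = llm(prompt)
        if reply.startswith("FINAL:"):          # model says it's done
            return reply.removeprefix("FINAL:").strip()
        # Parse a tool call of the form: ACTION: name(arg='value')
        name, _, arg_str = reply.removeprefix("ACTION: ").partition("(")
        arg = arg_str.rstrip(")").split("=", 1)[1].strip("'")
        # Feed the tool's result back so the model can plan the next step.
        prompt += f"\nOBSERVATION: {TOOLS[name](arg)}"
    return "Gave up after too many steps."

print(run_agent("What's the weather in New York tomorrow morning?"))
```

The loop is the whole trick: the model's output determines which tool runs, and each tool's output shapes the model's next decision, which is what separates an agent from a chatbot that merely answers.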

Read more
ChatGPT just dipped its toes into the world of AI agents

OpenAI appears to be throwing spaghetti at the wall at this point, hoping something sticks and turns into a profitable idea. The company announced on Tuesday that it is rolling out a new feature called ChatGPT Tasks to subscribers of its paid tier, one that will allow users to set individual and recurring reminders through the ChatGPT interface.

Tasks does exactly what it sounds like: it lets you ask ChatGPT to perform a specific action at some point in the future. That could be assembling a weekly news brief every Friday afternoon, telling you what the weather will be like in New York City tomorrow morning at 9 a.m., or reminding you to renew your passport before January 20. ChatGPT will also send a push notification with relevant details. To use it, you'll need to select "4o with scheduled tasks" from the model picker menu, then tell the AI what you want it to do and when.
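
Under the hood, a feature like this reduces to a familiar scheduling pattern: store each reminder with a due time, sleep until the earliest one fires, notify the user, and re-queue anything recurring. Here is a minimal sketch of that pattern; the Task class and notify() function are hypothetical stand-ins, not OpenAI's implementation, which would also persist tasks server-side and deliver real push notifications.

```python
import datetime as dt
import heapq
import time

class Task:
    """A reminder with a due time and an optional recurrence interval."""
    def __init__(self, message, due, repeat_every=None):
        self.message = message
        self.due = due
        self.repeat_every = repeat_every  # timedelta, or None for one-shot

    def __lt__(self, other):  # lets heapq order tasks by due time
        return self.due < other.due

def notify(task):
    """Hypothetical stand-in for a real push notification."""
    print(f"[{dt.datetime.now():%H:%M:%S}] Reminder: {task.message}")

def run(tasks, max_fires=3):
    """Fire tasks in due-time order, re-queuing recurring ones."""
    heap = list(tasks)
    heapq.heapify(heap)
    fires = 0
    while heap and fires < max_fires:
        task = heapq.heappop(heap)
        wait = (task.due - dt.datetime.now()).total_seconds()
        if wait > 0:
            time.sleep(wait)
        notify(task)
        fires += 1
        if task.repeat_every:
            task.due += task.repeat_every
            heapq.heappush(heap, task)

now = dt.datetime.now()
run([
    Task("Renew your passport", now + dt.timedelta(seconds=1)),
    # A real weekly brief would repeat every timedelta(weeks=1);
    # seconds keep this demo short.
    Task("Weekly news brief", now + dt.timedelta(seconds=2),
         repeat_every=dt.timedelta(seconds=2)),
])
```

The priority queue keeps the next-due task at the front regardless of insertion order, which is the same design choice calendar and cron-style systems make.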

Read more
Microsoft nixes its Dall-E upgrade after image quality complaints
Robot holding a video camera, generated by Bing.

After Bing users vociferously complained about a decline in image quality, Microsoft has had to roll back its latest update to Bing's image generation system, which had installed the newest iteration of OpenAI's Dall-E model, known as PR16.

https://x.com/JordiRib1/status/1869425938976665880

Read more