Update: Apple has since confirmed to 9to5Mac that the OpenELM language model trained on the YouTube Subtitles dataset was not used to power any of its AI or machine learning features, including Apple Intelligence. Apple says OpenELM was created solely for research purposes and will not get future versions. The original story, published on July 16, 2024, follows below:
Apple is the latest in a long line of generative AI developers, a list nearly as old as the industry itself, to be caught scraping copyrighted content from social media to train its artificial intelligence systems.
According to a new report from Proof News, Apple has been using a dataset containing the subtitles of 173,536 YouTube videos to train its AI. Apple isn't alone in the practice, either: despite YouTube's explicit rules against exploiting such data without permission, other AI heavyweights, including Anthropic, Nvidia, and Salesforce, have been caught using the dataset as well.
The dataset, known as YouTube Subtitles, contains video transcripts from more than 48,000 YouTube channels, ranging from Khan Academy, MIT, and Harvard to The Wall Street Journal, NPR, and the BBC. Even transcripts from late-night talk shows like "The Late Show With Stephen Colbert," "Last Week Tonight with John Oliver," and "Jimmy Kimmel Live" are part of the YouTube Subtitles database. Transcripts of videos from YouTube influencers like Marques Brownlee and MrBeast, as well as a number of conspiracy theorists, were also lifted without permission.
The dataset itself, which was compiled by the startup EleutherAI, does not contain any video files, though it does include a number of translations into other languages, including Japanese, German, and Arabic. EleutherAI reportedly obtained its data from a larger dataset, dubbed the Pile, which was itself created by a nonprofit that pulled its data not just from YouTube but also from European Parliament records and Wikipedia.
Bloomberg, Anthropic, and Databricks also trained models on the Pile, the companies' respective publications indicate. "The Pile includes a very small subset of YouTube subtitles," Jennifer Martinez, a spokesperson for Anthropic, said in a statement to Proof News. "YouTube's terms cover direct use of its platform, which is distinct from use of The Pile dataset. On the point about potential violations of YouTube's terms of service, we'd have to refer you to The Pile authors."
Technicalities aside, AI startups helping themselves to the contents of the open internet has been an issue since ChatGPT made its debut. Stability AI and Midjourney are currently facing a lawsuit from content creators who allege the companies scraped their copyrighted works without permission. Google itself, which operates YouTube, was hit with a class-action lawsuit last July and then another in September, suits the company argues would "take a sledgehammer not just to Google's services but to the very idea of generative AI." OpenAI has faced similar questions about its own training data: when The Wall Street Journal's Joanna Stern asked the company's CTO whether its Sora video model was trained on YouTube videos, the answer was less than definitive.
Me: What data was used to train Sora? YouTube videos?
OpenAI CTO: I'm actually not sure about that…(I really do encourage you to watch the full @WSJ interview where Murati did answer a lot of the biggest questions about Sora. Full interview, ironically, on YouTube:… pic.twitter.com/51O8Wyt53c
— Joanna Stern (@JoannaStern) March 14, 2024
And this past July, Microsoft AI CEO Mustafa Suleyman made the argument that an ethereal “social contract” means anything found on the web is fair game.
“I think that with respect to content that’s already on the open web, the social contract of that content since the ’90s has been that it is fair use,” Suleyman told CNBC. “Anyone can copy it, re-create with it, reproduce with it. That has been freeware, if you like, that’s been the understanding.”