Apple denies reports that its AI was trained on YouTube videos

MrBeast in a video announcing NFL Sunday Ticket contests.
Phil Nickinson / Digital Trends

Update: Apple has since confirmed to 9to5Mac that the OpenELM language model that was trained on YouTube Subtitles was not used to power any of its AI or machine learning programs, including Apple Intelligence. Apple says OpenELM was created solely for research purposes and will not get future versions. The original story published on July 16, 2024 follows below:

Apple is the latest in a long line of generative AI developers, a list nearly as old as the industry itself, to be caught scraping copyrighted content from social media in order to train its artificial intelligence systems.

According to a new report from Proof News, Apple has been using a dataset containing the subtitles of 173,536 YouTube videos to train its AI. Apple isn't alone in that infraction, either: despite YouTube's explicit rules against exploiting such data without permission, other AI heavyweights have been caught using it as well, including Anthropic, Nvidia, and Salesforce.

The data set, known as YouTube Subtitles, contains the video transcripts from more than 48,000 YouTube channels, from Khan Academy, MIT, and Harvard to The Wall Street Journal, NPR, and the BBC. Even transcripts from late-night variety shows like “The Late Show With Stephen Colbert,” “Last Week Tonight with John Oliver,” and “Jimmy Kimmel Live” are part of the YouTube Subtitles database. Videos from YouTube influencers like Marques Brownlee and MrBeast, as well as a number of conspiracy theorists, were also lifted without permission.

The data set itself, which was compiled by the startup EleutherAI, does not contain any video files, though it does include a number of translations into other languages, including Japanese, German, and Arabic. EleutherAI reportedly obtained its data from a larger dataset, dubbed the Pile, which was itself created by a nonprofit that pulled its data not just from YouTube but also from European Parliament records and Wikipedia.

Bloomberg, Anthropic, and Databricks also trained models on the Pile, the companies’ respective publications indicate. “The Pile includes a very small subset of YouTube subtitles,” Jennifer Martinez, a spokesperson for Anthropic, said in a statement to Proof News. “YouTube’s terms cover direct use of its platform, which is distinct from use of The Pile dataset. On the point about potential violations of YouTube’s terms of service, we’d have to refer you to The Pile authors.”

Technicalities aside, AI startups helping themselves to the contents of the open internet has been an issue since ChatGPT made its debut. Stability AI and Midjourney are currently facing a lawsuit from content creators who allege the companies scraped copyrighted works without permission. Google itself, which operates YouTube, was hit with a class-action lawsuit last July and then another in September, suits the company argues would “take a sledgehammer not just to Google’s services but to the very idea of generative AI.”

Me: What data was used to train Sora? YouTube videos?
OpenAI CTO: I'm actually not sure about that…

(I really do encourage you to watch the full @WSJ interview where Murati did answer a lot of the biggest questions about Sora. Full interview, ironically, on YouTube:…

— Joanna Stern (@JoannaStern) March 14, 2024

What’s more, these same AI companies have serious difficulty disclosing where they obtain their training data. In a March 2024 interview with The Wall Street Journal’s Joanna Stern, OpenAI CTO Mira Murati stumbled repeatedly when asked whether her company used videos from YouTube, Facebook, and other social media platforms to train its models. “I’m just not going to go into the details of the data that was used,” Murati said.

And this past July, Microsoft AI CEO Mustafa Suleyman made the argument that an ethereal “social contract” means anything found on the web is fair game.

“I think that with respect to content that’s already on the open web, the social contract of that content since the ’90s has been that it is fair use,” Suleyman told CNBC. “Anyone can copy it, re-create with it, reproduce with it. That has been freeware, if you like, that’s been the understanding.”

Andrew Tarantola
Andrew has spent more than a decade reporting on emerging technologies ranging from robotics and machine learning to space…