57% of the internet may already be AI sludge

It’s not just you — search results really are getting worse. Amazon Web Services (AWS) researchers have conducted a study that suggests 57% of content on the internet today is either AI-generated or translated using an AI algorithm.

The study, titled “A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism,” argues that low-cost machine translation (MT), which takes a given piece of content and regurgitates it in multiple languages, is the primary culprit. “Machine generated, multi-way parallel translations not only dominate the total amount of translated content on the web in lower resource languages where MT is available; it also constitutes a large fraction of the total web content in those languages,” the researchers wrote in the study.

They also found evidence of selection bias in what content gets machine translated into multiple languages compared to content published in a single language. “This content is shorter, more predictable, and has a different topic distribution compared to content translated into a single language,” the researchers wrote.

What’s more, the growing amount of AI-generated content on the internet, combined with increasing reliance on AI tools to edit and manipulate that content, could lead to a phenomenon known as model collapse, and it is already reducing the quality of search results across the web. Given that frontier AI models like ChatGPT, Gemini, and Claude rely on massive amounts of training data that can only be acquired by scraping the public web (whether that violates copyright or not), having the public web stuffed full of AI-generated, and often inaccurate, content could severely degrade their performance.

“It is surprising how fast model collapse kicks in and how elusive it can be,” Dr. Ilia Shumailov from the University of Oxford told Windows Central. “At first, it affects minority data—data that is badly represented. It then affects diversity of the outputs and the variance reduces. Sometimes, you observe small improvement for the majority data, that hides away the degradation in performance on minority data. Model collapse can have serious consequences.”

The researchers demonstrated those consequences by having professional linguists classify 10,000 randomly selected English sentences into one of 20 topic categories. The researchers observed “a dramatic shift in the distribution of topics when comparing 2-way to 8+ way parallel data (i.e. the number of language translations), with ‘conversation and opinion’ topics increasing from 22.5% to 40.1%” of those published.

This points to a selection bias in the type of data that is translated into multiple languages, which is “substantially more likely” to be from the “conversation and opinion” topic.

Additionally, the researchers found that “highly multi-way parallel translations are significantly lower quality (6.2 Comet Quality Estimation points worse) than 2-way parallel translations.” When the researchers audited 100 of the highly multi-way parallel sentences (those translated into more than eight languages), they found that “a vast majority” came from content farms with articles “that we characterized as low quality, requiring little or no expertise, or advance effort to create.”

That certainly helps explain why OpenAI CEO Sam Altman keeps going on about how it’s “impossible” to make tools like ChatGPT without free access to copyrighted works.

Andrew Tarantola