
Is AI already plateauing? New reporting suggests GPT-5 may be in trouble


OpenAI’s next-generation Orion model, rumored to arrive by the end of the year (a timeline the company has denied), may not live up to the hype once it does arrive, according to a new report from The Information.

Citing anonymous OpenAI employees, the report claims the Orion model has shown a “far smaller” improvement over its GPT-4 predecessor than GPT-4 showed over GPT-3. Those sources also note that Orion “isn’t reliably better than its predecessor [GPT-4] in handling certain tasks,” specifically coding applications, though the new model is notably stronger at general language capabilities, such as summarizing documents or generating emails.


The Information’s report cites a “dwindling supply of high-quality text and other data” on which to train new models as a major factor in the new model’s insubstantial gains. In short, the AI industry is quickly running into a training data bottleneck, having already stripped the easy sources of social media data from sites like X, Facebook, and YouTube (the latter on two separate occasions). As such, these companies are having an increasingly difficult time finding the sorts of knotty coding challenges that will help advance their models beyond their current capabilities, which is slowing down pre-release training.

That reduced training efficiency has massive ecological and commercial implications. As frontier-class LLMs grow and push their parameter counts into the high trillions, the amount of energy, water, and other resources they consume is expected to increase sixfold over the next decade. This is why we’re seeing Microsoft try to restart Three Mile Island, AWS buy a 960 MW plant, and Google purchase the output of seven nuclear reactors, all to provide the necessary power for their growing menageries of AI data centers; the nation’s current power infrastructure simply can’t keep up.

In response, as TechCrunch reports, OpenAI has created a “foundations team” to work around the lack of appropriate training data. Its approach could involve using synthetic training data, such as the kind Nvidia’s Nemotron family of models can generate. The team is also looking into improving the model’s performance in the post-training phase.
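Neither report details what such a synthetic-data pipeline would look like, but a common pattern is to prompt an instruction-tuned model to produce new problems and worked solutions, then filter the results and fold them back into the training mix. The sketch below is a minimal illustration of that idea using the Hugging Face transformers API; the Nemotron model ID, seed topics, prompt wording, and output format are assumptions for the example, not anything reported about OpenAI’s setup.

```python
# Minimal sketch: generating synthetic coding exercises with an instruction-tuned model.
# Model ID, seed topics, and output format are illustrative assumptions.
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"  # assumed transformers-compatible Nemotron variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

seed_topics = ["binary search edge cases", "parsing ISO 8601 dates"]  # hypothetical seeds

records = []
for topic in seed_topics:
    messages = [
        {"role": "user",
         "content": f"Write a hard coding exercise about {topic}, followed by a worked solution."},
    ]
    # Build the chat-formatted prompt and generate a candidate example.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
    # Keep only the newly generated tokens, not the prompt.
    text = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
    records.append({"topic": topic, "synthetic_example": text})

# Store as JSONL so the synthetic pairs can be filtered before joining a training set.
with open("synthetic_coding_data.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

In practice the filtering step matters as much as the generation: low-quality or repetitive synthetic examples can degrade a model rather than help it, which is part of why post-training improvements are being explored alongside this approach.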

Orion, which was originally thought to be the code name for OpenAI’s GPT-5, is now expected to arrive at some point in 2025. Whether we’ll have enough available power to see it in action, without browning out our municipal electrical grids, remains to be seen.

Andrew Tarantola