
OpenAI could release its next-generation model by December

OpenAI plans to release its next-generation frontier model, code-named Orion and rumored to actually be GPT-5, by December, according to an exclusive report from The Verge. However, OpenAI boss Sam Altman is already pushing back.

According to “sources familiar with the plan,” Orion will not initially be released to the general public, as the previous GPT-4 variants were. Instead, the company intends to hand the new model over to select businesses and partners, who will then use it as a platform to build their own products and services. This is the same strategy that Nvidia is pursuing with its NVLM 1.0 family of large language models (LLMs).


What’s more, The Verge reports that Microsoft is planning to host the new model beginning in November. There is no confirmation yet that Orion will actually be called GPT-5 when it is released, though the model is reportedly considered by its engineers to be GPT-4’s successor.

Within hours of the report’s publication on Friday, Altman coyly denied the claims, stating “fake news is out of control.” However, he did not deny outright that Orion would be released in December, nor did he cite any specific aspects of The Verge’s story as being factually inaccurate.

He has not expounded on his position since publishing that tweet, leaving us confused both about his statement’s meaning and his company’s eventual plans for Orion.

This report comes a little over a month after OpenAI’s surprise release of Project Strawberry, officially known as o1-preview and o1-mini. Its “reasoning” architecture is designed to deduce answers as a human would and accurately solve complex questions on various subjects, including science, coding, and math, faster than a person can.

However, the o1 models have not been met with the same degree of fervor that GPT-4 received, in part because the new models are limited in their functionality (they can’t accept file uploads or analyze images) and, as VentureBeat notes, are expensive for OpenAI to operate.

In the run-up to o1-preview’s release, Altman published a series of cryptic tweets featuring the fruit. He appears to be doing the same with Orion. In September, just as the company was reportedly finishing up training Orion using synthetic data generated by o1, Altman fired off a conspicuous tweet about visiting the Midwest. As The Verge notes, the dominant constellation in the Northern Hemisphere’s winter sky is, you guessed it, Orion.

Rumors about GPT-5 have flooded the internet for many months now, basically since the day GPT-4 launched in March 2023. Reports initially put the release date sometime in the summer of 2024, but once that passed, the goalposts were moved to this fall. On the other hand, former Chief Technology Officer Mira Murati said in an interview this past June (before leaving the company) that the “next-gen” model was not due out for another year and a half.

So, while a launch later this December seems plausible, timed with the two-year anniversary of ChatGPT, it’s just as likely that it won’t come until 2025 based on how inaccurate all the predictions have been so far.

Andrew Tarantola
ChatGPT’s resource demands are getting out of control

It's no secret that the growth of generative AI has demanded ever-increasing amounts of water and electricity, but a new study from The Washington Post and researchers from the University of California, Riverside shows just how many resources OpenAI's chatbot needs in order to perform even its most basic functions.

In terms of water usage, the amount needed for ChatGPT to write a 100-word email depends on the state and the user's proximity to OpenAI's nearest data center. The scarcer water is in a given region, and the cheaper its electricity, the more likely the data center is to rely on electrically powered air conditioning units instead of water-based cooling. In Texas, for example, the chatbot consumes an estimated 235 milliliters of water to generate a single 100-word email. That same email drafted in Washington, on the other hand, would require 1,408 milliliters (nearly a liter and a half).
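To put those figures in perspective, here's a minimal back-of-the-envelope sketch in Python that scales the study's per-email estimates; the one-email-a-week scenario is our own illustration, not part of the study.

```python
# Back-of-the-envelope math using the per-email estimates cited above.
# The 235 ml (Texas) and 1,408 ml (Washington) figures come from the study;
# the one-email-a-week-for-a-year scenario is purely illustrative.
ML_PER_100_WORD_EMAIL = {
    "Texas": 235,
    "Washington": 1408,
}

def liters_used(state: str, emails: int) -> float:
    """Estimated liters of water consumed to draft `emails` 100-word emails."""
    return ML_PER_100_WORD_EMAIL[state] * emails / 1000

# One 100-word email per week for a year:
for state in ML_PER_100_WORD_EMAIL:
    print(f"{state}: about {liters_used(state, 52):.0f} liters per year")
# Texas: about 12 liters per year
# Washington: about 73 liters per year
```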

How you can try OpenAI’s new o1-preview model for yourself

Despite months of rumored development, OpenAI's release of its Project Strawberry last week came as something of a surprise, with many analysts believing the model wouldn't be ready for at least a few more weeks, if not until later in the fall.

The new o1-preview model and its o1-mini counterpart are already available for use and evaluation. Here's how to get access for yourself.
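Beyond ChatGPT itself, developers can also try the models programmatically. Below is a minimal sketch using OpenAI's official Python SDK; it assumes the openai package is installed, that your account has been granted access to o1-preview, and that an API key is set in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch: calling o1-preview through OpenAI's Python SDK (openai >= 1.0).
# Assumes `pip install openai`, the OPENAI_API_KEY environment variable is set,
# and that your account has access to the o1 models.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",  # or "o1-mini" for the smaller, cheaper variant
    messages=[
        # At launch, the o1 models only accepted user/assistant messages
        # (no system prompts, images, or file uploads).
        {"role": "user", "content": "Explain why the sky is blue in two sentences."}
    ],
)

print(response.choices[0].message.content)
```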

OpenAI Project Strawberry: Here’s everything we know so far

Even as it is reportedly set to spend $7 billion on training and inference costs (with an overall $5 billion shortfall), OpenAI is steadfastly seeking to build the world's first Artificial General Intelligence (AGI).

Project Strawberry is the company's next step toward that goal, and as of mid-September, it's officially been announced.
What is Project Strawberry?
Project Strawberry is OpenAI's latest (and potentially greatest) large language model, one that is expected to broadly surpass the capabilities of current state-of-the-art systems with its "human-like reasoning skills" when it rolls out. It just might power the next generation of ChatGPT.
What can Strawberry do?
Project Strawberry will reportedly be a reasoning powerhouse. Using a combination of reinforcement learning and “chain of thought” reasoning, the new model will reportedly be able to solve math problems it has never seen before and act as a high-level agent, creating marketing strategies and autonomously solving complex word puzzles like the NYT's Connections. It can even "navigate the internet autonomously" to perform "deep research," according to internal documents viewed by Reuters in July.
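To give a sense of what "chain of thought" prompting looks like in practice, here's a generic illustration; it shows the prompting technique in general, not OpenAI's internal training recipe, and the model name and puzzle are placeholder assumptions.

```python
# Generic illustration of chain-of-thought prompting: asking a model to lay out
# its intermediate reasoning before answering. This demonstrates the technique
# in general, not OpenAI's internal implementation; the model and prompt are
# placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "A train leaves at 2:15 p.m. and arrives at 5:40 p.m. the same day. "
    "How long was the trip? Reason through the problem step by step, "
    "then give the final answer on its own line."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any general-purpose chat model works for plain prompting
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```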
