
The Microsoft AI CEO just dropped a huge hint about GPT-5

The timeline on GPT-5 continues to be a moving target, but a recent interview with Microsoft AI CEO Mustafa Suleyman sheds some light on what GPT-5, and even its successor, will be like.

Mustafa Suleyman on Defining Intelligence

The interview was conducted by AI and tech investor Seth Rosenberg, who discusses with Suleyman a wide range of topics on the future of generative AI, including the attempt to “define intelligence.” Rosenberg asks Suleyman about the idea of autonomous agents and how far the chatbots we have access to today are from achieving that.


“It’s still pretty hard to get these models to follow instructions with subtlety and nuance over extended periods of time,” Suleyman responds. “To really get it to consistently do it in novel environments is pretty hard. I think that’s going to be not one but two orders of magnitude more computation of training the models. So, not GPT-5 but more like GPT-6-scale models. I think we’re talking about two years before we have systems that can really take action.”

Microsoft AI CEO Mustafa Suleyman says it won't be until GPT-6 in 2 years time that AI models will be able to follow instructions and take consistent action

— Tsarathustra (@tsarnick) June 24, 2024

There are a couple of interesting things about these comments. First off, the timeline doesn’t quite line up with the recent interview on GPT-5 that OpenAI CTO Mira Murati gave just a few days ago. Murati didn’t refer to it by the name “GPT-5,” but certainly described it as a next-gen model.

“If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence,” Murati said. “And then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we’re looking at Ph.D. intelligence for specific tasks. Things are changing and improving pretty rapidly.”

Clearly, nothing is set in stone about the next generation of OpenAI’s GPT systems. It seems more likely that both Murati and Suleyman are describing the same next major milestone in development, regardless of what it’s called. Then again, it’s a bit odd that Suleyman specifically says both GPT-5 and GPT-6, and notes that GPT-6 is only around two years away. So, is GPT-5 coming later this year, followed by GPT-6 next year? Or, as Murati implies, are we going to be waiting another two years before we see a step function improvement to GPT?

That remains unclear. GPT-5 has been rumored to launch for a long time, starting at the end of 2023, and then, again, this summer. Beyond just timing, Suleyman offers some interesting observations about where this is all headed.

“First of all, I don’t think we’re on a path toward fully autonomous. I think that’s actually quite undesirable,” he said. “I think fully autonomous is quite dangerous. If you have an agent that can formulate its own plans, come up with its goals, acquire its own resources — just objectively speaking, that’s going to be potentially more risky than not.”

Instead, Suleyman suggests that where we’re headed is more about “narrow lanes of autonomy,” where an AI agent might be deployed to handle a given task that requires some degree of reasoning and planning, but is restricted within tight borders. Suleyman considers regulation to be the solution for keeping things in check. He also talked about his current work at Microsoft with Copilot, as the team fine-tunes OpenAI models and works more on memory and personalization.

Suleyman only joined Microsoft in March, but he has long been a pioneer in the field of AI as the co-founder and former head of AI at DeepMind, the company that was later acquired by Google.

When it comes to the GPT-5 release date, though, the water is still muddy. GPT-4 came out in March of 2023, and we still don’t know the cadence of OpenAI’s major releases. The company seems more interested right now in building out its ecosystem and exploring multimodal capabilities. After all, the integration into Apple Intelligence is a big move, as is the eventual introduction of low-latency voice chat, something that was first shown off in May. The company even made an acquisition this week that hints at more plans in the PC and desktop world.

Luke Larsen
Luke Larsen is the Senior Editor of Computing, managing all content covering laptops, monitors, PC hardware, Macs, and more.