
ChatGPT may have more paid subscribers than this popular streaming service

OpenAI CEO Sam Altman standing on stage at a product event.
Andrew Martonik / Digital Trends

OpenAI’s steamrolling of its rivals continued apace this week, and a new study estimates just how much success it’s had in winning over paid subscribers through ChatGPT Plus.

According to a report published by Futuresearch this week, OpenAI’s products are far and away the most popular — and profitable — in the AI space. Per the study, OpenAI has an estimated annual recurring revenue of $3.4 billion.

a graph showing OpenAI's estimated ARR for 2024
Futuresearch

Some 55% of that, or $1.9 billion, comes from its 7.7 million ChatGPT Plus subscribers who pay $20 a month for the service. Another 21%, or $714 million, comes from the company’s 1.2 million $50/month ChatGPT Enterprise subscribers. Just 15%, or $510 million, is generated from access to OpenAI’s API, while the remaining 8%, or $290 million, comes in from its 980,000 ChatGPT Teams subscribers who pay $25/month. In all, OpenAI is estimated to have some 9.88 million paying monthly subscribers.
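Those figures are easy to sanity-check. As a rough back-of-the-envelope calculation — assuming flat monthly pricing with no annual discounts or mid-year growth, details the report doesn’t spell out — multiplying each tier’s estimated subscriber count by its monthly price and by 12 months lands within a few percent of the reported numbers:

```python
# Rough sanity check of Futuresearch's estimates: subscribers x monthly price x 12 months.
# Subscriber counts and prices are the figures reported above; the flat-pricing,
# no-discount assumption is ours, not the report's.
tiers = {
    "ChatGPT Plus":       (7_700_000, 20),   # ~$1.9B reported
    "ChatGPT Enterprise": (1_200_000, 50),   # ~$714M reported
    "ChatGPT Teams":      (980_000,   25),   # ~$290M reported
}

total = 0
for name, (subscribers, monthly_price) in tiers.items():
    arr = subscribers * monthly_price * 12
    total += arr
    print(f"{name}: ${arr / 1e9:.2f}B estimated ARR")

print(f"Subscription total: ${total / 1e9:.2f}B, plus ~$0.51B from the API")
```

Add the roughly $510 million in API revenue to the subscription total and you arrive at approximately the $3.4 billion headline figure.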


That’s nearly 2 million more than the 8 million subscribers that YouTube TV, reportedly the nation’s fourth-largest pay-TV provider, has amassed; though to be fair, Disney+ saw more than 10 million signups for its streaming service on its opening day. Still, it’s quite an achievement, especially at $20 per month.

The startling revenue raises the question: What’s the company doing with all this money? Another piece of news today ties directly into the answer.

Per a report from Bloomberg Thursday, OpenAI has developed a five-tier scale for measuring the capabilities of its AI systems as the company seeks to achieve AGI within the next decade. The company shared its scale internally with employees and investors earlier in the week.

OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work” and commits the company to ensuring that AGI “benefits all of humanity.” The company states that it will “attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”

The scale starts at Level 1, which describes AI that can interact with people in a conversational manner — essentially your run-of-the-mill chatbot. Level 2, which the company believes it is currently approaching, covers “Reasoners”: AI that can solve problems in the same way, and as well as, a person with a doctorate-level education. We’re already seeing early evidence of this in how often AI models pass state bar and medical licensing exams these days.

Level 3 describes “Agents,” AI that can operate on a user’s behalf across multiple days and systems — think Apple Intelligence but even more capable. Level 4, “Innovators,” would be AI that can devise its own novel solutions to a given problem or task, while Level 5, “Organizations,” covers AI that can perform the same tasks as an entire company’s human workforce. OpenAI was quick to point out that this categorization is still preliminary and could be adjusted in the future.
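For readers who want the reported tiers at a glance, here is a minimal sketch of the scale as a Python enum. The class and member names are our own shorthand for Bloomberg’s description, not anything OpenAI has published as code:

```python
from enum import IntEnum

class OpenAICapabilityLevel(IntEnum):
    """Hypothetical encoding of OpenAI's five-tier scale as reported by Bloomberg."""
    CHATBOT = 1       # conversational AI, i.e. today's run-of-the-mill chatbots
    REASONER = 2      # solves problems on par with a doctorate-level human
    AGENT = 3         # acts on a user's behalf across multiple days and systems
    INNOVATOR = 4     # devises novel solutions to a given problem or task
    ORGANIZATION = 5  # performs the work of an entire company's human workforce
```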

The notion of interacting with an artificial intelligence as smart and capable as the people who built it has been around nearly as long as computers, though the requisite breakthroughs have always seemed to remain “a few years” out of reach. The release of ChatGPT in 2022, however, has drastically accelerated the estimated time frame for achieving that goal. Shane Legg, co-founder of Google’s DeepMind and the company’s lead AGI researcher, told Time last year that he estimates a 50-50 chance of developing AGI by 2028. Anthropic CEO Dario Amodei, on the other hand, believes AGI will be achieved within the next 24 months.

OpenAI certainly appears to be in position to achieve that goal.

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…
ChatGPT’s latest model may be a regression in performance
chatGPT on a phone on an encyclopedia

According to a new report from Artificial Analysis, OpenAI's flagship large language model for ChatGPT, GPT-4o, has significantly regressed in recent weeks, putting the state-of-the-art model's performance on par with the far smaller, and notably less capable, GPT-4o-mini model.

This analysis comes less than 24 hours after the company announced an upgrade for the GPT-4o model. "The model’s creative writing ability has leveled up–more natural, engaging, and tailored writing to improve relevance & readability," OpenAI wrote on X. "It’s also better at working with uploaded files, providing deeper insights & more thorough responses." Those claims are now being cast into doubt.

Read more
ChatGPT just improved its creative writing chops
a phone displaying the ChatGPT homepage on a beige background.

One of the great strengths of ChatGPT is its ability to aid in creative writing. ChatGPT's latest large language model, GPT-4o, has received a bit of a performance boost, OpenAI announced Wednesday. Users can reportedly expect "more natural, engaging, and tailored writing to improve relevance & readability" moving forward.

https://twitter.com/OpenAI/status/1859296125947347164

Read more
ChatGPT already listens and speaks. Soon it may see as well
ChatGPT meets a dog

ChatGPT's Advanced Voice Mode, which allows users to converse with the chatbot in real time, could soon gain the gift of sight, according to code discovered in the platform's latest beta build. While OpenAI has not yet confirmed a release date for the new feature, code in the ChatGPT v1.2024.317 beta build spotted by Android Authority suggests that the so-called "live camera" could arrive soon.

OpenAI first showed off Advanced Voice Mode's vision capabilities for ChatGPT in May, when the feature launched in alpha. In a demo posted at the time, the system was able to recognize that it was looking at a dog through the phone's camera feed, identify the dog from past interactions, recognize the dog's ball, and associate the dog's relationship to the ball (i.e., playing fetch).

Read more