
ChatGPT already listens and speaks. Soon it may see as well

ChatGPT meets a dog. Image: OpenAI

ChatGPT’s Advanced Voice Mode, which allows users to converse with the chatbot in real time, could soon gain the gift of sight, according to code discovered in the platform’s latest beta build. While OpenAI has not yet confirmed a release date for the new feature, code in the ChatGPT v1.2024.317 beta build spotted by Android Authority suggests that the so-called “live camera” could arrive imminently.

OpenAI first showed off Advanced Voice Mode’s vision capabilities for ChatGPT in May, when the feature launched in alpha. In a demo posted at the time, the system was able to recognize through the phone’s camera feed that it was looking at a dog, identify the dog based on past interactions, spot the dog’s ball, and connect the dog to the ball (i.e., playing fetch).

The feature was an immediate hit with alpha testers as well. X user Manuel Sainsily employed it to great effect in answering verbal questions about his new kitten based on the camera’s video feed.


Trying #ChatGPT’s new Advanced Voice Mode that just got released in Alpha. It feels like face-timing a super knowledgeable friend, which in this case was super helpful — reassuring us with our new kitten. It can answer questions in real-time and use the camera as input too! pic.twitter.com/Xx0HCAc4To

— Manuel Sainsily (@ManuVision) July 30, 2024

Advanced Voice Mode was subsequently released in beta to Plus and Enterprise subscribers in September, albeit without its additional visual capabilities. Of course, that didn’t stop users from going wild in testing the feature’s vocal limits. Advanced Voice “offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions,” according to the company.

The addition of digital eyes would certainly set Advanced Voice Mode apart from OpenAI’s primary competitors Google and Meta, both of whom have in recent months introduced conversational features of their own.

Gemini Live may be able to speak more than 40 languages, but it cannot see the world around it (at least until Project Astra gets off the ground) — nor can Meta’s Natural Voice Interactions, which debuted at the Connect 2024 event in September, use camera inputs.

OpenAI also announced today that Advanced Voice Mode is now available to paid ChatGPT Plus accounts on desktop. Previously exclusive to mobile, the feature can now be accessed from your laptop or PC as well.

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…
Your ChatGPT conversation history is now searchable

OpenAI debuted a new way to more efficiently manage your growing ChatGPT chat history on Tuesday: a search function for the web app. With it, you'll be able to quickly surface previous references and chats to cite within your current ChatGPT conversation.

"We’re starting to roll out the ability to search through your chat history on ChatGPT web," the company announced via a post on X (formerly Twitter). "Now you can quickly & easily bring up a chat to reference, or pick up a chat where you left off."

GPT-5: everything we know so far about OpenAI’s next frontier model

There's perhaps no product more hotly anticipated in tech right now than GPT-5. Rumors about it have been circulating ever since the release of GPT-4, OpenAI's groundbreaking foundational model that's been the basis of everything the company has launched over the past year, such as GPT-4o, Advanced Voice Mode, and the OpenAI o1-preview.

Those are all interesting in their own right, but a true successor to GPT-4 has yet to arrive. Now that it’s been over a year and a half since GPT-4’s release, buzz around a next-gen model has never been stronger.
When will GPT-5 be released?
OpenAI has maintained a rapid pace of progress on its LLMs. GPT-4 debuted on March 14, 2023, just four months after GPT-3.5 launched alongside ChatGPT. OpenAI has yet to set a specific release date for GPT-5, though rumors have circulated online that the new model could arrive as soon as late 2024.

One of the hottest AI apps just came to the Mac (and it’s not ChatGPT)

Perplexity announced Thursday the release of a new native app for Mac that will put its "answer engine" directly on the desktop, with no need for a web browser.

Currently available through the Apple App Store, the Perplexity desktop app promises a variety of features "exclusively for Mac." These include Pro Search, which is a "guided AI search for deeper exploration," the capability for both text and voice prompting, and "cited sources" for every answer.
