
ChatGPT’s new upgrade finally breaks the text barrier

OpenAI is rolling out new ChatGPT features that let prompts include images and spoken voice in addition to text.

The AI company announced on Monday that it will make these features available to ChatGPT Plus and Enterprise users over the next two weeks. The voice feature is available on iOS and Android on an opt-in basis, while the image feature is available on all ChatGPT platforms. OpenAI notes it plans to expand the image and voice features beyond paid users after the staggered rollout.

An OpenAI image prompt. Twitter/X

The voice chat works as a spoken conversation between the user and ChatGPT. You press the button and say your question; after processing it, the chatbot answers aloud rather than in text. The experience is similar to using virtual assistants such as Alexa or Google Assistant, and it could be a preview of a complete revamp of virtual assistants as a whole. OpenAI’s announcement comes just days after Amazon revealed a similar feature coming to Alexa.

To power voice conversations with ChatGPT, OpenAI uses a new text-to-speech model that can generate “human-like audio from just text and a few seconds of sample speech.” Its existing Whisper model, meanwhile, can “transcribe your spoken words into text.”
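OpenAI hasn’t published the app’s internal pipeline, but its public speech-to-text API (which exposes the same Whisper model under the name `whisper-1`) gives a sense of the transcription half of that flow: you upload an audio file and get back a transcript. A minimal sketch, assuming the official `openai` Python package and an API key; the network call itself is wrapped in a function so nothing is sent when the file runs:

```python
import json


def transcribe(audio_path: str) -> str:
    """Upload an audio file to OpenAI's whisper-1 model and return the transcript.

    Requires the `openai` package and an OPENAI_API_KEY environment variable;
    the import is deferred so this sketch can run without either.
    """
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text


# The endpoint's JSON response is shaped like this (sample text is made up):
sample_response = json.loads('{"text": "What is on the menu tonight?"}')
print(sample_response["text"])
```

The text-to-speech half (the voice ChatGPT answers in) has no equivalent self-serve endpoint described in the announcement; OpenAI says it is limiting that model to “specific use cases and partnerships.”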

OpenAI says it’s aware of the issues that could arise from the power behind this feature, including “the potential for malicious actors to impersonate public figures or commit fraud.”

This is one of the main reasons the company plans to limit the new features to “specific use cases and partnerships.” Even when they become more widely available, they will be accessible mainly to more privileged users, such as developers.

ChatGPT can now see, hear, and speak. Rolling out over next two weeks, Plus users will be able to have voice conversations with ChatGPT (iOS & Android) and to include images in conversations (all platforms). https://t.co/uNZjgbR5Bm pic.twitter.com/paG0hMshXb

— OpenAI (@OpenAI) September 25, 2023

The image feature lets you capture an image and feed it to ChatGPT along with your question or prompt. You can use the drawing tool within the app to help clarify your question, then have a back-and-forth conversation with the chatbot until your issue is resolved. This is similar to Microsoft’s new Copilot feature in Windows, which is built on OpenAI’s models.
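For a sense of what a mixed image-and-text prompt looks like under the hood, OpenAI’s public chat API lets a user message carry both text and image parts. The sketch below only assembles such a request as a payload; no call is made, and the model name and example question/URL are illustrative assumptions, not details from the announcement:

```python
import json


def image_question(question: str, image_url: str) -> dict:
    """Assemble a chat request whose user message mixes a text part and an image part."""
    return {
        "model": "gpt-4-vision-preview",  # assumed vision-capable model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }


payload = image_question(
    "How do I adjust this bike seat?",  # hypothetical prompt
    "https://example.com/seat.jpg",     # hypothetical image
)
print(json.dumps(payload, indent=2))
```

In the consumer app, all of this is hidden behind the camera button; the payload just shows how a single prompt can interleave the two modalities.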

OpenAI has also acknowledged ChatGPT’s ongoing challenges, such as its tendency to hallucinate. In connection with the image feature, the company has deliberately limited certain functionalities, such as the chatbot’s “ability to analyze and make direct statements about people.”

ChatGPT was first introduced as a text-based tool late last year; however, OpenAI has quickly expanded its capabilities. The chatbot originally ran on the GPT-3.5 language model and has since been updated to GPT-4, the model that is receiving these new features.

When GPT-4 first launched in March, OpenAI announced various enterprise collaborations, such as one with Duolingo, which used the AI model to improve the accuracy of listening and speech-based lessons in the language-learning app. OpenAI has also collaborated with Spotify to translate podcasts into other languages while preserving the sound of the podcaster’s voice, and the company spoke of its work with the mobile app Be My Eyes, which aids blind and low-vision people. Many of these apps and services were available ahead of the image and voice update.

Fionna Agomuoh
Fionna Agomuoh is a technology journalist with over a decade of experience writing about various consumer electronics topics…
ChatGPT: the latest news and updates on the AI chatbot that changed everything
ChatGPT app running on an iPhone.

In the ever-evolving landscape of artificial intelligence, ChatGPT stands out as a groundbreaking development that has captured global attention. From its impressive capabilities and recent advancements to the heated debates surrounding its ethical implications, ChatGPT continues to make headlines.

Whether you're a tech enthusiast or just curious about the future of AI, dive into this comprehensive guide to uncover everything you need to know about this revolutionary AI tool.
What is ChatGPT?
ChatGPT is a natural-language AI chatbot. At its most basic level, that means you can ask it a question and it will generate an answer. Unlike a simple voice assistant such as Siri or Google Assistant, ChatGPT is built on what is called an LLM (large language model). These neural networks are trained on huge quantities of information from the internet, which lets them generate altogether new responses rather than regurgitating canned answers. They aren't built for a specific purpose like the chatbots of the past, and they're a whole lot smarter.

Read more
All the wild things people are doing with ChatGPT’s new Voice Mode
Nothing Phone 2a and ChatGPT voice mode.

ChatGPT's Advanced Voice Mode arrived on Tuesday for a select few OpenAI subscribers chosen to be part of the highly anticipated feature's alpha release.

The feature was first announced back in May. It does away with the conventional text-based context window and instead converses in natural, spoken words, delivered in a lifelike manner, and it works with a variety of regional accents and languages. According to OpenAI, Advanced Voice "offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions."

Read more
ChatGPT’s highly anticipated Advanced Voice could arrive ‘next week’
Two people sitting at a desk talking to OpenAI's Advanced Voice mode on a cellphone.

OpenAI CEO and co-founder Sam Altman revealed on X (formerly Twitter) on Thursday that the Advanced Voice feature will begin rolling out "next week," though only to a few select ChatGPT Plus subscribers.

The company plans to "start the alpha with a small group of users to gather feedback and expand based on what we learn."

Read more