
Here’s how to rewatch the first public demo of ChatGPT-4

OpenAI hosted a developer live stream that showed the first public demo of ChatGPT-4. The new large language model (LLM) has reportedly been in development for a few years, and Microsoft has confirmed it’s the tech powering the company’s new Bing Chat service.

The presentation started at 1 p.m. PT on Monday, March 14. OpenAI president and co-founder Greg Brockman led the presentation, walking through what GPT-4 is capable of, as well as its limitations. You can see a replay of the event below.

GPT-4 Developer Livestream

OpenAI has already announced that ChatGPT-4 will only be available to ChatGPT Plus subscribers. The free version of ChatGPT will continue to run on the GPT-3.5 model.


The live stream focused on how developers can leverage GPT-4 in their own AI applications. OpenAI recently made its API available to developers, and companies like Khan Academy and Duolingo have already announced that they plan to use GPT-4 in their own apps.

Although speculation has run wild about what GPT-4 might be capable of, OpenAI is describing it as an evolution of the existing model. The new model can mimic a particular writing style more closely, for example, and process up to 25,000 words of text from the user.

OpenAI says ChatGPT-4 isn’t limited to text, either: it can accept an image as a prompt and generate a response based on it.

The new version also includes updated safety features. OpenAI claims GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5. It’s tough to say what those numbers mean in practice at the moment, however.

Although the new model could vastly expand the capabilities of ChatGPT, it also comes with some worries. Microsoft’s Bing Chat has already shown some unhinged responses, and it uses the GPT-4 model. OpenAI warns that the new model could still have these issues, occasionally showing “social biases, hallucinations, and adversarial prompts.”

Jacob Roach
Lead Reporter, PC Hardware
Jacob Roach is the lead reporter for PC hardware at Digital Trends.