
GPT-4 Turbo is the biggest update since ChatGPT’s launch


OpenAI has just unveiled the latest updates to its large language models (LLMs) during its first developer conference, and the most notable improvement is the release of GPT-4 Turbo, which is now available in preview. GPT-4 Turbo is an update to the existing GPT-4, bringing with it a greatly increased context window and access to much newer knowledge. Here’s everything you need to know about GPT-4 Turbo.

OpenAI claims that the AI model will be more powerful while simultaneously being cheaper than its predecessors. Unlike the previous versions, it has been trained on information dating up to April 2023. That’s a hefty update on its own, given that the previous version’s knowledge cutoff was September 2021. I just tested this myself, and indeed, GPT-4 in ChatGPT can now draw on events that happened up until April 2023, so that part of the update is already live.


GPT-4 Turbo has a significantly larger context window than the previous versions. The context window is essentially the amount of text the model takes into consideration before it generates a reply. GPT-4 Turbo’s window spans 128,000 tokens (the units of text or code that LLMs read), which, as OpenAI reveals in its blog post, is the equivalent of around 300 pages of text.

That’s an entire novel that you could potentially feed to ChatGPT over the course of a single conversation, and a much greater context window than the previous versions had (8,000 and 32,000 tokens).
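If you want a concrete feel for what a token actually is, here’s a minimal sketch using OpenAI’s open-source tiktoken tokenizer (my own illustration, not something OpenAI’s announcement walks through) that counts how many tokens a short sentence consumes against that 128,000-token budget.

```python
# Rough illustration of tokenization, assuming the tiktoken package is installed.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-family models
text = "GPT-4 Turbo has a 128,000-token context window."

tokens = enc.encode(text)
print(len(tokens))         # how many tokens this one sentence uses
print(enc.decode(tokens))  # decoding the tokens round-trips back to the original text
```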

Context windows are important for LLMs because they help them stay on topic. If you interact with large language models, you’ll find that they may go off topic if the conversation goes on for too long. This can produce some pretty unhinged and unnerving responses, such as that time when Bing Chat told us that it wanted to be human. GPT-4 Turbo, if all goes well, should keep the insanity at bay for a much longer time than the current model.

GPT-4 Turbo is also going to be cheaper to run for developers, with the cost reduced to $0.01 per 1,000 input tokens (roughly 750 words), while outputs will cost $0.03 per 1,000 tokens. OpenAI estimates that this new version is three times cheaper than the ones that came before it.
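To put those rates in perspective, here’s a back-of-the-envelope sketch that simply plugs the prices quoted above into some arithmetic; actual billing is whatever OpenAI’s pricing page says.

```python
# Cost estimate using the article's quoted rates (assumed, not an official calculator).
INPUT_PRICE_PER_1K = 0.01   # dollars per 1,000 input tokens
OUTPUT_PRICE_PER_1K = 0.03  # dollars per 1,000 output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough cost in dollars for a single API call."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Filling the entire 128,000-token context window and getting a 1,000-token reply:
print(f"${estimate_cost(128_000, 1_000):.2f}")  # about $1.31
```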

The company also says that GPT-4 Turbo does a better job of following instructions carefully, and it can be told to return its results in a specific format, such as XML or JSON. GPT-4 Turbo will also support images and text-to-speech, and it still offers DALL-E 3 integration.
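As an illustration of that formatted-output option, the snippet below is a minimal sketch assuming the openai Python SDK and the GPT-4 Turbo preview model identifier OpenAI listed at launch (gpt-4-1106-preview); model names and parameters may differ depending on your account and when you read this.

```python
# Sketch of requesting JSON-formatted output from GPT-4 Turbo via the openai SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",                # GPT-4 Turbo preview model at launch
    response_format={"type": "json_object"},   # ask the model to return valid JSON
    messages=[
        {"role": "system", "content": "You are a helpful assistant that replies in JSON."},
        {"role": "user", "content": "List three uses for a 128,000-token context window."},
    ],
)

print(response.choices[0].message.content)
```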


This wasn’t the only big reveal for OpenAI, which also introduced GPTs, custom versions of ChatGPT that anyone can make for their own specific purpose with no knowledge of coding. These GPTs can be made for personal or company use, but can also be distributed to others. OpenAI says that GPTs are available today for ChatGPT Plus subscribers and enterprise users.

Lastly, in light of constant copyright concerns, OpenAI joins Google and Microsoft in saying that it will take legal responsibility if its customers are sued for copyright infringement.

With the enormous context window, the new copyright shield, and an improved ability to follow instructions, GPT-4 Turbo might turn out to be both a blessing and a curse. ChatGPT is fairly good at not doing things it shouldn’t do, but even still, it has a dark side. This new version, while infinitely more capable, may also come with the same drawbacks as other LLMs, except this time, it’ll be on steroids.
