
GPT-4: everything you need to know about ChatGPT’s standard AI model

A laptop screen shows the home page for ChatGPT, OpenAI's artificial intelligence chatbot.
Rolf van Root / Unsplash

People were in awe when ChatGPT came out, impressed by its natural language abilities as an AI chatbot originally powered by the GPT-3.5 large language model. But when the highly anticipated GPT-4 large language model came out, it blew the lid off what we thought was possible with AI, with some calling it the early glimpses of AGI (artificial general intelligence).

What is GPT-4?

GPT-4 is a large language model created by OpenAI that can generate text similar to human speech. It advanced the technology behind ChatGPT, which was originally based on GPT-3.5 but has since been updated. GPT is the acronym for Generative Pre-trained Transformer, a deep learning technology that uses artificial neural networks to write like a human.


According to OpenAI, this next-generation language model is more advanced than ChatGPT in three key areas: creativity, visual input, and longer context. In terms of creativity, OpenAI says GPT-4 is much better at both creating and collaborating with users on creative projects. Examples of these include music, screenplays, technical writing, and even “learning a user’s writing style.”

The longer context plays into this as well. GPT-4 can now process up to 128k tokens of text from the user. You can even just send GPT-4 a web link and ask it to interact with the text from that page. OpenAI says this can be helpful for the creation of long-form content, as well as “extended conversations.”

GPT-4 can also now receive images as a basis for interaction. In the example provided on the GPT-4 website, the chatbot is given an image of a few baking ingredients and is asked what can be made with them. It is not currently known if video can also be used in this same way.

Lastly, OpenAI also says GPT-4 is significantly safer to use than the previous generation. It is reportedly 40% more likely to produce factual responses in OpenAI’s own internal testing, while also being 82% less likely to “respond to requests for disallowed content.”

OpenAI says it’s been trained with human feedback to make these strides, claiming to have worked with “over 50 experts for early feedback in domains including AI safety and security.”

In the initial weeks after it first launched, users posted some of the amazing things they’ve done with it, including inventing new languages, detailing how to escape into the real world, and making complex animations for apps from scratch. One user apparently made GPT-4 create a working version of Pong in just sixty seconds, using a mix of HTML and JavaScript.

How to use GPT-4

Bing Chat shown on a laptop.
Jacob Roach / Digital Trends

GPT-4 is available to all users at every subscription tier OpenAI offers. Free-tier users have limited access to the full GPT-4 model (roughly 80 chats within a three-hour period) before being switched to the smaller, less capable GPT-4o mini until the cooldown timer resets. To gain additional access to GPT-4, as well as the ability to generate images with DALL-E, you’ll need to upgrade to ChatGPT Plus. To jump up to the $20 paid subscription, just click “Upgrade to Plus” in the ChatGPT sidebar. Once you’ve entered your credit card information, you’ll be able to toggle between GPT-4 and older versions of the LLM.

If you don’t want to pay, there are some other ways to get a taste of how powerful GPT-4 is. First off, you can try it out as part of Microsoft’s Bing Chat. Microsoft revealed that it’s been using GPT-4 in Bing Chat, which is completely free to use. Some GPT-4 features are missing from Bing Chat, however, and it’s clearly been combined with some of Microsoft’s own proprietary technology. But you’ll still have access to that expanded LLM (large language model) and the advanced intelligence that comes with it. It should be noted that while Bing Chat is free, it is limited to 15 chats per session and 150 sessions per day.

Lots of other applications currently use GPT-4, too, such as the question-and-answer site Quora.

When was GPT-4 released?

A laptop opened to the ChatGPT website.
Shutterstock

GPT-4 was officially announced on March 14, 2023, as Microsoft had confirmed ahead of time, and first became available to users through a ChatGPT Plus subscription and Microsoft Copilot. GPT-4 has also been made available as an API “for developers to build applications and services.” Some of the companies that have already integrated GPT-4 include Duolingo, Be My Eyes, Stripe, and Khan Academy. The first public demonstration of GPT-4 was livestreamed on YouTube, showing off its new capabilities.

What is GPT-4o mini?

GPT-4o mini is the newest iteration of OpenAI’s GPT-4 model line. It’s a streamlined version of the larger GPT-4o model that is better suited for simple but high-volume tasks that benefit more from a quick inference speed than they do from leveraging the power of the entire model.

GPT-4o mini was released in July 2024 and has replaced GPT-3.5 as the default model users interact with in ChatGPT once they hit their three-hour limit of queries with GPT-4o. Per data from Artificial Analysis, 4o mini significantly outperforms similarly sized small models like Google’s Gemini 1.5 Flash and Anthropic’s Claude 3 Haiku in the MMLU reasoning benchmark.

Is GPT-4 better than GPT-3.5?

The free version of ChatGPT was originally based on the GPT-3.5 model; however, as of July 2024, ChatGPT runs on GPT-4o mini. This streamlined version of the larger GPT-4o model outperforms even GPT-3.5 Turbo: it can understand and respond to more inputs, has more safeguards in place, provides more concise answers, and is 60% less expensive to operate.

The GPT-4 API

As mentioned, GPT-4 is available as an API to developers who have made at least one successful payment to OpenAI in the past. The company offers several versions of GPT-4 for developers to use through its API, along with legacy GPT-3.5 models. Upon releasing GPT-4o mini, OpenAI noted that GPT-3.5 will remain available for use by developers, though it will eventually be taken offline. The company did not set a timeline for when that might actually happen.

The API is mostly focused on developers making new apps, but it has caused some confusion for consumers, too. Plex allows you to integrate ChatGPT into the service’s Plexamp music player, which calls for a ChatGPT API key. This is a separate purchase from ChatGPT Plus, so you’ll need to sign up for a developer account to gain API access if you want it.
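For developers curious what an API call actually looks like, here is a minimal sketch using OpenAI's official Python SDK. The prompt, system message, and helper-function names are illustrative, not something from the article; the call itself requires a paid developer account and an `OPENAI_API_KEY` environment variable, as described above.

```python
def build_chat_request(prompt: str, model: str = "gpt-4") -> dict:
    """Assemble a request payload for the Chat Completions endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }


def ask_gpt4(prompt: str) -> str:
    """Send the request. Needs the `openai` package and a valid API key."""
    from openai import OpenAI  # deferred import so the sketch loads without the SDK

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(**build_chat_request(prompt))
    return response.choices[0].message.content
```

Billing is per token on the API, rather than the flat $20 per month of ChatGPT Plus, which is why the two subscriptions are separate purchases.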

Is GPT-4 getting worse?

As much as GPT-4 impressed people when it first launched, some users noticed a degradation in its answers over the following months. The decline was noted by prominent figures in the developer community and was even posted directly to OpenAI’s forums. It was all anecdotal, though, and an OpenAI executive even took to Twitter to dispute the premise. According to OpenAI, it was all in our heads.

No, we haven't made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one.

Current hypothesis: When you use it more heavily, you start noticing issues you didn't see before.

— Peter Welinder (@npew) July 13, 2023

Then, a study was published showing that the quality of answers did, indeed, worsen with later updates of the model. By comparing GPT-4’s output between March and June, the researchers found that on one task (identifying prime numbers) its accuracy dropped from 97.6% to 2.4%.

It’s not a smoking gun, but it certainly seems like what users are noticing isn’t just being imagined.

Where is the visual input in GPT-4?

One of GPT-4’s most anticipated features is visual input, which allows ChatGPT Plus to interact with images, not just text, making the model truly multimodal. Uploading an image for GPT-4 to analyze and manipulate is just as easy as uploading a document: simply click the paperclip icon to the left of the context window, select the image source, and attach the image to your prompt.

What are GPT-4’s limitations?

While discussing the new capabilities of GPT-4, OpenAI also notes some of the limitations of the new language model. Like previous versions of GPT, OpenAI says the latest model still has problems with “social biases, hallucinations, and adversarial prompts.”

In other words, it’s not perfect. It’ll still get answers wrong, and there have been plenty of examples shown online that demonstrate its limitations. But OpenAI says these are all issues the company is working to address, and in general, GPT-4 is “less creative” with answers and therefore less likely to make up facts.

The other primary limitation is that the GPT-4 model was trained on internet data only up to December 2023 (GPT-4o and GPT-4o mini have a cutoff of October 2023). However, since GPT-4 is capable of conducting web searches rather than simply relying on its pretrained data set, it can easily search for and track down more recent facts from the internet.

GPT-4o is the latest release, of course, and GPT-5 is still incoming.

Alan Truly
Alan Truly is a Writer at Digital Trends, covering computers, laptops, hardware, software, and accessories that stand out as…