OpenAI’s GPT-3 algorithm is here, and it’s freakishly good at sounding human

When the text-generating algorithm GPT-2 was created in 2019, it was labeled as one of the most “dangerous” A.I. algorithms in history. In fact, some argued that it was so dangerous that it should never be released to the public (spoiler: It was) lest it usher in the “robot apocalypse.” That, of course, never happened. GPT-2 was eventually released to the public, and after it didn’t destroy the world, its creators moved on to the next thing. But how do you follow up the most dangerous algorithm ever created?

The answer, at least on paper, is simple: Just like the sequel to any successful movie, you make something that’s bigger, badder, and more expensive. Only one xenomorph in the first Alien? Include a whole nest of them in the sequel, Aliens. Just a single nigh-indestructible machine sent back from the future in Terminator? Give audiences two of them to grapple with in Terminator 2: Judgment Day.


The same is true for A.I. — in this case, GPT-3, a recently released natural language processing neural network created by OpenAI, the artificial intelligence research lab that was once (but is no longer) sponsored by SpaceX and Tesla CEO Elon Musk.


GPT-3 is the latest in a series of text-generating neural networks. The name GPT stands for Generative Pretrained Transformer, referencing a 2017 Google innovation called a Transformer which can figure out the likelihood that a particular word will appear with surrounding words. Fed with a few sentences, such as the beginning of a news story, the GPT pre-trained language model can generate convincingly accurate continuations, even including the formulation of fabricated quotes.
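To get a sense of how that works in practice, here is a minimal sketch that has the freely available GPT-2 model continue a prompt. It uses the open-source Hugging Face transformers library, which is our choice of tooling for illustration rather than anything OpenAI itself ships: given the opening of a story, the model repeatedly predicts a likely next word until it has produced a continuation.

```python
# A minimal sketch of prompt-based text generation, assuming the open-source
# Hugging Face "transformers" library is installed (pip install transformers torch).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Feed the model the start of a "news story" and let it write the rest.
prompt = "Scientists announced today that"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_length=60,          # total length of prompt plus continuation, in tokens
    do_sample=True,         # sample from the predicted word probabilities
    top_k=50,               # only consider the 50 most likely next words each step
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Each run produces a different continuation, because the model samples from its predicted word probabilities rather than always picking the single most likely next word.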

This is why some worried that it could prove dangerous, by helping to generate false text that, like deepfakes, could be used to spread fake news online. Now, with GPT-3, the technology is bigger and smarter than ever.

Tale of the tape

GPT-3 is, as a boxing-style “tale of the tape” comparison would make clear, a real heavyweight bruiser of a contender. OpenAI’s original 2018 GPT had 110 million parameters, referring to the weights of the connections which enable a neural network to learn. 2019’s GPT-2, which caused much of the previous uproar about its potential malicious applications, possessed 1.5 billion parameters. Last month, Microsoft introduced what was then the world’s biggest similar pre-trained language model, boasting 17 billion parameters. 2020’s monstrous GPT-3, by comparison, has an astonishing 175 billion parameters. It reportedly cost around $12 million to train.

“The power of these models is that in order to successfully predict the next word they end up learning really powerful world models that can be used for all kinds of interesting things,” Nick Walton, chief technology officer of Latitude, the studio behind A.I. Dungeon, an A.I.-generated text adventure game powered by GPT-2, told Digital Trends. “You can also fine-tune the base models to shape the generation in a specific direction while still maintaining the knowledge the model learned in pre-training.”
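That fine-tuning step is, at least with the openly available GPT-2, straightforward to sketch. The example below is a rough, hypothetical outline — again using the open-source Hugging Face transformers and datasets libraries, with a placeholder training file, and not Latitude’s actual pipeline — in which the pre-trained model keeps its general knowledge of language while further training on a domain-specific corpus nudges its output in that direction.

```python
# A hypothetical fine-tuning sketch: continue training a pre-trained GPT-2 on
# your own text so that its generations take on that style.
# Assumes the Hugging Face "transformers" and "datasets" libraries;
# "my_corpus.txt" is a placeholder for whatever text you want the model to imitate.
from datasets import load_dataset
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no padding token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Load the raw text and turn it into token IDs the model can train on.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # checkpoints are written to ./gpt2-finetuned
```

The same basic recipe is what Walton describes: the expensive pre-training is done once, and a comparatively cheap pass over domain-specific text shapes the generation without throwing away what the model already knows.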


Gwern Branwen, a commentator and researcher who writes about psychology, statistics, and technology, told Digital Trends that the kind of pre-trained language model GPT represents has become “an increasingly critical part of any machine learning task touching on text. In the same way that [the standard suggestion for] many image-related tasks has become ‘use a [convolutional neural network],’ many language-related tasks have become ‘use a fine-tuned [language model].’”

OpenAI — which declined to comment for this article — is not the only company doing some impressive work with natural language processing. As mentioned, Microsoft has stepped up to the plate with some dazzling work of its own. Facebook, meanwhile, is heavily investing in the technology and has created breakthroughs like BlenderBot, the largest ever open-sourced, open-domain chatbot. It outperforms others in terms of engagement and also feels more human, according to human evaluators. As anyone who has used a computer in the past few years will know, machines are getting better at understanding us than ever — and natural language processing is the reason why.

Size matters

But OpenAI’s GPT-3 still stands alone in its sheer record-breaking scale. “GPT-3 is generating buzz primarily because of its size,” Joe Davison, a research engineer at Hugging Face, a startup working to advance natural language processing by developing open-source tools and carrying out fundamental research, told Digital Trends.

The big question is what all of this will be used for. GPT-2 found its way into a myriad of text-generating systems, from the text adventure game A.I. Dungeon mentioned above to all manner of other applications.

Davison expressed some caution that GPT-3 could be limited by its size. “The team at OpenAI have unquestionably pushed the frontier of how large these models can be and showed that growing them reduces our dependence on task-specific data down the line,” he said. “However, the computational resources needed to actually use GPT-3 in the real world make it extremely impractical. So while the work is certainly interesting and insightful, I wouldn’t call it a major step forward for the field.”


Others disagree, though. “The artificial intelligence community has long observed that combining ever-larger models with more and more data yields almost predictable improvements in the power of these models, very much like Moore’s Law of scaling compute power,” Yannic Kilcher, an A.I. researcher who runs a YouTube channel, told Digital Trends. “Yet, also like Moore’s Law, many have speculated that we are at the end of being able to improve language models by simply scaling them up, and in order to get higher performance, we would need to make substantial inventions in terms of new architectures or training methods. GPT-3 shows that this is not true and the ability to push performance simply through scale seems unbroken — and there’s not really an end in sight.”

Passing the Turing Test?

Branwen suggests that tools like GPT-3 could be a major disruptive force. “One way to think of it is, what jobs involve taking a piece of text, transforming it, and emitting another piece of text?” Branwen said. “Any job which is described by that — such as medical coding, billing, receptionists, customer support, [and more] would be a good target for fine-tuning GPT-3 on, and replacing that person. A great many jobs are more or less ‘copying fields from one spreadsheet or PDF to another spreadsheet or PDF’, and that sort of office automation, which is too chaotic to easily write a normal program to replace, would be vulnerable to GPT-3 because it can learn all of the exceptions and different conventions and perform as well as the human would.”

Ultimately, natural language processing may be just one part of A.I., but it arguably cuts to the core of the artificial intelligence dream in a way that few other disciplines in the field do. The famous Turing Test, one of the seminal debates that kick-started the field, is a natural language processing problem: Can you build an A.I. that can convincingly pass itself off as a person? OpenAI’s latest work certainly advances this goal. Now it remains to be seen what applications researchers will find for it.

“I think it is the fact that GPT-2 text could so easily pass for human that it is getting difficult to hand-wave it away as ‘just pattern recognition’ or ‘just memorization,’” Branwen said. “Anyone who was sure that the things that deep learning does is nothing like intelligence has to have had their faith shaken to see how far it has come.”
