
Get ready to waste your day with this creepily accurate text-generating A.I.

Whether you believe it was one of the most dangerous versions of artificial intelligence ever created or dismiss it as a massive, unnecessary PR exercise, there’s no doubt that the GPT-2 algorithm created by research lab OpenA.I. caused a lot of buzz when it was announced earlier this year.

When it revealed GPT-2 in February, OpenA.I. said it had developed an algorithm too dangerous to release to the general public. Although only a text generator, GPT-2 supposedly produced text so convincingly humanlike that it could fool people into believing they were reading something written by an actual person. To use it, all a user had to do was feed in the start of a document and then let the A.I. take over to complete it. Give it the opening of a newspaper story, and it would even manufacture fictitious “quotes.” Predictably, news media went into overdrive describing this as the terrifying new face of fake news. And potentially for good reason.


Jump forward a few months, and users can now have a go at using the A.I. for themselves. The algorithm appears on a website called “Talk to Transformer,” hosted by machine learning engineer Adam King.

“For now OpenA.I. has decided only to release small and medium-sized versions of it which aren’t as coherent but still produce interesting results,” he writes on his website. “This site runs the new (May 3) medium-sized model, called 345M for the 345 million parameters it uses. If and when [OpenA.I.] release the full model, I’ll likely get it running here.”

At a high level, GPT-2 doesn’t work all that differently from the predictive mobile keyboards that suggest the next word you’re likely to type. However, as King notes, “While GPT-2 was only trained to predict the next word in a text, it surprisingly learned basic competence in some tasks like translating between languages and answering questions. That’s without ever being told that it would be evaluated on those tasks.”
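The predictive-keyboard comparison can be made concrete with a toy model. The sketch below is not how GPT-2 works internally (GPT-2 is a large neural network trained on millions of web pages); it is a minimal bigram predictor that illustrates the same underlying task: given the words so far, pick the word most likely to come next, then repeat. All names here (`train_bigram_model`, `predict_next`, `complete`) are illustrative, not from any real library.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

def complete(model, prompt, max_words=10):
    """Greedily extend a prompt one predicted word at a time --
    the same loop, writ small, as giving GPT-2 a story opening."""
    words = prompt.lower().split()
    for _ in range(max_words):
        nxt = predict_next(model, words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" -- it follows "the" most often
```

GPT-2 replaces these simple frequency counts with a transformer network that conditions on the entire preceding context rather than just the last word, which is why its completions stay coherent over whole paragraphs.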

The results are, frankly, a little unnerving. Although it’s still prone to the odd bit of A.I.-generated nonsense, it’s nowhere near the level of silliness of the various neural nets used to generate chapters from new A Song of Ice and Fire novels or monologues from Scrubs. Faced with the first paragraph of this story, for instance, it did a pretty serviceable job of turning out something convincing — complete with a bit of subject matter knowledge to help sell the effect.

Thinking that this is the Skynet of fake news is probably going a bit far. But it’s definitely enough to send a small shiver down the spine.

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…
Meta and Google made AI news this week. Here were the biggest announcements

From Meta's AI-empowered AR glasses to its new Natural Voice Interactions feature to Google's AlphaChip breakthrough and ChromaLock's chatbot-on-a-graphing calculator mod, this week has been packed with jaw-dropping developments in the AI space. Here are a few of the biggest headlines.

Google taught an AI to design computer chips
Deciding how and where all the bits and bobs go into today's leading-edge computer chips is a massive undertaking, often requiring agonizingly precise work before fabrication can even begin. Or it did, at least, before Google released its AlphaChip AI this week. Similar to AlphaFold, which generates potential protein structures for drug discovery, AlphaChip uses reinforcement learning to generate new chip designs in a matter of hours, rather than months. The company has reportedly been using the AI to design layouts for the past three generations of Google’s Tensor Processing Units (TPUs), and is now sharing the technology with companies like MediaTek, which builds chipsets for mobile phones and other handheld devices.

GPTZero: how to use the ChatGPT detection tool

In terms of world-changing technologies, ChatGPT has truly made a massive impact on the way people think about writing and coding in the short time that it's been available.

However, this ability has come with a significant downside, particularly in education, where students are tempted to use ChatGPT for their own papers or exams. That brand of plagiarism prevents students from learning as much as they could and has given teachers a whole new headache: how to detect AI use.

GPT-4: everything you need to know about ChatGPT’s standard AI model

People were in awe when ChatGPT came out, impressed by its natural language abilities as an AI chatbot originally powered by the GPT-3.5 large language model. But when the highly anticipated GPT-4 large language model came out, it blew the lid off what we thought was possible with AI, with some calling it the early glimpses of AGI (artificial general intelligence).
What is GPT-4?
GPT-4 is the newest language model created by OpenAI that can generate text similar to human speech. It advances the technology used by ChatGPT, which was previously based on GPT-3.5 but has since been updated. GPT is short for Generative Pre-trained Transformer, a deep learning architecture that uses artificial neural networks to write like a human.

According to OpenAI, this next-generation language model is more advanced than ChatGPT in three key areas: creativity, visual input, and longer context. In terms of creativity, OpenAI says GPT-4 is much better at both creating and collaborating with users on creative projects. Examples of these include music, screenplays, technical writing, and even "learning a user's writing style."
