
MIT built an A.I. bot that writes scary stories — and some are terrifying

If you want something really spooky to get you in the mood for Halloween, how about the prospect of machines that don’t just carry out routine work, but can actually be creative, performing a function we typically view as quintessentially human? That’s (kind of) what researchers at the Massachusetts Institute of Technology (MIT) have developed with a new October 31-themed artificial intelligence project: the world’s first collaborative A.I. horror writer.

Found on Twitter, “Shelley” tweets out the beginning of a new horror story every hour, alongside the hashtag #yourturn as an invitation to human co-writers. Anyone is welcome to reply to the tweet with the next installment of the story, prompting Shelley to respond in turn with the part that follows.


“Shelley is a deep learning-based A.I. that took her name [from] horror story writer, Mary Shelley,” Pinar Yanardhag, one of the researchers on the project, told Digital Trends. “She initially trained on over 140,000 horror stories on Reddit’s popular r/nosleep subreddit, and is able to generate random snippets based on what she learned, or continue a story given a text. We expect Shelley to inspire people to write the weirdest and scariest horror stories ever put together. So far, Shelley has co-authored over 100 stories with Twitter users, and some of them are really scary.”
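For readers curious about the mechanics, here is a minimal sketch of how a story-continuation bot along these lines can be built on top of an off-the-shelf pretrained language model. It is an illustration only, not Shelley’s actual code, model, or training data; the model name (“gpt2”), sampling settings, and example prompt are assumptions made for the sketch.

```python
# Minimal sketch of a story-continuation bot (illustrative only; not Shelley's
# actual model or code). Uses an off-the-shelf pretrained language model from
# Hugging Face; the model name and sampling settings are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def continue_story(story_so_far: str, max_new_tokens: int = 60) -> str:
    """Generate the next snippet of the story, given everything written so far."""
    outputs = generator(
        story_so_far,
        max_new_tokens=max_new_tokens,
        do_sample=True,      # sample instead of greedy decoding, for variety
        temperature=0.9,     # higher temperature -> stranger, less predictable text
        top_p=0.95,
        num_return_sequences=1,
    )
    full_text = outputs[0]["generated_text"]      # prompt + continuation
    return full_text[len(story_so_far):].strip()  # keep only the new snippet

if __name__ == "__main__":
    opening = "I woke to the sound of someone breathing on the other side of my bedroom door."
    print(opening)
    print(continue_story(opening))
```

In a setup like Shelley’s, a loop of this kind would presumably sit behind the Twitter API: the bot posts an opening snippet, reads human replies, and feeds the accumulated thread back in as the prompt for the next continuation.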


A collection of some of the stories generated by Shelley can be found here. The researchers say the work is designed to tap into some of the fears surrounding the relationship between humans and A.I., and the project builds on the success of MIT’s 2016 Halloween project, which used neural networks to generate scary images.

Can machines really replace human writers? “Human authors have nothing to fear in the short term,” Iyad Rahwan, an associate professor in MIT’s Media Lab, told us. “Today, A.I. algorithms can generate highly structured content, such as reports on market developments or sports games. They can also generate less structured, more creative content, like short snippets of text. But algorithms are still not very good at generating complex narrative. It will be a while before we have an A.I. version of J. K. Rowling or Stephen King, [although] there are no guarantees about where things are headed in the medium or long term — and machines may eventually be able to construct complex narratives, and explore new creative spaces in fiction.”

As Rahwan points out, however, if we really do build machines that can experience the world around them as fully as humans do, and use that experience to generate their own unique ideas, we will have bigger problems than simply losing a few jobs in creative writing.
