
Truly creative A.I. is just around the corner. Here’s why that’s a big deal


Joe Kennedy, father of the late President John F. Kennedy, once said that, when shoeshine boys start giving you stock tips, the financial bubble is getting too big for its own good.

By that same logic, when Hollywood actors start tweeting about a once-obscure part of artificial intelligence (A.I.), you know that something big is happening, too. That’s exactly what occurred recently when Zach Braff, the actor-director still best known for his performance as J.D. on the medical comedy series Scrubs, recorded himself reading a Scrubs-style monologue written by an A.I.

“What is a hospital?” Braff reads, adopting the thoughtful tone J.D. used to wrap up each episode in the series. “A hospital is a lot like a high school: the most amazing man is dying, and you’re the only one who wants to steal stuff from his dad. Being in a hospital is a lot like being in a sorority. You have greasers and surgeons. And even though it sucks about Doctor Tapioca, not even that’s sad.”


Yes, it’s nonsense — but it’s charming nonsense. Created by Botnik Studios, which recently used the same statistical predictive tools to write an equally bonkers new Harry Potter story, the A.I. mimics the writing style of the show’s real scripts. It sounds right enough to be recognizable but wrong enough to be obviously the work of a silly machine, like the classic anecdote about the early MIT machine translation software that translated the Biblical saying “The spirit is willing, but the flesh is weak” into Russian and back again, ending up with “The whisky is strong, but the meat is rotten.”
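Botnik hasn’t published the exact tooling behind its predictive-text keyboards, but the general idea is well understood: learn which words tend to follow which in a source corpus, then walk those probabilities to produce new lines in the same style. Here is a minimal, hypothetical sketch of that approach as a simple Markov chain in Python; the corpus file name is made up for illustration.

```python
import random
from collections import defaultdict

def build_model(corpus_text, order=2):
    # Map each run of `order` words to the words observed to follow it.
    words = corpus_text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=30):
    # Start from a random key, then repeatedly sample a plausible next word.
    key = random.choice(list(model.keys()))
    output = list(key)
    for _ in range(length):
        choices = model.get(tuple(output[-len(key):]))
        if not choices:
            break
        output.append(random.choice(choices))
    return " ".join(output)

# scripts = open("scrubs_scripts.txt").read()  # hypothetical corpus of show scripts
# print(generate(build_model(scripts)))
```

The output has exactly the Scrubs-bot quality described above: locally plausible phrases that drift into nonsense, because the model only ever looks a couple of words back.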

As Braff’s publicizing of the Scrubs-bot shows, the topic of computational creativity is very much in vogue right now. Once the domain of a few lonely researchers, trapped on the fringes of computer science and the liberal arts, the question of whether a machine can be creative is everywhere. Alongside Botnik’s attempts at Harry Potter and Scrubs, we’ve recently written about a recurrent neural network (RNN) that took a stab at writing the sixth novel in the A Song of Ice and Fire series, better known to TV fans as Game of Thrones. The RNN was trained for its task by reading and analyzing the roughly 5,000 pages of existing novels in the series.
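A recurrent network works differently from a predictive-text keyboard: it reads the text one character at a time and learns to predict what comes next, then generates new prose by sampling from its own predictions. The sketch below shows the bare bones of that character-level approach in PyTorch; it is an illustration of the technique rather than the actual Game of Thrones model, and the training file name is a made-up stand-in for the series text.

```python
import torch
import torch.nn as nn

# "books.txt" is a hypothetical stand-in for the ~5,000 pages of source novels.
text = open("books.txt").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = CharRNN(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

# Training: repeatedly grab a chunk of text and learn to predict each next character.
data = torch.tensor([stoi[c] for c in text])
for step in range(1000):
    i = torch.randint(0, len(data) - 129, (1,)).item()
    chunk = data[i:i + 129].unsqueeze(0)   # shape (1, 129)
    logits, _ = model(chunk[:, :-1])       # predict characters 1..128
    loss = loss_fn(logits.squeeze(0), chunk[0, 1:])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Generation: feed the model's own guesses back in, one character at a time.
idx = torch.tensor([[stoi[text[0]]]])
state, generated = None, []
for _ in range(500):
    logits, state = model(idx, state)
    probs = torch.softmax(logits[0, -1], dim=0)
    idx = torch.multinomial(probs, 1).unsqueeze(0)
    generated.append(chars[idx.item()])
print("".join(generated))
```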

Larger companies have gotten in on the act, too. Google’s Deep Dream project purposely magnifies some of the recognition errors in the company’s deep learning neural networks to create wonderfully trippy effects.

Pouff - Grocery Trip
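Google has been fairly open about how Deep Dream works: rather than retraining the network, it takes a trained image classifier, picks a layer, and then nudges the input image itself so that whatever that layer detects gets exaggerated. Below is a heavily simplified, hypothetical sketch of that gradient-ascent loop in PyTorch, not Google’s released implementation; the layer index, step count, and step size are arbitrary choices for illustration.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained classifier supplies the "recognition machinery" we amplify.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def deep_dream(img_path, layer_index=20, steps=30, lr=0.05):
    img = Image.open(img_path).convert("RGB")
    x = T.Compose([T.Resize(512), T.ToTensor()])(img).unsqueeze(0)
    x.requires_grad_(True)

    for _ in range(steps):
        activation = x
        for i, layer in enumerate(model):
            activation = layer(activation)
            if i == layer_index:
                break
        loss = activation.norm()   # how strongly does the chosen layer respond?
        loss.backward()
        with torch.no_grad():
            # Gradient *ascent* on the image: push pixels toward whatever
            # patterns the layer already thinks it sees.
            x += lr * x.grad / (x.grad.abs().mean() + 1e-8)
            x.clamp_(0, 1)
            x.grad.zero_()
    return T.ToPILImage()(x.detach().squeeze(0))

# deep_dream("photo.jpg").save("dream.jpg")
```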

Right now, we’re at the “laughter” stage of computational creativity for the most part. That doesn’t have to mean outright mockery of A.I.’s attempts to create, but it’s extremely unlikely that, say, an image generated by Google’s Deep Dream will hang in an art gallery any time soon, even though the same image, had it been painted by a person, might be taken more seriously.

It’s fair to point out that today’s machine creativity typically involves humans making some of the decisions, but the credit isn’t split between human and machine the way it would be between two screenwriters sharing a script. Rightly or wrongly, we give the A.I. in these scenarios about as much credit as we might give the typewriter on which a classic novel was typed. In other words, very little.


But that could change very soon. Because computational creativity is doing a whole lot more than generating funny memes and writing parody scripts. NASA, for example, has employed evolutionary algorithms, which mimic natural selection in machine form, to design satellite components. These components work well — although their human “creators” are at a loss to explain exactly how.
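The designs NASA evolved came out of far more elaborate simulation pipelines than anything shown here, but the core natural-selection loop such systems iterate is simple enough to sketch. In the toy Python example below, the fitness function is a made-up stand-in for “how well does this candidate design perform in simulation?”

```python
import random

def evolve(fitness, genome_length=8, population=50, generations=200):
    # Start with a population of random candidate "designs".
    pop = [[random.uniform(-1, 1) for _ in range(genome_length)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # score and rank every candidate
        parents = pop[: population // 5]         # selection: keep the fittest fifth
        children = []
        while len(children) < population - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(genome_length)
            child = a[:cut] + b[cut:]            # crossover: splice two parents
            i = random.randrange(genome_length)
            child[i] += random.gauss(0, 0.1)     # mutation: small random tweak
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: the "best design" is the genome closest to all 0.5s.
best = evolve(lambda g: -sum((x - 0.5) ** 2 for x in g))
print(best)
```

The punch line matches NASA’s experience: the loop reliably finds candidates that score well, but nothing in it leaves behind a human-readable explanation of why the winning design works.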

Legal firms, meanwhile, are using A.I. to formulate and hone new arguments and interpretations of the law, which could be useful in a courtroom. In medicine, the U.K.’s University of Manchester is using a robot called EVE to formulate hypotheses for future drugs, devise experiments to test these theories, physically carry out these experiments, and then interpret the results.

IBM’s “Chef Watson” utilizes A.I. to generate its own unique cooking recipes, based on a knowledge of 9,000 existing dishes and an awareness of which chemical compounds work well together. The results are things like Turkish-Korean Caesar salads and Cuban lobster bouillabaisse that no human chef would ever come up with, but which taste good nevertheless.

In another domain, video game developer Epic Stars recently used a deep learning A.I. to compose the main theme for its new game Pixelfield, which was then performed by a live orchestra.

Making of "Battle Royale" - The World's first AI-composed score for a video game

Finally, newspapers like the Washington Post are eschewing sending human reporters to cover events like the Olympics and letting machines do the job instead. To date, the newspaper’s robo-journalist has written close to 1,000 articles.

Which brings us to our big point: Should a machine’s ability to be creative serve as the ultimate benchmark for machine intelligence? Here in 2017, brain-inspired neural networks are getting bigger, better, and more complicated all the time, but we still don’t have an obvious test to discern when a machine is finally considered intelligent.


While it’s no longer taken seriously by most A.I. researchers, the most famous test of machine intelligence remains the Turing Test, which holds that if a machine can fool us into thinking it is intelligent, we must agree that it is intelligent. The result, unfortunately, is that machine intelligence is reduced to the level of an illusionist’s trick: the goal becomes pulling the wool over the audience’s eyes rather than demonstrating that a computer can have a mind.

An alternative approach is an idea called the Lovelace Test, named after the pioneering computer programmer Ada Lovelace. Appropriately enough, Lovelace herself represented the intersection of creativity and computation: the daughter of the Romantic poet Lord Byron, she worked alongside Charles Babbage on his ill-fated Analytical Engine in the 1800s. Lovelace was impressed by the idea of the Analytical Engine, but argued that it could never be considered capable of true thinking, since it was only able to carry out pre-programmed instructions. “The Analytical Engine has no pretensions whatever to originate anything,” she famously wrote. “It can do [only] whatever we know how to order it to perform.”

The broad idea of the Lovelace Test involves three separate parts: the human creator, the machine component, and the original idea. The test is passed only if the machine component is able to generate an original idea, without the human creator being able to explain exactly how this has been achieved. At that point, it is assumed that a computer has come up with a spontaneous creative thought. Mark Riedl, an associate professor of interactive computing at Georgia Tech, has proposed a modification of the test in which certain constraints are given — such as “create a story in which a boy falls in love with a girl, aliens abduct the boy, and the girl saves the world with the help of a talking cat.”

“Where I think the Lovelace 2.0 test plays a role is verifying that novel creation by a computational system is not accidental,” Riedl told Digital Trends. “The test requires understanding of what is being asked, and understanding of the semantics of the data it is drawing from.”

It’s an intriguing thought experiment. This benchmark may be one that artificial intelligence has not yet cracked, but surely it’s getting closer all the time. When machines can create patentable technologies, dream up useful hypotheses, and potentially one day write movie scripts that will sell tickets to paying audiences, it’s difficult to call their insights accidental.

To borrow a phrase often attributed to Mahatma Gandhi: “First they ignore you, then they laugh at you, then they fight you, then you win.” Computational creativity has been ignored. Right now, either fondly or maliciously, it is being laughed at. Next it will start fighting our preconceptions, such as our assumptions about which kinds of jobs qualify as creative, the very roles we are frequently assured are safe from automation.

And after that? Just maybe it can win.

Luke Dormehl