
Museum’s AI exhibit compares art masterpieces to latest news photography

The Tate Britain art museum has just launched a new artificial intelligence exhibit that applies machine learning to images in some genuinely novel ways.

Called “Recognition,” the exhibit is the winner of Tate’s annual IK Prize, which was created in association with Microsoft and awards “digital innovation.” It was conceived by Fabrica, a communication research centre based in Italy.


It compares images from Tate’s enormous archive of artwork with up-to-the-minute Reuters news photography, based on various pattern-recognition tools. These include object recognition, facial recognition, composition analysis, and even natural language processing for looking at captions and text.
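The exhibit's actual pipeline isn't public, but a system like this plausibly reduces each comparison to per-feature similarity scores that are then combined into one match score. The sketch below is a minimal, hypothetical illustration of that idea; the feature names, weights, and scores are assumptions, not Tate's or Fabrica's implementation.

```python
# Illustrative sketch only: Recognition's real pipeline is not public.
# Feature names, weights, and similarity values here are assumptions.

WEIGHTS = {"objects": 0.3, "faces": 0.2, "composition": 0.3, "caption": 0.2}

def combined_score(sims, weights=WEIGHTS):
    """Weighted average of per-feature similarities, each assumed in [0, 1]."""
    return sum(weights[k] * sims[k] for k in weights) / sum(weights.values())

def best_match(candidates):
    """Pick the news photo whose combined similarity to an artwork is highest.

    `candidates` maps a photo id to its per-feature similarity scores.
    """
    return max(candidates, key=lambda pid: combined_score(candidates[pid]))

# Toy example: two hypothetical news photos scored against one artwork.
candidates = {
    "reuters_001": {"objects": 0.9, "faces": 0.1, "composition": 0.7, "caption": 0.4},
    "reuters_002": {"objects": 0.3, "faces": 0.8, "composition": 0.4, "caption": 0.5},
}
print(best_match(candidates))  # prints "reuters_001"
```

In practice each feature score would come from its own model (an object detector, a face recognizer, a layout analyzer, a language model over captions), but the final ranking step can be as simple as a weighted sum like this.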

“From the moment it launched, it’s continually scanning both databases and comparing images, trying to find works which are comparable — whether that be visually or thematically — and then publishing them online in a virtual gallery,” Tony Guillan, producer of the IK Prize, told Digital Trends. “That gallery will keep growing over the course of the exhibition and, by virtue of including up-to-the-minute news images, will become a sort of time capsule of this period.”

The exhibit runs until November 27 and has both an online and an in-person component: the galleries are viewable online, where you can watch the matching process in all its surprisingly hypnotic glory.

Visit Tate Britain in London, however, and you can influence how the algorithms do their matching by carrying out some of the training yourself.

One of the most interesting aspects of the project, Guillan said, is what it asks us about the line between machines and the creative process. “A lot of people find this a challenging idea because of the way AI has been presented in the media as an entity of technology we should be afraid of,” he said. “The fact that it can do something that’s creative does provoke a strong reaction from people.”

Guillan added that he views exercises like Recognition not as an example of machines replacing humans, but as an interesting example of how they can augment us.

“Recognition has learned to do something we can’t do, which is to scan up-to-the-minute photography and the entire Tate collection in nanoseconds,” he concluded. “At the same time, when we look at pictures, there are numerous frames of reference we’ll use to judge them based on our lived experiences. The main job of a human curator [in a museum] is to put artworks together in a way that creates new meanings through comparisons or contrasts. A machine doesn’t do that. The meaning is produced by the human audiences who fill in the extra connections for themselves.”

Luke Dormehl
Former Digital Trends Contributor