
Dreamcatcher is an A.I. that could help analyze the world’s dreams


Google search queries and social media posts provide a means of peering into the ideas, concerns, and expectations of millions of people around the world. Using the right web-scraping bots and big data analytics, everyone from marketers to social scientists can analyze this information and use it to draw conclusions about what’s on the mind of massive populations of users.

Could A.I. analysis of our dreams help do the same thing? That’s a bold, albeit intriguing, concept — and it’s one that researchers from Nokia Bell Labs in Cambridge, U.K., have been busy exploring. They’ve created a tool called “Dreamcatcher” that can, so they claim, use the latest Natural Language Processing (NLP) algorithms to identify themes from thousands of written dream reports.


Dreamcatcher is based on an approach to dream analysis referred to as the continuity hypothesis. This hypothesis, which is supported by strong evidence from decades of research into dreams, suggests that our dreams are reflections of the everyday concerns and ideas of dreamers.

That might sound like common sense. But it’s a very different way of thinking about dreams than the more complex interpretations put forward by theorists like Freud and Jung, who viewed dreams as windows into hidden libidinal desires and other usually obscured thought processes.

The automatic dream analyzer

The A.I. tool — which Luca Aiello, a senior research scientist at Nokia Bell Labs, told Digital Trends is an “automatic dream analyzer” — parses written descriptions of dreams and then scores them according to an established dream analysis inventory called the Hall-Van de Castle scale.

“This inventory consists of a set of scores that measure by how much different elements featured in the dream are more or less frequent than some normative values established by previous research on dreams,” Aiello said. “These elements include, for example, positive or negative emotions, aggressive interactions between characters, presence of imaginary characters, et cetera. The scale, per se, does not provide an interpretation of the dream, but it helps quantify interesting or anomalous aspects in them.”
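To make that scoring idea concrete, here is a minimal sketch in Python of how an element’s frequency across a batch of dream reports might be compared against a normative baseline. The element names and normative rates below are invented placeholders for illustration, not the inventory’s actual values or the Bell Labs team’s implementation.

```python
# Minimal sketch of Hall-Van de Castle-style scoring: compare the rate at
# which an element appears in a set of dream reports against a normative
# baseline. The element names and normative rates are illustrative
# placeholders, not the values used by the Bell Labs team.

# Hypothetical normative rates (fraction of dreams containing each element).
NORMS = {
    "negative_emotion": 0.45,
    "aggression": 0.35,
    "imaginary_character": 0.10,
}

def deviation_scores(observed_counts: dict, total_dreams: int) -> dict:
    """Return how far each element's observed rate departs from its norm."""
    scores = {}
    for element, norm in NORMS.items():
        rate = observed_counts.get(element, 0) / total_dreams
        scores[element] = rate - norm  # positive = more frequent than usual
    return scores

# Example: 120 dream reports from one dreamer.
print(deviation_scores({"negative_emotion": 66, "aggression": 30,
                        "imaginary_character": 24}, total_dreams=120))
```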


The written dream reports came from an archive of 24,000 such records, taken from DreamBank, the largest public collection of English-language dream reports yet available. The team’s algorithm is capable of pulling these reports apart and reassembling them in a way that makes sense to the system — for instance, by sorting references into categories like “imaginary beings,” “friends,” “male characters,” “female characters,” and so on. It can then refine these categories further by filtering them into groups like “aggressive,” “friendly,” and “sexual” to indicate different types of interaction.

By taking note of the person recording the dream and its content, the researchers can discover some interesting links. A written record might be something like: “I was at a house. Ezra and a friend were on the computer. This unicorn thing kept running towards me when I opened a door. There were other strange creatures there and ones like chickens. They kept trying to attack me.” The Dreamcatcher tool can start with this description and automatically extract various insights, ultimately filing it under “Teenage concerns and activities.” (The dream was, in fact, recorded by Izzy, an “adolescent schoolgirl.”)
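The researchers’ pipeline relies on modern NLP models, but the basic act of sorting a report’s references into character and interaction categories can be roughed out with a simple keyword lookup. The word lists below are made up for illustration and are far cruder than what Dreamcatcher actually does; they only show the shape of the task.

```python
# Toy illustration of sorting dream-report references into character and
# interaction categories. The real Dreamcatcher tool uses NLP models; this
# keyword lookup is a hand-rolled stand-in with made-up word lists.
import re

CHARACTER_LEXICON = {
    "imaginary beings": {"unicorn", "dragon", "monster", "creature"},
    "friends": {"friend", "friends"},
    "male characters": {"he", "him", "boy", "man", "father"},
    "female characters": {"she", "her", "girl", "woman", "mother"},
}

INTERACTION_LEXICON = {
    "aggressive": {"attack", "fight", "chase", "yell"},
    "friendly": {"help", "hug", "play", "talk"},
    "sexual": {"kiss", "flirt"},
}

def tag_report(text: str) -> dict:
    """Return the character and interaction categories a report touches on."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "characters": [c for c, kws in CHARACTER_LEXICON.items() if words & kws],
        "interactions": [i for i, kws in INTERACTION_LEXICON.items() if words & kws],
    }

report = ("I was at a house. Ezra and a friend were on the computer. "
          "This unicorn thing kept running towards me when I opened a door. "
          "They kept trying to attack me.")
print(tag_report(report))
```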

Aiello said that some of these insights are expected, while others reveal surprising lines of possible future inquiry. “For example, an adolescent’s dreams were characterized by increasing frequency of sexual interactions as she approached her adult life,” Aiello said. “More surprisingly, we found that blind people’s dreams feature more imaginary characters than the norm, which suggests that our senses influence the way we dream.”

This kind of analysis is something that psychologists looking at this data could also do — although nowhere near as quickly as an A.I. tool. “It is exciting to witness the growing ability of NLP to capture increasingly complex and intangible aspects of language,” Aiello said. “However, it is even more exciting to think that thanks to these techniques we gained the ability to perform dream analysis on a very large scale, something that would be impossible through the time-consuming process of manual dream annotation.”

Sweet dreams are made of these

When the Dreamcatcher system’s scores were compared with scores calculated by psychologists, the A.I. algorithm matched them 76% of the time. That suggests there is still room for improvement. Nonetheless, it’s a valuable start. Aiello — along with fellow researchers Alessandro Fogli and Daniele Quercia — believes the finished product could have profound applications.
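That 76% figure is, in essence, an agreement rate: the fraction of items on which the automated scores line up with the human annotations. With invented data (not the study’s), the arithmetic looks something like this:

```python
# Toy illustration of the agreement figure: the fraction of items on which
# the automated labels match the human annotators'. The data here is invented.
algorithm_labels = ["aggressive", "friendly", "friendly", "sexual", "aggressive"]
human_labels     = ["aggressive", "friendly", "aggressive", "sexual", "aggressive"]

matches = sum(a == h for a, h in zip(algorithm_labels, human_labels))
agreement = matches / len(human_labels)
print(f"Agreement: {agreement:.0%}")  # 80% for this made-up example
```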

One might be for something like a mood-tracking app that asks users to record their dreams and then pulls out recurrent imagery over a certain duration. Aiello said such a tool could make daily dream reporting a habit for people, rewarding them with on-the-fly dream analysis.
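One way such an app might surface recurring imagery is by tallying which tagged motifs keep reappearing across a stretch of dated reports. Here is a rough sketch of that aggregation step, using hypothetical tags rather than any real app’s data model:

```python
# Rough sketch of surfacing recurrent imagery from a run of dated dream
# reports. The tags and dates are hypothetical; a real app would derive the
# tags from NLP analysis of each report.
from collections import Counter
from datetime import date

tagged_reports = [
    (date(2025, 3, 1), {"flying", "water"}),
    (date(2025, 3, 4), {"water", "teeth"}),
    (date(2025, 3, 9), {"water", "flying"}),
]

def recurrent_imagery(reports, min_count=2):
    """Return motifs that appear in at least `min_count` reports."""
    counts = Counter(tag for _, tags in reports for tag in tags)
    return {tag: n for tag, n in counts.items() if n >= min_count}

print(recurrent_imagery(tagged_reports))  # e.g. {'water': 3, 'flying': 2}
```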

However, the more intriguing concept is the one described at the start of this article: a kind of large-scale dream-tracking project that could map the world’s dreams onto real events to see how one informs the other. As with so many other forms of big data analysis, this would become more useful — and captivating — the more it was combined and cross-referenced with other real-world data.

“As more people volunteer to share their dreams, we envision the possibility of analyzing the dreams of a whole population — even of a whole country — to monitor its psychological well-being over time,” Aiello said. “Clearly, this would be only possible with the use of automated tools like ours that make dream analysis feasible on a large scale. This opportunity would be particularly compelling in the wake of global challenges that have an impact on everyone’s psyche. Today it’s COVID, next year it will likely be the economic crisis, and in three or four years it could be global warming.”
