Human Screenome Project wants you to share everything you do on your smartphone

You’ve almost certainly seen them on YouTube. “Noah takes a photo of himself every day for 20 years” (5 million views). “Portrait of Lotte, 0 to 20 years” (10.9 million views). “Age 12 to married — I took a photo every day” (an astonishing 110 million views). Heck, even Homer Simpson and Family Guy’s Peter Griffin have parodied the format. In an age of selfies and ubiquitous smartphone cameras, this increasingly popular genre of time-lapse video depicting the aging process lets people self-chronicle their lived experiences in a quintessentially modern way that would have been all but impossible just a couple of decades ago.

But what if the bigger story wasn’t some YouTube star’s changing facial features, but rather the fact that tens of millions of us would dedicate minutes of our day to watching them? And, maybe after that, tweet out a link to the video we’d just watched. Or send it to a buddy on WhatsApp. Or fire up the camera app on our own smartphones and start making our own versions. Or forget what we’d just watched entirely and play a quick game of Mario Kart Tour.

In a world in which we live digitally, the way we consume media on our screens (and, particularly, on our smartphones) may just turn out to be the most profound way of documenting life in 2020. At least, that’s the idea of an ambitious new initiative called the Human Screenome Project. Created by researchers at Stanford and Penn State University, it’s a new mass data collection exercise that asks users to agree to share information about every single thing they do on their smartphones.

Special software developed by the project’s creators will take screenshots of these mobile devices every five seconds while they’re active, encrypt them, send them off to a research server, and then use artificial intelligence algorithms to analyze exactly what’s being looked at. In the process, the researchers want to create a multidimensional map of people’s changing digital lives in the twenty-first century, providing a moment-by-moment look at changes over the course of days, weeks, months, and potentially even years and decades.
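The project’s actual collection software isn’t public, but the pipeline described above (screenshot, encrypt, upload) is straightforward to picture. The Python below is a hypothetical sketch only: the mss screenshot library, Fernet symmetric encryption, and the collection URL are stand-in assumptions, not details confirmed by the researchers.

```python
# Hypothetical sketch of a screenome-style capture loop: grab the screen
# every five seconds, encrypt the image, and post it to a research server.
# Library choices and the endpoint are illustrative, not the project's own.
import time

import mss
import mss.tools
import requests
from cryptography.fernet import Fernet

UPLOAD_URL = "https://example-research-server.invalid/upload"  # placeholder
CAPTURE_INTERVAL_SECONDS = 5  # the sampling rate described in the article


def capture_png() -> bytes:
    """Grab the primary display and return it as PNG bytes."""
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])
        return mss.tools.to_png(shot.rgb, shot.size)


def main() -> None:
    # In practice the key would be provisioned by the research team,
    # not generated fresh on every run.
    cipher = Fernet(Fernet.generate_key())
    while True:
        encrypted = cipher.encrypt(capture_png())
        # Timestamp each frame so the timeline can be reconstructed later.
        requests.post(
            UPLOAD_URL,
            files={"frame": encrypted},
            data={"captured_at": time.time()},
        )
        time.sleep(CAPTURE_INTERVAL_SECONDS)


if __name__ == "__main__":
    main()
```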

“The digital media environment has [advanced] so much in the last few years,” Nilam Ram, professor of human development and psychology at Penn State University, told Digital Trends. “We don’t really have a good idea of how people are using their devices, and what it is that they’re being exposed to. Typically, research studies about screen time will rely on self-reports about how long people engaged with social media over the past week. That’s a really complicated question for people to answer. The evidence suggests that people are over- or underestimating their own engagement by as much as a few hours.”

A resource for researchers everywhere

According to Ram, the project traces back seven years to a chance meeting between him and Byron Reeves, a professor of communication at Stanford. Reeves was interested in media and its effects on people. Ram was interested in behavioral time series data, a type of behavioral analytics that works with regular data points gathered in chronological order and can be used to study, and even predict, the behavior of individuals.

At first, the pair set out to explore multitasking. They developed software that let them see how rapidly student participants switched between tasks while working. They discovered that participants would switch windows approximately every 20 seconds. “That was faster than anyone at that time thought anyone was switching from task to task,” Ram said. “From there, we developed software to do it on a smartphone.”
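Ram doesn’t describe the analysis itself, but the 20-second figure is essentially the average gap between changes in a timestamped log of which window has focus. Here is a small, hypothetical Python sketch of that calculation; the sample log and function names are invented for illustration, not taken from the study.

```python
# Hypothetical illustration: estimate how often a participant switches windows
# from a timestamped log of the focused window. The sample data is invented.
from statistics import mean, median

# (seconds since session start, window in focus)
focus_log = [
    (0, "essay.docx"),
    (18, "facebook.com"),
    (31, "essay.docx"),
    (55, "youtube.com"),
    (73, "essay.docx"),
]


def switch_intervals(log):
    """Return the time gaps between consecutive changes of the focused window."""
    intervals = []
    for (t_prev, w_prev), (t_next, w_next) in zip(log, log[1:]):
        if w_next != w_prev:  # only count genuine task switches
            intervals.append(t_next - t_prev)
    return intervals


gaps = switch_intervals(focus_log)
print(f"mean switch interval: {mean(gaps):.1f}s, median: {median(gaps):.1f}s")
```

In the actual research, the log would come from the capture software itself, aggregated per participant and per session rather than from a single hand-written list.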

A Screenome Sample

They figured that this would be a natural extension of their work with multitasking. But when the initial flow of data, from a small group of students, came in, they realized they had tapped a far deeper well than they thought. “Once we started watching the time-lapse footage of what people were actually doing on their phone, we realized that, wow, there are so many different types of human behaviors that are expressed here,” Ram said. “That could be engagement with politics, mental health issues, social media issues, interpersonal relations, climate change. We can see things like the gender distribution of faces that people look at on their phones, the racial distribution of those faces — there’s so much richness in it.”

If this sounds like too much for one pair of researchers to look at, you’d be absolutely correct. The hope is that the Human Screenome Project — whose name is a nod to the earlier Human Genome Project — will create a vast, sharable database that other researchers can explore as well. This will be part ongoing user survey (albeit without users having to actively answer questions) and part historical artifact, like a digital Mass Observation Project. The potential value of such an archive could be immense. Some researchers might use it to track the rise and fall of memes as they appear, flourish, and disappear into the cybernetic ether. Students of design could use it to look at how changing app user interfaces reflect transitions in that field. Others may use it, alongside cross-referenced information, to study the potential health impacts of social media, or how screen time affects concentration.

“The idea of the Human Genome Project was that, if we could map the human genome, it would change the way we approach disease and the treatment of disease,” Ram said. “I think it did that. Here, we’re in some ways trying to take the same kind of theoretical leap, saying that if we can map out the Human Screenome it will change the way we think about digital media and how it’s affecting people.”

It’s like mass surveillance but… good?

But is a project like this workable? The same thing that makes it so tantalizing from a research point of view is what also raises challenges. Simply put, as Apple co-founder and former CEO Steve Jobs predicted way back in 2007, the smartphone has become a consolidation of all the separate devices we once carried around. It is our laptop, our personal organizer, our portable music player, our GPS system, and more.

Requiring physical user interaction and offering millions of apps, a smartphone is a far more dynamic media environment than its antecedent: the living room television, with its handful of channels. As windows into our interests go, smartphones are the epitome of what media theorist Marshall “the medium is the message” McLuhan would have called an “extension of ourselves.” However, that also makes them personal in a way that few other devices are. Allowing researchers to see everything you do on your smartphone will, for some users, simply be a step too far.

Still, Ram is confident this will not hold true for everyone. “Generally we find in our conversations with participants that they are well aware that their data is being collected by the big data companies on a very regular basis,” he said. “It’s being used in ways that they have no control over. They seem aware of that, and excited about the possibility that those data might instead be used for research purposes to understand human behavior.”

To date, the Stanford Screenomics Lab has collected over 30 million data points from more than 600 participants. While it has yet to open the platform up to anyone who wants to get involved, Ram hopes participation will eventually scale to far more epic proportions, with multi-year contributions from users.

And what about when smartphones finally give way to some other dominant technology? “[This is something that] could go on forever,” Ram said. “[That will mean that it has to] transform in different ways as screens move from being separate devices to ones that are embedded somehow, whether it’s a chip or a Google Glass-style evolution. We want to evolve our data collection paradigm along with the emergence of those technologies.”
