Life on the screen

By Jihye Lee, a doctoral candidate in the Department of Communication at Stanford University and a member of the Screenomics Lab.

Jihye Lee explains the Human Screenome project at Stanford University, a transdisciplinary effort to produce and analyze a comprehensive record of a person’s digital experience by tracking everything people see and do on their screens in real time. Jihye discusses how this interdisciplinary effort can capture the breadth of life experiences reflected on our screens and provide key opportunities for social scientists, users, and policymakers.

Principal investigators: Byron Reeves, Nilam Ram, and Thomas Robinson


As technology becomes more integral to everything we do, the time we spend in front of screens such as smartphones and computers continues to increase. The pervasiveness of screen time has raised concerns among researchers, policymakers, educators, and health care professionals about the effects of digital technology on well-being. Despite these growing concerns, it has been a challenge for scientists to measure how we actually navigate the digital landscape through our screens. For example, it is well documented that self-reports of one’s media use are often inaccurate despite survey respondents’ best efforts. Nor does knowing the screen time spent on individual applications fully capture how a person uses a digital device. Some could spend an hour on YouTube watching people play video games, whereas others might spend the same amount of time watching late-night television talk shows to keep up to date with the news. Even when screen time on a given application is identical, users’ intentions and the value they derive from particular types of content can be vastly different.

The Human Screenome project at Stanford University attempts to bridge this gap through a collective effort to provide and analyze recordings of everything people see and do on their screens. In brief, research software runs in the background of participants’ phones and takes screenshots of all the content that appears on the phone screen every 5 seconds. Intensive, longitudinal streams of screenshots collected over extended periods capture the breadth of life experiences reflected on our screens, unlocking novel sources of data that help facilitate new lines of research about individuals’ unique experiences on digital devices. In this post, I’ll explain how the Human Screenome approach provides a new way to understand a person's digital experience as well as key opportunities for social scientists, users, and policymakers. A better understanding of digital media use can empower individual users to take charge of their technology use in ways that fit their needs and provide insight about the types of content people are actually being exposed to, ensuring more informed, accurate, and relevant research and policymaking. 

The screenomes: Example, data collection, and data security 

The Human Screenome project uses interdisciplinary science to produce a comprehensive record of a person’s digital experience by tracking every part of their smartphone engagement in real time. Just as genomics seeks to understand the structure, function, and expression of a person’s unique traits, individuals’ digital experience is characterized through the sequence of screenshot images (“screenomes”) and subsequent analyses of psychological, cognitive, and behavioral dimensions extracted from the digital record.

Example

Below is a sample video of one person’s smartphone use collected by the Human Screenome method. This video of screen images represents 15 minutes of one person’s use over approximately two hours of one day (shared by permission of the user). It demonstrates that digital content is radically diverse and fragmented, with different content threaded into sequences where the average task on personal screens lasts only seconds. For instance, the user switches from messaging to a music player to phone settings and then back to messaging within 35 seconds.

Note. The color bar indicates different phone application types. The “fuzz” on the outside of the color bar indicates the number of words on each screenshot as a proportion of the maximum number of words (Video produced by Sarah Chey; Visualization produced by Nilam Ram, Stanford Screenomics Lab, https://screenomics.stanford.edu/)

Data collection

The basic procedure of smartphone screenome collection is as follows. When volunteers indicate interest in and agree to the privacy and security protocols of our study, they are asked to download the research application from the Google Play Store on their phone. Participants then simply use their devices as usual while the application runs in the background. The research application appears only as a small icon on the screen in order to minimize interruption of participants' phone use. Hence, screenome collection is unobtrusive and occurs in a natural setting. To accommodate the many varieties of smartphone hardware and software, researchers have developed research software compatible with a wide range of Android devices, and an extensive number of smartphone screenomes have been gathered from a wide range of demographics, including a nationally representative household sample, adolescents, and lower-income individuals in the US and globally, including in China and Myanmar.

Data security

Screenomes have been collected and processed in line with rigorous data security and privacy protocols, including encryption and storage on secure research servers. Procedures are vetted and approved by the University. Access to raw screenshot images is strictly restricted to a limited number of university-based researchers who have completed research ethics education and are certified to conduct human subjects research. Researchers monitor the data for data loss prevention. 

How social scientists can use the screenome 

Screenomes allow for detailed analyses of individuals' unique usage of their smartphones in various contexts throughout their days and nights. Here I'll touch upon a few selected features, using an example of smartphone screenomes collected from two adolescents — from Reeves, Robinson, & Ram, 2020.  

Sessions

A screenome analysis helps us understand moment-by-moment changes in the temporal patterns of one's digital experience across the day. The figure below shows whether the smartphone screen was on during each five-second interval, and the colors of the bars represent the different types of applications engaged during each interval. The panels highlight how smartphone use varies substantially between the two people, and between days and hours within each person. When a session is defined as the interval between the screen lighting up and going dark again, Participant A (top) had more and shorter sessions, each lasting 1.19 minutes on average. Participant B (bottom) had fewer and longer sessions, each lasting 2.54 minutes on average, and tended to engage with his smartphone more frequently in the afternoon and evening than Participant A.
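The session logic described above can be sketched in a few lines. This is an illustrative reconstruction, not the project's actual analysis code: it assumes the data arrive as one boolean flag per five-second interval (screen on or off) and derives sessions as maximal runs of screen-on intervals.

```python
# Illustrative sketch: deriving "sessions" from screen-on indicators.
# Assumes one boolean per 5-second interval: True = screen on.
INTERVAL_SECONDS = 5

def session_lengths(screen_on):
    """Return the length (in minutes) of each session, where a session
    is a maximal run of consecutive screen-on intervals."""
    sessions, run = [], 0
    for on in screen_on:
        if on:
            run += 1
        elif run:
            sessions.append(run * INTERVAL_SECONDS / 60)
            run = 0
    if run:  # close a session that runs to the end of the record
        sessions.append(run * INTERVAL_SECONDS / 60)
    return sessions

# Toy example: two sessions, one of 15 seconds and one of 10 seconds
lengths = session_lengths([True, True, True, False, False, True, True])
avg_minutes = sum(lengths) / len(lengths)
```

With real screenome streams, the same per-session lengths would feed directly into the kinds of averages reported above (e.g., 1.19 vs. 2.54 minutes per session).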

App use

While taking screenshots every five seconds, the research software notes when users switch to a new application. These application logs shed light on how individuals navigate different platforms and engage with different content (e.g., entertainment, news stories, conversations). For instance, when we zoom in on a two-hour period from 10 pm (left) to midnight (right), Participant A quickly switched between different types of applications in the first 15 minutes and then mostly engaged with social media (orange; mostly Snapchat and Instagram). For Participant B, there were extended periods watching YouTube (purple), followed by quick switching with substantial creation of content in the last 30 minutes.  
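A minimal sketch of how such application logs can be summarized, again as an assumption-laden illustration rather than the project's software: suppose each five-second screenshot interval is labeled with the foreground app, so that counting label changes gives switches and counting labels gives time per app.

```python
# Hypothetical sketch of summarizing app logs. The field names and the
# per-interval labeling scheme are illustrative assumptions.
from collections import Counter
from itertools import groupby

INTERVAL_SECONDS = 5

def summarize(app_sequence):
    """Count app switches and total seconds spent per app, given one
    foreground-app label per 5-second screenshot interval."""
    seconds_per_app = Counter()
    for app in app_sequence:
        seconds_per_app[app] += INTERVAL_SECONDS
    # A "switch" occurs whenever the foreground app changes between
    # consecutive intervals; groupby collapses consecutive repeats.
    runs = [app for app, _ in groupby(app_sequence)]
    switches = len(runs) - 1
    return switches, seconds_per_app

switches, usage = summarize(
    ["Messages", "Messages", "Spotify", "Settings", "Messages"]
)
# Three switches; 15 seconds total on "Messages"
```

Aggregating such counts over two-hour windows is what makes patterns like Participant A's rapid switching versus Participant B's extended YouTube sessions visible.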

Text

Text extracted from the screenomes can also help researchers explore the emotional and cognitive dimensions underlying language use. Through a customized OCR module, screenshot images are first converted from RGB to grayscale, parsed into binary counterparts (i.e., black and white), and categorized into blocks of text and images. The resulting collection of text can be analyzed further via natural language processing methods. 
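The first two preprocessing steps named above (RGB to grayscale, then binarization) can be sketched as below. This is a simplified stand-in, not the project's customized OCR module: it uses the standard ITU-R BT.601 luminance weights and a fixed threshold, and omits the block-segmentation and text-recognition stages entirely.

```python
# Simplified illustration of OCR preprocessing: convert RGB pixels to
# grayscale, then binarize into black (0) / white (1). The project's
# actual module and its block segmentation are not reproduced here.

def to_grayscale(pixel):
    """Luminance of an (R, G, B) pixel via the ITU-R BT.601 weights."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def binarize(image, threshold=128):
    """Map an image (rows of RGB tuples) to a binary black/white grid."""
    return [
        [1 if to_grayscale(p) >= threshold else 0 for p in row]
        for row in image
    ]

# A 2x2 toy image: white, black, dark red, near-black
bw = binarize([[(255, 255, 255), (0, 0, 0)],
               [(200, 30, 30), (10, 10, 10)]])
```

Once text regions are isolated in the binarized image, an OCR engine can extract the words, which then flow into the natural language processing methods mentioned above.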

Opportunities for the screenome  

Users

The screenome’s resolution and comprehensiveness can help users get a better sense of their smartphone use and balance their use to enhance their own sense of digital well-being. One of our studies indicates that drug- and disease-related signals reflected in the screenomes relate to diabetes. This example illustrates that screenome metrics can enable better tracking of health conditions, ultimately promoting healthier lifestyles. Fine-grained analysis of the emotionality reflected in each screen also contributes to better understanding users’ emotional states associated with content and to promoting psychological well-being. Inspired by these findings, a group of doctoral affiliates at the Human Screenome project at Stanford University have been working on a user-friendly dashboard based on screenome analysis, with the hope of empowering users with a more accurate depiction of their smartphone use to help them better balance their lives; they were awarded a 2019-2020 Magic Grant by the Brown Institute for Media Innovation.

Policy benefits

While the massive amounts of information that can be extracted from mobile devices raise policy concerns such as privacy and surveillance, when used for research and in a user’s interest, Screenomics may help policymakers protect citizens online more effectively. For example, policymakers can gain insight into how consumers are being subjected to content promoting false claims or scams across different channels (e.g., text, robocalls, email, and social media ads). This can help inform more effective policies to stop unfair and deceptive practices online and protect vulnerable populations.

In this post, I outlined exciting opportunities Screenomics analysis can offer — from pushing the frontiers of knowledge to providing insights for users and policymakers.

About

Jihye Lee is a doctoral candidate in the Department of Communication at Stanford University and a member of the Screenomics Lab. Her PhD dissertation explores questions of inequality in the domain of technology, focusing on how disadvantaged populations navigate the digital landscape, based on Screenomics analysis. This blog post is based on the presentation titled “Screenomics: A new approach to understanding information inequality in the digital space” (by Jihye Lee, James T. Hamilton, Nilam Ram, Thomas Robinson, and Byron Reeves) at the 2020 Conference on Computational Sociology.
