
The Problem Isn't Screen Time. It's the Scroll.

March 4, 2026

Part 2 of 3: Rethinking the Screen Time Panic

In the last post, I showed that teen mental health was improving through the 2000s while screen time was climbing fast. The crisis didn't start until 2010-2012, when a specific cluster of platform features (likes, algorithmic feeds, infinite scroll, public metrics) transformed the internet from a tool into an engagement trap.

That raises an obvious question. If "screen time" isn't the right variable, what is? And how worried should we actually be?

I went through the research. What I found is both more reassuring and more specific than the headlines suggest. And it applies to adults just as much as kids.

The biggest study you haven't heard of

In 2017, Andrew Przybylski at the Oxford Internet Institute published a study that should have changed the entire conversation. Using pre-registered methods (meaning he committed to his analysis plan before looking at the data, which prevents cherry-picking), he analyzed screen time and mental wellbeing data from 120,115 British 15-year-olds.

His key finding: the relationship between screen time and wellbeing isn't linear. Plot wellbeing against hours of use and you get an inverted U. People with zero screen time reported the same or worse wellbeing as people with excessive screen time. The sweet spot was in the middle.

The specific tipping points where screen time went from neutral-or-positive to potentially negative were surprisingly generous. On weekdays, video game play was fine up to about 1 hour 40 minutes. Watching videos was fine up to about 3 hours 41 minutes. On weekends, the thresholds were even higher.

He called this the "Goldilocks hypothesis." Not too much, not too little, but just right.

Below the tipping points, the relationship between screen time and wellbeing was either significantly positive or flat. Not negative. With one exception (weekend smartphone use), more screen time was associated with equal or better wellbeing up to the sweet spot, not worse.

A subsequent study using an independent Irish dataset replicated the core finding. Moderate digital technology use was not harmful and appeared to be beneficial compared to both extremes.
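The statistical method here is simpler than it sounds: fit a quadratic to wellbeing-versus-hours data and find where the curve peaks. Here's a toy sketch of that idea in Python. The numbers are invented for illustration, not drawn from the Oxford or Irish datasets, and `sweet_spot` is just my name for the fitted vertex:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated survey responses: wellbeing peaks at moderate use.
# The true peak is planted at 2.5 hours (toy value, not a study estimate).
hours = rng.uniform(0, 8, 5000)
wellbeing = -0.3 * (hours - 2.5) ** 2 + 5.0 + rng.normal(0, 1.0, 5000)

# Fit a quadratic, as the Goldilocks analysis does, and locate its vertex:
# for a*x^2 + b*x + c, the turning point sits at -b / (2a).
a, b, c = np.polyfit(hours, wellbeing, 2)
sweet_spot = -b / (2 * a)
print(f"estimated sweet spot: {sweet_spot:.1f} hours")
```

With enough respondents, the fit recovers the planted peak. The interesting part is that the curve rises before it falls: below the vertex, more screen time tracks with *better* reported wellbeing, which is exactly the pattern the tipping-point numbers above describe.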

The potatoes problem

In 2019, Przybylski and his colleague Amy Orben went further. They analyzed three large datasets covering over 350,000 adolescents and found that screen time accounted for at most 0.4% of the variation in wellbeing. To put that number in context, they compared it to other variables in the same datasets. Eating potatoes had a similar-sized negative association with wellbeing. Wearing corrective lenses had a larger one. Getting regular sleep and eating breakfast had positive associations that were many times stronger.

The headlines wrote themselves: "Screen time no worse than eating potatoes."

But that 0.4% number is hiding something important.

The number that fooled everyone

Jean Twenge, one of Haidt's primary collaborators, published a sharp response. She argued that Przybylski and Orben had made a critical error: they lumped all screen time together, including TV watching, which has been declining since 2012 and has weaker associations with mental health. When Twenge re-ran the analyses separating social media and internet use from TV and gaming, the picture changed. For girls specifically, the association between social media use and poor mental health was considerably larger than the combined average suggested.

Even Orben herself acknowledged this distinction in a 2020 review, concluding that findings for "screen time" as a broad category are too small and inconsistent to worry about, but findings for social media specifically show larger and more consistent effects, with correlations in the range of 0.10 to 0.15.

Now here's the part that neither camp has said clearly enough.

In the same Orben and Przybylski analysis, watching TV on weekends showed a median positive association with wellbeing. In the Goldilocks study, the slopes below the tipping points for games and TV were either significantly positive or flat. Not negative. Depression rates in Twenge's own data only start climbing after three or four hours a day for TV and gaming, but after just one hour for social media.

Think about what that means for the "0.4%" headline number. That figure is an average across all screen types. It blends the negative association of social media with the flat or positive association of games, TV, and messaging. The near-zero aggregate doesn't mean nothing is happening. It means two opposite things are happening at the same time and canceling each other out.

Social media is pulling wellbeing down. Non-algorithmic screen time is pulling it up (or at least holding steady). Average them together and you get "no worse than eating potatoes." But that average is meaningless. It's like saying the average temperature of a room with one hand in a fire and one hand in a freezer is "comfortable."
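To see how opposite-signed effects can wash out of an aggregate, here's a toy simulation. Everything in it is invented for demonstration (the coefficients, the hours distributions); it is not a reanalysis of any of the studies above. It builds a hypothetical population where algorithmic-feed time hurts wellbeing and other screen time helps by the same amount, then correlates each against wellbeing:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical daily hours (toy distributions, not survey data).
social = rng.gamma(2.0, 1.0, n)   # algorithmic feeds
other = rng.gamma(2.0, 1.0, n)    # games, shows, messaging
noise = rng.normal(0, 3.0, n)     # everything else that drives wellbeing

# In this toy model, the two effects are equal and opposite by construction.
wellbeing = -0.5 * social + 0.5 * other + noise

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print(f"social media vs wellbeing:     {corr(social, wellbeing):+.2f}")
print(f"other screens vs wellbeing:    {corr(other, wellbeing):+.2f}")
print(f"total screen time vs wellbeing: {corr(social + other, wellbeing):+.2f}")
```

The first two correlations come out clearly negative and clearly positive, while the correlation for *total* screen time lands near zero. A researcher who only measured the lumped total would conclude "no meaningful effect," even though the simulation contains two real, opposite effects by construction. That's the potatoes headline in miniature.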

Strip the algorithm out of the data and the remaining screen activities (games, TV, shows, direct messaging) don't just look neutral. They look mildly positive at moderate levels. The Goldilocks curve isn't flat at the bottom. It goes up before it comes back down. The "screen time" debate has been arguing over a blended number that obscures the actual finding: non-manipulative screen time may genuinely be good for you, and algorithmic social media is dragging the average into the mud.

Both camps in this debate agree on the underlying data, even if they haven't framed it this way:

Video games, TV, and direct messaging at moderate levels: no signal of harm, and possible benefit.

Social media with algorithmic feeds and public metrics: real and concerning negative signal, especially for girls.

The argument isn't really about screens. It's about what's on them.

What makes social media different?

This brings us back to the 2009-2012 timeline from the first post. The features that distinguish harmful platforms from benign ones are not about the screen. They are about the business model.

Algorithmic feeds decide what you see based on what keeps you engaged, not what you asked for. You open YouTube to watch one video. Thirty minutes later you're five recommendations deep into content you never chose. The algorithm chose it for you because it predicted you wouldn't close the tab.

Likes and public metrics turn every interaction into a scored performance. A text to a friend is a conversation. An Instagram post is a test with a public grade.

Infinite scroll removes the natural stopping point that every previous form of media had. A book has a last page. A TV episode ends. A game round finishes. Your feed never ends because ending means you leave, and your leaving is the one thing the platform cannot afford.

Engagement optimization means every element of the interface is tested and tuned to maximize the time you spend there. Not the value you get. Not your satisfaction. Your time. Because your time is what they sell to advertisers.

These aren't features for users. They are features for the business. And they don't just affect teenagers. If you've ever picked up your phone to check one thing and looked up 40 minutes later wondering where the time went, you've felt it too.

The displacement question

The screen time research consistently finds that the things with the biggest positive associations with wellbeing are sleep, physical activity, and face-to-face social connection. These effect sizes dwarf anything screen-related.

This points to what researchers call the "displacement hypothesis." Screens aren't harmful because of what they are. They're harmful when they displace something better. An hour of a video game after dinner doesn't displace much. Three hours of algorithmic scrolling at midnight displaces sleep, and sleep deprivation has well-documented effects on mood, anxiety, and cognitive function.

The critical insight: algorithmic platforms are specifically designed to maximize displacement. That's what engagement optimization is. A game ends. A show ends. An infinite feed does not end because it was designed not to. The algorithm's job is to make sure you don't do the other thing.

A platform without an algorithm lets you decide when you're done. A platform with one fights to make sure you never are.

Where this leaves us

The research doesn't support a blanket war on screens. It supports a targeted war on manipulation.

The aggregate "screen time" number is a mirage. It averages together things that are mildly good for you (games, shows, messaging) with things that are measurably bad for you (algorithmic social media), and produces a near-zero result that tells you nothing useful. It's the wrong number. The right question is whether the specific thing you're doing on the screen has an algorithm between you and the content.

The 2000s proved that people can have a rich digital life and be fine. Games, shows, messaging, creative tools, browsing with intent. All of it was associated with stable or improving wellbeing. The thing that went wrong wasn't the screen. It was the arrival of systems designed to capture attention and hold it indefinitely.

The practical question isn't "how many hours of screen time." It's "does this app let me decide when I'm done, or does it fight to keep me?"

In the final post, I'll talk about what that looks like in practice, and what we're building at Last Gen Labs to give people a real alternative.

Next: Strip the Algorithm, Keep the Joy

