Religion isn’t the opiate of the masses — AI is

Overall leisure time has increased by over an hour per day in the past ten years. How are we spending these extra 365 hours per year? Not playing with our kids, working on passion projects, exercising, or any other wholesome activity you may have guessed — we’re spending it in front of screens.

In 2007, just 33 percent of leisure time was spent on screens; today, that number has increased to 47 percent — over three and a half hours per day. According to research from McKinsey, smartphone users interact with their devices an average of 85 times a day and almost half of all users (46 percent) report they could not live without their smartphone.

Why are we becoming increasingly addicted to our devices?

The answer lies in increasingly sophisticated algorithms and strategies that social media sites, video streaming platforms, and mobile games employ to increase engagement, dependence, and loyalty.  

How AI taps into human weakness

Numerous studies have demonstrated the deleterious effects of social media, screen time, and passive consumption of internet content. Even Facebook has examined this research and acknowledged that how users interact with the platform changes its effect on mood and overall anxiety levels.

A study from UC San Diego and Yale found that people who clicked on four times as many links as the average person reported worse mental health. And yet, our dependence on screen time is growing at alarming rates — and shows no signs of slowing down.

The reason is that AI-based applications have become increasingly adroit at exploiting human weakness. For instance, I recently wrote about how Netflix leverages predictive analytics to choose shows; oftentimes the streaming giant will invest in a series without even ordering a pilot. The algorithm is highly effective at selecting content, and at presenting it (the 12 seconds between episodes leave little time to exert self-control), in ways that promote binge-watching: 61 percent of users regularly watch two to six episodes of a show in one sitting.

Similarly, the Facebook News Feed shows content to users based on a variety of signals (including internet speed, the type of content the user typically engages with, and timeliness), with the goal of predicting which content each user will engage with most, thereby increasing overall engagement. Likewise, mobile games use a range of tactics to pull users back into the app, including push notifications sent at strategic moments (timed with predictive analytics), community chat features, and freemium models. And it works: the average American spends between 30 minutes and an hour a day playing mobile games, and $87 per year on so-called “free-to-play” games.
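The signal-weighted ranking described above can be sketched as a toy model. Everything here is hypothetical: the signal names, the weights, and the linear scoring function are illustrative stand-ins, not Facebook's actual (far more complex) ranking system.

```python
# Toy sketch of signal-weighted feed ranking. All signal names and
# weights are hypothetical; real ranking systems are far more complex.

def engagement_score(post, weights):
    """Combine per-post signals into one predicted-engagement score."""
    return sum(weights[signal] * value for signal, value in post.items())

def rank_feed(posts, weights):
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: engagement_score(p, weights), reverse=True)

# Hypothetical signals, each normalized to the range 0..1:
# affinity with the poster, match to content types the user usually
# engages with, and recency of the post.
weights = {"affinity": 0.5, "content_type_match": 0.3, "recency": 0.2}

posts = [
    {"affinity": 0.2, "content_type_match": 0.9, "recency": 0.8},
    {"affinity": 0.9, "content_type_match": 0.4, "recency": 0.3},
]

ranked = rank_feed(posts, weights)
# The second post wins despite being older and a weaker content match,
# because the affinity signal carries the largest weight.
```

The point of the sketch is that the objective being maximized is predicted engagement, not user wellbeing; every refinement to the weights makes the feed harder to put down.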

Even tools like Buffer and Hootsuite rely on AI to optimize content for increased user engagement. The reality is that almost every consumer-facing company out there uses AI to try to hook people on more screen time. As McKinsey noted, it’s commonplace for “engineers [to] combine data-driven behavioral insights with psychological techniques to nudge and persuade individuals to spend more time on their devices. Academics and industry insiders have detailed examples of persuasive in-software design.”

While this isn’t particularly sinister on the surface (and I’m certainly not advocating that companies stop using AI to increase engagement), it means we’re training algorithms to exploit human nature in ways that aren’t necessarily healthy. Maybe the threat of AI isn’t robot overlords turning the world into the Matrix, but a day when AI has us so effectively hooked on screens that overall human happiness declines.

The line between understanding and exploiting

AI is not summarily a force for good or evil; it’s simply a force. And as its prevalence expands, we need to examine that force carefully and protect ourselves against it when necessary. Whether it’s the spread of fake news, children turning into smart-speaker tyrants, addiction to a game, or binge-watching TV instead of reading a book or going on a run, AI can be harmful, whether or not we intend it to be when we employ it.

There’s no great answer to this problem. Obviously, Netflix, Facebook, mobile games, and virtually every content platform out there (including this one) will continue to leverage AI to get people to use their products. The burden of protection will fall either on the individual (though self-control becomes harder and harder as engagement mechanics improve) or on the burgeoning market of ‘unplug’ apps. Products like Offtime, an app that helps users disconnect by blocking Facebook and games, may become increasingly popular as people see their time eaten up by binge-worthy TV shows.

As consumer awareness grows alongside the use of AI, protecting against addiction will become a major point of focus and contention.
