This article was published on October 15, 2019

Mozilla unveils 28 horror stories about YouTube’s recommendation algorithm

Mozilla hopes YouTube will change its ways



Mozilla just launched a site featuring 28 user-submitted stories that detail incidents where YouTube’s recommendation algorithm served up bizarre and horrifying videos the users had shown no interest in. The recommendations included racism, conspiracy theories, and violence.

YouTube’s recommendation algorithm has faced a lot of scrutiny this year over radicalization, pedophilia, and its generally “toxic” output. That matters because 70 percent of the platform’s viewing time comes from recommendations. It’s why Mozilla launched the #YouTubeRegrets project: to highlight the issue and urge YouTube to change its practices.

Credit: Mozilla
The stories gathered by Mozilla show how YouTube’s recommendations can go wrong.

The stories about the darker side of YouTube’s recommendations are chilling, and they put the spotlight on whether the feature’s purpose justifies the harm it can cause.

“The stories show the algorithm values engagement over all else — it serves up content that keeps people watching, whether or not that content is harmful,” Ashley Boyd, Mozilla’s VP of Advocacy, told TNW.

Gore, violence, and hate

Many of the stories describe the effects of recommendations on more vulnerable groups such as children:

When my son was preschool age, he liked to watch “Thomas the Tank Engine” videos on YouTube. One time when I checked on him, he was watching a video compilation that contained graphic depictions of train wrecks.

Users can’t turn recommendations off, so children can be fed problematic content without having the means to steer clear of it. But that doesn’t mean adults are unaffected:

I started by watching a boxing match, then street boxing matches, and then I saw videos of street fights, then accidents and urban violence… I ended up with a horrible vision of the world and feeling bad, without really wanting to.

Often the recommendations go completely against the viewer’s interests in harmful and upsetting ways:

I used to occasionally watch a drag queen who did a lot of positive affirmation/confidence building videos and vlogs. Otherwise, I watched very little that wasn’t mainstream music. But my recommendations and the sidebar were full of anti-LGBT and similar hateful content. It got to the point where I stopped watching their content and still regretted it, as the recommendations followed me for ages after.

Conspiracy theory videos also come up in the stories, as they’re frequently recommended: they misinform students, dupe elderly viewers, and feed the paranoia of people with mental health problems.

Mozilla acknowledges that the stories are anecdotal rather than cold, hard data, but says they highlight the bigger issue at hand.

“We believe these stories accurately represent the broad problem with YouTube’s algorithm: recommendations that can aggressively push bizarre or dangerous content,” Boyd explains. “The fact that we can’t study these stories more in-depth — there’s no access to the proper data — reinforces that the algorithm is opaque and beyond scrutiny.”

And therein lies the issue. YouTube has dismissed the methodologies employed by critics of its recommendation algorithm, but it doesn’t explain why those methods are inaccurate.

Mozilla points out that YouTube hasn’t even provided data for researchers to verify the company’s own claim that it has reduced recommendations of “borderline content and harmful misinformation” by 50 percent. So there’s no way to know whether YouTube has actually made any progress.

Solution?

Judging by these personal stories and recent news reports, it does seem something needs to happen — and fast. Earlier this year, Guillaume Chaslot, a former Google employee, told TNW the “best short-term solution is to simply delete the recommendation function.” 

While that particular solution might not be realistic, Mozilla presented YouTube with three concrete steps in late September that the company could take to improve its service:

  • Provide independent researchers with access to meaningful data, including impression data (e.g. number of times a video is recommended, number of views as a result of a recommendation), engagement data (e.g. number of shares), and text data (e.g. creator name, video description, transcription and other text extracted from the video)
  • Build simulation tools for researchers, which allow them to mimic user pathways through the recommendation algorithm
  • Empower, rather than restrict, researchers by changing its existing API rate limit and providing researchers with access to a historical archive of videos

Boyd says YouTube’s representatives acknowledged that they have a problem with their recommendation algorithm and are working to fix it. “But, we don’t think this is a problem that can be solved in-house. It’s too serious and too complex. YouTube must empower independent researchers to help solve this problem,” says Boyd.

You can read all the stories on Mozilla’s website. And if you’re looking to get rid of some algorithms in your life, try an extension called Nudge, which removes addictive online features like Facebook’s News Feed and YouTube’s recommendations.

Update: A YouTube spokesperson responded to Mozilla’s initiative, saying the company cannot verify the stories because it doesn’t have access to the data in question:

While we welcome more research on this front, we have not seen the videos, screenshots or data in question and can’t properly review Mozilla’s claims. Generally, we’ve designed our systems to help ensure that content from more authoritative sources is surfaced prominently in recommendations. We’ve also introduced over 30 changes to recommendations since the beginning of the year, resulting in a 50 percent drop in watchtime of borderline content and harmful misinformation coming from recommendations in the U.S.

YouTube also points out that only a tiny fraction of the content on the platform is harmful, and that its Community Guidelines clearly prohibit violent, graphic, and hateful content. The company says it has also taken steps to improve how it connects users to content, including how it suggests videos in search results and through recommendations.

 
