

Algorithm researcher says ‘lying on YouTube is still a profitable business’


TNW Answers is a live Q&A platform where we invite interesting people in tech who are much smarter than us to answer questions from TNW readers and editors for an hour. 

YouTube, which has more than a billion users watching over a billion hours of content per day, shows only limited data on the videos uploaded to the site, such as each video's number of views, likes, and dislikes. But the video-streaming site hides more in-depth stats about each video, like how often it recommends that video to other people.

Guillaume Chaslot is working to change this. A computer programmer who previously worked on YouTube's recommendation system, Chaslot is now the founder of AlgoTransparency, a project fighting to bring more transparency to how people find videos on the platform.


Earlier this week, Chaslot hosted a TNW Answers session where he discussed the importance of comparing algorithms, YouTube's responsibility for the videos it recommends, and how to limit the spread of conspiracy theories about the coronavirus.

YouTube’s recommended videos appear in the “Up next” list on the right of the screen and they’ll also play automatically when you’ve got autoplay enabled. According to Chaslot, these are the videos you should be wary of.

Last year, Chaslot told TNW: “It isn’t inherently awful that YouTube uses AI to recommend videos for you, because if the AI is well tuned it can help you get what you want. This would be amazing. But the problem is that the AI isn’t built to help you get what you want — it’s built to get you addicted to YouTube. Recommendations were designed to waste your time.”

It doesn’t take many clicks and searches to find yourself in a YouTube rabbit hole with a sense of being ‘algorithmically guided and manipulated.’ “On YouTube, you have this feeling of ‘zoom in’ into a particular topic,” Chaslot said. “But you could have algorithms that ‘zoom out’ and make you discover new things and it’s pretty easy to implement — I did it at Google. It’s just that those algorithms are not as efficient with watch time.”
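
To make that contrast concrete, here's a minimal sketch of what "zoom in" versus "zoom out" ranking could look like. This is not YouTube's or Chaslot's actual implementation; the embeddings, the engagement scores, and the diversity weight are all invented for illustration (Python, using numpy):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: topic embeddings for 100 candidate videos, plus the
# centroid of a user's recent watch history in the same embedding space.
candidates = rng.normal(size=(100, 8))
history = rng.normal(size=8)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

similarity = np.array([cosine(v, history) for v in candidates])
engagement = rng.uniform(size=100)  # stand-in for a predicted engagement score

# "Zoom in": reward similarity to what the user already watches.
zoom_in = np.argsort(-(engagement + 1.0 * similarity))[:5]

# "Zoom out": penalize similarity so topically distant videos surface.
# The 0.5 weight is a made-up knob, not a known YouTube parameter.
zoom_out = np.argsort(-(engagement - 0.5 * similarity))[:5]

print("zoom in recommends:", zoom_in)
print("zoom out recommends:", zoom_out)

The only real difference is the sign on the similarity term, which is why Chaslot can call it "pretty easy to implement"; the catch, as he notes, is that the zoom-out variant tends to lose watch time.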

YouTube's business model relies heavily on ads and 'watch time' to generate revenue; it's as simple as that. Chaslot argued that YouTube doesn't prioritize users' interests: "[YouTube] tries to understand what's best for the advertisers and pretend that it's also best for the users." Asking users what they really want from the platform would improve the user experience, says Chaslot.
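
As a toy illustration of that incentive gap, consider ranking the same three candidate videos by predicted watch time versus by how well they match what a user says they want. Every title and number below is invented for the example, not real YouTube data:

# The same candidates, ranked under two different objectives.
videos = [
    {"title": "measured explainer",   "pred_watch_min": 4.0,  "stated_interest": 0.9},
    {"title": "outrage compilation",  "pred_watch_min": 12.0, "stated_interest": 0.2},
    {"title": "conspiracy deep-dive", "pred_watch_min": 15.0, "stated_interest": 0.1},
]

by_watch_time = sorted(videos, key=lambda v: v["pred_watch_min"], reverse=True)
by_stated_interest = sorted(videos, key=lambda v: v["stated_interest"], reverse=True)

print([v["title"] for v in by_watch_time])       # the watch-time objective surfaces the sensational video first
print([v["title"] for v in by_stated_interest])  # what the user actually asked for comes first

The ordering flips entirely depending on which objective the platform optimizes, which is the gap between advertiser interests and user interests that Chaslot describes.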

Recommended radicalization, misinformation, and problematic content 

Chaslot argued that exposing the incentives behind YouTube's algorithm (watch time doesn't equal quality) makes its negative effect on our society clear.

If YouTube were filled with only funny cat videos, how it generates its "Up next" list wouldn't be a cause for concern. But as people increasingly rely on YouTube for news and information, Chaslot worries recommendations will edge them further toward extremes, whether they seek that content out or not.

This worry is also applicable to platforms like TikTok. “The problem is not user generated content: Wikipedia is using it. The problem is when algorithms decide who gets amplified, and who doesn’t,” Chaslot said. “TikTok has many potential issues, especially with censorship. Think about this: our kids are slowly learning that they shouldn’t criticize the Chinese government. Not because they get threatened, but because when they do, their posts don’t get traction. Meanwhile the Chinese government is pushing the narrative that the coronavirus comes from the US.”

Platform or publisher?

Facebook, Twitter, and YouTube have long had a simple answer for anyone who disapproved of what their users were up to: they're platforms, not publishers. They claim to be merely tools for free expression, not publishers who take responsibility for the content they distribute.

“The legislation that says that YouTube is a platform is called CDA 230 and was voted in 1996,” Chaslot said. “At that time, AI didn’t exist. Recommendations didn’t exist at the time. Nowadays, YouTube recommends some videos billions of times and takes no responsibility for it — that’s a loophole. If you promote someone 1 billion times, you should be responsible as a publisher.”

Last year, YouTube announced it was taking a 'tougher stance' on videos with supremacist content, which included limiting their recommendations and restricting features like comments and the ability to share the video. According to the platform, this step reduced views of these videos by an average of 80%.

“Thanks to these changes, there was little fake news about the coronavirus in English, which is a radical change. A few years ago, the YouTube algorithm was blasting anti-vax conspiracy by the hundreds of millions. But there are still many issues,” Chaslot said. “In France, which is in full confinement, one guy who said the virus was ‘man made’ got nearly a million views in a few days. He was probably promoted millions of times by the algorithm. This guy doubled the total number of views of his channel, and quadrupled his followers or something in that order. So lying on YouTube is still a profitable business.”

Currently, Chaslot believes platforms like Facebook, YouTube, and Google harness AI technology to exploit our human weaknesses. However, he's more optimistic about the future: "AIs will help people achieve their full potential."

You can read Guillaume Chaslot’s full TNW Answers session here.
