
This article was published on January 23, 2018

YouTube’s ‘Intelligence Desk’ could screw legitimate creators

Last week, Buzzfeed broke the news that YouTube was building an “intelligence desk” to help head off potential disasters early. It’s a good idea, but it might cause more problems for YouTube’s content creators than it solves for the site itself.

The “multi-pronged” early warning system would “bring together Google data, social media trends and third party expertise” to find problematic videos soon after they’re uploaded. Such videos would either be removed or have their ads pulled.

It makes sense in the wake of the now-infamous Logan Paul Suicide Forest video, as one of the main criticisms of YouTube during that fiasco was that it didn’t respond quickly enough — indeed, the video trended for a while before it was removed. YouTube eventually punished Paul by removing him from the projects he’d been working on for the company, but that struck many (myself included) as too little, too late. If more people are involved in reviewing content, as the story suggests, it could prevent repeats of previous catastrophes.

Containment, rather than clean-up, is always preferable — YouTube took way, way too long to take down the glut of creepy videos involving children — but it’s also a double-edged sword for YouTubers. Unless there are a number of humans double-checking everything, this intelligence desk sounds like it could flag just as many innocent videos as genuine troublemakers, as YouTube has proven on more than one occasion that it’s not the best at protecting its users.

YouTube has always had a thorny history with automated systems, like the Content ID system, which is infamous for flagging videos in a scattershot manner. It’s been known to let people claim ad revenue on others’ videos. Most recently, it allowed someone to claim a copyright on white noise.

YouTube’s system for demonetizing content has also come under fire recently. Content creators have found their videos demonetized for arbitrary, even unknown, reasons, and the company’s reasoning has been demonstrably inconsistent. If YouTube uses any kind of automated, non-human intelligence to track down problematic videos, I’d bet money that a few innocent videos will get caught in the mix. We’ve contacted Google to find out what safeguards, if any, are in place to prevent this from happening.

So far, the company’s other measure for containing the situation has been to raise the bar for which video makers can run ads on their videos. Whereas channels previously needed 10,000 total views to join Google’s tier of advertiser-friendly videos, they now need 1,000 subscribers and 4,000 hours of watch time. The change is already hurting those who run smaller channels and can no longer collect even modest revenue from them.

It’s not a surprise YouTube is trying an early warning system to appease its advertisers. The Logan Paul incident was widely publicized — failing to remove a video with that many red flags isn’t a good look. But YouTube’s previous measures haven’t proven to be best for the community, and the company is already hurting its own creators with its advertising changes. Its attempt to head off the next disaster before it happens could cause as many problems for content creators as it solves for YouTube and its advertisers.
