YouTube says it took down a record number of videos in the second quarter of this year due to an increased use of AI in its content review efforts.
In total, 10.85 million of the 11.4 million videos removed from the platform between April and June were flagged by automated systems, according to YouTube’s latest Community Guidelines Enforcement Report.
AI played an even bigger role in the removal of user comments. Of the 2.1 billion comments taken down, 99.2% were detected by automated systems.
In a blog post, YouTube said it was forced to rely more heavily on AI due to the impact of COVID-19.
The company normally uses a combination of human reviewers and machine learning to remove harmful content. But after telling staff they could work from home during the pandemic, the streaming giant says it faced a choice between two imperfect options: rely on its reduced workforce of human reviewers or ramp up its use of AI.
The former would maintain the accuracy of removals but risk more harmful content being viewed, while the latter would increase the speed of removals but lead to more legitimate videos being taken down.
“Because responsibility is our top priority, we chose the latter—using technology to help with some of the work normally done by reviewers,” the company said.
As a result, more than twice as many videos were removed last quarter as in the previous three months. Over a third (33.5%) were taken down due to child safety risks, while 28.3% were flagged as spam, misleading, or scams. In addition, almost 2 million channels were removed, 92% of which were flagged as spam, misleading, or scams.
Has YouTube struck the right balance?
Not every type of video was given the same treatment. For particularly sensitive content, such as violent extremism and child safety, YouTube relied more heavily on AI. This led the company to remove three times as much content flagged as violent extremism or potentially harmful to children.
But sacrificing accuracy for safety meant removing more content that didn’t actually violate YouTube’s policies. The company says it has taken several steps to limit the disruption.
YouTube previously used a “three strikes and you’re out” policy for videos that violated its guidelines. But after expanding the role of automation in enforcement, the company stopped issuing strikes for content removed without human review, unless it was highly confident that the video broke the rules.
In addition, YouTube made more staff available to review appeals. The company says both the number of appeals and the number of reinstatements doubled compared with the previous quarter.
YouTube claims it’s only “temporarily relying more on technology.” But with staff now set to work from home until at least next July, it looks like AI will keep playing a big role in enforcement for some time yet.