After removing thousands of videos featuring slain extremist cleric Anwar al-Awlaki from its platform earlier this week, YouTube is expanding its takedown policy to cover even more content that includes “people and groups that have been designated as terrorist by the U.S. or British governments,” reports Reuters.
The change will affect videos that don’t feature acts of violence or hateful speech, both of which are already barred. The move signals further efforts by the company, which has come under pressure from various governments, to prevent the spread of extremist propaganda that could radicalize viewers.
That’s a shift from the changes Google outlined in June, when it noted that videos containing inflammatory religious or supremacist content would appear behind an interstitial warning, wouldn’t be recommended to viewers, and wouldn’t be eligible for monetization through ads.
While Juniper Downs, YouTube’s global director of public policy, noted this week that such measures could make it more difficult for certain videos to gain an audience, I believe there’s greater merit in creating a space that’s free of extremist content and safer for people across the world to use.
As for the task of triaging the massive streams of video uploaded to YouTube every minute to flag questionable content: the company is already using AI to sniff this stuff out, and such systems will likely improve over time, further automating the process.
Arguably, the most difficult part of cleaning up YouTube is deciding what should go, as automated removal runs the risk of erasing content from people documenting these conflicts. The Intercept noted earlier this month that YouTube’s AI took down videos and channels run by well-known organizations dedicated to covering the civil war in Syria and other conflict zones.
Tackling this issue won’t be easy, but it’s a challenge that YouTube will need to step up to if it wants to continue being the go-to source for video around the world. A nuanced approach with human intervention, backed by sophisticated AI, could certainly help things along.
It’s good to see YouTube take a harder stance against those who promote violence and hate, even if it means the platform must exercise more editorial control over the content published there.