Google recently stepped up its efforts to reduce the amount of extremist content available on YouTube.
A couple of months back the company announced it would improve its ability to find and flag extremist videos using machine learning, and the results are in: AI has proven a dramatic upgrade over human reviewers when it comes to flagging terrorist content.
On its official blog, Google revealed a significant improvement over previous efforts: its systems now flag videos before any human reports them 75 percent of the time. The company also says the AI can review twice as much content as human moderators, a number expected to grow as the system is developed further.
YouTube is going beyond simply flagging and removing extremist videos, or redirecting searches for such content in order to debunk and delegitimize extreme viewpoints, as we previously reported.
According to yesterday's blog post, Google will also impose stricter guidelines on all videos, including content that doesn't actually violate any policy.
In the next few weeks, Google's AI will begin policing videos reported as hate speech, offensive, or violent in new ways. Even a video that doesn't violate any specific YouTube policy can be placed in a limited state. This restriction prevents the uploader from monetizing the video and blocks comments, likes, and search prioritization.