Twitter today released its biannual transparency report, one section of which detailed requests it receives from government officials around the world to remove content that violates the site’s Terms of Service. Of particular note in the report is that, according to Twitter, it now receives 80 percent fewer TOS requests relating to the promotion of terrorism, apparently thanks to more rigorous internal checks.
The report covers the period from January 1 through June 30, 2017. In that time, Twitter received 338 reports on 1,200 accounts, down from the 716 reports it received in the previous reporting period.
That’s a major reduction, considering Twitter received reports on 5,929 accounts last year. But, just as with last year, those reports account for only a fraction of the total number of accounts suspended for promotion of terrorism.
According to the Twitter Public Policy blog, Twitter’s own internal systems detect potentially troubling accounts 95 percent of the time, and eliminate three-quarters of them before they make their first tweet. In total, Twitter says it removed 299,649 troublesome accounts from the site during the reporting period.
Twitter has expanded the section of the report detailing TOS requests, which covered only “promotion of terrorism” in the last transparency report. The new categories are “Abusive Content,” “Copyright,” and “Trademark.” Twitter says government TOS requests for promotion of terrorism account for only two percent of such requests. The vast majority of takedown requests were filed under “Abusive Content.” But in contrast to the terrorism category, where Twitter took action against 92 percent of reported accounts, it took action against only 12 percent of accounts reported for abusive content.
We’ve contacted Twitter for more information on the internal tools it has apparently used to such great effect in removing extremist accounts. We’ll update if we hear anything interesting.
Update 3:45 pm CST: A Twitter spokesperson told us, “We are reluctant to share details of how these tools work as we do not want to provide information that could be used to try to avoid detection. We can say that these tools enable us to take signals from accounts found to be in violation of our TOS and to work to continuously strengthen and refine the combinations of signals that can accurately surface accounts that may be similar.”