Ivan covers Big Tech, India, policy, AI, security, platforms, and apps for TNW. That's one heck of a mixed bag. He likes to say "Bleh."
Earlier today, Facebook removed the account of T. Raja Singh, a leader of India’s ruling Bharatiya Janata Party (BJP), for hate speech against Muslim minorities.
This step came after the Wall Street Journal (WSJ) reported that the company’s policy executive, Ankhi Das, stopped moderators from removing hateful posts and accounts belonging to BJP politicians. Further reporting outlined how Das and Facebook have sided with the Indian government over the years.
Earlier this week, the company’s executives also appeared before a parliamentary committee, where members from both sides of the aisle questioned them about hate speech and misuse prevention on the platform.
It took constant media reporting, internal protests, letters from ministers, and a parliamentary hearing for Facebook to ban a politician who violated its hate speech rules.
After multiple stories about how Facebook executives in India courted politicians for years, the company can’t even pretend not to understand the local context.
Does Facebook know better? For sure. Does it want to be proactive and take action? Its actions suggest that’s not the case.
The biggest social network in the world has done it time and time again: taken no action against hate speech despite numerous user reports.
There’s a well-defined cycle. Wait for media reports to emerge. Issue statements of apology. Take some minor rectifying actions. Rinse. Repeat.
From Cambridge Analytica to its problems with violence-inciting posts in Myanmar, Facebook has repeatedly opted to address conflicts only after outrage ensues. And more often than not the higher management at the company has refused to take a definitive step during these crises.
Maria Ressa, a journalist from the Philippines, contacted Facebook about fake news, harassment, and abuse in 2016, and was ignored for years despite having met top company executives, including Zuckerberg. The list goes on.
In March, a WSJ report suggested that Facebook knew its algorithm was spreading extremist content, but did next to nothing to stop the polarization.
This year, while Twitter has proactively flagged misleading and hateful posts by Donald Trump, Facebook kept some of them up and took a long time to remove a few others. And despite employee outrage, Zuckerberg and co. maintained that stance, citing the age-old reason of free speech.