Roughly two weeks after the terrorist attack on a mosque in Christchurch, New Zealand, Facebook has committed to clamping down on white nationalism and white separatism on its platform, as well as on Instagram. Finally.
The company has been trying for years to implement policies to tackle hate-fueled content from racial supremacy groups. But it took the mass murder of 50 innocent people – gunned down at a place of worship – to push the company to impose this ban.
It’s hard to understand why it took this long for a company operating the world’s largest social network to figure this out and take a stand. As a multi-billion dollar firm, it has the resources to engage the best minds in civil rights, race relations, and human psychology, and to get ahead of such issues. Instead, it chose to stand idly by while a terrorist tailored their atrocities and messaging to go viral on social media platforms.
This isn’t an insurmountable problem. We know that these firms can and do use AI to spot problematic content and weed it out of their sites. And as I’ve written before, it shouldn’t be too difficult to ramp up the deployment of such technologies, and to introduce other measures, to curb the spread of violating content. It comes down to how companies choose to run their businesses and make money.
I don’t mean to only go after Facebook. Others like Twitter are guilty of this too.
And today, Axios reports that Google is being called out for failing to remove an app (PDF, footnote on page 62) that Human Rights Campaign – an LGBTQ rights group – says “supports the practice of so-called ‘conversion therapy.’” Microsoft and Apple have already pulled this app from their app stores, so it’s puzzling why Google thinks it prudent to stick to its guns on this one.
As my colleague Mix noted, it’s concerning that companies have perfected the art of issuing hygienic statements for their failure to tackle these issues:
list of fav big tech excuses:
"we're making progress, but we *know* we've got more work to do"
"we make mistakes, but we always *make sure* to learn from those and evolve"
"we *know* we can do better"
"unfortunately, [x] will always be an issue and we'll have to deal with it"
— Mix (@Mixtatiq) March 28, 2019
Tech firms should be proactive about stamping out hate speech and other kinds of content that are easily identifiable as harmful to societies. By pretending there’s a gray area that’s hard to parse on such issues, they’re really just waiting for another tragedy to strike before issuing an apology.