Facebook’s global content moderation fails to account for regional sensibilities

Over the past year, Facebook’s biggest challenge has been moderating billions of posts every day in more than a hundred languages. It has proven almost impossible to strike a balance between “hate speech” and “free speech”: social media is global, but our perception of free speech is shaped by region.

During the TechChill conference in Riga last week, David Ryan Polgar, a tech ethicist and founder of All Tech Is Human, explained the challenges platforms like Facebook and Twitter face when moderating what they believe to be “hate speech.” For hundreds of years, it fell to governments to make decisions that benefit their culture and country. But in a corporate structure, companies like Facebook and Twitter make decisions that benefit the bottom line.

During his talk, “An ethical dilemma: The difficult tradeoffs with fighting hate speech and misinformation online,” Polgar said: “They [social media companies] don’t even want this power. If they could give away this power back to government, they would.” He added: “But social media is unfortunately doing a lousy job at solving some of these issues.”

When platforms ban speech deemed inappropriate, they are accused of biased censorship; when they allow “harmful” content to stay online, the other side of the debate accuses them of negligence. And what is perceived as hateful or inappropriate in one country might not be in another.

Facebook’s history of moderating “hate speech”

Facebook has been attacked from all corners over how it moderates “hate speech” on its platform. Its community guidelines currently state: “We do not allow hate speech on Facebook because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence.” The company defines hate speech as a “direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.”

Yet Facebook is fighting a losing battle on every front: conservatives accuse it of liberal bias, liberals of allowing white nationalist ideas online, governments of enabling fake news and the spread of disinformation, and human rights activists of permitting hate speech targeted at a race, ethnicity, or gender.

Facebook Jailed, an activist group working towards “exposing Facebook’s white male bias,” argues the tech giant applies double standards when it comes to sexist censorship on its platform. The group found that users posting threats of lynching, personal attacks on women, and posts stating “women are scum” had all escaped enforcement under Facebook’s community standards. If a user posts “men are trash,” however, it is treated as “hate speech” and punished with a 30-day ban.

In January, however, Facebook updated its policy to state that moderators should take into account a user’s recent romantic upheaval when reviewing posts that express hatred towards a gender. As The Verge noted: “‘I hate all men’ has always violated the policy. But ‘I just broke up with my boyfriend, and I hate all men’ no longer does.”

Westernizing the meaning of “hate speech”

Since each country consolidates its own culture, values, and beliefs, it seems impossible for Facebook, a platform with over 2 billion users, to scale a single global standard. And if social media companies claim to provide a “marketplace of ideas,” that marketplace must also include opposing arguments on any topic; otherwise it becomes a space for speech everyone already agrees with.

According to Polgar: “There’s immense pressure on Facebook to do ‘the right thing,’ but what is the right thing?” The definition of “hate speech” remains debatable, and the argument continues over whether Facebook should create a universal standard for what the term actually means. If it did, Polgar said: “Obviously, this would have a very heavy imprint of American ideals and this may not be something every country wants.” He added: “There are major distinctions in cultural differences.”

Up until now, Polgar believes, we have “relied on the truth to naturally rise to the top.” That clearly has not worked online: disinformation floods our feeds every day, misogynistic views are easy to find, and racial hatred is just a click away.

Since social media companies now hold much of the power to decide what counts as “hate speech,” Polgar pointed to a mismatch in incentives: “The government has the incentive for truth and has the incentive for civil discourse. Unfortunately, corporations don’t always have that same incentive.” He added, “Often at times, what is engaging content online is often something that is bad for society, like conspiracy theories.”

Multiple studies have shown that people read and spread fake news more than real news, with significant societal consequences. Yet we have handed this power to companies that, by their own account, never wanted to control what is posted online. Last year, Twitter CEO Jack Dorsey told Wired: “When we started the company, we weren’t thinking about this at all.”

What is said online will inevitably have consequences offline, but perhaps the responsibility to solve the issue shouldn’t fall on one person or a single corporation. Polgar believes: “Everybody is responsible. Dealing with misinformation online takes participation from all actors, it’s a social responsibility.”
