Entering the anti-Semitic term “6mwe” into Pinterest’s search box won’t yield any results but will offer an admonishment: “Pinterest isn’t a place for hateful images, content or activities. Learn more about our policies.”
But a Google search restricted to Pinterest’s site for the term “6mwe hoodie” easily produces dozens of pins for content that clearly violates Pinterest’s restrictions on “hate-based conspiracy theories and misinformation, like Holocaust denial,” content the platform says it has limited or removed.
This backdoor to restricted content on Pinterest raises questions about the efficacy of the platform’s moderation strategy, which focuses on preventing people from finding certain content on the site instead of taking that content down. While that strategy has drawn wide praise, it has also drawn criticism for failing to consider the ways that offensive content can spill over into other platforms and persist long after being hidden.
The Markup found 64 pins featuring merchandise containing “6mwe,” a term that, according to the Anti-Defamation League, is shorthand for “6 million wasn’t enough,” a reference to the number of Jewish people killed during the Holocaust. During rallies attended by members of the right-wing group the Proud Boys, some members were spotted wearing clothing featuring an eagle symbol with the phrase “6mwe” below it. (See our data.)
The Markup was able to save and organize these pins onto a private board. None of Pinterest’s recommendation tools suggested content with these terms.
The Markup also found nine pins for throw pillows, backpacks, travel mugs, and face masks featuring the words “Camp Auschwitz” over a skull and the words “Work brings freedom.”
After The Markup shared our findings with Pinterest, the pins were removed. “There’s no place for content like this on Pinterest,” said Crystal Espinosa, a Pinterest spokesperson. Espinosa added, “When we identify or are made aware of content that violates our policies, we review and take action.” Asked why these pins were available on the platform, Espinosa explained that its moderation efforts aren’t perfect, and that “given the volume and complexity of content, there is always more work to do, and we also recognize that we have more opportunities to improve.”
“There is no nuance required to make a judgment call on whether or not a ‘Camp Auschwitz’ throw pillow crosses the line,” Jonathan A. Greenblatt, CEO of the Anti-Defamation League, told The Markup. “The fact that this hateful and offensive content is so easily accessible on Pinterest demonstrates how far tech companies still have to go to address even the most obvious examples of hate. If Pinterest wants to create a welcoming and safe space for all people, it must invest significantly more resources into its content moderation efforts going forward.”
The “Camp Auschwitz” pins all pointed to listings on Redbubble’s marketplace that had already been removed. Redbubble spokesperson Marissa Hermo told The Markup that its platform identified these items as violations, and that they were removed the same day they were uploaded. “We have a suspend-for-review mechanism in place for specific topics, so this content should not have been available for public view,” she said.
Pinterest has been criticized in the past for using its search system to try to restrict harmful content rather than focusing on removing such content outright.
Last year, an investigation by OneZero found that Pinterest’s decision to hide certain content rather than remove it outright allowed sexualized images of young girls, misinformation about the coronavirus and vaccines, and QAnon-related content to persist on the platform. The publication discovered that content through the same type of targeted Google search The Markup used. Pinterest has since said that QAnon content is prohibited on its platform, a policy the company says has been in place since 2018.
Google spokesperson Jane Park told The Markup that any website owner can remove content from Google’s search index by first removing the content from their own site, and then using the “Outdated Content” tool to have it removed from Google. “Our results reflect information available on the open web, and sites can choose if they want to have their pages indexed by Google,” said Park.
The Markup asked Pinterest why these results hadn’t been removed from Google’s index using such tools. “We have oversight of the content that appears on our platform and work with Google to expedite removal of links to content that has been removed from Pinterest. We’re always working to speed up this process so that policy-violating content is not persistent elsewhere,” said Pinterest’s Espinosa.
Before the coronavirus pandemic began, Pinterest moved aggressively to fight anti-vaccine misinformation on the platform by not returning results for related searches. But it appeared to leave the offending pins in place, allowing users to save and curate them on boards.
In December 2020, Newsweek reported on “6mwe” merchandise being sold on Amazon and other e-commerce sites. Amazon removed the products that Newsweek found.
Content spreads quickly across platforms, which can create challenges for moderation. In August 2020, a manipulated video of Nancy Pelosi, the second such viral video, appeared first on TikTok and soon spread to other platforms. Both YouTube and Twitter promptly removed the video, citing policies prohibiting manipulated media. A Facebook spokesperson told The Washington Post, however, that the video did not violate its policies, and the company left it up. Andy Stone, Facebook’s policy communications director, also told The Post that Facebook had clarified its policy language and that the video’s visibility would be reduced.
These artifacts of offensive content, which may originate on one site and persist on another, create a unique problem that the big platforms’ content moderation techniques don’t really address, said Kate Klonick, assistant professor of law at St. John’s University School of Law.
“There is some liminal space in which these ideas are fomenting,” Klonick said. And it’s unclear “how much responsibility each site has to clean off their material from other sites, and in the hallways between all of these companies, you’re having this stuff getting trapped and being empty, but nonetheless the exoskeleton remains.”
Eric Goldman, an associate dean of research and professor at Santa Clara University School of Law, argued that in leaving content accessible, Pinterest may be taking the correct approach in some cases. Platforms and services have a wide range of remedies to exercise before resorting to removal, Goldman said, and removal can have unforeseen, negative consequences.
“If you remove an item of content and there are anchors to that content somewhere else in the system, either comments are attached to it or inbound links, it breaks the conversation that might be taking place around that item of content,” Goldman told The Markup. Leaving offensive content up may be distressing, he said, but it can be important when seen in a historical context. “When items are removed, we lose a piece of our history that might be ugly, and that might be uncomfortable, but that also might be crucial to making sense of the world.”
But that seemingly dormant content, albeit somewhat harder to find, is far from innocuous.
Many of the anti-Semitic pins The Markup collected led to inactive or removed e-commerce listings. But of the 38 links the pins pointed to, nearly a third led to active e-commerce sites where the items could be purchased, from T-shirt vendors such as donefashion.com, funnysayingtshirts.com, myclothzoo.com, teesbuys.com, teeshirtxyz.com, and teejabs.com. We reached out to each of these sites for comment but received no responses.
Pinterest’s engineering blog regularly shares news about the company’s latest efforts to moderate content that violates its policies. Recently, the company’s trust and safety machine learning lead, Vishwakarma Singh, wrote, “Today our models detect and filter policy violations for adult content, hateful activities, medical misinformation, drugs, self-harm, and graphic violence. We plan to explicitly model other categories in our next model iteration.”
The moderation system, the blog says, uses machine learning and image recognition to check newly created pins and boards against models trained to identify policy violations.
In its latest biannual transparency report, Pinterest said that 10 percent of its more than 440 million monthly users saw policy-violating pins categorized as “hateful activities” between October and December 2020.
This article by Jon Keegan was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.