
This article was published on August 15, 2019

Project Veritas releases ‘internal documents’ from Google and alleges anti-conservative bias

Ravie Lakshmanan

A former Google employee has released nearly 1,000 internal documents which he says are evidence of the search giant's anti-conservative bias.

Zachary Vorhies, the employee in question, shared the documents with Project Veritas — a right-leaning investigative journalism non-profit founded by James O’Keefe — as well as the US Department of Justice Antitrust Division, which is investigating Google for potential anticompetitive behavior.

“The reason why I collected these documents was because I saw something dark and nefarious going on with the company and I realized that they were going to not only tamper with the elections, but use that tampering with the elections to essentially overthrow the United States,” Vorhies told O’Keefe in a video interview.

The 300MB cache of internal documents spans a breadth of topics, including censorship, fake news, machine learning fairness, politics, hiring practices, leadership training, and psychological research.

What do they contain?

From what we’ve seen of the documents, there doesn’t appear to be anything particularly damning. Some of them — screenshots of email correspondence between Google employees dating back to 2017 — delve into definitions of fairness in machine learning (ML) algorithms, while stressing the need for adversarial testing to avoid biases and stereotyping.

The document dump

A few others highlight the different internal projects undertaken to design and develop inclusive ML, including one pertaining to improving YouTube’s filtering of LGBTQ content in Restricted Mode.

Recent research, in fact, has found that the lack of inclusive data sets used to train hate speech detection algorithms (Google's Perspective API, for example) frequently causes them to mislabel Black American speech patterns as toxic.

In addition, both Google News and YouTube appear to maintain blacklists of websites (thegatewaypundit.com, forwardprogressives.com, dailycaller.com) and search terms (Abortion discrimination, Abortion rally) used to downrank low-quality news sources, handle misinformation, and manually block associated content from showing up across the services.

Other documents discuss downranking the far-right American conspiracy website InfoWars in search results, and a petition to drop AdSense support for another far-right site, Breitbart, after the 2016 US election. In a way, this corroborates Breitbart's report from late last year that a group of Google employees had plotted its downfall by stripping it of ad revenue.

A question of trust

This is not the first time Vorhies has collaborated with Project Veritas. Earlier this June, he anonymously shared another trove of internal Google files revealing “algorithmic unfairness.”

But after he resumed work, the search behemoth allegedly demanded he turn over his employee badge and work laptop, warned him against disclosing any non-public information, and called the police to perform a "wellness check" on him, which prompted him to go public this time.

Project Veritas, for its part, has previously been caught trying to entrap journalists with deliberately fabricated stories in an attempt to discredit media outlets. So I'd recommend taking its investigative reporting with a pinch of salt.

That isn't to say the documents are necessarily doctored. Based on our random checks of employee profiles on LinkedIn, it's probably safe to say they are real. Authenticity aside, however, it's possible that Vorhies and Project Veritas cherry-picked messages and files that paint the company in a bad light.

Ultimately, questions about accountability and transparency in black-box algorithms aren't new. If anything, these documents highlight the need for diverse perspectives in the design and development of technological products and services used by large numbers of people, and for guarantees that such systems are trained and tested exhaustively.

Whether or not these files prove bias is almost immaterial considering what's at stake: they underscore the heightened need for fairness in algorithms as AI increasingly touches all aspects of everyday life.
