
In the wake of the firing of Timnit Gebru and other notable AI researchers at Google, Alphabet's circled the wagons and lawyered up. Reports flow out of Mountain View depicting teams of lawyers censoring scientific research and acting as unnamed collaborators and peer-reviewers.
Most recently, Business Insider managed to interview several researchers who painted a startling and bleak picture of what it's like to try to conduct research under such an anti-scientific regime.
Per the article, one researcher said:
You've got dozens of lawyers, no doubt highly trained lawyers, who nonetheless actually know very little about this technology … and they're working their way through your research like English undergrads reading a poem.
The problem here is that Google isn't censoring research to avoid, say, its secrets getting out. Its lawyers are targeting scientific research that makes the company look bad.
The person quoted above added that they were specifically talking about crossing out references to "fairness" and "bias" and scientists being told to change the results of their work. It's not only unethical, it's incredibly dangerous.
The tea: Google's AI is broken. It might be a trillion-dollar company and the most cutting-edge AI outfit on Earth, but its algorithms are biased. And that's dangerous.
No matter how you slice it, Google's AI doesn't work as well for people who don't look like the vast majority of Google's employees (white dudes) as it does for people who do. From Search conflating Black people with animals to the Pixel 6's camera algorithms failing to properly process non-white skin tones, Google's machine-learning woes are well-documented.
This is a big problem and it isn't easy to fix. Imagine building a car that didn't work as well for Black people and women as it did for white guys, selling 200 million, and then people slowly learning their automobiles were racist.
There'd be a lot of strong feelings about what that would mean.
Google's current situation is a lot like that. Its products are everywhere. It can't just recall Search or put Google Ads on hold for a few days while it rethinks the entire world of deep learning to exclude bias. Why not fix world hunger and make puppies immortal while they're at it?
So what do you do when you're one of the richest companies in the world and you come up against a truth so awful that its existence makes your business model seem evil?
You do what big tobacco did. You find people willing to say what's in your company's best interests, and you use them to stop the people telling the truth from sharing their research.
The National Institutes of Health released research in 2007 describing the role of lawyers during the big tobacco legal battles of the previous decades.
In the paper, which is titled "Tobacco industry lawyers as a disease vector," the researchers attribute the spread of diseases associated with long-term tobacco use to the tactics employed by industry lawyers.
Some key takeaways from the paper include:
- Despite their obligation to do so, tobacco companies often failed to conduct product safety research or, when research was conducted, failed to disseminate the results to the medical community and to the public.
- Tobacco company lawyers have been involved in activities having little or nothing to do with the practice of law, including gauging and attempting to influence company scientists' beliefs, vetting in-house scientific research, and instructing in-house scientists not to publish potentially damaging results.
- Additionally, company lawyers have taken steps to manufacture attorney-client privilege and work-product cover to assist their clients in protecting sensitive documents from disclosure, have been involved in the concealment of such documents, and have employed litigation tactics that have largely prevented successful lawsuits against their client companies.
And we're seeing the same pattern take shape in Google's approach. The company's treating the scientific method as an optional component of research.
As researcher Jack Clark, formerly of OpenAI, pointed out on Twitter:
I like to collaborate with people in research and I do a huge amount of work on AI measurement/assessment/synthesis/analysis. Why would I try and collaborate with people at Google if I know that there's some invisible group of people who will get inside our research paper?
Clark's talking about legibility here: the idea that the researchers have their names on the papers but the censors and lawyers don't.
See, if a few years down the road Google's inability to address bias or create fair algorithms proves deadly at scale, no lawyers will be harmed in the ensuing lawsuits.
And that's not fair. Billions of people put their trust in Google products every day. The AI we rely on is a part of our lives that influences our decisions. Whatever Google's lawyers are hiding could hurt us all.