Twitter's algorithm for automatically cropping images attached to tweets often doesn't focus on the important content in them. A bother, for sure, but it seems like a minor one on the surface. However, over the weekend, researchers found that the cropping algorithm might have a more serious problem: white bias.
Several users posted photos showing that when an image contains people with different skin tones, Twitter tends to show the person with lighter skin after cropping the image to fit its display parameters on its site and embeds. Some even tried to reproduce the results with fictional characters and dogs.
If you tap on these images, you'll see the uncropped version, which includes more detail, such as another person or character. Oddly, even when users flipped the positions of the dark-skinned and light-skinned people in the image, the results were the same.
Trying a horrible experiment…
Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama? pic.twitter.com/bR1GRyCkia
— Tony “Abolish (Pol)ICE” Arcieri 🦀 (@bascule) September 19, 2020
Happens with Michael Jackson too…… pic.twitter.com/foUMcExS2P
— carter (@gnomestale) September 19, 2020
I wonder if Twitter does this to fictional characters too.
Lenny Carl pic.twitter.com/fmJMWkkYEf
— Jordan Simonovski (@_jsimonovski) September 20, 2020
I tried it with dogs. Let's see. pic.twitter.com/xktmrNPtid
— – M A R K – (@MarkEMarkAU) September 20, 2020
However, some people noted that factors other than skin color might be at play, and those who tried different methods found inconsistent results.
Does Twitter's thumbnail-picker algorithm systematically prefer white faces over Black ones?
I did an experiment. It's not conclusive, but in my experiment with pictures of Barack Obama, Raphael Warnock, George W. Bush and Donald Trump, the hypothesized pattern didn't appear. pic.twitter.com/2ddcPR5CPi
— Jeremy B. Merrill (@jeremybmerrill) September 20, 2020
More proof: pic.twitter.com/CeCEOTsSJ8
— Him Gajria (@himgajria) September 20, 2020
White-to-Black ratio: 40:52 (92 images)
Code used: https://t.co/qkd9WpTxbK
Final annotation: https://t.co/OviLl80Eye
(I've created @cropping_bias to run the complete experiment. Waiting for @Twitter to approve Dev credentials) pic.twitter.com/qN0APvUY5f
— Vinay Prabhu (@vinayprabhu) September 20, 2020
Twitter's Chief Design Officer (CDO), Dantley Davis, said that the cropping choice sometimes takes the brightness of the background into account.
Here's another example of what I've experimented with. It's not a scientific test as it's an isolated example, but it points to some variables that we need to look into. Both men now have the same suits and I covered their hands. We're still investigating the NN. pic.twitter.com/06BhFgDkyA
— Dantley 🔥✊🏾💙 (@dantley) September 20, 2020
In a thread, Bianca Kastl, a developer from Germany, explained that Twitter's algorithm might be cropping images based on saliency — the point or region of an image that a viewer is most likely to look at first.
Probably Twitters Crop algorithm is a pretty simple Saliency. We will see… pic.twitter.com/q4R0R8h3vh
— Bianca Kastl (@bkastl) September 20, 2020
Her theory is backed by Twitter’s 2018 blog post that explained its neural network built for image cropping. The post notes that earlier, the company took facial detection into account to crop images. However, that approach didn’t work for images that didn’t have a face in them. So the social network switched to a saliency-based algorithm.
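To make the idea concrete, here is a minimal sketch of what saliency-based cropping can look like. This is not Twitter's actual model (which is a trained neural network); it is a toy example where "saliency" is simply each pixel's distance from the image's mean color, and the crop window is centered on the most salient pixel. The function names and the saliency heuristic are illustrative assumptions.

```python
import numpy as np

def naive_saliency(img):
    # Toy saliency heuristic (NOT Twitter's model): a pixel is "salient"
    # if its color is far from the image's average color.
    mean_color = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return np.linalg.norm(img - mean_color, axis=-1)

def saliency_crop(img, crop_h, crop_w):
    # Center a fixed-size crop window on the most salient pixel,
    # clamping the window so it stays inside the image bounds.
    sal = naive_saliency(img.astype(float))
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    top = min(max(y - crop_h // 2, 0), img.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), img.shape[1] - crop_w)
    return img[top:top + crop_h, left:left + crop_w]

# A gray image with one bright patch: the crop homes in on the patch.
img = np.full((100, 200, 3), 50, dtype=np.uint8)
img[40:50, 150:160] = 255
crop = saliency_crop(img, 60, 60)
```

A real saliency model predicts a full heat map of likely eye fixations rather than a single hand-crafted score, but the cropping step — pick the hottest region and cut a window around it — works much the same way, which is why any bias in the saliency map flows directly into the crop.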
Even if Twitter's algorithm is not 'racist,' enough people have posted examples in which it appears biased toward lighter skin tones, and the results are problematic. The company needs to dig into its neural network to understand where the bias comes from. Anima Anandkumar, Director of AI research at Nvidia, pointed out that the saliency algorithm might be trained on eye-tracking data from straight male participants, which would introduce further bias into the algorithm.
Recording straight men where their eyes veer when they view female pictures is encoding objectification and sexualization of women in social media @Twitter No one asks whose eyes are being tracked to record saliency. #ai #bias https://t.co/coXwngSjiW
— Prof. Anima Anandkumar (@AnimaAnandkumar) September 20, 2020
Twitter spokesperson Liz Kelly tweeted that the firm tested the model and didn't find any bias. She added that the company will open-source its work for others to review and replicate. Twitter may have overlooked some factors in its testing, and open-sourcing the study might help it find those blind spots.
thanks to everyone who raised this. we tested for bias before shipping the model and didn't find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do. we'll open source our work so others can review and replicate. https://t.co/E6sZV3xboH
— liz kelley (@lizkelley) September 20, 2020
The company's Chief Technology Officer, Parag Agrawal, said that the model needs continuous improvement and that the team is eager to learn from this experience.
This is a very important question. To address it, we did analysis on our model when we shipped it, but needs continuous improvement.
Love this public, open, and rigorous test — and eager to learn from this. https://t.co/E8Y71qSLXa
— Parag Agrawal (@paraga) September 20, 2020
Light-skin bias in algorithms is well documented in fields ranging from healthcare to law enforcement, so large companies like Twitter need to work continuously on their systems to root it out. Twitter also needs to start an open dialog with the AI community to understand its blind spots.