
This article was published on September 21, 2020

Why Twitter’s image cropping algorithm appears to have white bias

Twitter’s algorithm for automatically cropping images attached to tweets often doesn’t focus on the important content in them. A bother, for sure, but it seems like a minor one on the surface. However, over the weekend, researchers found that the cropping algorithm might have a more serious problem: white bias.

Several users posted photos showing that when an image contains people with different skin tones, Twitter tends to show the person with lighter skin after cropping the image to fit its display parameters on its site and in embeds. Some even tried to reproduce the results with fictional characters and dogs.

If you tap on these images, you’ll see the uncropped version, which includes more detail, such as another person or character. What’s odd is that even when users flipped the positions of the dark-skinned and light-skinned people in an image, the results were the same.
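For anyone who wants to reproduce the informal test, here’s a minimal sketch using Pillow; the portrait file names and the gap size are placeholder assumptions, not details from the original experiments. It builds a tall composite of two photos separated by blank space, plus the flipped ordering, so both versions can be uploaded and their crops compared.

```python
# A sketch of the informal test: stack two portraits into one tall image,
# then build the flipped ordering, and compare which face the crop keeps.
# "person_a.jpg" and "person_b.jpg" are placeholder file names.
from PIL import Image

def stack_with_gap(top_path, bottom_path, gap=900, bg="white"):
    """Stack two images vertically with a large blank gap between them."""
    top, bottom = Image.open(top_path), Image.open(bottom_path)
    width = max(top.width, bottom.width)
    canvas = Image.new("RGB", (width, top.height + gap + bottom.height), bg)
    canvas.paste(top, ((width - top.width) // 2, 0))
    canvas.paste(bottom, ((width - bottom.width) // 2, top.height + gap))
    return canvas

# If the crop keeps the same face in both orderings, position alone
# doesn't explain the result.
stack_with_gap("person_a.jpg", "person_b.jpg").save("test_ab.jpg")
stack_with_gap("person_b.jpg", "person_a.jpg").save("test_ba.jpg")
```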

However, some people noted that factors other than skin color might be at play, and those who tried different methods found inconsistent results.

Twitter’s Chief Design Officer, Dantley Davis, said that the choice of crop sometimes takes the brightness of the background into consideration.

In a thread, Bianca Kastl, a developer from Germany, explained that Twitter’s algorithm might be cropping images based on saliency — the point or region of an image a viewer is most likely to look at first.

Her theory is backed by Twitter’s 2018 blog post explaining the neural network it built for image cropping. The post notes that the company previously used face detection to crop images, but that approach failed on images without a face in them, so the social network switched to a saliency-based algorithm.
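To illustrate the general idea, here’s a minimal sketch of saliency-based cropping using OpenCV’s classical spectral-residual saliency detector (it requires the opencv-contrib-python package). This is a stand-in for illustration only, not Twitter’s actual model, which is a trained neural network; the file name and crop dimensions are arbitrary placeholders.

```python
# A minimal sketch of saliency-based cropping, NOT Twitter's actual model.
# Assumes opencv-contrib-python is installed; "photo.jpg" is a placeholder.
import cv2
import numpy as np

def saliency_crop(image, crop_w, crop_h):
    """Crop a crop_w x crop_h window centered on the most salient point."""
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    if ok:
        # Center the crop on the single most salient pixel.
        cy, cx = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    else:
        # Fall back to a plain center crop if saliency estimation fails.
        cy, cx = image.shape[0] // 2, image.shape[1] // 2
    h, w = image.shape[:2]
    x0 = int(np.clip(cx - crop_w // 2, 0, max(w - crop_w, 0)))
    y0 = int(np.clip(cy - crop_h // 2, 0, max(h - crop_h, 0)))
    return image[y0:y0 + crop_h, x0:x0 + crop_w]

img = cv2.imread("photo.jpg")
preview = saliency_crop(img, 600, 335)  # roughly a timeline-preview shape
cv2.imwrite("preview.jpg", preview)
```

Centering a crop on a single highest-scoring pixel is cruder than what a production system would do, but it shows the core mechanic: whoever or whatever the saliency model scores highest determines what stays in the preview.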


Even if Twitter’s algorithm is not ‘racist,’ enough people have posted examples showing that it appears biased towards lighter skin tones, and the results are problematic. The company needs to dig into its algorithm to understand the bias in its neural network. Anima Anandkumar, Director of AI research at Nvidia, pointed out that the saliency algorithm might have been trained using eye-tracking data from straight male participants, which would insert more bias into the algorithm.

Twitter spokesperson Liz Kelly tweeted that the firm tested the model and didn’t find any bias. She added that the company will open-source its work for others to review and replicate. It’s possible that Twitter overlooked some factors while testing, and open-sourcing the study might help it find those blind spots.

The company’s Chief Technology Officer, Parag Agrawal, said that the model needs continuous improvement and that the team is eager to learn from this experience.

Light-skin bias in algorithms is well documented in fields ranging from healthcare to law enforcement, so large companies like Twitter need to work continuously on their systems to get rid of it. They also need to maintain an open dialogue with the AI community to understand their blind spots.
