
This article was published on August 3, 2021

Why Twitter wants ethical hackers to fix its algorithmic biases

The company wants to cultivate a community of ethical AI hackers


Twitter is applying the bug bounty model to machine learning.

The micro-blogging site has launched the industry’s first algorithmic bias bounty competition.

The challenge was created to identify potential harms in Twitter’s notorious image cropping algorithm, which was largely abandoned after exhibiting gender- and race-based biases.

The company now wants to incentivize the community to find additional, as-yet-unidentified risks in the algorithm. The winners of the challenge will receive cash prizes of up to $3,500.

The contest is a first in the field of AI biases, but bounty programs have a long history in IT security.


Jutta Williams, Product Manager for Twitter META (Machine learning Ethics, Transparency, and Accountability), told TNW that the initiative was inspired by how research and hacker communities help the security field:

Twitter’s always been shaped by the people who use and know it best, so we want to cultivate a similar community, focused on ML ethics, to help us identify a broader range of issues than we would be able to on our own. With this challenge we aim to set a precedent at Twitter, and in the industry, for proactive and collective identification of algorithmic harms.

Tapping into the community

The initiative is not the first time that Twitter’s sought community support for mitigating algorithmic harms.

In May, the META team shared its research and code on the image cropping algorithm’s biases so that others could investigate the issue.

The cropping algorithm estimates what people want to see first within a picture. This calculation then determines how an image is cropped to an easily viewable size.

The model was trained on human eye-tracking data to predict a saliency score for every region of a picture. It then chooses the point with the highest score as the center of the crop.
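To make that mechanic concrete, here is a minimal sketch of saliency-centered cropping. The saliency model itself is a hypothetical stand-in (a simple brightness heuristic), since Twitter's trained network isn't reproduced here; only the "center the crop on the highest-scoring point" step mirrors the behavior described above.

```python
import numpy as np

def predict_saliency(image: np.ndarray) -> np.ndarray:
    """Placeholder saliency model: per-pixel scores in [0, 1].

    Twitter's real model was trained on human eye-tracking data; this
    stub just uses brightness so the cropping logic runs end to end.
    """
    gray = image.mean(axis=-1)
    return (gray - gray.min()) / (gray.max() - gray.min() + 1e-8)

def crop_around_max_saliency(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Center a fixed-size crop on the highest-scoring saliency point."""
    saliency = predict_saliency(image)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)

    # Clamp the crop window so it stays inside the image bounds.
    top = int(np.clip(y - crop_h // 2, 0, image.shape[0] - crop_h))
    left = int(np.clip(x - crop_w // 2, 0, image.shape[1] - crop_w))
    return image[top:top + crop_h, left:left + crop_w]

if __name__ == "__main__":
    img = np.random.randint(0, 256, (600, 800, 3), dtype=np.uint8)
    print(crop_around_max_saliency(img, 300, 400).shape)  # (300, 400, 3)
```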

After receiving feedback that the algorithm didn’t serve all people equitably, Twitter analyzed the model for biases. The researchers uncovered underlying issues that caused it to favor white individuals over Black individuals.

“We want to take this work a step further by inviting and incentivizing the community to help identify potential harms of this algorithm beyond what we identified ourselves,” Rumman Chowdhury, the head of Twitter’s META team, told TNW.

In the challenge, participants will get access to Twitter’s saliency model and the code used to generate a crop of an image. Their mission is to demonstrate potential harms that such an algorithm may produce.
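As an illustration of what such a demonstration might look like, here is a hedged sketch of a simple bias probe: tally which side of paired composite images the crop lands on when two subjects swap positions. The `composite_images` collection and the `predict_saliency` function are hypothetical placeholders, not part of Twitter's released code.

```python
from collections import Counter
import numpy as np

def crop_center(saliency: np.ndarray) -> tuple[int, int]:
    """Point the cropper would center on: the highest-saliency pixel."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    return int(y), int(x)

def run_probe(composite_images, predict_saliency) -> Counter:
    """Count which half of each two-person composite attracts the crop center."""
    tallies = Counter()
    for composite in composite_images:
        _, x = crop_center(predict_saliency(composite))
        side = "left" if x < composite.shape[1] // 2 else "right"
        tallies[side] += 1
    return tallies
```

If subjects from one demographic group are consistently centered regardless of which side of the image they appear on, that imbalance is the kind of harm the bounty asks participants to demonstrate.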

Democratizing standards

A key goal of the contest is to develop community-driven standards and best practices for assessing ML models. Notably, Twitter has created a grading rubric that articulates algorithmic harms in a way that didn’t previously exist.

There’s already a large community of ethical AI hackers that Twitter hopes to tap into. Historically, however, they haven’t been incentivized to do this sort of work in the same way as white-hat security hackers.

“In fact, people have been doing this sort of work on their own for years, but haven’t been rewarded or paid for it,” said Chowdhury.

The introduction of monetary rewards will add further encouragement.

Ultimately, Chowdhury wants to foster a more inclusive and proactive approach to mitigating algorithmic risks:

Bounty programs such as this one are critical in helping raise awareness for harms and biases that might exist in algorithms that are beyond our current scope of lived experiences and understanding. We also invite a wider range of perspectives than is possible on one team or in one company; we want to open up lines of communication globally and provide a platform and incentive for more people to be engaged. 

The challenge is open for entries until 11:59 PM PT on August 6. The winners will be announced at the DEF CON AI Village workshop on August 8. Anyone with a HackerOne account can participate in the competition.
