
This article was published on March 30, 2020

Studies of racist algorithms don’t break anti-hacking law, court rules

Researchers can now create fake user accounts to test whether algorithms discriminate against people of color

Story by Thomas Macaulay, Writer at Neural by TNW

A federal court has ruled that research into racist algorithms doesn’t breach the Computer Fraud and Abuse Act (CFAA), a controversial anti-hacking law.

The US government had argued the law makes violating a website’s terms and conditions a criminal offense, which severely restricted investigations into discriminatory algorithms.

Advertisers have used these algorithms to stop people from seeing job, housing, or credit ads based on their race, gender, and age.

Researchers can investigate the companies behind them by creating fake user accounts and recording the adverts those accounts receive. But if doing so violates a website’s terms of service, they could face federal prosecution.


The American Civil Liberties Union (ACLU) challenged this provision by filing a lawsuit on behalf of a group of researchers investigating online algorithms.

In a landmark decision, the court rejected the argument that the CFAA criminalizes terms-of-service violations — and ruled that the research could continue.

Exploiting the CFAA

The CFAA was introduced in 1984 to punish people for breaking into computer systems. But its notoriously vague terms have been exploited to pursue cases that go way beyond the law’s original purpose.

In recent years, it’s been used to imprison people for changing news headlines as a prank, leaking documents to WikiLeaks, and downloading academic articles hidden behind a paywall.

People have also been charged with breaking the law by creating fake user accounts — a key method of investigating discriminatory algorithms.

In the ACLU case, the researchers planned to use fake accounts to check if housing sites were preventing people of color from seeing certain listings.

When algorithms analyze profiles, browsing history, and other information bought from data brokers, they can steer users towards different ads based on specific characteristics.

The researchers wanted to find out which housing sites were doing this by creating multiple accounts with characteristics associated with different racial groups. But as many of these sites ban scraping and fake accounts in their terms of service, the researchers risked criminal prosecution.
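At its core, this kind of audit is a statistical comparison: create matched groups of test accounts, count how often each group is shown a given listing, and test whether the gap is bigger than chance would explain. The sketch below is purely illustrative — the function, the counts, and the significance threshold are assumptions for demonstration, not details from the ACLU researchers’ actual study.

```python
import math

def two_proportion_z(shown_a, total_a, shown_b, total_b):
    """Two-proportion z-test: did group A see the listing at a
    different rate than group B? Returns the z statistic."""
    p_a = shown_a / total_a
    p_b = shown_b / total_b
    # Pooled proportion under the null hypothesis of no difference
    p = (shown_a + shown_b) / (total_a + total_b)
    se = math.sqrt(p * (1 - p) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical counts: how often each group of test accounts was
# shown the same housing ad across repeated sessions
z = two_proportion_z(shown_a=180, total_a=200, shown_b=120, total_b=200)
print(round(z, 2))  # prints 6.93; |z| > 1.96 suggests a significant gap
```

A real audit would also need to control for confounders — location, browsing history, session timing — so that race-associated profile traits are the only systematic difference between the account groups.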

Now that a federal court has ruled their plans are legal, the investigation can safely go ahead.

“This decision helps ensure companies can be held accountable for civil rights violations in the digital era,” said Esha Bhandari, staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

“Researchers who test online platforms for discriminatory and rights-violating data practices perform a public service. They should not fear federal prosecution for conducting the 21st-century equivalent of anti-discrimination audit testing.”
