
This article was published on April 22, 2020

Facebook’s political targeting machine should be cracked open by the DSA

Once-in-a-decade chance to tackle disinformation


Europe is about to overhaul its 20-year-old eCommerce Directive, a once-in-a-decade chance to correct the power imbalance between platforms and users. As part of this update, the Digital Services Act (DSA) must address political microtargeting (PMT).

PMT has the alarming power to derail democracy and should not be left in the hands of private companies. In practice, there is only one company running the show in the EU: Facebook.

According to the platforms’ self-assessment reports, political advertisers in the EU (excluding the UK) spent €31 million on Facebook and only €5 million on Google between March and September 2019.

But it is not just the spend on, and dominance of, Facebook as a social network that is concerning. The company’s role in developing and targeting adverts goes far beyond that of a simple presentation medium. A detailed report by Panoptykon and its partners, based on data collected during two Polish election campaigns in 2019, sheds stark light on the company’s role.

Although the study found that Polish political parties did not engage in microtargeting to a major degree, it revealed that the transparency and control tools Facebook offers to both researchers and users are “insufficient and superficial.” Furthermore, according to the report, “our observations suggest that the role of Facebook in optimising ad delivery could have been significant.”

Currently, Facebook’s role in ad delivery optimization is murky at best.

Advertisers on Facebook can select audiences based on obvious factors such as age, gender, language spoken, and location. But the Facebook machine also steers them towards increasingly narrow criteria such as interests, “life events,” and behaviour, as well as more than 250,000 free-text attributes, including, for example, “Adult Children of Alcoholics” or “Cancer Awareness.”
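To make that concrete, here is a minimal sketch of what such a targeting specification might look like. It is loosely modelled on the shape of audience definitions in Facebook’s ad tools, but every field name and value below is an illustrative assumption, not the platform’s actual schema.

```python
# Hypothetical ad-targeting specification, loosely modelled on the kind of
# audience definition ad platforms accept. Field names and values are
# illustrative assumptions, not Facebook's actual schema.
targeting_spec = {
    # Broad demographic selectors every ad platform exposes
    "age_min": 25,
    "age_max": 54,
    "genders": ["female"],
    "locales": ["pl_PL"],
    "geo_locations": {"countries": ["PL"]},

    # Narrower interest- and behaviour-based selectors
    "interests": ["European Union", "Law and Justice"],
    "life_events": ["recently_moved"],
    "behaviours": ["frequent_travellers"],

    # Free-text attributes drawn from a pool of 250,000+ labels,
    # including the sensitive examples cited in the article
    "free_text_attributes": ["Adult Children of Alcoholics", "Cancer Awareness"],
}
```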

Facebook is not merely a passive intermediary: its algorithms interpret the criteria selected by advertisers and deliver ads in a way that fulfills advertisers’ objectives. In most European countries, political parties do not have access to voters’ personal data via electoral registries, but Facebook’s Custom Audiences tool allows them to narrowly define segments of society without ever leaving the platform. Thus political parties that would not independently have had the capacity to do detailed data analysis can “outsource” it to Facebook. In 2016, the company introduced a feature allowing them to target “lookalikes”: profiles similar to a target audience.
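Facebook has never published how its lookalike modelling works. Purely to illustrate the general technique, the sketch below builds a “lookalike” audience with a generic nearest-neighbour approach over behavioural feature vectors; the method, names, and numbers are assumptions, not Facebook’s algorithm.

```python
# Illustrative lookalike-audience sketch: given a seed audience (e.g. a party's
# uploaded Custom Audience), find the platform users whose behavioural feature
# vectors are most similar to the seed's average profile. This is a generic
# nearest-neighbour approach, NOT Facebook's actual (unpublished) algorithm.
import numpy as np

def lookalike_audience(seed_features: np.ndarray,
                       all_user_features: np.ndarray,
                       k: int) -> np.ndarray:
    """Return indices of the k users most similar to the seed audience."""
    # Average profile of the seed audience
    centroid = seed_features.mean(axis=0)

    # Cosine similarity between every user and the seed centroid
    norms = np.linalg.norm(all_user_features, axis=1) * np.linalg.norm(centroid)
    sims = all_user_features @ centroid / np.maximum(norms, 1e-12)

    # The k most similar users form the "lookalike" audience
    return np.argsort(sims)[::-1][:k]

# Toy usage: 5 seed users and 1,000 platform users, 32 behavioural features each
rng = np.random.default_rng(0)
seed = rng.normal(size=(5, 32))
users = rng.normal(size=(1000, 32))
print(lookalike_audience(seed, users, k=10))
```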

In September 2018, the European Commission launched the Code of Practice on Disinformation, a self-regulatory instrument which encourages its signatories to commit to a variety of actions to tackle online disinformation. In terms of advertising, signatories, including Facebook, committed to providing transparency into political and issue-based advertising, and to helping consumers understand why they are seeing particular ads.

But by January 2019, the Commission was already calling on signatories to intensify their efforts, saying that although there had been “some progress, notably in removing fake accounts and limiting the visibility of sites that promote disinformation,” additional action was needed to ensure full transparency of political ads.

In March 2019, Facebook introduced the Ad Library (a public repository of ads in the EU), but this only goes so far and does not disclose detailed information on targeting.

By law, political advertisers in Poland are required to include a disclaimer indicating the entity that paid for the ad. Despite this, the study found that almost a quarter (23.6%) of political ads on Facebook were mislabeled. Worryingly, Panoptykon says the Polish National Election Commission “does not have the tools to effectively monitor and supervise election campaigns on social media, including their financing.”

In January 2020, Facebook announced it would add “potential reach” for each political and issue ad — the estimated audience size an advertiser wanted to reach, rather than the actual reach.

The Panoptykon study found that the majority of ads in the Polish elections were delivered to a maximum of 50,000 people, with most ads shown to just 1,000-4,999 users.

In terms of interests selected by advertisers, the top three were “business,” “Law and Justice” (the name of the party), and “European Union,” but more sensitive criteria such as “LGBT,” “gender,” and “climate” also appeared. However, key terms do not paint the whole picture.

Users are targeted by Facebook’s algorithm based on potentially thousands of distinct selectors, following a formula that only the company knows. This happens at the so-called “ad optimization” stage, where targets are developed using opaque criteria beyond the advertiser’s core selection.
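The optimization stage is a black box, but its general shape is familiar from ad tech: the advertiser’s selection only defines an eligible pool, and the platform’s own predictive model decides who within it actually sees the ad. The sketch below illustrates that generic pattern; the model and field names are assumptions, not Facebook’s system.

```python
# Generic sketch of delivery optimization: the advertiser's targeting spec only
# defines the ELIGIBLE pool; the platform's own predictive model then decides
# who actually sees the ad. All names and weights here are assumptions.
from dataclasses import dataclass

@dataclass
class User:
    user_id: int
    matches_spec: bool         # passes the advertiser's explicit criteria
    predicted_response: float  # platform's own engagement prediction (0..1)

def deliver(users: list[User], budget_impressions: int) -> list[int]:
    # Step 1: the advertiser's selection narrows the pool...
    eligible = [u for u in users if u.matches_spec]
    # Step 2: ...but the platform ranks that pool by its own opaque score,
    # so two advertisers with identical specs can reach very different people.
    ranked = sorted(eligible, key=lambda u: u.predicted_response, reverse=True)
    return [u.user_id for u in ranked[:budget_impressions]]

users = [User(1, True, 0.9), User(2, True, 0.2), User(3, False, 0.99)]
print(deliver(users, budget_impressions=1))  # [1]: chosen by the platform's score
```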

And the tools available to researchers and users are insufficient. “Insights about targeting for all ads should be archived in ad libraries and accessible for researchers via a fully-functional API, with minimum technical standards defined by law,” says Panoptykon.
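The existing Ad Library API shows the gap. A researcher can retrieve an ad’s creative, funder, spend, and demographic breakdown, but no field describes how the ad was targeted or optimized. A sketch of such a query, based on the API as publicly documented around the time of writing (the exact version and field names may since have changed):

```python
# Sketch of a researcher's query against Facebook's Ad Library API.
# Endpoint and fields reflect the public documentation circa 2020; API
# versions and field names may have changed. ACCESS_TOKEN is a placeholder
# (the API requires identity-verified developer access).
import requests

ACCESS_TOKEN = "YOUR_TOKEN"

resp = requests.get(
    "https://graph.facebook.com/v7.0/ads_archive",
    params={
        "access_token": ACCESS_TOKEN,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "PL",
        "search_terms": "election",
        # Note what is retrievable: creative, funder, spend, demographics...
        "fields": "ad_creative_body,page_name,funding_entity,"
                  "spend,impressions,demographic_distribution",
    },
    timeout=30,
)
for ad in resp.json().get("data", []):
    # ...but no field describes the advertiser's targeting criteria or the
    # platform's optimization: exactly the gap Panoptykon highlights.
    print(ad.get("page_name"), ad.get("funding_entity"), ad.get("spend"))
```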

Even the “why am I seeing this ad?” tool on Facebook is misleading, revealing only the “lowest common denominator” attribute.

For example, according to the report, during the European elections campaign in Poland in May 2019, a person who was pregnant saw a political ad referring to prenatal screenings and perinatal care. “Why am I seeing this ad?” informed her that she was targeted because she was interested in “medicine” (potential reach: 668 million) rather than “pregnancy” (potential reach: 316 million). Users can only verify (check, delete, or correct) a short list of interests that the platform is willing to reveal.
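In other words, of all the attributes that matched a user, the explanation surfaces the broadest one. A minimal sketch of that “lowest common denominator” logic, using the reach figures cited in the report; the max-reach selection rule is inferred from the observed behaviour, not documented by Facebook:

```python
# "Why am I seeing this ad?" as observed by Panoptykon: of the attributes that
# matched the user, only the one with the LARGEST potential reach is revealed.
# The max-reach rule is inferred from observed behaviour, not documented.
matched_attributes = {
    # attribute -> potential reach (figures cited in the report)
    "medicine": 668_000_000,
    "pregnancy": 316_000_000,
}

def why_am_i_seeing_this(attributes: dict[str, int]) -> str:
    # Reveal the broadest, least revealing attribute
    return max(attributes, key=attributes.get)

print(why_am_i_seeing_this(matched_attributes))  # "medicine", not "pregnancy"
```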

Facebook’s constant behavioral observation and algorithmic analysis raise multiple red flags, not least the amount of personal data required to make those observations. Facebook’s ad settings have been built on an “opt out” rather than an active “opt in” premise.
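The difference between the two premises is easy to state in code. A purely hypothetical settings object, not Facebook’s actual configuration:

```python
# Hypothetical ad-settings defaults, purely to illustrate the two premises.
from dataclasses import dataclass

@dataclass
class OptOutAdSettings:
    # Facebook's premise: personalized ads are ON unless the user finds
    # the setting and switches it off.
    ads_from_profile_data: bool = True

@dataclass
class OptInAdSettings:
    # The premise regulation could require: personalized ads stay OFF
    # until the user gives active, informed consent.
    ads_from_profile_data: bool = False

print(OptOutAdSettings())  # new user is profiled by default
print(OptInAdSettings())   # new user is not profiled by default
```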

The Digital Services Act could address many of these issues. At the very least, regulation could prohibit PMT based on characteristics that expose our mental or physical vulnerabilities (e.g. depression, anxiety, addiction, illness). Indeed, any advertising aimed at manipulating vulnerable users should be banned, particularly as there appears to be a gap between ads labelled as “political” by the platform and ads perceived as political by researchers.

Regulation requiring greater transparency for researchers and users, opt-in rather than opt-out defaults, tighter requirements for political advertising, and recognition of PMT as a high-risk category of AI will not solve all the problems of political disinformation in society, but it would be a start!
