
This article was published on January 14, 2021

Stanford team behind BS gaydar AI says facial recognition can expose political orientation



Stanford researcher Michael Kosinski, the PhD behind the infamous “Gaydar” AI, is back with another phrenology-adjacent (his team swears it’s not phrenology) bit of pseudo-scientific ridiculousness. This time, they’ve published a paper claiming that a simple facial recognition algorithm can tell a person’s political affiliation.

First things first: The paper is called “Facial recognition technology can expose political orientation from naturalistic facial images.” You can read it here. Here’s a bit from the abstract:

Ubiquitous facial recognition technology can expose individuals’ political orientation, as faces of liberals and conservatives consistently differ.

Second things second: These are demonstrably false statements. Before we even entertain this paper, I want to make it completely clear that there’s absolutely no merit to Kosinski and his team’s ideas here. Facial recognition technology cannot expose individuals’ political orientation.

[Related: The Stanford gaydar AI is hogwash]


For the sake of brevity, I’ll sum up my objection in a simple statement: I once knew someone who was a liberal and then they became a conservative.

While that’s not exactly mind-blowing, the point is that political orientation is a fluid concept. No two people tend to “orient” toward a specific political ideology the same way.

Also, some people don’t give a shit about politics, others have no clue what they’re actually supporting, and still others believe they agree with one party but, in their ignorance, don’t realize they actually support the ideals of a different one.

Furthermore, since we know the human face doesn’t have the ability to reconfigure itself like the creature from “The Thing,” we know that we don’t suddenly get liberal face if one of us decides to stop supporting Donald Trump and start supporting Joe Biden.

This means the researchers are claiming that liberals and conservatives express, carry, or hold themselves differently. Or they’re saying you’re born a liberal or conservative and there’s nothing you can do about it. Both statements are almost too stupid to consider.

The study claims that demographics (white people are more likely to be conservative) and labels (given by humans) were determining factors in how the AI segregated people.

In other words, the team starts with the same undeniably false premise as many comedians: that there are only two kinds of people in the world.

According to the Stanford team, the AI can determine political affiliation with greater than 70% accuracy, which is better than chance or human prediction (both being about 55% accurate).

Here’s an analogy for how you should interpret the Stanford team’s claims of accuracy: I can predict with 100% accuracy how many lemons in a lemon tree are aliens from another planet.

Because I’m the only person who can see the aliens in the lemons, I’m what you call a “database.” If you wanted to train an AI to see the aliens in the lemons, you’d need to give your AI access to me.

I could stand there, next to your AI, and point at all the lemons that have aliens in them. The AI would take notes, beep out the AI equivalent of “mm hmm, mm hmm” and start figuring out what it is about the lemons I’m pointing at that makes me think there’s aliens in them.

Eventually the AI would look at a new lemon tree and try to guess which lemons I would think have aliens in them. If it were 70% accurate at guessing which lemons I think have aliens in them, it would still be 0% accurate at determining which lemons have aliens in them. Because lemons don’t have aliens in them.

That, readers, is what the Stanford team has done here and with its silly gaydar. They’ve taught an AI to make inferences that don’t exist because (and this is the important part) there’s no definable, scientifically measurable attribute for political party. Or queerness. One cannot measure liberalness or conservativeness because, like gayness, there is no definable threshold.
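
To make the lemon analogy concrete, here’s a toy sketch (my own illustration, nothing from the paper, with a made-up “roundness” feature standing in for whatever a labeler happens to key on): the “alien” labels are just my say-so, a model learns to reproduce my say-so, and the resulting “accuracy” measures agreement with me, not anything about the lemons.

```python
# A toy version of the lemon/alien analogy (my illustration, not the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def my_alien_labels(roundness):
    # I "see aliens" in lemons above an arbitrary roundness cutoff, with some noise.
    return ((roundness > 0.6) ^ (rng.random(roundness.shape) < 0.15)).astype(int)

# Training tree: superficial lemon features plus my say-so labels.
train_roundness = rng.random(1000)
train_labels = my_alien_labels(train_roundness)
model = LogisticRegression().fit(train_roundness.reshape(-1, 1), train_labels)

# A new tree: the model can only ever be scored against what I *say*.
test_roundness = rng.random(1000)
test_labels = my_alien_labels(test_roundness)
print(f"Agreement with my say-so: {model.score(test_roundness.reshape(-1, 1), test_labels):.0%}")

# Number of lemons that actually contain aliens: zero.
# The score above measures me, not the lemons.
```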

Let’s do gayness first so you can appreciate how stupid it is to say that a person’s facial makeup or expression can determine such intimate details about their core being.

  1. If you’ve never had sex with a member of the same sex, are you gay? There are “straight” people who’ve never had sex.
  2. If you’re not romantically attracted to members of the same sex, are you gay? There are “straight” people who’ve never been romantically attracted to members of the opposite sex.
  3. If you used to be gay but stopped, are you straight or gay?
  4. If you used to be straight but stopped, are you straight or gay?
  5. Who is the governing body that determines whether you’re straight or gay?
  6. If you have romantic relations and sex with members of the same sex but you tell people you’re straight, are you gay or straight?
  7. Do bisexuals, asexuals, pansexuals, demisexuals, gay-for-pay, straight-for-a-date, or just generally confused people exist? Who tells them whether they’re gay or straight?

As you can see, queerness isn’t a rational commodity like “energy” or “number of apples on that table over there.”

The Stanford team used “ground truth” as a measure of gayness by comparing pictures of people who said “I’m gay” to pictures of people who said “I’m straight” and then fiddled with the AI’s parameters (like tuning in an old radio signal) until they got the highest possible accuracy.
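
In code, the procedure described above looks roughly like the sketch below. To be clear, this is a loose reconstruction on made-up data, not the team’s actual pipeline: assume the face photos have already been reduced to numeric feature vectors (here called face_features, a hypothetical name) and that self_reported_labels holds the self-reported answers.

```python
# A loose sketch of the "ground truth" setup described above (not the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
face_features = rng.normal(size=(5000, 128))          # hypothetical per-face feature vectors
self_reported_labels = rng.integers(0, 2, size=5000)  # hypothetical self-reported answers

# "Fiddling with the parameters until you get the highest possible accuracy":
# grid-search the classifier's settings for the best cross-validated accuracy score.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1, 10, 100]},
    scoring="accuracy",
    cv=5,
)
search.fit(face_features, self_reported_labels)
print(search.best_params_, search.best_score_)

# Whatever number falls out is accuracy at reproducing the self-reports,
# not at measuring any underlying trait.
```

On random data like this the score hovers around chance; with real photos it can climb well above chance and still only tell you how well the model mimics the labels it was handed.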

Think of it like this: I show you a sheet of portraits and say “point to the ones that like World of Warcraft.” When you’re done, if you didn’t guess better than pure chance or the human sitting next to you, I say “nope, try again.”

This goes on for thousands and thousands of tries until one day I exclaim “eureka!” when you manage to finally get it right.

You have not learned how to tell World of Warcraft players from their portraits; you’ve merely learned to get that sheet right. When the next sheet comes along, you’ve got a literal 50/50 chance of guessing correctly whether a person in any given portrait is a WoW player or not.
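
That failure has a boring name, overfitting, and you can watch it happen in a few lines (again my own toy, with made-up labels: the portrait features carry no information at all about who plays WoW).

```python
# Memorizing one "sheet" of portraits: a quick overfitting demo (my toy, made-up labels).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Sheet 1: portrait features and "likes World of Warcraft" labels the features know nothing about.
sheet1_faces = rng.normal(size=(200, 64))
sheet1_labels = rng.integers(0, 2, size=200)

# A 1-nearest-neighbor model simply memorizes whatever sheet it's shown.
model = KNeighborsClassifier(n_neighbors=1).fit(sheet1_faces, sheet1_labels)
print("Accuracy on the memorized sheet:", model.score(sheet1_faces, sheet1_labels))  # 1.0

# Sheet 2: new portraits, new labels. Back to a coin flip.
sheet2_faces = rng.normal(size=(200, 64))
sheet2_labels = rng.integers(0, 2, size=200)
print("Accuracy on a new sheet:", model.score(sheet2_faces, sheet2_labels))  # ~0.5
```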

The Stanford team can’t define queerness or political orientation the way we can define cat-ness. You can say that’s a cat and that’s a dog because we can objectively define the nature of exactly what a cat is. The only way you can determine whether someone is gay, straight, liberal, or conservative is to ask them. Otherwise, you’re merely observing how they look and act and deciding whether you believe they are liberal or queer or whatnot.

The Stanford team is asking an AI to do something no human can do – namely, predict someone’s political affiliation or sexual orientation based on the way they look.

The bottom line here is that these silly little systems use basic algorithms and neural network technology from half a decade ago. They’re not smart; they’re just perverting literally the same technology used to determine whether something’s a hotdog or not.

There is no positive use-case for this.

Worse, the authors seem to be drinking their own Kool-Aid. They admit their work is dangerous, but they don’t seem to understand why. Per this TechCrunch article, Kosinski (referring to the gaydar study) says:

We were really disturbed by these results and spent much time considering whether they should be made public at all. We did not want to enable the very risks that we are warning against. The ability to control when and to whom to reveal one’s sexual orientation is crucial not only for one’s well-being, but also for one’s safety.

We felt that there is an urgent need to make policymakers and LGBTQ communities aware of the risks that they are facing. We did not create a privacy-invading tool, but rather showed that basic and widely used methods pose serious privacy threats.

No, the results aren’t scary because they can out queers. They’re dangerous because they could be misused by people who believe they can. Predictive policing isn’t dangerous because it works; it’s dangerous because it doesn’t work: it simply excuses historical policing patterns. And this latest piece of silly AI development from the Stanford team isn’t dangerous because it can determine your political affiliation. It’s dangerous because people might believe it can, and there’s no good use for a system designed to breach someone’s core ideological privacy, whether it works or not.
