
This article was published on March 5, 2021

Scientists made an AI that reads your mind so it can generate portraits you’ll find attractive

A team of researchers recently developed a mind-reading AI that uses an individual’s personal preferences to generate portraits of attractive people who don’t exist.

Computer-generated beauty truly is in the AI of the beholder.

The big idea: Scientists from the University of Helsinki and the University of Copenhagen today published a paper detailing a system in which a brain-computer interface transmits data to an AI, which then interprets that data and uses it to train an image generator.

According to a press release from the University of Helsinki:

Initially, the researchers gave a generative adversarial neural network (GAN) the task of creating hundreds of artificial portraits. The images were shown, one at a time, to 30 volunteers who were asked to pay attention to faces they found attractive while their brain responses were recorded via electroencephalography (EEG) …

The researchers analysed the EEG data with machine learning techniques, connecting individual EEG data through a brain-computer-interface (BCI) to a generative neural network.
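
For the technically curious: the paper’s exact pipeline isn’t reproduced here, but a minimal, hypothetical sketch of that EEG-classification step might look something like the following Python snippet. The data, shapes, and classifier choice are illustrative assumptions, not the researchers’ method.

```python
# Hypothetical sketch, not the researchers' code: classify single-trial EEG
# responses as "attractive" vs. "not attractive" using a simple linear model,
# a common baseline for ERP-style brain-computer interfaces.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Assumed shapes: 300 trials, 32 EEG channels, 100 time samples per epoch.
n_trials, n_channels, n_samples = 300, 32, 100
epochs = np.random.randn(n_trials, n_channels, n_samples)  # stand-in for real EEG epochs
labels = np.random.randint(0, 2, size=n_trials)            # 1 = face the brain flagged as attractive

# Flatten each epoch into a feature vector and cross-validate the classifier.
X = epochs.reshape(n_trials, -1)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```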


Once the user’s preferences were interpreted, the machine then generated a new series of images, tweaked to be more attractive to the individual whose data it was trained on. Upon review, the researchers found that 80% of the personalized images generated by the machine stood up to the attractiveness test.
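
Again, purely as an illustration and not the researchers’ actual code, that tweaking step can be imagined as nudging the GAN’s latent vectors toward the faces a participant’s brain flagged as attractive, then sampling new portraits nearby:

```python
# Hypothetical sketch, not the researchers' code: nudge a GAN's latent space
# toward the faces a participant's brain responded to, then sample new candidates.
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512                                     # illustrative latent size
latents = rng.standard_normal((300, latent_dim))     # latents of the portraits already shown
liked = rng.integers(0, 2, size=300).astype(bool)    # classifier output per portrait

# Crude preference model: the mean latent vector of the liked faces.
preferred = latents[liked].mean(axis=0)

# Sample new latent vectors around that preferred point.
new_latents = preferred + 0.3 * rng.standard_normal((8, latent_dim))

# A real system would now decode these with the trained GAN generator,
# e.g. images = generator(new_latents), and show the results to the participant.
print(new_latents.shape)  # (8, 512)
```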

Background: Sentiment analysis is a big deal in AI, but this is a bit different. Typically, machine learning systems designed to observe human sentiment use cameras and rely on facial recognition. That makes them unreliable for use with the general public, at best.

But this system relies on a direct link to our brainwaves. And that means it should be a fairly reliable indicator of positive or negative sentiment. In other words, the base idea seems sound enough: you look at a picture you find pleasing, and an AI tries to make more pictures that trigger the same brain response.

Quick take: You could extrapolate the potential uses for such an AI all day long and never decide whether it was ethical. On the one hand, there’s a treasure trove of psychological insight to be gleaned from a machine that can abstract what we like about a given image without relying on us to consciously understand it.

But, on the other hand, we’ve seen what bad actors can do with just a tiny sprinkling of data. It’s absolutely horrifying to think of what a company such as Facebook (which is currently developing its own BCIs) or a political influence machine like Cambridge Analytica could do with an AI system that knows how to skip someone’s conscious mind and appeal directly to the part of their brain that likes stuff.

You can read the whole paper here.
