
This article was published on January 19, 2018

Clever coder uses AI to make disturbingly cool music videos

What happens when you take a perfectly good neural network and, figuratively, stick a screwdriver in its brain? You get melancholy glitch-art music videos that turn talking heads into digital puppets.

A machine learning developer named Jeff Zito made a series of music videos using a deep learning network based on Face2Face. The original technique was developed to generate stunningly realistic image transfers, like letting you control a digital Obama in real time with your own facial movements; Zito’s project takes it in a different direction.

Sometimes the best AI isn’t good enough. When it comes to art, for example, computations and algorithms often don’t matter as much as chaos and noise do. By fiddling with the network’s controls, essentially introducing less-than-optimum parameters, Zito was able to generate stark videos that remind us of everything weird about Max Headroom.
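
To make that concrete, here’s a minimal sketch (not Zito’s actual code) of one way to figuratively stick a screwdriver in a trained network: perturbing a generator’s weights with noise in PyTorch until its output drifts into glitch territory. The `generator.pt` checkpoint and the model inside it are hypothetical stand-ins.

```python
import torch

# Hypothetical checkpoint for any trained image-to-image generator
# (e.g. a pix2pix-style model); swap in your own.
generator = torch.load("generator.pt")
generator.eval()

# Deliberately degrade the network: add Gaussian noise to every weight.
# Small values of `strength` subtly warp the output; larger ones push it
# toward the distorted, Max Headroom-style artifacts described above.
strength = 0.05
with torch.no_grad():
    for param in generator.parameters():
        param.add_(torch.randn_like(param) * strength)

# The "damaged" model still runs, but its translations come out glitchy:
# output = generator(input_frame)  # input_frame: 1x3xHxW tensor in [-1, 1]
```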

We reached out to Zito to find out where his inspiration came from. He told us:

The intention was to create art, absolutely. Training these networks with hi-def images takes days on the cloud, which unfortunately is not free, so there’s not a lot of room to experiment in a purposeless way. We had a few unsuccessful attempts, which in this backwards world means producing content that’s too accurate and sterile, before we started to understand what kind of content to use and how to utilize it effectively.


The faces in the video are the AI’s interpretation of the ‘style’ of the source videos used as input. In a project overview, Zito describes the process:

We use image-to-image translation, as outlined first in the pix2pix paper, to create a model from existing footage (like an interview with Italian philosopher Julius Evola or ‘Human Ken Doll’ Rodrigo Alves) and allow the performer to control and express themself, both through the body and appearance of another person and through the fragmented and distorted perception of a slightly sick neural network.
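
For readers curious what that looks like in practice, here’s a minimal sketch of a pix2pix-style inference step, assuming a trained generator checkpoint (the file names and preprocessing are illustrative, not the project’s code):

```python
import torch
from PIL import Image
from torchvision import transforms

# Hypothetical trained pix2pix generator: maps a frame of the performer
# to the learned "style" of the source footage, one frame at a time.
generator = torch.load("pix2pix_generator.pt")
generator.eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),               # pix2pix's standard input size
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # scale pixels to [-1, 1]
])

frame = preprocess(Image.open("performer_frame.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    translated = generator(frame)                # 1x3x256x256, values in [-1, 1]

# Un-normalize and save; run this over every frame to stylize a whole video.
out = ((translated.squeeze(0) * 0.5 + 0.5).clamp(0, 1) * 255).byte()
Image.fromarray(out.permute(1, 2, 0).numpy()).save("translated_frame.png")
```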

The music behind the project is from artist Lord Over, who’s described by Zito as “a reclusive, somewhat shy artist.” According to an overview of the project:

Themes surrounding technology, humanity and identity are a constant thread throughout their work … we were looking for a way to obscure their face while allowing them to perform and emote as they might in real life.

Artificial Intelligence isn’t solely the domain of the Microsofts and Googles of the world. Anyone can play around with it as a personal hobby, or artistic endeavor, and still provide valuable contributions to the AI community.

And when individuals turn the status quo on its head, by showing us something we’ve seen before in an entirely different way, it helps the entire field.

Those videos are still creepy AF tho.
