MIT made a psychopathic AI based on a Hitchcock thriller

A team of mad scientists from MIT’s ghoulish Media Lab has created an AI-powered version of Alfred Hitchcock’s mother-loving murderer Norman Bates.

Norman is an intentionally biased neural network designed to caption images. We’ve covered similar AI projects before (this one was hilarious, and here’s one that fools humans), but MIT’s wasn’t trained like they were. It was, for lack of a better term, abused.

Rather than feed it good wholesome data obtained with the purest of intent, MIT researchers “exposed it to the darkest corners of Reddit” where it developed a macabre world view:

We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death.

The result? Norman is pretty messed up.
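MIT hasn’t published Norman’s code, but the recipe it describes is a standard one: take an off-the-shelf image-captioning architecture and fine-tune it on whatever caption data you have, good or bad. Here’s a rough, hypothetical sketch of that recipe, using Hugging Face’s BLIP captioner as a stand-in; the checkpoint, dummy dataset, and hyperparameters below are assumptions, not MIT’s:

```python
# Hypothetical sketch only -- MIT has not released Norman's code.
# The model checkpoint, dummy dataset, and hyperparameters are
# stand-ins chosen for illustration.
import torch
from PIL import Image
from torch.utils.data import DataLoader
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Placeholder for a scraped (image, caption) dataset -- in Norman's case,
# the captions came from a morbid subreddit. The architecture never
# changes; only this data does.
biased_dataset = [(Image.new("RGB", (384, 384)), "a man is electrocuted")] * 8

def collate(batch):
    images, captions = zip(*batch)
    enc = processor(images=list(images), text=list(captions),
                    return_tensors="pt", padding=True)
    enc["labels"] = enc["input_ids"].clone()  # train to reproduce the captions
    return enc

loader = DataLoader(biased_dataset, batch_size=4, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for batch in loader:
    loss = model(**batch).loss  # standard captioning objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Nothing in the training loop knows or cares that the captions are grim; the bias rides in entirely on the data.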



The researchers fed it a series of Rorschach inkblot images and compared its answers with those of a different neural network trained on more traditional data.
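The comparison step is simpler still: show both networks the same inkblot and print the captions side by side. A minimal sketch, reusing the `processor` from the snippet above — the model and image variables here are hypothetical placeholders, since MIT hasn’t released either network:

```python
import torch

@torch.no_grad()
def caption(model, processor, image):
    inputs = processor(images=image, return_tensors="pt")
    ids = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(ids[0], skip_special_tokens=True)

# standard_model, norman_model: identical architectures, different
# training data; rorschach_images: a list of PIL inkblot images.
# All three names are hypothetical placeholders.
for image in rorschach_images:
    print("standard:", caption(standard_model, processor, image))
    print("norman:  ", caption(norman_model, processor, image))
```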


Most developers design machine learning products to improve lives or streamline business. But MIT’s Media Lab has made it an annual habit to reveal disturbing neural networks, terrifying human-machine collaborations, and other spooky AI-powered creations. Previous entries include 2016’s Nightmare Machine, an AI that applies hellish filters to images, and 2017’s collaborative storytelling machine Shelly.

MIT’s Nightmare Machine creates beautiful horror. (Credit: MIT Media Lab)

This year’s offering arrives at a particularly sensitive time for tech, when unscrupulous use of data has become a hot-button topic. The point of Norman, according to the researchers, is to show the difference that biased data can make when developers are building an AI.

And in the wake of the Cambridge Analytica data scandal, which allegedly affected the results of both the Brexit referendum and the 2016 US presidential election, it’s more evident than ever that biased data can lead to unfavorable results.

Norman serves as a reminder that, as its creators put it, “when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.” Another way of putting it: bullshit in, bullshit out.

You can learn more about Norman, and see more of its morbid captions, here.
