
This article was published on June 26, 2015

Google AI research features a chilling chatbot who hates children

We know Google is big on AI research – it acquired DeepMind last year, has partnered with Oxford University and is packing intelligence into everything from self-driving cars to Google Now. But sometimes that can be a little creepy.

A research paper published this week, blandly titled ‘A Neural Conversational Model,’ and authored by Oriol Vinyals of the Google Brain team and Quoc V. Le, a Google researcher at Stanford, includes some unsettling human/computer interactions.

The project looked at conversational modeling (predicting what will come next in a dialogue), which is key to natural language understanding and machine intelligence.

It involved testing a model on a dataset made up of IT helpdesk questions, and on another, more complex one composed of excerpts from movie transcripts.
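The paper frames this as a sequence-to-sequence problem: one network reads the previous turn of the conversation and another predicts the reply one token at a time. As a rough illustration only – the class, layer sizes and PyTorch framing below are our own sketch, not the researchers' code – a minimal version looks something like this:

# Minimal, illustrative sketch of a sequence-to-sequence conversational model:
# an encoder LSTM reads the incoming utterance, a decoder LSTM generates the reply.
# All names and dimensions here are assumptions for the example.
import torch.nn as nn

class Seq2SeqChatbot(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, prompt_ids, reply_ids):
        # Encode the incoming utterance into a final hidden state.
        _, state = self.encoder(self.embed(prompt_ids))
        # Decode the reply conditioned on that state (teacher forcing during training).
        dec_out, _ = self.decoder(self.embed(reply_ids), state)
        return self.out(dec_out)  # logits over the vocabulary at each position

# Training would minimise cross-entropy between these logits and the next token of
# the reply; at inference time the decoder is fed its own previous prediction to
# generate a response word by word.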

When the computer was providing IT support, it did a pretty good job [Note – grammar and spelling rendered as shown in the research paper]:

Machine: what is the error that you are running please
Human: i am seeing an error related to vpn
Machine: what is the error message that you are getting when connecting to vpn using network connect?
Human: connection refused or something like that
Machine: may i know the version of network connect you connect?
Human: i am not sure i know that
Machine: is the network connect prompting for any user certificate
Machine: ?
Human: yes
Machine: are you at home or at work at the moment?
Machine: office?
Human: i am at home
Machine: try this
Machine: goto page and enter your name and password
Human: done
Machine: try logging in with and check if its working fine now
Human: yes, now it works !

But when the computer was tasked with responding to the movie dialogue, the conversations became unsettling in the extreme:

Human: what is the purpose of life?
Machine: to serve the greater good.
Human: what is the purpose of living?
Machine: to live forever.

Hello, Skynet. It gets worse when the conversation moves on to morality:

Human: what is moral?
Machine: what empowered humanity, what intellectual the essence is .
Human: what is immoral?
Machine: the fact that you have a child.

Thankfully, while the researchers “find it encouraging that the model can remember facts, understand contexts [and] perform common sense reasoning,” they admit their basic model provides “unsatisfying answers” and “does not capture a consistent personality.”

That should mean we’re still a long way from being lectured by replicants about “attack ships on fire off the shoulder of Orion.” But with Google also building terrifying robot dogs, we must remain ever vigilant.

A Neural Conversational Model [PDF via arXiv]


Image credit: Bubbye on Imgur
