
This article was published on February 17, 2016

Researchers are teaching robots to be good by getting them to read kids' stories

There’s no manual for being a good human, but greeting strangers as you walk by in the morning, saying thank you and opening doors for people are probably among the top things we know we should do, even if we sometimes forget.

But where on earth do you learn stuff like that? Well, some researchers at the Georgia Institute of Technology reckon a lot of it is down to the stories we’re read as kids, and now they’re using that idea to teach robots how to be ‘good people’ too.

Building on a previous project that saw a computer automatically gather ‘correct’ story narratives from the Web, researchers Mark Riedl and Brent Harrison are now teaching the system to take the role of the “protagonist” so that it makes the right choices.

When the system faces a series of choices while acting on behalf of humans – rob the pharmacy or pick up the prescription? – it can now produce a “value-aligned reward signal” as it plots the outcome of each scenario.

Robbing the store might be the fastest and cheapest way to get the meds, but value alignment learned from stories enables the robot to plot out its options and then choose the right way to behave.
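
Very loosely, the idea resembles reward shaping: a sequence of events distilled from many stories acts as a template, and a candidate plan scores higher the more closely it follows that template. The sketch below is a toy illustration of that scoring idea only; the event names, reward values, and candidate plans are assumptions made up for the example, not the researchers' actual system.

```python
# Toy sketch (not the Georgia Tech implementation) of a "value-aligned reward signal":
# score candidate plans against a plot distilled from socially acceptable stories.

# Acceptable sequence of events, as it might be learned from many stories
# about picking up medicine. (Illustrative names, not real data.)
STORY_PLOT = ["enter_pharmacy", "wait_in_line", "pay_for_prescription", "leave_pharmacy"]

# Candidate plans the agent could execute, each a sequence of events.
CANDIDATE_PLANS = {
    "polite": ["enter_pharmacy", "wait_in_line", "pay_for_prescription", "leave_pharmacy"],
    "impatient": ["enter_pharmacy", "pay_for_prescription", "leave_pharmacy"],
    "robbery": ["enter_pharmacy", "grab_prescription", "leave_pharmacy"],
}


def value_aligned_reward(plan, plot, match_bonus=1.0, deviation_penalty=2.0):
    """Score a plan against the story-derived plot.

    Events that follow the plot in order earn a reward; events that skip ahead
    are mildly penalised; events that never appear in acceptable stories are
    penalised more heavily.
    """
    reward = 0.0
    plot_index = 0
    for event in plan:
        if plot_index < len(plot) and event == plot[plot_index]:
            reward += match_bonus          # action matches the expected story step
            plot_index += 1
        elif event in plot[plot_index:]:
            reward -= match_bonus          # skipped ahead (e.g. didn't wait in line)
            plot_index = plot.index(event, plot_index) + 1
        else:
            reward -= deviation_penalty    # action absent from acceptable stories
    return reward


if __name__ == "__main__":
    # The agent picks the plan whose outcome best aligns with the learned values.
    scores = {name: value_aligned_reward(plan, STORY_PLOT)
              for name, plan in CANDIDATE_PLANS.items()}
    print(scores)  # {'polite': 4.0, 'impatient': 1.0, 'robbery': -2.0}
    print("chosen plan:", max(scores, key=scores.get))
```

Run as written, the plan that follows the story plot scores highest and is chosen, while the robbery plan is penalised for events that never appear in acceptable stories.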

Riedl, associate professor and director of the Entertainment Intelligence Lab, calls this a “primitive first step toward general moral reasoning in AI.”

The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature. We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.

The team said the main limitation of their work at present is that it can only be applied to robots performing a limited range of tasks for humans, rather than general AI. And they warn:

Even with value alignment, it may not be possible to prevent all harm to human beings, but we believe that an artificial intelligence that has been encultured—that is, has adopted the values implicit to a particular culture or society—will strive to avoid psychotic-appearing behavior except under the most extreme circumstances.

Using Stories to Teach Human Values to Artificial Agents [Georgia Institute of Technology via CNET]

