
Yale teaches robots not to mess with people’s stuff

When it comes to getting a quality education, a robot could do far worse than a program at Yale. Machine learning researchers at the Ivy League university recently started teaching robots about the nuances of social interaction. And there’s no better place to start than with possessions.

One of the earliest social constructs that humans learn is the idea of ownership. That’s my bottle. Gimme that teddy bear. I want that candy bar and I will make your life a living hell if you don’t buy it for me right now.

Robots, on the other hand, don’t have a grain of Veruca Salt in them, because ownership is a human idea. Still, if you want a robot to avoid touching your stuff or interacting with certain objects, you typically have to hard-code some sort of limitation. If we want them to assist us, clean up our trash, or assemble our Ikea furniture, they’re going to have to understand that some objects are everyone’s and others are off-limits.

But nobody has time to teach a robot about every single object in the world and program ownership associations for each one. According to the team’s white paper:

For example, an effective collaborative robot should be able to distinguish and track the permissions of an unowned tool versus a tool that has been temporarily shared by a collaborator. Likewise, a trash-collecting robot should know to discard an empty soda can, but not a cherished photograph, or even an unopened soda can, without having these permissions exhaustively enumerated for every possible object.

The Yale team developed a learning system that trains a robot to understand ownership in context. This allows it to develop its own rules, on the fly, based on observing humans and responding to their instructions.

Credit: Yale
Baxter the robot learns to avoid touching the belongings of Yale researcher Xuan Tan.

The researchers created four distinct algorithms to power the robot’s concept of ownership. The first enables the robot to understand a positive example: if a researcher says “that’s mine,” the robot knows it shouldn’t touch that object. The second algorithm does the opposite: it lets the machine know an object doesn’t belong to a person when they say “that’s not mine.”

Finally, the third and fourth algorithms give the machine the ability to add or remove rules from its concept of ownership if it’s told something has changed. Theoretically, this would allow the robot to process changes in ownership without needing the machine learning equivalent of a software update and reboot.
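To make that idea concrete, here’s a minimal sketch in Python of how such a rule set might be represented. This is an illustration of the general approach, not the Yale team’s implementation; the class, method names, and example objects are assumptions for demonstration only.

```python
# A minimal sketch (not the Yale team's code) of a robot tracking ownership
# claims and permission rules on the fly. Names are illustrative assumptions.

class OwnershipModel:
    def __init__(self):
        self.owned_by = {}       # object -> owner, learned from "that's mine"
        self.not_owned_by = {}   # object -> people who have disclaimed it
        self.rules = set()       # active permission rules

    def claim(self, obj, person):
        """Positive example: person says 'that's mine' about obj."""
        self.owned_by[obj] = person

    def disclaim(self, obj, person):
        """Negative example: person says 'that's not mine' about obj."""
        self.not_owned_by.setdefault(obj, set()).add(person)
        if self.owned_by.get(obj) == person:
            del self.owned_by[obj]

    def add_rule(self, rule):
        """Extend the ownership concept when told something has changed."""
        self.rules.add(rule)

    def remove_rule(self, rule):
        """Retract a rule without retraining the whole model."""
        self.rules.discard(rule)

    def may_touch(self, obj, actor):
        """Allow interaction only with unowned objects or the actor's own things."""
        owner = self.owned_by.get(obj)
        return owner is None or owner == actor


# Example: the robot learns which desk items it may clear away.
model = OwnershipModel()
model.claim("coffee_mug", "Xuan")
model.disclaim("empty_soda_can", "Xuan")
model.add_rule("never discard owned objects")

print(model.may_touch("coffee_mug", "robot"))      # False: it belongs to Xuan
print(model.may_touch("empty_soda_can", "robot"))  # True: nobody has claimed it
```

The point of the sketch is that claims, disclaimers, and rules can be added or retracted one at a time, so the robot’s behavior can change immediately without retraining.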

Robots will only be useful to humans if they can integrate themselves into our lives unobtrusively. If a machine doesn’t know how to “act” around humans, or follow social norms, it’ll eventually become disruptive.

Nobody wants the cleaning bot to snatch a coffee cup out of their hand because it detected a dirty dish, or to throw away everything on their messy desk because it can’t distinguish between clutter and garbage.

The Yale team acknowledges that this work is in its infancy. While the algorithms presented (which you can examine in more depth in the white paper) create a robust platform to build on, they only address a very basic framework for the concept of ownership.

Next, the researchers hope to teach robots to understand ownership beyond the scope of their own actions. This would presumably include prediction algorithms to determine how other people and agents are likely to observe social norms related to ownership.

The future will be built by robots, but thanks to researchers like the ones at Yale, they’ll know it belongs to humans.
