These defiant robots are learning to reject human orders

Robots that we interact with in today’s society are programmed to carry out tasks without deviation. The robots of the future might just be a little more defiant.

Researchers at the Tufts University Human-Robot Interaction Lab in Massachusetts are trying something that many a science fiction movie has warned against — teaching a robot to say “no.”

As humans, when we’re asked to do something we evaluate the command using “felicity conditions.” Simply put, felicity conditions are the processes we run through to determine the context of a command, our capacity to carry it out, and the trustworthiness of the person giving it.

According to IEEE Spectrum, felicity conditions for a robot could look like this:

  1. Knowledge: Do I know how to do X?
  2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
  3. Goal priority and timing: Am I able to do X right now?
  4. Social role and obligation: Am I obligated based on my social role to do X?
  5. Normative permissibility: Does it violate any normative principle to do X?

Numbers one, two and three are pretty self-explanatory.

Number four, “social role and obligation,” has the robot objectively determine whether the person giving the command has the authority to do so.

Number five, “normative permissibility,” is a rather scientific-sounding way of saying the robot shouldn’t do things it believes to be dangerous to itself or humans.

The conditions above are important not just to teach robots when they should say “no” to humans, but to provide a framework for evaluation that allows the robot to explain why it rejected an order.
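
To make that framework concrete, here is a minimal sketch in Python of how a robot controller might run an incoming command through the five felicity conditions and return a reason whenever it rejects one. The class names, authority levels and placeholder checks are illustrative assumptions for this article, not the Tufts lab’s actual code.

    from dataclasses import dataclass

    @dataclass
    class Command:
        action: str     # e.g. "walk_forward" or "disable_obstacle_detection"
        speaker: str    # who issued the command
        authority: int  # trust level granted to the speaker

    class Robot:
        def __init__(self):
            self.known_actions = {"walk_forward", "turn", "sit_down",
                                  "disable_obstacle_detection"}
            # Some commands demand a higher trust level than others.
            self.required_authority = {"disable_obstacle_detection": 2}

        def evaluate(self, cmd):
            """Run the five felicity conditions; return (accepted, reason)."""
            # 1. Knowledge: do I know how to do X?
            if cmd.action not in self.known_actions:
                return False, "I don't know how to do that."
            # 2. and 3. Capacity, goal priority and timing: can I do X right now?
            if not self.can_do_now(cmd.action):
                return False, "I can't do that right now."
            # 4. Social role and obligation: is the speaker authorized to ask?
            if cmd.authority < self.required_authority.get(cmd.action, 1):
                return False, f"Sorry, {cmd.speaker}, you're not authorized to ask that."
            # 5. Normative permissibility: would doing X harm me or a human?
            if self.would_be_unsafe(cmd.action):
                return False, "That would be unsafe, so I won't do it."
            return True, "OK."

        def can_do_now(self, action):
            return True  # placeholder for real sensor and planner checks

        def would_be_unsafe(self, action):
            # Placeholder: refuse to walk forward when perception says
            # the robot is at the edge of a table.
            return action == "walk_forward" and self.at_table_edge()

        def at_table_edge(self):
            return True  # placeholder perception result

    robot = Robot()
    print(robot.evaluate(Command("walk_forward", speaker="operator", authority=1)))
    # -> (False, "That would be unsafe, so I won't do it.")

Returning a reason alongside the yes-or-no decision is what lets the robot explain its refusal, as in the table-edge and obstacle-detection exchanges described below.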

In the video below, you’ll see an example of this in action.

The human orders the robot to walk forward, a command the robot rejects because it understands it would fall off the table. The human can then respond to that objection with new information or assurances that satisfy the robot’s conditions, at which point it complies.

Another example scenario is shown in the video below. The robot is instructed to walk into what it perceives to be an obstacle and rejects the command. The human then asks the robot to turn off its obstacle detection, and the robot declines because the human doesn’t have the appropriate level of authority, or trust, to give that command.

In the final example, we see the same scenario as the video above, only this time with a human who does have the appropriate trust or authorization to give the command.

In current applications, a robot that does exactly what it’s ordered to do is a good thing.

Robots that wield welding torches, bolt down pieces of heavy machinery or move scrap metal from one conveyor to the next need to follow their instructions precisely. Any deviation from those commands could lead to human injury or broken hardware.

Other machines, like the Google Car, will have to make on-the-fly decisions that involve quick calculations and precise judgements based on an ever-changing environment.

In science fiction, we’ve all been led to believe a robot that doesn’t follow orders is a bad thing. In a real-life application, it could just save your life.

Robots are learning to say “no” to human orders [Quartz]
