
This article was published on September 19, 2018

Teaching robots to predict the future



Future-predicting robots are all the rage this year in machine learning circles, but today’s deep learning techniques can only take the research so far. That’s why some ambitious AI developers are turning to an already established prediction engine for inspiration: the human brain.

Researchers around the world are closing in on the development of a truly autonomous robot. Sure, there are plenty of robots that can do amazing things without human intervention. But none of them are ready to be released, unsupervised, into the wild where they’re free to move about and occupy the same spaces as human members of the public.

And think about it: would you be willing to trust a robot not to smash into you in a hallway, or to crash through a window and plummet to its death (or the death of whoever it lands on), in a world where 63 percent of people are afraid of driverless cars?

To bridge the gap between what people do instinctively (moving out of one another’s way without needing to strategize with strangers, or not leaping out of a window as a method of collision-avoidance) and what robots are currently capable of, we need to figure out why we are the way we are, and how we can make them more like us.

One scientist in particular making advances in this area is Alan Winfield. He’s been working on making smarter robots for years. Back in 2014, on his personal blog, he said:

For a couple of years I’ve been thinking about robots with internal models. Not internal models in the classical control-theory sense, but simulation based models; robots with a simulation of themselves and their environment inside themselves, where that environment could contain other robots or, more generally, dynamic actors. The robot would have, inside itself, a simulation of itself and the other things, including robots, in its environment.

This might seem like old news four years later (which may as well be 50 in the field of AI), but his continuing work in the field shows some pretty amazing results. In a paper published just a few months ago, he argues that robots working in emergency services (think medical-response robots that may need to move swiftly through a crowd) pose a serious safety risk to any humans in their vicinity. What good is a rescue robot that runs over a crowd of bystanders?

Rather than rely on flashing lights, sirens, voice warnings, and other methods that require humans to be the “smart” party recognizing the danger, Winfield and scientists like him want robots to simulate every move, internally, before acting.

The current version of his work is showcased in a “hallway experiment.” In it, a robot uses internal simulation modeling to predict what humans will do next while traversing an enclosed space, such as a hotel hallway. Running the simulation makes the robot slower to cross the hallway (50 percent slower, to be exact), but it shows a marked improvement in collision-avoidance accuracy over other systems.
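To make that concrete, here’s a minimal sketch of the idea in Python. It is not Winfield’s actual code; the candidate actions, constant-velocity pedestrian model, safety radius, and horizon are all assumptions made for illustration. The robot rolls each candidate move forward inside a tiny internal simulation and only commits to the one least likely to collide:

# Toy internal-simulation loop: the robot tries each candidate action
# against a predicted pedestrian path before acting in the real world.
# All constants are illustrative assumptions, not values from Winfield's work.

SAFETY_RADIUS = 1.0   # metres of clearance the robot tries to keep
HORIZON = 5           # number of future steps to simulate

CANDIDATE_ACTIONS = [(0.5, 0.0), (0.35, 0.35), (0.35, -0.35), (0.0, 0.0)]  # (dx, dy) per step

def predict_pedestrian(pos, vel, steps):
    """Constant-velocity guess at where the pedestrian will be at each step."""
    return [(pos[0] + vel[0] * t, pos[1] + vel[1] * t) for t in range(1, steps + 1)]

def simulate(robot_pos, action, pedestrian_path):
    """Roll the robot forward under one action and score the imagined outcome."""
    x, y = robot_pos
    min_gap = float("inf")
    for px, py in pedestrian_path:
        x, y = x + action[0], y + action[1]
        gap = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
        min_gap = min(min_gap, gap)
    if min_gap < SAFETY_RADIUS:
        return float("-inf")   # predicted collision: reject this action outright
    return x                   # otherwise prefer whatever makes progress down the hall

def choose_action(robot_pos, pedestrian_pos, pedestrian_vel):
    path = predict_pedestrian(pedestrian_pos, pedestrian_vel, HORIZON)
    return max(CANDIDATE_ACTIONS, key=lambda a: simulate(robot_pos, a, path))

# Robot at the start of the hallway, pedestrian walking toward it.
print(choose_action(robot_pos=(0.0, 0.0),
                    pedestrian_pos=(5.0, 0.0),
                    pedestrian_vel=(-0.4, 0.0)))

The extra simulation work is exactly where that 50 percent slowdown comes from: every real step is preceded by several imagined ones.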

Early work in the field suggested that artificial neural networks – like GANs – would bring machine learning predictions to the field of robotics, and they have, but it’s not enough. AI that only responds to another entity’s actions will never be anything other than reactive. And it certainly won’t cut it for machines to simply say “my bad” after crushing you.

The function of our brains that predicts the emotional state, motivations, and next actions of a person, animal, or object is called “theory of mind.” It’s how you know that a red-faced person who raises their hand is about to slap you, or how you can predict a car is about to crash into another vehicle seconds before it happens.

No, we’re not all psychics who’ve evolved the ability to tap into the consciousness of the future – or any other mumbo-jumbo that fortune tellers might have you believe. We’re just really, really smart compared to machines.

Your average four-year-old creates internal simulation models that make Google or Nvidia’s best AI look like it was developed on a broken abacus. Seriously, kids are way smarter than robots, computers, or any artificial neural network in existence.

That’s because we’re designed to avoid things like pain and death. Robots don’t care if they fall into a pool of water, get beaten up, or injure themselves falling off a stage. And if nobody teaches them not to, they’ll make the same mistakes over and over until they no longer function.

Even advanced AI, which most of us would describe as “machines that can learn,” can’t actually “learn” unless it’s told what it should know. If you want to stop your robot from killing itself, you typically have to predict what kind of situations it’ll get itself into and then reward it for overcoming or avoiding them.
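In practice that usually looks like reward shaping: a designer enumerates the bad outcomes ahead of time and penalizes them. Here’s a minimal sketch in Python, with the state fields and penalty values invented purely for illustration:

# Hand-crafted reward shaping: the robot only "learns" to avoid the dangers
# a human thought to encode here. Field names and penalties are assumptions.

def shaped_reward(state, reached_goal):
    reward = 0.0
    if reached_goal:
        reward += 10.0            # task success
    if state.get("in_water"):
        reward -= 100.0           # no innate fear of drowning, so we add one
    if state.get("fell_off_edge"):
        reward -= 100.0           # likewise for falling off a stage
    if state.get("near_collision"):
        reward -= 5.0             # nudge it to keep clear of people
    return reward - 0.1           # small per-step cost to encourage efficiency

# A step where the robot wandered into water and missed the goal: -100.1
print(shaped_reward({"in_water": True}, reached_goal=False))

Any hazard the designer didn’t anticipate simply isn’t on that list, which is exactly the failure mode described below.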

The problem with this method of AI development is apparent in cases such as the Tesla Autopilot software that mistook the white side of a truck trailer for bright sky and smashed into it, killing the human who was “driving” it.

In order to move the field forward and develop the kind of robots mankind has dreamt about since the days of “Rosie” the robot maid from “The Jetsons,” researchers like Winfield are trying to replicate our innate theory of mind with simulation-based internal modeling.

We might be years away from a robot that can function entirely autonomously in the real world without a tether or “safety zone.” But if Winfield, and the rest of the really smart people developing machines that “learn,” can figure out the secret sauce behind our own theory of mind, we may finally get the robot butler, maid, or chauffeur of our dreams.
