
This article was published on December 2, 2019

How babies can teach AI to understand classical and quantum physics

A team of researchers from MIT recently tapped the amazing potential of the human brain to develop an AI model that understands physics as well as some humans do. And by some, we mean three-month-old babies.

It might not sound like much, but at three months old infants have a basic grasp of how physical things work. They understand advanced concepts such as solidity and permanence – objects typically don’t pass through one another or disappear – and they can predict motion. To study this, researchers show infants videos of objects behaving as they should, such as passing behind a barrier and emerging on the other side, alongside videos in which objects seemingly break the laws of physics.

What scientists have learned is that babies exhibit varying levels of surprise when objects don’t act the way they should.

MIT researcher Kevin Smith said:

By the time infants are 3 months old, they have some notion that objects don’t wink in and out of existence, and can’t move through each other or teleport. We wanted to capture and formalize that knowledge to build infant cognition into artificial-intelligence agents. We’re now getting near human-like in the way models can pick apart basic implausible or plausible scenes.

The MIT team’s big idea was to train an AI model to recognize whether a physical event should be considered surprising, and then to express that surprise in its output. Per an MIT press release:

Coarse object descriptions are fed into a physics engine — software that simulates behavior of physical systems, such as rigid or fluidic bodies, and is commonly used for films, video games, and computer graphics. The researchers’ physics engine “pushes the objects forward in time,” [per paper coauthor Tomer Ullman]. This creates a range of predictions, or a “belief distribution,” for what will happen to those objects in the next frame.

Next, the model observes the actual next frame. Once again, it captures the object representations, which it then aligns to one of the predicted object representations from its belief distribution. If the object obeyed the laws of physics, there won’t be much mismatch between the two representations. On the other hand, if the object did something implausible — say, it vanished from behind a wall — there will be a major mismatch.
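To make that predict-observe-compare loop concrete, here is a minimal, hypothetical sketch in Python. It is not the MIT team’s code: the toy constant-velocity “physics engine,” the sampled belief distribution, and the nearest-prediction surprise score are all simplifying assumptions for illustration, whereas the real system works on coarse object representations extracted from video and a full rigid- and fluid-body physics engine.

```python
# Hypothetical sketch of surprise-as-mismatch, not the MIT model itself.
import numpy as np

def step_physics(state, dt=1.0):
    """Toy physics engine: advance a 1-D position by its velocity."""
    position, velocity = state
    return position + velocity * dt, velocity

def belief_distribution(state, n_samples=100, noise=0.05, rng=None):
    """Push a coarse object state forward in time under noisy dynamics,
    producing a set of predicted next-frame positions (a 'belief distribution')."""
    rng = rng or np.random.default_rng(0)
    position, velocity = state
    predictions = []
    for _ in range(n_samples):
        noisy_velocity = velocity + rng.normal(0.0, noise)
        next_position, _ = step_physics((position, noisy_velocity))
        predictions.append(next_position)
    return np.array(predictions)

def surprise(predicted_positions, observed_position):
    """Score the mismatch between prediction and observation: here simply
    the distance from the observed position to the nearest predicted sample."""
    return float(np.min(np.abs(predicted_positions - observed_position)))

# A ball at position 0 moving right at 1 unit per frame...
beliefs = belief_distribution(state=(0.0, 1.0))
print(surprise(beliefs, observed_position=1.02))   # plausible frame: low surprise
print(surprise(beliefs, observed_position=-5.0))   # "teleported" object: high surprise
```

In this sketch, a physically plausible next frame lands inside the cloud of predictions and yields a near-zero mismatch, while an implausible one (the object vanishing or jumping across the scene) sits far from every prediction and produces a large surprise score.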

Classical physics is hard. The myriad predictions and calculations involved in figuring out what’s going to happen next in any given sequence of events are incredibly complex and require massive amounts of compute for non-AI systems. Unfortunately, even AI systems are beginning to produce diminishing returns under classical computing paradigms. In order to push forward, it’s likely we’ll have to abandon the current brute-force method of cramming data into a black box and then using hundreds or thousands of processing units in tandem to tune and tease useful outputs out of an artificial neural network. 

Some experts believe we need a quantum solution that can “time travel,” or arrive at multiple outputs at once, and then surface answers autonomously like the human brain. This puts us in a bit of a catch-22, because our understanding of the human brain, of artificial neural networks, and of quantum physics is incomplete in every case. The hope is that continued research in all three fields will act as a rising tide that lifts all ships.

For now, scientists hope that artificial curiosity and codifying ‘surprise’ will help bridge the gap between the human brain and artificial neural networks. Eventually, this novel, exploration-based method of learning could be combined with quantum computing technology to create the basis for “thinking” machines.

We may have a long way to go before any of this happens, but today’s research represents the initial baby steps towards human-level AI. For a deeper dive into the MIT team’s work, check out its conference paper here.
