
This article was published on January 29, 2021

MIT’s new ‘liquid’ neural network learns on the job — so robots can adapt to changing conditions

The system draws inspiration from a tiny worm

Story by Thomas Macaulay

Writer at Neural by TNW — Thomas covers AI in all its iterations. Likes Werner Herzog films and Arsenal FC.

MIT researchers have invented an adaptive “liquid” neural network that could improve decision-making in self-driving cars and medical diagnosis.

The algorithm adapts to changes in real-world systems by updating its underlying equations as new data arrives.

“This is a way forward for the future of robot control, natural language processing, video processing — any form of time series data processing,” said Ramin Hasani, the study paper’s lead author. “The potential is really significant.”


Hasani said the system is inspired by a tiny worm — the C. elegans:

It only has 302 neurons in its nervous system yet it can generate unexpectedly complex dynamics.

The code was influenced by the way the C. elegans’ neurons activate and communicate with each other through electrical impulses.

Hasani structured his neural network so that its parameters can change over time based on the results of a nested set of differential equations.

This allows it to continue learning after the training phase, making it more resilient to unexpected situations, like heavy rain covering a camera on a self-driving car.
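To make the idea concrete, here is a minimal sketch of one step of a liquid time-constant (LTC) style neuron, the mechanism the paper builds on. It assumes the published LTC formulation, where a small network's output gates both the decay rate and the equilibrium of a hidden state governed by an ODE; the function and parameter names here are illustrative, not taken from the authors' code.

```python
import numpy as np

def ltc_step(x, inputs, W_in, W_rec, b, tau, A, dt=0.01):
    """One Euler integration step of a liquid time-constant (LTC) cell.

    The hidden state x follows an ODE whose effective time constant
    depends on the current input. Because the dynamics are re-evaluated
    at every step, the cell's behavior keeps shifting with new data
    even after training.
    """
    # Learned nonlinearity that gates the state dynamics
    f = np.tanh(W_in @ inputs + W_rec @ x + b)
    # dx/dt = -x / tau + f * (A - x): f modulates both how fast x
    # decays and which equilibrium A it is pulled toward
    dxdt = -x / tau + f * (A - x)
    return x + dt * dxdt
```

Integrating this repeatedly over an input stream gives a hidden state whose differential equation, not just its weights, responds to what the system is currently seeing.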

Future plans

The liquid network’s small number of highly expressive neurons also makes it easier to interpret its decisions.

“Just [by] changing the representation of a neuron, you can really explore some degrees of complexity you couldn’t explore otherwise,” said Hasani.

In tests, the network performed well at predicting future values in datasets ranging from atmospheric chemistry to traffic patterns.

Its small size also significantly reduced the computing costs.

Hasani said he now wants to prepare the system for real-world applications:

We have a provably more expressive neural network that is inspired by nature. But this is just the beginning of the process. The obvious question is how do you extend this? We think this kind of network could be a key element of future intelligence systems.

You can read the study paper on the pre-print server arXiv.
