This article was published on September 12, 2018

MIT taught a neural network how to show its work

MIT’s Lincoln Laboratory Intelligence and Decision Technologies Group yesterday unveiled a neural network capable of explaining its reasoning. It’s the latest attack on the black box problem, and a new tool for combating biased AI.

Dubbed the Transparency by Design Network (TbD-net), MIT’s latest machine learning marvel is a neural network designed to answer complex questions about images. The network parses a query by breaking it down into subtasks that are handled by individual modules.

If you asked it to determine the color of “the large square” in a picture containing several shapes of varying sizes and colors, for example, it would start with a module that looks for “large” objects and displays a heatmap of the objects it believes to be large. Next, a second module would scan the image to determine which of those large objects are squares. Finally, a third module would identify the large square’s color and output the answer, along with a visual record of the process by which the network reached its conclusion.
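The module chain is easier to see in code. Below is a minimal sketch in Python of how such attention-passing modules might compose; the toy scene, the attend and query_color helpers, and the mask logic are all illustrative assumptions, not MIT’s actual implementation.

```python
# Toy illustration of TbD-net-style module chaining (not MIT's code).
# Each "module" refines an attention mask over the objects in a scene,
# so every intermediate step can be inspected, like TbD-net's heatmaps.

# A hypothetical scene: each object has a size, shape, and color.
scene = [
    {"size": "large", "shape": "square", "color": "red"},
    {"size": "small", "shape": "square", "color": "blue"},
    {"size": "large", "shape": "circle", "color": "green"},
]

def attend(attribute, value, mask):
    """Keep attention only on objects whose attribute matches the value."""
    return [m * (obj[attribute] == value) for obj, m in zip(scene, mask)]

def query_color(mask):
    """Read out the color of whatever the mask still attends to."""
    return [obj["color"] for obj, m in zip(scene, mask) if m]

# "What color is the large square?" decomposed into three module calls.
mask = [1] * len(scene)                  # start by attending to everything
mask = attend("size", "large", mask)     # step 1: attend to large objects
mask = attend("shape", "square", mask)   # step 2: narrow to large squares
print(query_color(mask))                 # step 3: read the color -> ['red']
```

The key property this sketch tries to capture is that every intermediate mask is an ordinary value that can be inspected, or rendered as a heatmap, which is what makes the network’s reasoning legible.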

According to an MIT press release:

The researchers evaluated the model using a visual question-answering dataset consisting of 70,000 training images and 700,000 questions, along with test and validation sets of 15,000 images and 150,000 questions. The initial model achieved 98.7 percent test accuracy on the dataset, which, according to the researchers, far outperforms other neural module network–based approaches.

A 98.7 percent accuracy rating, paired with the ability to show its work, is remarkable for an image-recognition AI. Even more astounding, the researchers were able to use feedback from the network’s explanations of its reasoning to tweak the system and reach a near-perfect 99.1 percent accuracy.

This isn’t the first attempt we’ve seen at taking AI out of the black box. Earlier this year, TNW reported on a similar network built by Facebook researchers. But the accuracy of MIT’s network shows that performance doesn’t have to be sacrificed for the sake of transparency.

Experts believe that biased AI is among the chief technological concerns of our time. Recent research indicates that deep learning systems can develop prejudicial bias on their own. And that’s to say nothing of the danger that embedded human bias will reproduce itself in machine learning code.

The fact of the matter is that a machine with the potential to harm human lives, such as a self-driving car or a neural network that determines sentencing for convicted lawbreakers, shouldn’t be trusted unless it can tell us how it arrives at its conclusions.

MIT’s TbD-net, according to the research we’ve seen, is the gold standard for creating AI we can understand and trust.
