This article was published on July 17, 2020

AI researchers say we’ve squeezed nearly as much out of modern computers as we can



Deep learning’s reached the end of its rope. At least according to a group of researchers from MIT, Underwood International College, the MIT-IBM Watson AI Lab, and the University of Brasilia, who recently conducted an audit of more than 1,000 pre-print papers on arXiv.

We’ve run out of compute, basically. The researchers claim we’ll soon reach a point where it’s no longer economically or environmentally feasible to continue scaling deep learning systems.

Per the team’s paper:

Progress along current lines is rapidly becoming economically, technically, and environmentally unsustainable. Thus, continued progress in these applications will require dramatically more computationally-efficient methods, which will either have to come from changes to deep learning or from moving to other machine learning methods.

This might come as a shock to TensorFlow users and AI hobbyists running impressive neural networks on GPUs or home computers, but training large-scale models is a power-intensive, expensive proposition. Clever algorithms and dedicated hardware can only take things so far.

If, for example, you want to train a huge state-of-the-art system like OpenAI’s big bad text generator, GPT-2, you’ll be spending a lot of money and potentially doing some serious damage to the environment.
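To get a feel for why the bills add up, here’s a minimal back-of-envelope sketch. Every number in it – the parameter count, token count, GPU throughput, power draw, and prices – is an illustrative assumption, not a figure from the paper or from OpenAI; the “6 FLOPs per parameter per token” rule is a common rough heuristic for transformer training.

```python
# Back-of-envelope estimate of what training a large language model might cost.
# All numbers below are illustrative assumptions, not published figures.

PARAMS = 1.5e9              # assumed parameter count (roughly GPT-2 scale)
TOKENS = 40e9               # assumed number of training tokens
FLOPS_PER_PARAM_TOKEN = 6   # rough heuristic: ~6 FLOPs per parameter per token

SUSTAINED_FLOPS = 15e12     # assumed sustained throughput of one GPU (15 TFLOP/s)
GPU_POWER_KW = 0.3          # assumed power draw per GPU (300 W)
PRICE_PER_KWH = 0.12        # assumed electricity price in USD
PRICE_PER_GPU_HOUR = 2.0    # assumed cloud rental price per GPU-hour in USD

total_flops = FLOPS_PER_PARAM_TOKEN * PARAMS * TOKENS
gpu_hours = total_flops / SUSTAINED_FLOPS / 3600
energy_kwh = gpu_hours * GPU_POWER_KW

print(f"Total compute:     {total_flops:.2e} FLOPs")
print(f"GPU time:          {gpu_hours:,.0f} GPU-hours")
print(f"Energy:            {energy_kwh:,.0f} kWh")
print(f"Electricity alone: ${energy_kwh * PRICE_PER_KWH:,.0f}")
print(f"Cloud GPU rental:  ${gpu_hours * PRICE_PER_GPU_HOUR:,.0f}")
```

Even with these generous assumptions, a single training run eats thousands of GPU-hours, and the sketch ignores multi-GPU overhead, failed runs, and hyperparameter searches, all of which multiply the real-world cost and energy footprint.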


Credit: MIT

The above chart, a screenshot from the MIT team’s research paper, shows what progress on popular deep learning benchmarks such as ImageNet costs us in terms of environmental, computational, and financial expenditure.

Based on current trends, the researchers argue we’ll soon reach a point where further benchmark gains – such as higher accuracy on ImageNet – will no longer be cost-effective under the current paradigm.
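Here’s a minimal sketch of the kind of extrapolation behind that claim: if the error rate falls off as a power law in training compute, each extra point of accuracy demands vastly more compute than the last. The exponent and starting values below are made up for illustration; they are not the paper’s fitted parameters.

```python
# Illustration of diminishing returns: if error shrinks as a power law in compute,
# every additional accuracy gain costs exponentially more compute.
# ALPHA, BASE_ERROR, and BASE_COMPUTE are made-up values for illustration only.

ALPHA = 0.05        # assumed power-law exponent: error ~ compute ** (-ALPHA)
BASE_ERROR = 0.10   # assumed error rate at the baseline compute budget
BASE_COMPUTE = 1.0  # baseline compute budget (arbitrary units)

def compute_needed(target_error: float) -> float:
    """Compute budget (relative to baseline) needed to hit target_error."""
    return BASE_COMPUTE * (BASE_ERROR / target_error) ** (1 / ALPHA)

for target in (0.09, 0.08, 0.07, 0.06, 0.05):
    print(f"error {target:.2f} -> ~{compute_needed(target):.1e}x baseline compute")
```

With an exponent like this toy one, merely halving the error rate takes on the order of a million times the baseline compute, which is the flavor of the “economically and environmentally unsustainable” conclusion the researchers reach.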

Quick take: The field of AI’s been staring down the barrel of this gun for a long time. Arguably, machine learning algorithms have been held back by compute since the 1950s. Thanks to a few modern tricks, we’ve enjoyed a spurt of growth for the past decade or so that’s led to one of the most exciting periods for technology in human history.

It might look like the party is coming to an end, but I wouldn’t break out the AI winter coats just yet. If you hadn’t noticed, there’s like a bazillion people working in and entering the field of AI. If there’s a way forward, we’ll find it.

The MIT researchers believe we’ll come up with better algorithms and “other machine learning methods” to solve our power struggle. Perhaps most interestingly, they also speculate that quantum computing could help bushwhack a path forward.

For more information, check out the team’s pre-print paper here.
