This article was published on June 10, 2020

Tech can’t keep up with Moore’s Law forever, so software devs must prepare


This article was originally published on .cult by Doug Neale. .cult is a media platform for untold developer stories, where developers can read content about the softer side of development and watch documentaries about the tech they love. You can read the original piece here.

Computers have revolutionized the modern world. But which has contributed more: hardware or software? As much as I’d like to claim my own field, it is the computer chip that has changed the world.

For the past fifty years, the silicon chip has improved at an exponential rate. This trend is known as Moore’s Law, after Gordon Moore, a co-founder of Intel, who correctly predicted in 1975 that the number of transistors on a computer chip would double every two years. This so-called “doubling effect” has resulted in faster, cheaper, and more power-efficient computer chips. It’s because of Moore’s Law that we have all our favorite modern tech, including personal computers, laptops, and smartphones.
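To get a feel for what a two-year doubling compounds to, here’s the back-of-the-envelope arithmetic in Python. The doubling rate is the article’s; the 1971 starting figure, Intel’s 4004 at roughly 2,300 transistors, is my illustrative choice:

```python
# Compound a doubling every two years, from 1971 to 2021.
transistors = 2_300  # Intel 4004 (1971), roughly

for year in range(1971, 2021, 2):
    transistors *= 2

# 25 doublings: about 77 billion -- the same order of magnitude
# as the largest chips actually shipping around 2020.
print(f"{transistors:,} transistors")
```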

However, as hardware got faster, software got slower. A big new toolshed was built for us, so we naturally crammed new things in. We added new features. We made computationally expensive graphics. We created easier-to-use programming languages so we could build more, sooner. And so software has been slowing down, but nobody notices, because the hardware keeps up with the demand.
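That trade-off is easy to see in miniature. A minimal sketch, my own example rather than anything from the article: the same arithmetic written as a convenient pure-Python loop and as a vectorized NumPy call, where the second pushes the work into optimized native code and typically runs orders of magnitude faster.

```python
import time

import numpy as np

def slow_sum_of_squares(n):
    """Pure-Python loop: convenient to write, but every iteration
    pays interpreter overhead on top of the actual arithmetic."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def fast_sum_of_squares(n):
    """The same arithmetic, vectorized: NumPy pushes the work down
    into optimized native code that uses the chip far more fully."""
    values = np.arange(n, dtype=np.int64)
    return int(np.dot(values, values))

n = 2_000_000  # small enough that the exact sum still fits in int64

start = time.perf_counter()
slow = slow_sum_of_squares(n)
print(f"pure-Python loop: {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
fast = fast_sum_of_squares(n)
print(f"vectorized NumPy: {time.perf_counter() - start:.3f}s")

assert slow == fast  # identical result, wildly different cost
```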

But there’s a problem: it can’t keep up much longer. Computer chips only get faster if we can make transistors smaller, and sometime this decade we will hit that limit. Transistors are already approaching the atomic scale, and until we see a breakthrough in other transistor technologies, we’ll be stuck with the speeds we’ve got.

This means we need to rethink the way we make software, and MIT Technology Review believes we are not prepared. Our society depends on technological advancement, and we need software to get better now that hardware no longer can. Does this mean we all have to labor over 20th-century-style coding, meticulously optimizing every line? Perhaps the cushy ride is over for developers.

Entrepreneur Marc Andreessen is not so worried. In his interview “Why should I be optimistic about the future,” he reassured us that we are prepared.

To begin with, we’ve got cloud computing at our disposal. Unlike decades ago, we can now scale an application across many servers automatically. Instead of focusing on the output of one chip, we can focus on getting “good at using lots of chips to do things,” as Andreessen puts it. Using cloud computing for efficiency is what we’ve already seen in the AI and cryptocurrency worlds, he says, suggesting that more and more use cases will depend on distributed processing architectures.
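Andreessen is talking about many machines rather than many cores, but the shape of the idea fits on one laptop: split an embarrassingly parallel job into chunks, fan the chunks out to independent workers, and combine the partial results. A minimal local sketch in Python, where the prime-counting task and the worker count are mine, purely for illustration:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """CPU-bound work for one chunk: count primes in [lo, hi)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limit = 200_000
    workers = 8  # locally these are processes; in the cloud, servers
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]

    # Fan the chunks out across the workers, then sum the partial counts.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(f"primes below {limit}: {total}")
```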

This aligns with the expectation that mobile phone processing will shift to the cloud once network technologies like Wi-Fi 6 and 5G reduce latencies. Phones would become “thin client” devices, with most of the heavy computation happening not on the device but on a server.
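As a toy illustration of that split (entirely my own sketch; the endpoint, port, and workload are made up), the “client” below does no number crunching at all: it ships its input over HTTP and simply renders whatever the server computes.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ComputeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # The expensive part lives on the server, not on the "phone".
        result = sum(i * i for i in range(body["n"]))
        payload = json.dumps({"result": result}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo output quiet
        pass

# Run the "cloud" side in a background thread, bound to a local port.
server = HTTPServer(("127.0.0.1", 8000), ComputeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "thin client": no local computation, just a network round trip.
req = urllib.request.Request(
    "http://127.0.0.1:8000/compute",
    data=json.dumps({"n": 1_000_000}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["result"])

server.shutdown()
```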

While we may not find ourselves returning to soul-crushing, low-level codebases, the next generation of developers will still need to adapt. Technologies like neural networks and blockchains may become commonplace in architecture diagrams, and these techniques will continue to drive progress even without the transistor doubling effect.

And so we, just like Andreessen, should remain confident that with these approaches “we’ve got decades of advances ahead, which aren’t purely dependent on classic Moore’s law.”
