How Much Data Does Google Hold And How on Earth Do They Do It

You get thousands of search results before you're done typing your query, millions of people around the globe stream YouTube videos, and a hugely complex search engine ties it all together. How does Google do it?

You need the capacity to hold a ridiculous amount of data to do what Google does. Since Google won't release the numbers, there have been a few attempts to estimate just how much data the internet giant stores.

‘What If?’ tried to answer the question, “If all digital data were stored on punch cards, how big would Google’s data warehouse be?” They arrived at a rough estimate using electricity consumption and capital expenditure on construction. By their reckoning, Google’s total capital expenditure on data centres amounted to over 12 billion dollars, with the biggest individual centres put at between half a billion and a billion dollars each. On its website, Google lists 15 data centres across the Americas, Asia, and Europe, some of which are still under construction.

Because the company is so focused on saving energy, its data centres spend only 10 – 20% of their electricity on cooling and other overheads. In 2010, Google divulged that it consumed about 258 megawatts of power. Fast forward to 2013, and the company was set to purchase over 300 megawatts at just three sites, more than it had consumed across all its centres in 2010. Based on these figures, ‘What If?’ guessed that Google was running between 1.8 and 2.4 million servers. It is now 2017, and the figures would be considerably higher.
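To see how an estimate like that hangs together, here is a minimal back-of-envelope sketch in Python. The overhead factor and the per-server wattage are illustrative assumptions of ours rather than published figures, chosen only to show the arithmetic.

```python
# Back-of-envelope server estimate in the spirit of the 'What If?' piece.
# PUE and watts-per-server below are illustrative assumptions, not
# figures published by Google.

TOTAL_POWER_W = 258e6            # ~258 MW, Google's disclosed 2010 draw
PUE = 1.12                       # assumes ~10-20% goes to cooling and overheads
WATTS_PER_SERVER = (100, 125)    # assumed average draw per server, in watts

it_power_w = TOTAL_POWER_W / PUE          # power that actually reaches the servers

high_estimate = it_power_w / WATTS_PER_SERVER[0]
low_estimate = it_power_w / WATTS_PER_SERVER[1]

print(f"Estimated servers: {low_estimate / 1e6:.1f} - {high_estimate / 1e6:.1f} million")
# -> roughly 1.8 - 2.3 million, the same ballpark as the 1.8-2.4 million guess
```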

To give a clearer picture, they assumed each server had a couple of 2 terabyte disks attached and then factored in the amount of cold storage the company could have, putting Google's total at close to 10 – 15 exabytes. An exabyte is equivalent to 1 million terabytes. Colin Carson does the math and equates that to roughly 30 million personal computers. Again, the figures have probably grown since then. When ‘What If?’ converted this into punch cards, they found it would take enough cards to cover the whole region of New England to a depth of about 3 miles.
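The storage figure can be reproduced the same way. The server count and disk sizes follow the paragraph above; the cold-storage multiplier and the capacity assumed for an average PC are our own illustrative guesses.

```python
# Rough storage arithmetic behind the 10-15 exabyte figure.
TB = 1                       # work in terabytes
EB = 1_000_000 * TB          # 1 exabyte = 1 million terabytes

servers = 2_000_000              # middle of the 1.8-2.4 million range
disks_per_server = 2             # "a couple of 2 terabyte disks"
disk_size = 2 * TB

online = servers * disks_per_server * disk_size   # ~8 EB of spinning disk
cold = 0.5 * online                               # assumed tape/cold archive
total = online + cold

print(f"Online disk: {online / EB:.0f} EB, total with cold storage: {total / EB:.0f} EB")

# Scale check: at an assumed ~0.5 TB per personal computer,
# 15 EB works out to about 30 million PCs, the comparison Colin Carson makes.
print(f"PC equivalent of 15 EB: {15 * EB / (0.5 * TB) / 1e6:.0f} million")
```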

An Engineering Powerhouse

Google operates a phenomenal network of servers and fibre-optic cables, and they do this with unrivalled speed and efficiency. With over a dozen data centres around the globe housing monstrous machines, Google has more than managed to remain one of the most crucial tools for everyday life. The company indexes 20 billion web pages and processes more than 3 billion search queries a day.

Wired’s Steven Levy visited one of Google’s data centres in 2012 and wrote that in 1999, when Google was housed in the Exodus server facility in Santa Clara, their system crashed on Mondays and took up to 3.5 seconds to produce search results. Messy and chaotic as their array was, Google was increasingly indexing the web and gathering as much online information as possible for their search engine. They were processing millions of search queries every week and running AdWords, which was just as computation-heavy as their search service.

They got more servers, ensuring search results were delivered faster. The faster they got, the more popular they became, and the greater the load grew. The company was also adding other services, such as its mail service, which would require much more storage, and at the time it was becoming much more expensive to lease data centres. What was the solution? Building their own.

Google not only had to build data centres, it had to do so in a more cost-effective way; the company named the mission Willpower. Data centres normally consume a lot of power, collectively said to account for as much as 1.5 percent of all the electricity in the world. Because the machines generate a ridiculous amount of heat, giant air-conditioning units are used to cool them down, sucking up a lot of energy.

Google got rid of this problem ingeniously when it realized that the cold aisle in front of the machines could be kept at a tolerable temperature of about 80 degrees Fahrenheit, while the hot aisle at the rear could be allowed to reach 120 degrees. The heat is absorbed by water-filled coils; the heated water is pumped out of the building, cooled, and then pumped back inside. It seems Andy Decan of searchengineexperts.co.uk was right when he said, “not all servers are created equally.”

The company further reduced electricity loss by about 15 percent when it removed the enormous UPS systems that leaked electricity and required dedicated cooling of their own. Instead, it redesigned its racks to make room for backup batteries next to each server.

Google may be better known as an internet company, but it has been making its own servers since 1999, when it bought parts from an electronics shop and built servers for only 30 percent of what they cost at the time. It has also been building its own networking equipment. But where does all of this lead?

These ingenious innovations allow the systems to run more efficiently and help the company save money. The more money it saves, the more it can spend on the infrastructure it needs to run so smoothly. And it doesn't end at the data centres: what's the point of having such enormous storage without a fast and reliable means of transmitting all that data?

In the early 2000s, Google took advantage of the failure of several telecom operators and started buying up abandoned fibre-optic networks. The company has since built a monumental network of glass.

Still, this wouldn't be enough to handle a service like YouTube. Google has mini data centres where it packs tons of popular videos, allowing users to stream them from locations closer to home.

It also extends to software. Google has built a software system that lets it control its innumerable servers as if they were one giant computer.

This post is part of our contributor series. It is written and published independently of TNW.
