
This article was published on September 2, 2009

Google explains what went down with Gfail

Gmail went Gfail today: the service was completely inaccessible to web users across the globe for almost two hours.

The result: masses of users complaining via Twitter and similar services. In fairness to Google, the company handled the uproar well, posting regular updates to the Google Apps status dashboard throughout the downtime.

A few hours later, Google published a post detailing precisely what happened and what it plans to do to prevent similar issues in the future.

Ben Treynor, Google's VP of Engineering and Site Reliability, explains:

This morning (Pacific Time) we took a small fraction of Gmail’s servers offline to perform routine upgrades. This isn’t in itself a problem — we do this all the time, and Gmail’s web interface runs in many locations and just sends traffic to other locations when one is offline.

However, as we now know, we had slightly underestimated the load which some recent changes (ironically, some designed to improve service availability) placed on the request routers — servers which direct web queries to the appropriate Gmail server for response. At about 12:30 pm Pacific a few of the request routers became overloaded and in effect told the rest of the system “stop sending us traffic, we’re too slow!”. This transferred the load onto the remaining request routers, causing a few more of them to also become overloaded, and within minutes nearly all of the request routers were overloaded. As a result, people couldn’t access Gmail via the web interface because their requests couldn’t be routed to a Gmail server. IMAP/POP access and mail processing continued to work normally because these requests don’t use the same routers.
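What Treynor describes is a classic cascading failure: once a few routers shed their load, the survivors inherit it and tip over in turn. To make that concrete, here's a minimal, hypothetical sketch of the dynamic (the names and numbers are invented, not taken from Google's post):

```python
# Toy model of the cascade: each router handles a fixed capacity,
# and an overloaded router refuses traffic, pushing its share onto
# the remaining routers. All numbers here are illustrative.

def simulate_cascade(num_routers: int, capacity: float, total_load: float) -> int:
    """Return how many routers are still accepting traffic once the cascade settles."""
    healthy = num_routers
    while healthy > 0:
        load_per_router = total_load / healthy
        if load_per_router <= capacity:
            break  # the remaining routers can absorb the traffic
        healthy -= 1  # one more router says "stop sending us traffic, we're too slow!"
    return healthy

# Load just 5% above aggregate capacity takes out the entire fleet:
print(simulate_cascade(num_routers=10, capacity=100.0, total_load=1050.0))  # -> 0
# Load just under capacity is absorbed without incident:
print(simulate_cascade(num_routers=10, capacity=100.0, total_load=950.0))   # -> 10
```

The point of the toy model: refusing traffic doesn't reduce the total load, it only concentrates it on fewer machines, which is exactly how "a few" overloaded routers became "nearly all" within minutes.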

What are they doing to prevent similar issues in the future?

Some of the actions are straightforward and are already done — for example, increasing request router capacity well beyond peak demand to provide headroom. Some of the actions are more subtle — for example, we have concluded that request routers don’t have sufficient failure isolation (i.e. if there’s a problem in one datacenter, it shouldn’t affect servers in another datacenter) and do not degrade gracefully (e.g. if many request routers are overloaded simultaneously, they all should just get slower instead of refusing to accept traffic and shifting their load).
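The graceful-degradation point is worth unpacking: an overloaded router that refuses traffic dumps its load onto its peers, while one that merely slows down keeps the whole system up at the cost of latency. A rough, hypothetical sketch of the latter behaviour (invented names and numbers, not Gmail's actual design):

```python
import time
from queue import Queue

class GracefulRouter:
    """Toy router that queues excess requests instead of rejecting them."""

    def __init__(self, capacity_per_sec: float):
        self.capacity = capacity_per_sec
        self.backlog: Queue = Queue()

    def accept(self, request: str) -> None:
        # Never say "stop sending us traffic" -- enqueue instead, so
        # peers aren't forced to absorb this router's share of the load.
        self.backlog.put(request)

    def drain(self) -> None:
        # Work through the backlog at a bounded rate; under overload
        # every request gets slower, but none are turned away.
        while not self.backlog.empty():
            request = self.backlog.get()
            time.sleep(1.0 / self.capacity)  # simulate bounded throughput
            print(f"routed {request}")

router = GracefulRouter(capacity_per_sec=5.0)
for i in range(10):
    router.accept(f"req-{i}")  # all ten are accepted, even past capacity
router.drain()  # served slowly rather than dropped
```

The trade-off is deliberate: degraded latency is recoverable, whereas a refuse-and-shift policy converts a local overload into a global outage.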
