Franklin Morris is a senior writer for Rackspace.
If you were hoping to catch the recent premiere of “Game of Thrones” on HBO Go, you were likely met instead with the “spinning wheel of non-loading hopelessness and despair”—a name some customers have bestowed upon HBO’s error message.
It was a case of history repeating itself. Just a few weeks earlier, “True Detective” took down the network’s streaming service for the entire evening of its highly anticipated finale.
That kind of customer reaction is predictable, so why wasn’t it predicted? Why don’t streaming services simply scale up when they know we’ll all be watching?
Streaming media demands performance
When latency strikes, where exactly does the break in performance occur?
Streaming services are an integral part of how we consume content online, and they're growing year over year. Streaming video already accounts for more than half of all Internet traffic, and that number is projected to jump as high as 70 percent by 2017, according to Cisco.
With customer expectations at an all-time high, streaming services are rising to meet those expectations, upgrading their cloud and dedicated infrastructure to include higher performance servers. And although customers are seeing major performance gains, that’s just one piece of the performance puzzle.
The truth is, even if HBO Go had done everything right on its end for the premiere of “Game of Thrones,” online fans would have still suffered an outage if the bottleneck occurred upstream, somewhere on the ISP’s core network.
Tracing a request’s journey
To understand where streaming video often breaks, it helps to explore the winding trajectory a video request takes. For a request to get from your living room to the server and back it must often pass through dozens of different networks, each owned by a company that’s made significant investments in network infrastructure.
This infrastructure is the backbone of the Internet.
When you click on “House of Cards” or “True Detective,” the request whirs out of your living room to a kind of “terminal”—a box that might sit on a telephone pole or on the corner of your street serving as a hub for all of the cable in your area. It makes a few other quick stops at various nodes, which convert and manage traffic for your region. Then it checks in briefly (very briefly, for fractions of a millisecond) at a carrier hotel, also called a colocation center.
Carrier hotels are massive, secure data centers packed with cables and servers. Some of the pipes and servers belong to Internet service providers, while others belong to companies like Google or Netflix.
Somewhere in this hotel there’s a place called the “meet-me room.” It’s where the various strands of the Web meet, shake hands, and pass off requests to one another.
This is where the Internet service provider's network ends and your streaming video provider's network takes over the request. Your ISP hands the request off to Netflix or HBO Go (or one of their networking partners); that company pulls the video from its own servers, then hands it back to your ISP right there in the meet-me room.
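The hop-by-hop journey above can be sketched as a simple latency model. The hop names and per-hop delays below are purely illustrative assumptions, not measurements of any real network:

```python
# Illustrative model of a video request's path. Each hop contributes a
# hypothetical round-trip delay in milliseconds; real values vary widely.
HOPS = [
    ("neighborhood terminal", 1.0),
    ("regional node", 2.5),
    ("carrier hotel / meet-me room", 0.2),
    ("streaming provider's server", 15.0),
]

def total_round_trip_ms(hops):
    """Sum each hop's latency contribution over the full request path."""
    return sum(delay for _, delay in hops)

if __name__ == "__main__":
    print(f"Estimated round trip: {total_round_trip_ms(HOPS):.1f} ms")
```

The point of the model is that end-to-end latency is a sum: a slowdown at any single hop, no matter who owns it, shows up in the total.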
Here’s the point where everything starts to break down. Regardless of the extent to which streaming video providers upgrade and optimize their own infrastructure, they hit a kind of glass ceiling of performance when it comes time to pass the video back off to the ISP’s network. The video will only reach your TV screen as quickly as the ISP’s legacy network infrastructure will allow.
Performance is caught in the middle
The meet-me room may look like an ordinary data center, but it’s really a battleground. This handoff from one company’s server to another is a serious point of contention that is dragging down the speed and quality of video before it ever hits your screen.
The problem is a function of peering, the process by which streaming video providers and ISPs negotiate their upstream and downstream traffic. The guy on his couch watching "Game of Thrones" is consuming downstream traffic, while services like Netflix, Hulu and HBO Go produce mostly upstream traffic. They're the ones uploading stuff to the Internet; the guy on the couch is the one downloading it.
If the traffic is roughly equal both ways, ISPs normally allow the peering free of charge. But when the relationship is lopsided, as with Netflix, which accounts for roughly 30 percent of peak downstream Internet traffic in the United States, ISPs aren't as likely to favor a friendly handshake deal. They've already spent hundreds of millions of dollars updating their infrastructure.
If Netflix wants something better than its current network, the ISPs contend, then Netflix should be footing the bill for the improvements.
Right now Netflix relies on Cogent to connect its network to the ISPs, but the points of interconnection between Cogent and the ISP networks are maxing out. They need to be upgraded, but won't be until all the parties can agree on who will pay for it.
Netflix is slowly cutting deals with the ISPs, one at a time, aimed at connecting its servers more directly to each ISP's network (it signed an agreement with Comcast in January), but until all parties are satisfied, you won't see performance improve across the board.
The result is that, for now, video has to travel a longer distance to get from the server to your house every time you request it. That means more and longer periods of buffering.
Breaking the bottleneck
Until the Internet’s core network infrastructure is upgraded to meet customer demand, streaming companies and customers alike will continue to endure outages and latency when traffic spikes.
In the grand scheme of performance, a network is only as strong as its weakest link. That’s why it’s important that companies on both sides of this argument move toward higher performance—from the infrastructure to the bandwidth and everything in between.
Whatever the solution—whether a new peering agreement or a technological advancement on the network side—let’s just hope it happens before the Targaryen dragons make their way into Westeros.