It’s the height of rush hour and all the freeways are clogged but you still have to visit a client across town.
You enter the destination into your GPS, but your navigation app sends you into the worst of the gridlock, where you spend the next hour edging your way down an on-ramp.
That’s the situation facing us every time we log onto the internet.
Despite massive technological advances since the days of dial up, we’re still grappling with slow page response, timeouts, and networking delays.
So why does the internet still suck in 2016?
Web page latency
Between the ads, cookies, scripts, and personalization features running in the background today, simply loading a page entails many round trips over the network before it is fully ready. This can slow traffic to a crawl.
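To get a feel for why those round trips hurt, here is a toy back-of-the-envelope model (the RTT and request counts are illustrative assumptions, not measurements): when each resource has to wait on the one before it, load time grows linearly with the round-trip time.

```python
# Toy model: sequential round trips dominate page load time.
# The RTT and request counts below are illustrative assumptions.

def load_time_ms(rtt_ms, sequential_requests):
    """Approximate load time when each request waits for the previous one."""
    return rtt_ms * sequential_requests

rtt = 80  # ms, a plausible cross-country round-trip time
lean_page = load_time_ms(rtt, 5)    # a page with few dependencies
heavy_page = load_time_ms(rtt, 40)  # ads, trackers, personalization scripts

print(lean_page, heavy_page)  # same pipe, eight times the wait
```

Real browsers parallelize some of this, so the model overstates the absolute numbers, but the linear relationship between round trips and wait time is the point.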
The problem is worsened by the internet’s decades-old routing and transport protocols, which behave like a Waze app from hell: instead of steering data around congestion, they direct traffic along pre-determined paths, using rules that directly contribute to ever-slower speeds and bad connections.
Poor upstream performance
While technology such as content delivery networks (CDNs), wide area network (WAN) acceleration, and various cloud services may have alleviated some of these headaches, users still have to accept buffering when streaming video, jitter on video-conferences and dropped connections when uploading a large file to the cloud.
CDNs are optimized for downstream, but they don’t do much of anything for upstream data. This shows up especially in bandwidth-constraining applications and services such as hosting, file sharing, rich media, advertising, and gaming – which are continually stunted by poorly performing connections.
I experienced this difficulty first-hand during networked Xbox gaming between Israel and the US. Performance was inexplicably poor, even though the game itself didn’t require much bandwidth and the underlying protocols were supposed to handle the delay. In practice they didn’t: the game couldn’t support users playing from both the US and Europe at the same time.
The internet remains a black box
There’s more visibility into LANs, WANs, and IT systems than ever before; yet, the Web remains largely a black box.
As Forrest Gump says, “You never know what you’re going to get.”
One minute the prescribed route might be just fine, and then suddenly it becomes a bottleneck. Once you transmit something, you have no control over the route it travels. You just have to wait while it heads over some pipe or other and through a collection of networks, whether or not that is the best route for your data.
It’s all laid out according to an arcane set of rules that have more to do with who gets paid for what than with efficiency.
Premium is expensive
Cloud operators and others who want their apps to have low latency and high performance have to deploy hardware, software, and a plethora of other services to assure performance in other regions.
This might mean building or renting a data center to cope with inconsistency.
After all, a great app in one region or part of the world might work poorly in another area due to big geographic differences on the internet.
Take Lexifone, a cloud-based VoIP company that wants to offer services globally while using only a single data center in Europe.
Latency and packet loss would be too great in areas such as Asia to maintain an acceptable voice quality.
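A quick sanity check makes the problem concrete. The sketch below tests a route against the widely cited 150 ms one-way delay ceiling for comfortable conversation (from ITU-T G.114); the per-route delay and loss figures are made-up illustrative numbers, not Lexifone’s measurements.

```python
# Rough check of whether a route can sustain interactive voice quality.
# The 150 ms one-way budget follows the ITU-T G.114 guideline; the
# per-route delay and loss numbers below are illustrative assumptions.

ONE_WAY_BUDGET_MS = 150   # comfortable-conversation ceiling
MAX_LOSS_PCT = 1.0        # rough tolerance before quality degrades

def voice_quality_ok(one_way_delay_ms, packet_loss_pct):
    """True if the route stays within both the delay and loss budgets."""
    return (one_way_delay_ms <= ONE_WAY_BUDGET_MS
            and packet_loss_pct <= MAX_LOSS_PCT)

# A single European data center serving nearby vs. far-away callers:
print(voice_quality_ok(one_way_delay_ms=40, packet_loss_pct=0.2))   # Europe
print(voice_quality_ok(one_way_delay_ms=220, packet_loss_pct=2.5))  # Asia
```

The nearby route passes comfortably; the long-haul route blows through both budgets, which is exactly why a single-region deployment struggles to serve a global audience.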
Attempting to guarantee SLAs on the internet is a bit like running a 30-minute guaranteed pizza delivery business via the bus system.
A bus might come, but then again, it might not. You’d end up giving out an awful lot of free pizzas.
Relatively few can afford to throw hardware (and additional data centers) at the problem and remain profitable. For conferencing, we have first-hand experience in how difficult this can be between offices in San Francisco and Tel-Aviv. Add a few mobile users to each end, and the garbled voices and pixelated faces become unbearable.
The internet we deserve
So what should the internet be like?
It should provide end-to-end visibility, and traffic should be directed to the fastest path, so that users get the best quality experience.
The ability to take real-time (or close to real-time) internet congestion into account and determine a faster route is vital to achieving the speed, reliability, and performance users want. As advanced applications and services become cache-less, the reliance on real-time data grows even greater, because decision-based routing will need to be nearly instantaneous.
The cloud can play a big role in realizing this vision. Just think about how it’s revolutionized many areas of IT in the past decade.
The vast compute power of modern processors can now be accessed via the cloud to solve problems in hours that used to take weeks. The storage woes of the modern enterprise have been alleviated by offloading data to cloud-based services.
Now it’s the network’s turn. By harnessing the cloud, it’s possible to resolve many of the riddles that lie behind internet sluggishness.