A group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has found a way to speed up the Web without actually increasing connection throughput or making fundamental code changes.
The group created Polaris, a framework that determines how to overlap the objects a page downloads and minimize the amount of time a site spends fetching individual resources. The framework builds a dependency graph of the page, then uses it to determine when each object should be loaded.
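To make the idea concrete, here is a minimal sketch of how a dependency graph can drive a fetch schedule. This is not Polaris’s actual algorithm, just an illustration: objects whose dependencies are all satisfied can be fetched in parallel, so the graph collapses into a sequence of parallel “rounds”. The page objects and their dependencies are hypothetical.

```python
def fetch_schedule(deps):
    """Group page objects into rounds: every object in a round has all of
    its dependencies already fetched, so the round can load in parallel."""
    # deps maps each object to the set of objects it depends on.
    remaining = {obj: set(d) for obj, d in deps.items()}
    rounds = []
    while remaining:
        ready = {obj for obj, d in remaining.items() if not d}
        if not ready:
            raise ValueError("cyclic dependency between page objects")
        rounds.append(sorted(ready))
        # Drop fetched objects and remove them from everyone's dependency set.
        remaining = {obj: d - ready
                     for obj, d in remaining.items() if obj not in ready}
    return rounds

# Hypothetical page: the HTML pulls in a stylesheet and a script,
# and the script in turn fetches some JSON data.
deps = {
    "index.html": [],
    "style.css": ["index.html"],
    "app.js": ["index.html"],
    "data.json": ["app.js"],
}
print(fetch_schedule(deps))
# [['index.html'], ['app.js', 'style.css'], ['data.json']]
```

With the full graph known ahead of time, the scheduler sees that `style.css` and `app.js` can be fetched together, rather than discovering each one only after parsing whatever referenced it.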
Each individual browser request for a new resource can add “up to 100 milliseconds,” according to PhD student Ravi Netravali. The group likens the way Polaris works to a travelling businessperson:
When you visit one city, you sometimes discover more cities you have to visit before going home. If someone gave you the entire list of cities ahead of time, you could plan the fastest possible route. Without the list, though, you have to discover new cities as you go, which results in unnecessary zig-zagging between far-away cities.
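The analogy maps directly onto page loading. A back-of-the-envelope sketch, using the ~100 ms per-request figure cited above and assumed (hypothetical) page numbers, shows why knowing the “list of cities” up front matters:

```python
RTT_MS = 100  # roughly the per-request cost cited above

objects = 10      # hypothetical: total objects on the page
chain_depth = 4   # hypothetical: longest dependency chain (HTML -> JS -> JSON -> image)

# Worst case without the graph: the browser discovers each object only
# after fetching the one that references it, paying one round trip each.
naive_ms = objects * RTT_MS

# With the full dependency graph, only the longest chain forces
# serialization; everything else can be fetched in parallel alongside it.
graph_ms = chain_depth * RTT_MS

print(naive_ms, graph_ms)
# 1000 400
```

Under these assumed numbers, the graph-driven load is 600 ms faster, which is the same flavor of saving as planning the whole route before leaving home.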
The group acknowledged that similar dependency trackers have existed before, but those tools merely mimicked the way a browser loads pages and so failed to catch subtle dependencies.
The Polaris paper is authored by graduate student Ameesh Goyal and professor Hari Balakrishnan, along with Harvard professor James Mickens. The project has been in the works since 2014 and has been tested across more than 200 of the most popular websites, including The New York Times and Weather.com.
It’s well established that there’s a deep link between performance and user activity: an Amazon study found that for every 100 milliseconds of delay, it’d lose one percent of profits. If a simple technique can reduce load times by a third, it’ll be rapidly adopted by almost everyone.