
This article was published on August 16, 2017


    MIT researchers use machine learning to kill video buffering
    Story by Abhimanyu Ghoshal, Managing Editor

    Abhimanyu is TNW's Managing Editor, and is all about personal devices, Asia's tech ecosystem, as well as the intersection of technology and culture. Hit him up on Twitter, or write in: [email protected].

    Don’t you just hate it when the YouTube clip you’re trying to watch pauses midway to buffer, or drastically lowers the resolution to a pixelated mess? A group of MIT researchers believe they’ve figured out a solution to those annoyances plaguing millions of people a day.

    Using machine learning, the Pensieve system figures out the optimal algorithm to use for delivering video at the best possible resolution while avoiding buffering breaks, no matter what connection you’re on.

    That’s kind of what YouTube and Netflix already strive to do, but the researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) say that current systems have to make a trade-off between video quality and how often the stream has to rebuffer in order to prepare the next segment of the clip for viewing.
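    To make that trade-off concrete, here is a minimal Python sketch of the kind of hand-tuned, buffer-based heuristic conventional players rely on; the bitrate ladder, thresholds, and function names are illustrative assumptions, not MIT's code.

```python
# Illustrative sketch of a conventional adaptive-bitrate (ABR) heuristic:
# pick a higher bitrate and risk draining the playback buffer (rebuffering),
# or pick a lower one and sacrifice quality. All values here are assumptions.

BITRATES_KBPS = [300, 750, 1200, 2850, 4300]  # available quality levels

def pick_bitrate(buffer_seconds: float) -> int:
    """Map the current buffer level to a bitrate (more buffer -> higher quality)."""
    if buffer_seconds < 5:        # buffer nearly empty: protect against rebuffering
        return BITRATES_KBPS[0]
    if buffer_seconds < 10:
        return BITRATES_KBPS[1]
    if buffer_seconds < 20:
        return BITRATES_KBPS[3]
    return BITRATES_KBPS[-1]      # plenty of buffer: go for top quality

# Example: with 8 seconds buffered, the player plays it safe at 750 kbps,
# even if the network could momentarily sustain more.
print(pick_bitrate(8.0))
```

    The weakness is visible right away: the thresholds are fixed, so the same rule fires whether you're on fiber or a congested cell tower.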

    By using AI to learn which algorithm works best under various conditions – say, when you’re heading into a tunnel where connectivity is sketchy, or when you’re in a crowded area sharing the network with thousands of other users – Pensieve is said to cut rebuffering by up to 30 percent.
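    As a rough illustration of how a learning-based approach can weigh those conditions, the sketch below shows a reward signal of the kind a reinforcement-learning agent like Pensieve can be trained to maximize: favor quality, penalize stalls and abrupt quality switches. The exact formulation, weights, and names are assumptions for illustration, not the paper's actual code.

```python
# Sketch of a reward signal for a reinforcement-learning ABR agent:
# reward higher quality, penalize rebuffering time and quality churn.
# Weights and structure here are illustrative assumptions.

def reward(bitrate_kbps: float,
           rebuffer_seconds: float,
           prev_bitrate_kbps: float,
           rebuffer_penalty: float = 4.3,
           smoothness_penalty: float = 1.0) -> float:
    quality = bitrate_kbps / 1000.0                       # favor higher bitrates
    stall = rebuffer_penalty * rebuffer_seconds           # punish buffering pauses
    churn = smoothness_penalty * abs(bitrate_kbps - prev_bitrate_kbps) / 1000.0
    return quality - stall - churn

# The agent observes throughput history, buffer level, and so on, picks the
# next chunk's bitrate, and is trained to maximize cumulative reward.
print(reward(2850, 0.0, 1200))   # smooth, higher-quality chunk: positive reward
print(reward(4300, 2.5, 4300))   # a 2.5 s stall wipes out the quality gain
```

    Because the agent is rewarded over the whole viewing session rather than a single chunk, it can learn to hold quality back before a dead zone and spend its buffer more aggressively on a stable connection.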

    The team says it’s tested its system with just a month’s worth of video content; exposing it to more data, like Netflix’s entire catalog, could help boost its performance even further. The technology could also prove useful in applications like streaming high-resolution VR content.

    The researchers will present their paper at the upcoming SIGCOMM Conference in Los Angeles, and plan to open-source the project afterwards. You can learn more about how it works on the project’s page.