Machine learning has been trotted out as a trend to watch for many years now. But there’s good reason to talk about it in the context of 2020. And that’s thanks to developments like TensorFlow.js: an end-to-end open source machine learning library that is capable of, among other things, running pre-trained AI models directly in a web browser.
Why the excitement? It means that AI is becoming a more fully integrated part of the web; a seemingly small and geeky detail that could have far-reaching consequences.
Sure, we’ve already got examples aplenty of web tools that use AI: speech recognition, sentiment analysis, image recognition, and natural language processing are no longer earth-shatteringly new. But these tools generally offload the machine learning task to a server, wait for it to compute, and then send back the results.
That’s fine and dandy for tasks that can forgive small delays (you know the scenario: you type some text in English, then patiently wait a second or two for it to be translated into another language). But this browser-to-server-to-browser latency is the kiss of death for more intricate and creative applications.
Face-based AR lenses, for example, need to track the user’s face instantaneously and continually, making any delay an absolute no-go. But latency is a major pain in simpler applications too.
The pain point
Not so long ago, I tried to develop a web app that, through a phone’s rear-facing camera, was constantly on the lookout for a logo; the idea being that when the AI recognizes the logo, the site unlocks. Simple, right? You’d think so. But even this seemingly straightforward task meant constantly taking camera snapshots and posting them to a server so that the AI could recognize the logo.
The task had to be completed at breakneck speed so that the logo was never missed when the user’s phone moved. This resulted in tens of kilobytes being uploaded from the user’s phone every two seconds. A complete waste of bandwidth and a total performance killer.
But because TensorFlow.js brings TensorFlow’s server-side AI capabilities directly into the web, if I were to build this project today, I could run a pre-trained model that recognizes the given logo right in the user’s phone browser. No data upload needed, and detection could run a couple of times per second instead of a painful once every two seconds.
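As a rough sketch of what such an on-device detection loop could look like, here is a hypothetical, dependency-free version. Both callbacks are injected and illustrative: in a real build, `nextFrame` would snapshot the camera feed to a canvas, and `classify` would wrap a pre-trained TensorFlow.js model’s prediction call.

```javascript
// Hypothetical sketch: poll camera frames and resolve once the classifier
// reports the logo with enough confidence. In the browser, nextFrame would
// grab a snapshot of the video feed and classify would wrap a pre-trained
// TensorFlow.js model's prediction; here both are injected callbacks.
async function watchForLogo(nextFrame, classify, { threshold = 0.9, maxFrames = 100 } = {}) {
  for (let i = 0; i < maxFrames; i++) {
    const frame = await nextFrame();      // e.g. a canvas snapshot of the camera
    const score = await classify(frame);  // e.g. logo probability from the model
    if (score >= threshold) {
      return true;                        // logo spotted: unlock the site
    }
  }
  return false;                           // gave up without a match
}
```

Because everything runs locally, each iteration costs one model inference rather than one network round trip, which is the whole point.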
Less latency, more creativity
The more complex and interesting the machine learning application, the closer to zero latency we need to be. So with the latency-removing TensorFlow.js, AI’s creative canvas suddenly widens; something beautifully demonstrated by the Experiments with Google initiative. Its human skeleton tracking and emoji scavenger hunt projects show how developers can get much more inventive when machine learning becomes a properly integrated part of the web.
The skeleton tracking is especially interesting. Not only does it provide an inexpensive alternative to Microsoft Kinect, it also brings that capability directly onto the web. We could even go as far as developing a physical installation that reacts to movement using nothing but web technologies and a standard webcam.
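To give a flavour of what “reacts to movement” could mean in code, here is a hedged sketch that inspects a PoseNet-style pose object (an array of named keypoints, each with a confidence score and a pixel position) and checks whether a wrist is raised above the nose. In practice the pose would come from TensorFlow.js pose estimation; this helper and its thresholds are purely illustrative.

```javascript
// Hypothetical helper: given a PoseNet-style pose ({ keypoints: [{ part,
// score, position: { x, y } }, ...] }), report whether either wrist sits
// above the nose -- the kind of gesture a webcam installation could react to.
function isHandRaised(pose, minScore = 0.5) {
  const find = (part) => pose.keypoints.find((k) => k.part === part);
  const nose = find('nose');
  if (!nose || nose.score < minScore) return false;
  return ['leftWrist', 'rightWrist'].some((part) => {
    const wrist = find(part);
    // Image coordinates grow downward, so "above" means a smaller y.
    return Boolean(wrist) && wrist.score >= minScore && wrist.position.y < nose.position.y;
  });
}
```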
The emoji scavenger hunt, on the other hand, shows how mobile websites running TensorFlow.js can suddenly become aware of the phone user’s context: where they are, what they see in front of them. The site can then contextualize the information it displays as a result.
This potentially has far-reaching cultural implications too. Why? Because people will soon begin to understand mobile websites more as “assistants” than mere “data providers.” It’s a trend that started with Google Assistant and Siri-enabled mobile devices.
But now, thanks to true web AI, this propensity to see mobiles as assistants will become fully entrenched once websites – especially mobile websites – start performing instantaneous machine learning. It could trigger a societal change in perception, where people will expect websites to be completely relevant to any given moment, with minimal intervention and instruction.
The future is now
Hypothetically speaking, we could also use true web AI to develop websites that adapt to people’s ways of using them. By combining TensorFlow.js with the Web Storage API, a website could gradually personalize its color palette to appeal more to each user’s preferences. The site’s layout could be adjusted to be more useful. Even its contents could be tweaked to better suit each individual’s needs. And all on the fly.
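To sketch the persistence side of that idea (and it is only a sketch, with hypothetical names throughout): the snippet below nudges a stored accent colour toward colours the user interacts with, keeping the running preference in any `localStorage`-compatible store with `getItem`/`setItem`. The preference signal itself, here just a clicked colour, is exactly where a TensorFlow.js model could plug in.

```javascript
// Hypothetical sketch: blend the stored accent colour a little toward a
// colour the user just interacted with. In the browser, `store` would be
// window.localStorage; any object with getItem/setItem works the same way.
function updateAccentColor(store, clickedColor, rate = 0.25) {
  const current = JSON.parse(store.getItem('accent') || '[128,128,128]');
  const next = current.map((channel, i) =>
    Math.round(channel + (clickedColor[i] - channel) * rate)
  );
  store.setItem('accent', JSON.stringify(next)); // persists across visits
  return next; // e.g. feed into a CSS custom property
}
```

The low blend rate is the “gradually” part: each interaction moves the palette only a quarter of the way toward the new colour, so the site adapts over many visits rather than flickering on every click.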
Or imagine a mobile retail website that watches the user’s environment through the camera and then adjusts its offering to match the user’s situation? Or what about creative web campaigns that analyze your voice, like Google’s Freddie Meter?
With all these tantalizing possibilities on the brink of becoming a reality, it’s a pity we’ve had to wait so long for a proper web-side machine learning solution. Then again, it was precisely this insufficient AI performance on mobile devices that pushed TensorFlow (the server-side predecessor of the .js version) to evolve into a truly integrated part of the web. And now that we finally have the gift of true web machine learning, 2020 could well be the year that developers unleash their AI creativity.
Published January 2, 2020 — 08:00 UTC