Google publishes hundreds of research papers every year, covering everything from algorithms to education, and an upcoming project set to be presented later this month aims to be the best AI assistant you've ever had.
You may have already run into difficulty when trying to use something like Google Now because some commands only work when you’re connected to the Web.
So the Google Research team is working on a new, personalized system for voice commands and dictation that runs on your device rather than being dealt with server-side, all without taking up too much room on your handset.
To train it, they used around 2,000 hours of anonymized Google voice search traffic, totalling more than 100 million utterances, and added in noise from YouTube videos to imitate real-life speaking conditions.
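That noise-augmentation step can be sketched roughly like this: mix a noise clip into a clean utterance at a chosen signal-to-noise ratio. This is a minimal illustration only; the function name, mixing ratio and stand-in signals are assumptions, not Google's actual pipeline.

```python
import numpy as np

def add_noise(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise clip into a clean utterance at a target SNR (in dB)."""
    # Loop the noise so it covers the whole utterance, then trim to length.
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[:len(clean)]
    # Scale the noise so the mixture hits the requested signal-to-noise ratio.
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

# Corrupt a synthetic "utterance" with random noise at 10 dB SNR.
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 100, 16000))   # stand-in for recorded speech
noise = rng.normal(0, 0.5, 8000)             # stand-in for YouTube noise
noisy = add_noise(clean, noise, snr_db=10.0)
```

Training a recognizer on the noisy copies, rather than only the clean originals, is what helps it cope with real-world conditions like traffic or background chatter.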
By applying a number of computational and compression techniques, the team has come up with a local voice system that runs, on average, seven times faster than real time on a Nexus 5, all while taking up just 20.3 MB of storage.
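Compression of that kind is often achieved by quantizing a network's weights, for instance squeezing 32-bit floats into 8-bit integers for a roughly 4x storage saving. The toy sketch below shows the general idea and is an assumption for illustration, not Google's actual scheme.

```python
import numpy as np

def quantize_8bit(weights: np.ndarray):
    """Map float weights onto 256 integer levels; return the codes plus
    the scale and offset needed to reconstruct approximate values."""
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / 255.0
    codes = np.round((weights - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize(codes: np.ndarray, scale: float, lo: float) -> np.ndarray:
    """Recover approximate float weights from the 8-bit codes."""
    return codes.astype(np.float32) * scale + lo

rng = np.random.default_rng(1)
weights = rng.standard_normal(1000).astype(np.float32)
codes, scale, lo = quantize_8bit(weights)
restored = dequantize(codes, scale, lo)
# Each weight now needs 1 byte instead of 4, at a small accuracy cost.
```

The reconstruction error per weight is at most half a quantization step, which is typically small enough that recognition accuracy barely moves.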
Like Google Now voice commands, it can handle proper names and other device-specific information for tasks – like sending an email to a contact – and instructions can be logged, stored and then dealt with when you’re back online if necessary.
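The log-and-replay behavior described for offline commands can be modeled as a simple queue that holds actions until connectivity returns. The class and method names here are illustrative, not Google's API.

```python
from collections import deque

class DeferredCommandQueue:
    """Store voice commands while offline; replay them once connectivity returns."""

    def __init__(self):
        self.pending = deque()   # commands logged while offline
        self.executed = []       # commands that have actually run
        self.online = False

    def submit(self, command: str):
        if self.online:
            self.executed.append(command)
        else:
            self.pending.append(command)   # log and store for later

    def set_online(self, online: bool):
        self.online = online
        # Back online: deal with everything that was queued, in order.
        while online and self.pending:
            self.executed.append(self.pending.popleft())

q = DeferredCommandQueue()
q.submit("email Alice: running late")   # no connection yet, so it is queued
q.set_online(True)                      # the queued command is replayed
```

The key property is ordering: queued commands run first-in, first-out once the connection comes back, so a dictated email lands before the follow-up you spoke afterwards.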
The findings will be presented at the 41st International Conference on Acoustics, Speech and Signal Processing (ICASSP), which kicks off on March 20.