
This article was published on September 18, 2018

Google improves its song recognition service by using AI from the Pixel 2



Offline song recognition (also known as Now Playing) is one of the most cherished features of Google's Pixel 2 flagship. Some of the powerful AI tech behind it is now part of the similar Sound Search feature in Google Search and Google Assistant, allowing it to deliver faster and more accurate results on any device that supports those services.

While Now Playing works offline, Sound Search requires an internet connection. To use the latter, simply start a voice query on your phone, and a “What’s this song?” prompt will pop up if there’s music playing around you; tap it to have Google figure out which track it is.

To recognize songs on the Pixel 2, the AI first generates a “fingerprint” of an eight-second audio clip – recorded through the device’s mic – by creating seven two-second embeddings (small sound-sample representations) at one-second intervals. It then searches the on-device database twice for a match: the first lookup is fast but imprecise, while the second is a detailed search. Google updates this database frequently to include new songs.
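The windowing step described above can be sketched in a few lines of Python. Everything here is illustrative – the sample rate, embedding size, and the toy `embed` function are assumptions, not details from Google – but it shows how an eight-second clip yields seven overlapping two-second embeddings:

```python
import zlib
import numpy as np

SAMPLE_RATE = 16_000        # assumed mic sample rate (not stated in the article)
WINDOW_SEC, HOP_SEC = 2, 1  # two-second windows at one-second intervals
EMBED_DIM = 96              # hypothetical embedding size

def embed(window: np.ndarray) -> np.ndarray:
    """Stand-in for the neural network that maps a two-second audio
    window to a small embedding vector (deterministic toy version)."""
    seed = zlib.crc32(window.tobytes())
    return np.random.default_rng(seed).standard_normal(EMBED_DIM)

def fingerprint(clip: np.ndarray) -> np.ndarray:
    """Turn an eight-second clip into seven two-second embeddings
    taken at one-second intervals."""
    win, hop = WINDOW_SEC * SAMPLE_RATE, HOP_SEC * SAMPLE_RATE
    windows = [clip[i:i + win] for i in range(0, len(clip) - win + 1, hop)]
    return np.stack([embed(w) for w in windows])

clip = np.zeros(8 * SAMPLE_RATE, dtype=np.float32)  # placeholder 8-second clip
fp = fingerprint(clip)
print(fp.shape)  # (7, 96): seven windows, one embedding each
```

The fingerprint – rather than the raw audio – is what gets matched against the song database in the two lookups.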


Google explained that Sound Search operates on a much larger scale than Now Playing, which makes fast song matching more challenging. But since it’s a server-side operation, it isn’t limited by the computational power of a mobile device the way Now Playing is.

So, Google’s team has introduced three key changes to improve Sound Search:

  • Quadrupled the size of the neural network that converts audio recorded from the mic into embeddings.
  • Doubled the density of embeddings by fingerprinting audio every 0.5 seconds instead of every second, for faster and more accurate matching.
  • Added weighting to the database index so that popular songs are identified more quickly.
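The popularity weighting in the last point can be illustrated with a toy two-phase lookup. The index, the embedding values, the popularity signal, and the 0.05 weighting factor below are all made up for the sketch – none of them come from Google:

```python
import numpy as np

EMBED_DIM = 96
N_SONGS = 1_000

# Toy index: one embedding per song plus a hypothetical popularity signal.
rng = np.random.default_rng(0)
db = rng.standard_normal((N_SONGS, EMBED_DIM))
popularity = rng.random(N_SONGS)  # stand-in for a play-count signal, 0..1

def search(query: np.ndarray, coarse_k: int = 50) -> int:
    """Two-phase lookup: a cheap coarse pass shortlists candidates,
    then a detailed pass re-scores them with a popularity bias."""
    # Phase 1: fast, rough similarity over the whole index.
    coarse_scores = db @ query
    candidates = np.argpartition(-coarse_scores, coarse_k)[:coarse_k]
    # Phase 2: detailed (cosine) scoring plus popularity weighting,
    # so well-known songs are favoured when matches are close.
    cand = db[candidates]
    cosine = cand @ query / (np.linalg.norm(cand, axis=1) * np.linalg.norm(query))
    scores = cosine + 0.05 * popularity[candidates]  # assumed weight
    return int(candidates[np.argmax(scores)])

print(search(db[42]))  # querying with song 42's own embedding recovers 42
```

The design intuition is that when two candidate songs score nearly the same on audio similarity, the one people actually play more often is the likelier answer, so a small popularity bias resolves the tie.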

From my brief tests in a cafe today, I found Sound Search to be more accurate, as it correctly identified songs like The Middle by Cimorelli and a Starboy cover by Rajiv Dhall.

That should make for better song recognition capabilities than before – and hopefully negate the need for third-party apps that do the same thing, like Shazam and SoundHound. Google’s AI team said that the next step for Sound Search is to recognize songs better in noisy environments. We can’t wait to see that roll out.
