The streaming giant has patented tech that analyzes speech and background noise to suggest new content based on your “emotional state, gender, age, or accent.”
The patent describes various ways of analyzing audio signals picked up by a microphone to understand who you are and how you feel:
For example, the tone of voice may be more upbeat, high-pitched and/or exciting for users that have been assigned the personality trait of extroversion.
It also proposes using “intonation, stress, [and] rhythm” to infer your mood, “a combination of vocal tract length and pitch” to estimate your age, and environmental metadata to detect whether you’re alone or in a group.
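The patent doesn't disclose its actual algorithms, but the kind of pitch analysis it alludes to can be sketched with a textbook autocorrelation method — a toy illustration of how software can pull a fundamental frequency out of a voice signal, not Spotify's implementation (NumPy assumed):

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency (Hz) of a voiced signal
    by finding the strongest peak in its autocorrelation within a
    plausible human pitch range (fmin to fmax)."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]          # keep non-negative lags only
    lag_min = int(sample_rate / fmax)     # shortest period considered
    lag_max = int(sample_rate / fmin)     # longest period considered
    peak_lag = np.argmax(corr[lag_min:lag_max]) + lag_min
    return sample_rate / peak_lag

# Synthetic 220 Hz tone as a stand-in for microphone input
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
print(estimate_pitch(tone, sr))  # close to 220 Hz
```

A real system would go much further — combining pitch with formant and vocal-tract-length estimates, as the patent suggests — but this is the basic signal-processing building block.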
These insights could be used to recommend songs in a variety of ways:
In one example, the output might simply be to play the next content. In another example, the output might be a recommendation on a visual display… In another example aspect, the output is a display of recommended next music tracks corresponding to the preferences.
The patent may not find its way into the platform, but it does offer a glimpse into the future of music recommendations.
Don’t worry, be happy?
Spotify has always stressed that it recommends a diverse range of music — because there are only so many times you can listen to the Black Eyed Peas before you go van Gogh on your ears. One, to be precise.
Still, the idea of determining music tastes based on demographic data sounds rather restrictive.
Just because I’m a 35-year-old male from the UK doesn’t mean I like Norah Jones. Maybe I wanna get down with the kidz and put on some K-pop? I absolutely don’t, but you get what I mean.
The mood-based recommendations sound more promising — but also more disturbing.
If I’m wallowing in self-pity, would Spotify recommend some Morrissey to push me deeper into the darkness, or Walking on Sunshine to drag me out of my hole? Either way, I wouldn’t wanna rely on algorithms for psychological support.