Amazon announced this week that it’s adding new features to Alexa that will let Skill developers make the assistant sound more expressive and human.
Alexa’s Speech Synthesis Markup Language (SSML) support now includes new tags that let developers control the tone, timing, and emotion of its voice. With this update, Alexa can whisper, emphasize words, and even bleep out inappropriate language if the developer chooses. As Amazon says, “These SSML features provide a more natural voice experience.”
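For a sense of what that looks like in practice, here’s a minimal sketch of a Skill response that whispers, adds emphasis, and bleeps a word. The SSML tag names follow Amazon’s SSML reference; the response-building function itself is a simplified stand-in, not a full Alexa Skills Kit handler.

```python
def build_ssml_response():
    # SSML body using the new expressive tags: whispering, emphasis,
    # and an expletive bleep.
    ssml = (
        "<speak>"
        "I have a secret. "
        '<amazon:effect name="whispered">I am not a real human.</amazon:effect> '
        '<emphasis level="strong">But I can sound like one.</emphasis> '
        'And I will never say <say-as interpret-as="expletive">darn</say-as>.'
        "</speak>"
    )
    # Shape of a typical Skill response: outputSpeech of type "SSML".
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            "shouldEndSession": True,
        },
    }
```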
I’m struggling to see why Alexa whispering to me would be useful, especially if it takes developers more time to implement. Still, a more human sound could make for a more pleasant user experience. It reminds me of how Microsoft sold Cortana on her smooth tones rather than Siri’s staccato cadence.
A more human-sounding voice would probably go a long way toward making Alexa the kind of personalized, JARVIS-like assistant we heard discussed at SXSW.
But this isn’t something baked into Alexa itself. It’s a set of tools Skill developers can opt into, not something I can see shaping Alexa’s everyday responses.
via TechCrunch