
This article was published on July 26, 2013

Apple is reportedly creating its own speech tech for Siri to eliminate dependence on Nuance


We may now know why Apple quietly set up shop in Boston earlier this year, if a report from Xconomy is to be believed. According to the site, Apple is developing a team of top speech technologists in Boston to eventually eliminate its dependence on Nuance for Siri.

The likelihood of this is extremely high, and we've explained why below.

As a reminder, Nuance is the Boston-based multinational software maker whose technology powers Siri's voice recognition. At least two of Nuance's speech scientists have recently left the company and joined Apple. In other words, Apple is pulling talent from Nuance and putting that talent to work in its own backyard.

Currently, as Xconomy details, Apple's Boston team publicly includes former Nuance employee Gunnar Evermann, who has a history of developing speech recognition technology; Larry Gillick, whose title is "Chief Speech Scientist, Siri at Apple"; and Don McAllaster, another ex-Nuance employee whose title at Apple is simply "Senior Research Scientist." There are also a handful of other former Nuance employees currently at Apple, but not based in Boston, including Caroline Labrecque and Rongqing Huang.

Given how explicit these titles are (again: "Chief Speech Scientist, Siri"), Apple is certainly developing some sort of speech technology in Boston. The only thing that remains unconfirmed is whether Apple is strategically distancing itself from Nuance. Apple has a history of eliminating third-party dependencies to become self-reliant, and Nuance just might be next on the list.
