Professor Stephen Hawking’s speedy new Intel speech system is built on SwiftKey

Professor Stephen Hawking’s new bespoke communications system uses technology from SwiftKey to assist him in writing and talking. The company’s text prediction technology, similar to what’s found in its consumer iOS and Android apps, has been integrated into the scientist’s system.

Professor Hawking has lived with motor neurone disease for most of his life and uses a computer to communicate with the world. The updated Intel-built system integrates SwiftKey software, allowing him to select entire words accurately rather than typing individual characters.

The new Intel-designed system replaces the professor’s previous communications equipment, which he had been using for decades. The collaboration was revealed at a London seminar where Professor Hawking spoke alongside representatives from Intel and SwiftKey.

Professor Hawking said:

“Medicine has not been able to cure me, so I rely on technology to help me communicate and live. Intel has been supporting me for almost 20 years, allowing me to do what I love every day. The development of this system has the potential to improve the lives of disabled people around the world and is leading the way in terms of human interaction and the ability to overcome communication boundaries that once stood in the way.”

Intel Labs spent three years working with the scientist to develop the new system called ACAT (Assistive Context Aware Toolkit). The company is opening up the solution to researchers and technologists in January and says it can be customized to assist others with motor neurone disease and quadriplegia.

Professor Hawking’s typing speed has doubled with the new system, and common tasks are up to ten times faster: browsing the web, managing emails and documents, opening a new document, saving files, editing and switching between tasks are all easier, quicker and more accurate.

An infrared switch mounted on his glasses detects movements of his cheek, which he uses to select characters on the computer. That’s where SwiftKey comes in: it greatly improves the system’s ability to learn and predict Professor Hawking’s next characters and words, so he now has to type less than 20 percent of all characters himself.
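To see why word prediction cuts keystrokes so sharply, consider a minimal sketch of prefix-based word completion. This is purely illustrative, not Intel’s or SwiftKey’s actual code; the `suggest` function and the tiny frequency-ranked vocabulary are invented for the example.

```python
# Hypothetical sketch: word-level prediction lets a short typed prefix
# select a full word, so most characters never need to be entered.

def suggest(prefix, vocab, k=3):
    """Return the k most frequent vocabulary words starting with prefix."""
    matches = [w for w in vocab if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -vocab[w])[:k]

# Toy vocabulary with made-up usage counts.
vocab = {"universe": 9, "understand": 7, "under": 3, "black": 5, "hole": 5}

# The user enters just "un" (2 characters), then picks a suggestion:
print(suggest("un", vocab))  # → ['universe', 'understand', 'under']

# Selecting "universe" means 2 typed characters stood in for 8.
typed, selected = "un", "universe"
print(f"typed {len(typed)}/{len(selected)} characters "
      f"({len(typed)/len(selected):.0%})")
```

A real predictive keyboard ranks candidates with a statistical language model rather than raw counts, but the keystroke-saving principle is the same.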

“Our involvement with Professor Hawking and his team began approximately two years ago, and it soon became clear that there was scope for improvement in his text-to-speech system,” says Joe Osborne, team lead for the SwiftKey SDK.

“We were given access to a corpus of text generated by Professor Hawking himself from which we built a customised language model. This was a hand-built model that allowed us to personalise his language – that is, predict his next words from words he had previously inputted – from a more varied source of data such as emails and transcripts from speeches and books. Working with the team at Intel, we developed and they validated the model in tests.”
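The personalised model Osborne describes — predicting a writer’s next words from their own past text — can be sketched with a simple word-level bigram model. This is a toy illustration under stated assumptions, not SwiftKey’s actual model; the corpus, function names, and tie-breaking behaviour are all invented for the example.

```python
# Hypothetical sketch: build a bigram model from a personal corpus and
# predict the most likely next word after a given word.
from collections import Counter, defaultdict

def build_bigram_model(corpus):
    """Map each word to a Counter of the words observed to follow it."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word, k=3):
    """Return the k most frequent successors of word in the corpus."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# Invented stand-in for a personal corpus of emails, speeches and books.
corpus = [
    "the universe has no boundary",
    "the universe began with a big bang",
    "the black hole emits radiation",
]
model = build_bigram_model(corpus)
print(predict_next(model, "the"))  # most frequent words seen after "the"
```

Production systems use far richer models and smoothing over much longer contexts, but the core idea — ranking next words by how often the writer has used them before — is the one the quote describes.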

Intel Labs | SwiftKey


