As part of ensuring the Web is available to all people on any device, W3C published a new standard on February 10, 2009 to enable interactions beyond the familiar keyboard and mouse. EMMA, the Extensible MultiModal Annotation Markup Language, promotes the development of rich Web applications that can be adapted at lower cost to accept more input modes (such as handwriting, natural language, and gestures) and output modes (such as synthesised speech).
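To give a concrete sense of the format, here is a minimal sketch of an EMMA document, modelled on the flight-query examples in the EMMA 1.0 specification. It represents a speech recogniser's interpretation of the spoken utterance "flights from boston to denver"; the <origin> and <destination> elements are hypothetical application payload, while the emma: elements and attributes (medium, mode, confidence, tokens) are defined by the standard.

<!-- Sketch only: the application-specific elements (origin, destination)
     are illustrative, not part of the EMMA standard itself -->
<emma:emma version="1.0"
           xmlns:emma="http://www.w3.org/2003/04/emma">
  <emma:interpretation id="interp1"
                       emma:medium="acoustic"
                       emma:mode="voice"
                       emma:confidence="0.75"
                       emma:tokens="flights from boston to denver">
    <origin>Boston</origin>
    <destination>Denver</destination>
  </emma:interpretation>
</emma:emma>

A consuming component, such as a dialogue manager, can read the standardised annotations (confidence, medium, mode) without needing to understand the application payload, which is what lets the same infrastructure handle speech, handwriting, and other input modes.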
EMMA was developed by the Multimodal Interaction Working Group, which included these W3C Members: Aspect Communications, AT&T, Cisco Systems, Department of Information and Communication Technology – University of Trento, Deutsche Telekom AG, France Telecom, Genesys Telecommunications Laboratories, German Research Center for Artificial Intelligence (DFKI) GmbH, Hewlett-Packard Company, Institut National de Recherche en Informatique et en Automatique, International Webmasters Association / HTML Writers Guild (IWA-HWG), Korea Association of Information & Telecommunication, Korea Institute of Science & Technology (KIST), Kyoto Institute of Technology, Loquendo, S.p.A., Microsoft Corp., Nuance Communications, Inc., Openstream, Inc., Siemens AG, Université catholique de Louvain, V-Enable, Inc., Voxeo, and Waterloo Maple.
“As a common language for representing multimodal input, EMMA lays a cornerstone upon which more advanced architectures and technologies can be developed to enable natural multimodal interactions. We are glad that EMMA has become a W3C Recommendation and pleased with the capabilities that EMMA brings to the multimodal interactions over the Web.”
— Wu Chou, Director, Avaya Labs Research, Avaya
Learn more about the Multimodal Interaction Activity at W3C.