Building multimodal applications with EMMA

  • Authors:
  • Michael Johnston

  • Affiliations:
  • AT&T Labs Research, Florham Park, NJ, USA

  • Venue:
  • Proceedings of the 2009 international conference on Multimodal interfaces
  • Year:
  • 2009

Abstract

Multimodal interfaces combining natural modalities such as speech and touch with dynamic graphical user interfaces can make it easier and more effective for users to interact with applications and services on mobile devices. However, building these interfaces remains a complex and highly specialized task. The W3C EMMA standard provides a representation language for inputs to multimodal systems, facilitating plug-and-play of system components and rapid prototyping of interactive multimodal systems. We illustrate the capabilities of the EMMA standard through examination of its use in a series of mobile multimodal applications for the iPhone.
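To give a concrete sense of the representation language the abstract refers to, here is a minimal sketch of consuming an EMMA 1.0 document with Python's standard library. The embedded markup is a hypothetical speech-input result (the query text and confidence value are invented for illustration); the element and attribute names (`emma:interpretation`, `emma:confidence`, `emma:tokens`, the `http://www.w3.org/2003/04/emma` namespace) follow the W3C EMMA 1.0 specification.

```python
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"

# Hypothetical EMMA 1.0 result for a spoken query, as a speech
# recognizer might emit it to a multimodal application.
emma_doc = """\
<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
  <emma:interpretation id="int1"
      emma:medium="acoustic" emma:mode="voice"
      emma:confidence="0.87"
      emma:tokens="pizza near campus">
    <query>pizza near campus</query>
  </emma:interpretation>
</emma:emma>
"""

def parse_interpretation(doc: str):
    """Return (confidence, tokens) from the first emma:interpretation."""
    root = ET.fromstring(doc)
    # ElementTree exposes namespaced tags and attributes as {uri}name.
    interp = root.find(f"{{{EMMA_NS}}}interpretation")
    confidence = float(interp.get(f"{{{EMMA_NS}}}confidence"))
    tokens = interp.get(f"{{{EMMA_NS}}}tokens")
    return confidence, tokens

confidence, tokens = parse_interpretation(emma_doc)
print(confidence, tokens)  # 0.87 pizza near campus
```

Because EMMA is plain XML with a fixed namespace, any component that emits or accepts documents in this shape can be swapped in or out, which is the plug-and-play property the abstract highlights.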