MATCH: an architecture for multimodal dialogue systems

  • Authors:
  • Michael Johnston, Srinivas Bangalore, Gunaranjan Vasireddy, Amanda Stent, Patrick Ehlen, Marilyn Walker, Steve Whittaker, Preetam Maloor

  • Affiliations:
  • AT&T Labs - Research, NJ (all authors)

  • Venue:
  • ACL '02: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics
  • Year:
  • 2002

Abstract

Mobile interfaces need to allow the user and system to adapt their choice of communication modes according to user preferences, the task at hand, and the physical and social environment. We describe a multimodal application architecture which combines finite-state multimodal language processing, a speech-act based multimodal dialogue manager, dynamic multimodal output generation, and user-tailored text planning to enable rapid prototyping of multimodal interfaces with flexible input and adaptive output. Our testbed application MATCH (Multimodal Access To City Help) provides a mobile multimodal speech-pen interface to restaurant and subway information for New York City.
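
To make the component breakdown in the abstract more concrete, the following is a minimal, hypothetical Python sketch of how the named stages (multimodal understanding, speech-act based dialogue management, and user-tailored output generation) could be chained for a single user turn. All class names, function names, and data structures here are illustrative assumptions, not the actual MATCH implementation or its APIs.

```python
# Hypothetical sketch of a MATCH-style processing pipeline for one turn.
# Names and structures are assumptions for illustration only.

from dataclasses import dataclass, field


@dataclass
class MultimodalInput:
    """One user turn: recognized speech plus pen gestures on the map."""
    speech: str
    gestures: list = field(default_factory=list)


def understand(turn: MultimodalInput) -> dict:
    """Stand-in for finite-state multimodal language processing:
    fuse speech and gesture into a single meaning representation."""
    return {
        "intent": "find_restaurants",
        "constraints": turn.speech,
        "region": turn.gestures,
    }


def manage_dialogue(meaning: dict, state: dict) -> dict:
    """Stand-in for the speech-act based dialogue manager:
    update dialogue state and choose the next system act."""
    state["last_request"] = meaning
    return {"act": "present_results", "content": meaning}


def generate_output(system_act: dict, user_profile: dict) -> str:
    """Stand-in for dynamic multimodal generation with user-tailored
    text planning: decide what to say or display for this user."""
    return (
        f"Presenting {system_act['content']['intent']} "
        f"for region {system_act['content']['region']} "
        f"tailored to {user_profile['name']}"
    )


if __name__ == "__main__":
    state, profile = {}, {"name": "demo user"}
    turn = MultimodalInput(
        speech="cheap italian places around here",
        gestures=["circled area on map"],
    )
    act = manage_dialogue(understand(turn), state)
    print(generate_output(act, profile))
```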