Handling uncertainty in multimodal pervasive computing applications

  • Authors: Marie-Luce Bourguet
  • Affiliations: Computer Science Department, Queen Mary, University of London, Mile End Road, London E1 4NS, UK
  • Venue: Computer Communications
  • Year: 2008

Abstract

Multimodal interaction can improve accessibility to pervasive computing applications. However, the recognition-based interaction techniques used in multimodal interfaces (e.g. speech and gesture recognition) are still error-prone. Recognition errors and misinterpretations can compromise the security, robustness, and efficiency of pervasive computing applications. In this paper, we briefly review the error handling strategies found in the multimodal literature. We then discuss the new error correction challenges raised by novel affective and context-aware applications. We show that traditional multimodal error handling strategies are ill-adapted to pervasive computing applications, where the computing devices have become invisible and where users may not be aware of their own behaviour. Finally, we present an original experimental study of users' synchronisation of speech and pen inputs during error correction. The results of the study suggest that users are likely to modify their synchronisation patterns in the belief that doing so aids error resolution. This study is a first step towards a better understanding of spontaneous user strategies for error correction in multimodal interfaces and pervasive environments.