Visual display, pointing, and natural language: the power of multimodal interaction

  • Authors:
  • Antonella De Angeli, Walter Gerbino, Giulia Cassano, Daniela Petrelli

  • Affiliations:
  • Antonella De Angeli, Walter Gerbino, Giulia Cassano: University of Trieste, Via dell'Università 7, Trieste, Italy
  • Daniela Petrelli: IRST - Istituto per la Ricerca Scientifica e Tecnologica, Povo (Trento), Italy

  • Venue:
  • AVI '98 Proceedings of the working conference on Advanced visual interfaces
  • Year:
  • 1998


Abstract

This paper examines user behavior during multimodal human-computer interaction (HCI). It discusses how pointing, natural language, and graphical layout should be integrated to enhance the usability of multimodal systems. Two experiments were run on simulated systems capable of understanding written natural language and mouse-supported pointing gestures. The results allowed us to: (a) develop a taxonomy of communication acts aimed at identifying targets; (b) determine the conditions under which specific referent-identification strategies are likely to be produced; (c) suggest guidelines for designing effective multimodal interfaces; and (d) show that performance is strongly influenced by the graphical layout of the interface and by user expertise. Our study confirms the value of simulation as a tool for building HCI models and supports the basic idea that linguistic, visual, and motor cues can be integrated to foster effective multimodal communication.