Expression constraints in multimodal human-computer interaction

  • Authors:
  • Sandrine Robbe-Reiter;Noëlle Carbonell;Pierre Dauchy

  • Affiliations:
  • LORIA, BP 239, F54506, Vandœuvre-lès-Nancy Cedex, France;LORIA, BP 239, F54506, Vandœuvre-lès-Nancy Cedex, France;IMASSA-CERMA, BP 73, 91223 Brétigny-sur-Orge Cedex, France

  • Venue:
  • Proceedings of the 5th international conference on Intelligent user interfaces
  • Year:
  • 2000

Abstract

Thanks to recent scientific advances, it is now possible to design multimodal interfaces that allow the use of speech and gestures on a touchscreen. However, present speech recognizers and natural language interpreters cannot yet process spontaneous speech accurately. These limitations make it necessary to impose constraints on users' speech inputs. Ergonomic studies are therefore needed to provide user interface designers with efficient guidelines for the definition of usable speech constraints.

We evolved a method for designing oral and multimodal (speech + 2D gestures) command languages that can be interpreted reliably by present systems and learned easily through human-computer interaction (HCI). The empirical study presented here contributes to assessing the usability of such artificial languages in a realistic software environment. Analyses of the multimodal protocols collected indicate that all subjects were able to assimilate the given expression constraints rapidly, mainly while executing simple interactive tasks; in addition, these constraints, which had no noticeable effect on the subjects' activities, had only a limited influence on their use of modalities.

These results contribute to the validation of the method we propose for the design of tractable and usable multimodal command languages.