Thanks to recent scientific advances, it is now possible to design multimodal interfaces that allow the combined use of speech and gestures on a touchscreen. However, current speech recognizers and natural language interpreters cannot yet process spontaneous speech accurately, so constraints must be imposed on users' speech inputs. Ergonomic studies are therefore needed to provide user interface designers with effective guidelines for defining usable speech constraints.

We developed a method for designing oral and multimodal (speech + 2D gestures) command languages that can be interpreted reliably by present systems and are easy to learn through human-computer interaction (HCI). The empirical study presented here contributes to assessing the usability of such artificial languages in a realistic software environment. Analyses of the collected multimodal protocols indicate that all subjects rapidly assimilated the prescribed expression constraints, mainly while executing simple interactive tasks; moreover, these constraints had no noticeable effect on the subjects' activities and only a limited influence on their use of modalities.

These results contribute to validating the proposed method for designing tractable and usable multimodal command languages.
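To make the notion of a constrained multimodal command language concrete, here is a minimal sketch, not drawn from the paper itself: a hypothetical "verb [object | this]" grammar in Python, where the deictic word "this" is resolved by a 2D touch gesture. The grammar entries, function names, and coordinates are all illustrative assumptions, intended only to show how a rigid syntax keeps interpretation tractable for recognizers that cannot handle spontaneous speech.

```python
import re

# Hypothetical constrained command grammar: each spoken command must match
# a fixed "verb [object | this]" pattern, optionally completed by a 2D tap.
# All entries are illustrative; the paper does not publish its grammar.
GRAMMAR = {
    "open": {"needs_target": True},
    "delete": {"needs_target": True},
    "undo": {"needs_target": False},
}

def interpret(utterance, gesture=None):
    """Map a constrained utterance plus an optional tap position to a command.

    The rigid syntax (one verb, at most one object or a deictic 'this') is
    what makes the language reliably interpretable by present systems.
    """
    tokens = re.findall(r"[a-z]+", utterance.lower())
    if not tokens or tokens[0] not in GRAMMAR:
        return None  # utterance falls outside the constrained language
    verb = tokens[0]
    if GRAMMAR[verb]["needs_target"]:
        # Deictic reference ("open this") is resolved by the 2D gesture.
        if tokens[1:] == ["this"] and gesture is not None:
            return {"verb": verb, "target_at": gesture}
        if len(tokens) == 2:
            return {"verb": verb, "target": tokens[1]}
        return None
    return {"verb": verb} if len(tokens) == 1 else None

print(interpret("open this", gesture=(120.0, 64.0)))
# -> {'verb': 'open', 'target_at': (120.0, 64.0)}
print(interpret("please open the file"))
# -> None (rejected: spontaneous phrasing violates the constraints)
```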