The design and evaluation of multimodal interaction is difficult, and building multimodal interaction systems remains a major challenge for designers in industry. Although past research has proposed various methodologies, each addresses only specific cases of multimodality and fails to generalise to a broad range of applications. In this paper, we present a usability framework for the design and evaluation of multimodal interaction. First, in the early phase of multimodality design, elementary multimodal commands are elicited using traditional usability techniques. Second, based on the CARE (Complementarity, Assignment, Redundancy, and Equivalence) properties and the FSM (Finite State Machine) formalism, the original set of elementary commands is automatically expanded into a more comprehensive set of multimodal commands. Third, this expanded set is evaluated in two ways: through user testing and through error-robustness evaluation. The framework thus provides a structured and general methodology for both designing and evaluating multimodal interaction, and we expect it to help designers produce more usable multimodal systems.
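To make the second step concrete, the CARE-based expansion can be sketched as a toy FSM encoding. This is a minimal illustration, not the paper's implementation: the commands, modalities, and helper names below are invented for the example, and only two of the four CARE properties (Equivalence and Redundancy) are shown.

```python
from itertools import permutations

# Hypothetical elementary commands: command name -> list of (modality, token)
# events that can trigger it. All names here are illustrative assumptions.
ELEMENTARY = {
    "zoom_in": [("speech", "zoom in"), ("gesture", "spread")],
    "select":  [("speech", "select"), ("pen", "circle")],
}

def expand_care(elementary):
    """Expand elementary commands using two CARE properties:
    Equivalence -> any single modality alone triggers the command;
    Redundancy  -> all modalities must be issued, in any order.
    Each expanded command is encoded as a tiny FSM: a dict mapping
    (state, event) -> next_state, with 'accept' as the final state."""
    fsms = {}
    for name, events in elementary.items():
        # Equivalence: one direct transition per modality, start -> accept.
        fsms[name + "/equiv"] = {("start", ev): "accept" for ev in events}
        # Redundancy: a path for every ordering of the modalities.
        fsm_r = {}
        for order in permutations(events):
            state = "start"
            for i, ev in enumerate(order):
                nxt = "accept" if i == len(order) - 1 else f"got:{order[:i + 1]}"
                fsm_r[(state, ev)] = nxt
                state = nxt
        fsms[name + "/redund"] = fsm_r
    return fsms

def accepts(fsm, events):
    """Run an event sequence through an FSM; True if it reaches 'accept'."""
    state = "start"
    for ev in events:
        state = fsm.get((state, ev))
        if state is None:
            return False
    return state == "accept"
```

Under this encoding, two hand-authored elementary commands yield four multimodal commands, and each expanded FSM can be fed directly into the user-testing and error-robustness stages (an unrecognised event simply fails to reach the accepting state).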