Wizard of Oz studies: why and how. IUI '93: Proceedings of the 1st International Conference on Intelligent User Interfaces.
Applying the Wizard of Oz Technique to the Study of Multimodal Systems. EWHCI '93: Selected Papers from the Third International Conference on Human-Computer Interaction.
Toward a theory of organized multimodal integration patterns during human-computer interaction. Proceedings of the 5th International Conference on Multimodal Interfaces.
Tangible multimodal interfaces for safety-critical applications. Communications of the ACM - Multimodal interfaces that flex, adapt, and persist.
ButterflyNet: a mobile capture and access system for field biology research. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Human perception of intended addressee during computer-assisted meetings. Proceedings of the 8th International Conference on Multimodal Interfaces.
Implicit user-adaptive system engagement in speech and pen interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Designing and Evaluating Mobile Interaction: Challenges and Trends. Foundations and Trends in Human-Computer Interaction.
This paper reports on the design and performance of a novel dual-wizard simulation infrastructure that has been used effectively to prototype next-generation adaptive and implicit multimodal interfaces for collaborative groupwork. This high-fidelity infrastructure builds on past development of single-wizard simulation tools for multiparty multimodal interactions involving speech, pen, and visual input [1]. The new dual-wizard environment supports (1) real-time tracking of, analysis of, and system adaptivity to a user's paralinguistic speech and pen signal features (e.g., speech amplitude, pen pressure), as well as the semantic content of their input, and (2) transparent user training, in which users adapt their speech and pen signal features in ways that improve the reliability of system functioning, i.e., the design of mutually adaptive interfaces. To accomplish these objectives, the environment (3) also handles dynamic streaming digital pen input. We illustrate the infrastructure's performance during longitudinal empirical research in which a user-adaptive interface was designed for implicit system engagement based exclusively on users' speech amplitude and pen pressure [2]. Using this dual-wizard method, the wizards responded successfully to over 3,000 user inputs with 95-98% accuracy, with joint wizard response times under 1.0 second for speech interactions and 1.65 seconds for pen interactions. The interactions involved naturalistic multiparty meeting data in which high school students engaged in peer tutoring, and all participants believed they were interacting with a fully functional system.
This type of simulation capability enables a new level of flexibility and sophistication in multimodal interface design, including the development of implicit multimodal interfaces that place minimal cognitive load on users during mobile, educational, and other applications.
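The implicit-engagement idea described above, triggering the system from signal features such as speech amplitude and pen pressure rather than lexical content, can be sketched as a simple threshold classifier. This is a minimal illustrative sketch: the class, function names, and threshold values are assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of implicit system engagement from paralinguistic
# features alone (speech amplitude, pen pressure). Names and thresholds
# are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass


@dataclass
class InputEvent:
    modality: str     # "speech" or "pen"
    amplitude: float  # normalized speech amplitude, 0.0-1.0
    pressure: float   # normalized pen pressure, 0.0-1.0


def is_system_addressed(event: InputEvent,
                        amp_threshold: float = 0.6,
                        pressure_threshold: float = 0.7) -> bool:
    """Decide whether an input is directed at the system (vs. a peer),
    using only signal features, never the semantic content."""
    if event.modality == "speech":
        # Louder, more emphatic speech is treated as system-directed.
        return event.amplitude >= amp_threshold
    if event.modality == "pen":
        # Firmer pen pressure is treated as system-directed.
        return event.pressure >= pressure_threshold
    return False


# A loud utterance engages the system; a light pen stroke does not.
print(is_system_addressed(InputEvent("speech", amplitude=0.8, pressure=0.0)))  # True
print(is_system_addressed(InputEvent("pen", amplitude=0.0, pressure=0.3)))     # False
```

In a mutually adaptive design such as the one studied, users would also learn over time which amplitude and pressure ranges reliably engage the system, so the thresholds and the users' behavior co-adapt.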