Inspired by a Wizard of Oz (WOZ) simulation experiment, we developed a working prototype of a system that enables users to interact with a map display through synergistic combinations of pen and voice. To address many of the issues raised by multimodal fusion, our implementation employed a distributed multi-agent framework to coordinate parallel competition and cooperation among processing components. Since then, the agent-based infrastructure has been enhanced with collaboration technology, creating a framework in which multiple humans and automated agents can interact naturally within the same graphical workspace.

We are now leveraging this architecture to create a unified implementation framework for developing WOZ-simulated systems and their fully automated counterparts simultaneously. The bootstrapping effects made possible by this approach are illustrated by an experiment currently under way in our laboratory: as a naive subject draws, writes, and speaks requests to a (simulated) interactive map, a hidden Wizard responds as efficiently as possible using our best fully automated system, through either standard graphical interface devices or multimodal combinations of pen and voice. The input choices made by both subject and Wizard are invaluable, and the data collected from each can be applied directly to evaluating and improving the automated part of the system.
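The unified-framework idea can be sketched as follows. This is a minimal illustration, not the paper's actual implementation, and all class and function names here are hypothetical: the map application talks to a single interpreter interface, which can be backed either by a hidden human Wizard or by the automated pen/voice component, so both modes share one code path and one interaction log.

```python
from abc import ABC, abstractmethod

class InterpreterAgent(ABC):
    """Common interface for anything that interprets a user request
    (hypothetical name; stands in for an agent in the framework)."""
    @abstractmethod
    def interpret(self, pen_input: str, speech_input: str) -> str: ...

class AutomatedInterpreter(InterpreterAgent):
    """Stand-in for the fully automated pen/voice fusion component."""
    def interpret(self, pen_input, speech_input):
        # Trivial placeholder fusion: pair the spoken command with
        # the pen gesture it refers to.
        return f"command({speech_input!r}, at={pen_input!r})"

class WizardInterpreter(InterpreterAgent):
    """A hidden human Wizard answering through the same interface."""
    def __init__(self, wizard_responses):
        self._responses = iter(wizard_responses)
    def interpret(self, pen_input, speech_input):
        # In a real WOZ setup the Wizard would see the inputs and
        # reply live; here we replay scripted responses.
        return next(self._responses)

class MapSystem:
    """The map application never knows which back end it talks to."""
    def __init__(self, interpreter: InterpreterAgent):
        self.interpreter = interpreter
        self.log = []  # every exchange is logged for later evaluation
    def handle(self, pen_input, speech_input):
        result = self.interpreter.interpret(pen_input, speech_input)
        self.log.append((pen_input, speech_input, result))
        return result

# The same MapSystem runs in WOZ mode or fully automated mode:
woz = MapSystem(WizardInterpreter(["show_hotels(region=circled_area)"]))
auto = MapSystem(AutomatedInterpreter())
woz.handle("<circle gesture>", "show hotels here")
auto.handle("<circle gesture>", "show hotels here")
```

Because both modes log through the same `MapSystem`, the subject and Wizard data described above accumulate in one format that can feed directly into evaluation of the automated components.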