Multimodal, speech-enabled systems pose research problems different from those of unimodal, voice-only dialogue systems. One important issue is what a multimodal interface should look like in order to make multimodal interaction natural and smooth while keeping it manageable from the system perspective. Another central issue concerns algorithms for multimodal dialogue management. This paper presents a solution that adapts an existing unimodal, vocal dialogue management framework to cope with multimodality. An experimental multimodal system, Archivus, is described, together with a discussion of the required changes to the unimodal dialogue management algorithms. Results of pilot Wizard of Oz experiments with Archivus, focusing on system efficiency and user behaviour, are also presented.
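The adaptation strategy described above — keeping an existing unimodal dialogue manager unchanged and placing a multimodal layer in front of it — can be sketched in code. The sketch below is purely illustrative: all class names, slots, and the confidence-based late-fusion rule are assumptions for the example, not details of the Archivus system or its actual dialogue management algorithms.

```python
from dataclasses import dataclass

# Hypothetical input event from one modality (names are illustrative,
# not taken from the Archivus system).
@dataclass
class ModalityInput:
    modality: str      # e.g. "speech" or "pointing"
    slot: str          # dialogue slot the input fills
    value: str
    confidence: float  # recognizer confidence in [0, 1]

class UnimodalDialogueManager:
    """Stand-in for an existing voice-only dialogue manager that
    consumes one (slot, value) pair per turn."""
    def __init__(self):
        self.state: dict[str, str] = {}

    def update(self, slot: str, value: str) -> None:
        self.state[slot] = value

class MultimodalAdapter:
    """Fuses inputs from several modalities into single slot updates
    before handing them to the unchanged unimodal manager."""
    def __init__(self, dm: UnimodalDialogueManager):
        self.dm = dm

    def handle_turn(self, inputs: list[ModalityInput]) -> None:
        # Illustrative fusion rule: keep only the highest-confidence
        # input per slot, then forward it as an ordinary update.
        best: dict[str, ModalityInput] = {}
        for inp in inputs:
            cur = best.get(inp.slot)
            if cur is None or inp.confidence > cur.confidence:
                best[inp.slot] = inp
        for inp in best.values():
            self.dm.update(inp.slot, inp.value)

dm = UnimodalDialogueManager()
adapter = MultimodalAdapter(dm)
adapter.handle_turn([
    ModalityInput("speech", "date", "June 3rd", 0.6),
    ModalityInput("pointing", "date", "2004-06-03", 0.9),
])
print(dm.state["date"])  # the higher-confidence pointing input wins
```

The design point the sketch tries to capture is that the unimodal manager's interface stays fixed; only the adapter in front of it knows that several modalities exist.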