TSD '99 Proceedings of the Second International Workshop on Text, Speech and Dialogue
This article describes an architectural framework for a multimodal dialogue system. The framework is based on a separation between the semantic and syntactic parts of the dialogue. The semantics of the human-computer conversation is captured in a formal language that describes the conversation as sequences of dialogue elements; the paper further elaborates how the syntactic features of the conversation are derived. A two-layer architecture is proposed for the dialogue system: the upper layer, called the sequencer, works with the description of the whole dialogue, while the lower layer (the driver dock) deals with individual dialogue elements. A prototype has been implemented to demonstrate the main benefits of the framework, namely adaptability and extensibility. Adaptability covers multiple modes of communication, where the modalities can be changed even during the course of a dialogue. Thanks to the layering, applications can easily be extended with additional modes and user-interface devices.
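The two-layer design might be sketched as follows. This is only an illustrative Python sketch, not the paper's implementation: the class names (`Sequencer`, `Driver`) and the string-based dialogue elements are assumptions chosen to mirror the roles described in the abstract.

```python
class Driver:
    """Lower layer ("driver dock"): renders one dialogue element
    on a concrete modality. Hypothetical interface."""
    def render(self, element: str) -> str:
        raise NotImplementedError

class TextDriver(Driver):
    def render(self, element: str) -> str:
        return f"[text] {element}"

class SpeechDriver(Driver):
    def render(self, element: str) -> str:
        return f"[speech] {element}"

class Sequencer:
    """Upper layer: works with the description of the whole dialogue,
    delegating individual elements to the current driver."""
    def __init__(self, driver: Driver):
        self.driver = driver

    def set_driver(self, driver: Driver) -> None:
        # The modality can change even mid-dialogue (adaptability).
        self.driver = driver

    def run(self, dialogue: list[str]) -> list[str]:
        return [self.driver.render(e) for e in dialogue]

# Switching modality during the course of one dialogue:
seq = Sequencer(TextDriver())
out = seq.run(["greet", "ask-destination"])
seq.set_driver(SpeechDriver())
out += seq.run(["confirm"])
```

Because the sequencer only depends on the abstract `Driver` interface, supporting an additional mode or device amounts to adding one more driver class, which reflects the extensibility claim above.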