Much research over the past twenty years has been directed towards developing multimodal interfaces. Many current multimodal systems, however, can only handle multimodal input at the sentence level. The move towards multimodal dialogue significantly increases the complexity of the system, as the representations of the input now range over both time and input modality. We developed a framework consisting of three parts to address this problem. First, we propose multidimensional feature structures, a straightforward extension of typed feature structures, as a uniform representational formalism in which the semantic content from all input modalities can be expressed. Second, we extend the feature structure formalism with an object-oriented framework that allows the back-end application to keep track of the state of the representation under discussion. Third, we propose an informational characterization of dialogue states through a constraint logic program whose constraint system consists of multidimensional feature structures. The multimodal dialogue manager uses this characterization of dialogue states to decide on an appropriate strategy.
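The core idea of using typed feature structures as a uniform formalism is that input from different modalities (e.g. speech and pen gesture) can be merged by unification: each modality contributes a partial structure, and unification succeeds only if their types and feature values are compatible. The following is a minimal illustrative sketch of that idea, not the paper's actual formalism; the type hierarchy, feature names, and input values are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional, Union

# Hypothetical type hierarchy for the example: child -> parent, rooted at "top".
TYPE_PARENT = {"command": "top", "move": "command", "object": "top", "unit": "object"}

def is_subtype(a: str, b: str) -> bool:
    """True if type a equals b or is a descendant of b in the hierarchy."""
    while a != b:
        parent = TYPE_PARENT.get(a)
        if parent is None:
            return False
        a = parent
    return True

def unify_types(a: str, b: str) -> Optional[str]:
    """The more specific of two compatible types, or None if incompatible."""
    if is_subtype(a, b):
        return a
    if is_subtype(b, a):
        return b
    return None

@dataclass
class FS:
    """A typed feature structure: a type plus feature -> value mappings."""
    type: str
    feats: Dict[str, Any] = field(default_factory=dict)

def unify(x: Union["FS", Any], y: Union["FS", Any]) -> Optional[Any]:
    """Unify two feature structures (or atomic values); None on failure."""
    if isinstance(x, FS) and isinstance(y, FS):
        t = unify_types(x.type, y.type)
        if t is None:
            return None
        feats = dict(x.feats)
        for name, val in y.feats.items():
            if name in feats:
                merged = unify(feats[name], val)
                if merged is None:
                    return None
                feats[name] = merged
            else:
                feats[name] = val
        return FS(t, feats)
    # Atomic values unify only if they are identical.
    return x if x == y else None

# Spoken input "move this unit" yields a move command with no location.
speech = FS("move", {"object": FS("unit")})
# A simultaneous pen gesture contributes the missing location.
gesture = FS("command", {"location": "grid-42"})

combined = unify(speech, gesture)
```

Here `combined` is a `move` command whose `location` feature was supplied by the gesture, illustrating how unification fuses partial semantic content from separate modalities into one structure. The multidimensional extension described in the abstract would additionally index such structures by time and modality.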