Most computational spoken dialogue systems take a "literary" approach to reference resolution: entities mentioned by a human interlocutor are unified with elements of the world state according to the same principles that govern text interpretation. In human-to-human interaction, however, referring is a far more collaborative process. Participants often under-specify their referents, relying on feedback from their discourse partners when more information is needed to uniquely identify a particular referent. By monitoring eye movements during the interaction, a spoken dialogue system can improve its performance on referring expressions that are underspecified according to the literary model. This paper describes a system currently under development that employs such a strategy.
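The idea above can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's actual architecture): a referring expression is first matched against the world state in the "literary" way; when it remains ambiguous, the resolver falls back on gaze history and prefers the candidate the user fixated most recently. All names and data structures here are illustrative assumptions.

```python
# Hypothetical sketch of gaze-assisted reference resolution.
# Underspecified expressions (matching several entities) are
# disambiguated by the user's most recent fixation.

from dataclasses import dataclass

@dataclass
class Entity:
    id: str
    properties: set  # symbolic properties, e.g. {"tank", "red"}

@dataclass
class Fixation:
    entity_id: str
    timestamp: float  # seconds since interaction start

def resolve(expression_props, entities, fixations):
    """Return the id of the entity the expression refers to.

    First applies the "literary" model (property matching); if that
    leaves more than one candidate, prefers the candidate fixated
    most recently in the gaze record.
    """
    candidates = [e for e in entities if expression_props <= e.properties]
    if not candidates:
        return None
    if len(candidates) == 1:
        return candidates[0].id
    # Ambiguous under the literary model: consult gaze history.
    last_fix = {}
    for f in fixations:
        last_fix[f.entity_id] = max(last_fix.get(f.entity_id, 0.0),
                                    f.timestamp)
    candidates.sort(key=lambda e: last_fix.get(e.id, -1.0), reverse=True)
    return candidates[0].id

entities = [
    Entity("tank1", {"tank", "red"}),
    Entity("tank2", {"tank", "blue"}),
]
fixations = [Fixation("tank1", 1.0), Fixation("tank2", 2.5)]

# "the red tank" is unambiguous; "the tank" matches both, so gaze
# (the more recent fixation on tank2) decides.
print(resolve({"tank", "red"}, entities, fixations))  # tank1
print(resolve({"tank"}, entities, fixations))         # tank2
```

The key design point this sketch captures is that gaze is consulted only when the linguistic evidence underdetermines the referent, so the literary model remains the primary resolution path.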