Disambiguating speech commands using physical context
Proceedings of the 9th international conference on Multimodal interfaces
In this paper, we propose CASIS, a robust natural-language interface for controlling devices in an intelligent environment. CASIS is novel in that it integrates physical context, acquired from sensors embedded in the environment, with traditionally used context to reduce the system error rate and to disambiguate deictic references and elliptical inputs. The n-best output of the speech recognizer is re-ranked by a score computed with a Bayesian network that combines information from the input utterance with context. In a prototype system that uses device states, brightness, speaker location, chair occupancy, speech direction, and action history as context, the system error rate was reduced by 41% relative to a baseline system that does not leverage context.
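The re-ranking idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the command set, context features, probability tables, and the simple naive-Bayes-style scoring (in place of the paper's full Bayesian network) are all hypothetical, chosen only to show how context can reorder an n-best list.

```python
def context_likelihood(command, context, cpt):
    """Score how well a command fits the sensed context by multiplying
    illustrative conditional probabilities P(feature = value | command)."""
    p = 1.0
    for feature, value in context.items():
        table = cpt.get(command, {}).get(feature, {})
        p *= table.get(value, 0.1)  # small back-off for unseen values
    return p

def rerank(nbest, context, cpt, alpha=0.5):
    """Linearly combine the recognizer score with the context score and
    return the hypotheses best-first."""
    scored = [
        (alpha * asr_score + (1 - alpha) * context_likelihood(cmd, context, cpt), cmd)
        for cmd, asr_score in nbest
    ]
    return [cmd for _, cmd in sorted(scored, reverse=True)]

# Hypothetical conditional probability tables for two commands.
cpt = {
    "turn on the light": {"brightness": {"dark": 0.9, "bright": 0.1},
                          "facing_device": {"lamp": 0.8, "projector": 0.2}},
    "turn on the slide": {"brightness": {"dark": 0.4, "bright": 0.6},
                          "facing_device": {"lamp": 0.1, "projector": 0.9}},
}

# The recognizer slightly prefers the wrong hypothesis; the sensed context
# (a dark room, speaker facing the lamp) flips the ranking.
nbest = [("turn on the slide", 0.55), ("turn on the light", 0.45)]
context = {"brightness": "dark", "facing_device": "lamp"}
print(rerank(nbest, context, cpt)[0])  # prints "turn on the light"
```

In this toy example the context score for "turn on the light" (0.9 × 0.8 = 0.72) outweighs its lower recognizer score, which is the effect the abstract describes: physical context correcting recognition errors that acoustics alone cannot resolve.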